Multiple Exposures, HDR and the D90

Discussion in 'Nikon' started by drjoder, Mar 24, 2010.

  1. Ok, I have just searched the threads and found some discussion, but nothing that completely satisfied my curiosity. I have a D90 on the way from B&H (can't wait!) and one of the things I want to try is using the multiple exposure capability to create a sort of in-camera version of an HDR image. This would be with higher-contrast landscapes and cityscapes mostly. Has anyone done this extensively with the D90 or any other DSLR? Is it worth it? Would you see a significantly better exposure with more tonal range as compared to simply taking one well-exposed image and adjusting curves, levels, etc. in Photoshop? I would think if it worked well I would see a lot more discussion about it as a landscape photography technique, no? Would such an image look more "natural" than a true HDR image? I am somewhat of a beginner with this digital stuff, so maybe I am missing something here...?
     
  2. While you can take multiple exposures with your D90, there is no option to create an in-camera HDR with them. HDR involves creating a 32-bit HDR image followed by tone-mapping - no such capability in-camera with the D90.
    Using multiple exposures to essentially lower the overall contrast and dynamic range of a scene could be rather tricky in-camera too - usually graduated ND filters are used for that purpose. You certainly have more flexibility by bracketing your shots and blending them later in post-processing.
     
  3. Yes, I knew that what you would get would not be an HDR image...I just wondered if it would be significantly better (a "poor man's HDR"?) than simply one well-exposed image on which you later worked curves/levels. From what you say, Dieter, the much better and easier answer, then, is to work the filters and/or blend several images later in Photoshop. So, multiple exposure technique not worth the effort? Maybe that's why I haven't seen much discussion of it... : )
     
  4. I think it is a much better idea to do the combination in photoshop or dedicated program.
    But, I did have an idea I wanted to share. You could use multiple exposure (record menu) in combination with exposure bracketing - except that I just found out that Nikon blocked that: no bracketing in combination with multiple exposures. Another way that might work (it works, but will the results be nice?) is to use the retouch menu, where you combine two pictures into one (raw works best), and then use some exposure bracketing to take the pictures. I think you have to dial the exposure down a bit, because the exposures add up. I don't know if the result will actually contain more dynamic range. Good luck.
    Sjoerd
     
  5. Daniel, I am not quite sure I understand what you are after. You can certainly use multiple exposures to build up one well-exposed image - but the sum of, let's say, four exposures will be identical to the one taken with a single exposure (motion effects excluded). With multiple exposures, you can't make selective changes to the overall image - or in other words, you can only add light, not subtract it. If four exposures added together give a well-exposed image but you'd like to get more light/detail in the shadows, then by adding a fifth exposure you just wave your highlights goodbye.
    Lowering overall scene contrast while shooting requires graduated ND filters (or fill-flash). You can duplicate the ND filter effect by bracketing your shots and blending them later in post processing. Personally, I prefer the latter option as I don't like to handle filters in the field and feel I have more options and control doing it in post processing.
     
  6. Ok, maybe I asked a question that was too advanced for my newbie photo knowledge to allow me to understand! However, Dieter, your comment about "you can only add light" seems the key to the answer. So, I think I'll just keep working on getting the best possible exposures AND I'll especially keep working on my post-processing skills (a bit lacking at this point), as both you and Sjoerd recommend. Thanks for the help!
     
  7. Dieter, I'm puzzled by your reference to a "32 bit HDR file" above. Surely the only legitimate saveable file formats are 24-bit or 48-bit? And all file formats are automatically converted to 24-bit when displayed on an RGB monitor anyway, so I don't see any reason why an HDR image can't be tone-mapped to 24-bit and saved as a Jpeg (Yukkk!).
    I've created pseudo HDRs from 14 bit RAW files, which can easily cover a 12 stop SBR, and never had to work with 32 bit files. Simply create two "exposures" in 16 bit mode using ACR, open them in Photoshop, adjust the tone curves and then combine them using the layer value blend mode. The image mode can then be changed to 8 bit depth without a problem. Job done!
     
  8. Here's a quick demo of what can be done even starting from one jpeg exposure, using just an 8 bit application like the GIMP. The image was split into two superimposed layers, the top layer was "brightened" in the shadows using the curves tool and then the two layers were blended using "value" mode.
    The two pictures side-by-side are the individual layers, and as you can see the highlights have been "blown out" in the right-hand layer, while the shadows are too dark in the left layer. The combination brings back the detail in the highlights as well as lifting the shadows. And all this was quickly done at 8 bit. Much better results and far more drastic shadow lifting can be done from a RAW file.
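    The two-layer blend described above can be sketched in a few lines of Python - a minimal illustration only, assuming float images in [0, 1] and a simple luminance-driven mask rather than GIMP's actual "value" blend mode:

```python
import numpy as np

def blend_exposures(dark, bright):
    """Blend a dark (highlight-preserving) and a brightened
    (shadow-lifted) version of the same frame, arrays of shape
    (H, W, 3) with values in [0, 1]."""
    # Per-pixel luminance of the darker frame drives the mask:
    # near 1 in the highlights (keep the dark frame), near 0 in
    # the shadows (take the brightened frame).
    lum = dark.mean(axis=-1, keepdims=True)
    mask = np.clip(lum, 0.0, 1.0)
    return mask * dark + (1.0 - mask) * bright
```

    A deep-shadow pixel (luminance near 0) comes almost entirely from the brightened layer, while a highlight pixel comes from the darker one, which is the same idea as the layer blend above.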
    [Image attachment: 00W4rt-231833884.jpg]
     
  9. And here's the combination.
    [Image attachment: 00W4s3-231835584.JPG]
     
  10. Daniel, you've walked into some interesting possibilities.
    1) Using multiple exposure in-camera -- multiple underexposed images: there is some slight noise reduction possible when stacking up multiple (underexposed) exposures, but they don't add up to much more than the results of one good exposure at a correspondingly lower ISO setting.
    2) Using multiple exposures for averaging: This involves taking multiple exposures (all "properly exposed") and using an averaging function with them in post processing (eg., Photomatix). This will reduce noise somewhat even at lower ISOs, and it will in fact improve your shadow response quite a bit. The numbers will in fact be robust enough to begin working in HDR formats (32 bit floating point per pixel). From there you can do a lot with tonemapping.
    3) Pseudo-HDR: Don't bother trying to fake out an HDR program with artificially bracketed exposures. It's barely worth the effort.
    4) Real HDR: Bracketing even as few as 3 exposures (+-2EV recommended) will give you enough to create an HDR dataset that will allow good tonemapping.
    It's a good time to learn some new things. For example, on your camera, ISO1600 will give you better shadows than ISO200. An image made from ISO200 (highlights) and ISO1600 (shadows) is much better than an ISO200 exposure, with the appropriate crossover function.
    I'd strongly recommend picking up the demo version of Photomatix when you get your camera, and putting it through its paces. It will do averaging as well as HDR, and both have their uses. Welcome to the land of absolute light magnitudes, as opposed to the relative magnitudes of traditional photography which put black and white relative to a fixed exposure window.
     
  11. Rodeo Joe: Dieter was right. There are many uses for a 32 bit (per pixel) floating point representation. For one thing, it assigns an equal number of bits to every light magnitude, from the lowest to the highest. This is something very important you cannot do in any 24/48 bit fixed-point representation, and it's a prelude to tonemapping. This is what the "HDR file" is all about.
     
  12. There are several programs that can create a HDR image from a single image. Some are stand-alone programs while others are Photoshop plug-ins. Some of them do a really nice job. I have tried several and find Topaz Adjust to be pretty good (sold by Topaz Labs). Photomatix is also a pretty good program.
     
  13. What a number of programs like Topaz Adjust and other plugins, as well as sliders in ACR, actually do is tonemapping. They do not create HDR images, but they do create a look that is characteristic of tonemapping, which itself is often used on HDR images. The tonemapped look is often confused with HDR, which has no particular look that is essential to it.
     
  14. Rodeo Joe - what you are doing is exposure blending and has nothing to do with HDR. High Dynamic Range imaging starts with usually three bracketed images (-2, 0, +2) EV and as a first step creates a 32-bit (per channel, or 96 bit per pixel) radiance file. Here is a small tutorial http://www.cambridgeincolour.com/tutorials/high-dynamic-range.htm
    There currently is no output device that can display this 32-bit radiance file properly - so the second step in HDR imaging is tone-mapping, which maps the 32-bit file into something that can be displayed. Tonemapping is what creates the typical "HDR look" that some like and some despise - especially when applied with abandon and not handled carefully. If applied with moderation, images can look very natural, though, and the technique applied in this fashion is an alternative to exposure blending.
    The pseudo-HDR created from a single RAW (or JPEG) is simply a tone-mapped file and at no point in the process was a 32-bit radiance file created.
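    The two-step pipeline described above (merge the brackets into a floating-point radiance map, then tone-map it) can be sketched with numpy. This is a toy illustration assuming a linear sensor response and a simple hat-shaped confidence weighting - not what Photomatix or the Cambridge in Colour tutorial's software actually implements:

```python
import numpy as np

def merge_to_radiance(frames, exposure_times):
    """Merge bracketed frames (linear values in [0, 1]) into one
    float32 radiance map: divide each frame by its exposure time,
    then take a weighted average that distrusts clipped pixels."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        # Hat weight: near 1 at mid-tones, 0 at pure black/white,
        # so blown or blocked pixels contribute nothing.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * (img / t)
        wsum += w
    return (acc / np.maximum(wsum, 1e-9)).astype(np.float32)

def tonemap(radiance):
    """Simple global Reinhard-style operator, L / (1 + L), to
    squeeze the unbounded radiance back into a displayable range."""
    return radiance / (1.0 + radiance)
```

    The radiance map has no fixed white point - longer exposures just contribute better shadow data - which is why the tone-mapping stage is needed before anything can be displayed.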
     
  15. There are a few key things to understand about the HDR representation:
    * It assigns an equal amount of precision to every light value (within the limits of binary numbers). By contrast, the fixed point binary representation degrades as the values become lower.
    * The levels are more absolute and less relative. HDR black is a theoretical zero degrees black, and HDR white is astronomically hot. With the appropriate sunblock, one could photograph a black cat on the surface of the sun without blowing any "channels". The ground rules are very different.
    Even if you don't bracket exposures, the process of averaging multiple exposures of the same scene at the same exposure level will add greater definition to the near-black region. Multiple samples of the same pixel location will yield fractional numbers when averaged together. Let's say you take 4 exposures, and at a given pixel location, the red photosite records <1,1,2,2>, which averages to 1.5. As you take more exposures, the numbers will converge to a fractional quantity. Since there is increased information content to be represented, you have to allocate more bits.
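    The averaging example above can be written out directly - a trivial numpy sketch of why stacked data outgrows integer storage:

```python
import numpy as np

# Four exposures of the same pixel: the red photosite records
# integer ADC counts of 1, 1, 2, 2 (the example above).
samples = np.array([1, 1, 2, 2], dtype=np.uint16)

# Averaging in floating point recovers a fractional level (1.5)
# that no single fixed-point capture could hold, which is why
# stacked data gets promoted to a float representation.
mean = samples.astype(np.float32).mean()
print(mean)  # 1.5
```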
    If you do bracket exposures, you can extend the dynamic range of the scene indefinitely, with time allotted/available being the significant constraint. The more time you have, the more extensive and precise a photographic record you can make.
    In the end, with these information-rich photographic records, what you do with them will be more a matter of choice. Imagine if the darkest tones in your print were as robust as the mid-tones. These HDR files are able to withstand an amazing amount of processing without falling apart. The methods for working with floating-point 32+ bit representations are only in their infancy. It's barely there in CS4. But just wait. It will become much more important as time goes on.
     
  16. Hi Daniel, have a look here: http://www.youtube.com/watch?v=1EoRbVX7Drk
    Regards Aad
     
  17. Thanks for the discussion everyone--even though a lot was a bit over my head at this point. Adrian--interesting tutorial. I'll definitely play with the multiple exposure idea on the D90 and see what happens...I'll also start getting smarter with HDR and post-processing...so much to learn, and only 30 or 40 more years to do so (if I have my grandfather's longevity).
     
  18. When you use the D90's multiple exposure feature, here's how it's done by you and by the camera. You pick and set an exposure value and the number of exposures (N), and release the shutter N times. The camera will *divide* the exposure value by N for each shutter release, *add* the N shots, and save them as one. Typically, an ME shooter would *change* the composition between the N shots for artistic creativity, not *maintain* an identical composition across the N shots (for a purpose like yours). As many have pointed out, maintaining the composition will only end up with the camera summing the N shots to produce one with the same exposure as if you shot without ME.
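    The divide-then-sum behaviour described above works out to a wash, which a back-of-the-envelope sketch makes obvious (my illustration of the arithmetic, not Nikon firmware):

```python
# Sketch of the D90 multiple-exposure arithmetic (with auto gain):
# the metered exposure is split across N shutter releases, and the
# camera sums the N frames into one file.
n = 4
metered_exposure = 1.0               # normalized single-shot exposure
per_shot = metered_exposure / n      # each release collects 1/N of the light

# With an unchanged composition the frames simply add back up, so the
# result matches one normally exposed frame:
combined = per_shot * n
print(combined)  # 1.0
```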
    ME works differently between a film and a digital camera. On a film camera the exposure of each of the N shots is completely under your control and can be changed shot by shot; it is not automatically *divided* as on a digital camera. But changing the exposure while maintaining the composition on a film camera still will not produce the HDR you are looking for.
    The obvious HDR solution, for both film and digital, is to shoot different frames with varying exposures, and merge them in Photoshop or other post processing sw.
     
  19. Hmmm. I don't see how a shift of 4 stops (+/- 2 stops) can possibly extend 14-bit capture to 32 bits. This adds 4 bits at most, the equivalent of 18 bits per channel capture. And what possible real-life scenario would need a 32-bit dynamic range? Sure, your subject brightness range might well approach 18 stops, but after camera body and lens flare are taken into account, the brightness ratio at the sensor is going to be, what?, half of that possibly.
    The fact remains that 32 bits per channel CANNOT be displayed on any real monitor, print, projector or direct laser optical injector (I just made that up, BTW), and that means that a gamma (tone) curve MUST be applied, which somewhat defeats the technological jiggery-pokery of creating a 32 bit file in the first place.
    Smoke and mirrors, smoke and mirrors.
     
  20. >3) Pseudo-HDR: Don't bother trying to fake out an HDR program with artificially bracketed exposures. It's barely worth the effort.
    I wouldn't say so. With careful exposure, you get around 10 stops in D90 NEFs (well, depends on how noise-sensitive you are... DxO even claims 12...). Nikon only processes around 8 at once, in-camera or in the NX raw converter. You may get those 10 stops out using pseudo-HDR, which helps a bit in harsh light. An alternative is the DxO OP raw converter, which is able to take more input DR and also helps you squeeze it into a viewable DR range in a manner similar to ADL but more efficient. Of course you get a better result with exposure bracketing (less noise, richer shadows), but bracketing is often infeasible - moving subjects...
    >If you do bracket exposures, you can extend the dynamic range of the scene indefinitely, with time allotted/available being the significant restraint. The more time you have, the more extensive and precise a photographic record you can make.

    One should be aware that internal reflections of light in the lens and camera limit the useful DR. With veiling flare, the shadows get brighter than they actually are, so the overall DR gets reduced. With ghosting, one gets light blobs of reflected highlights in the shadow areas. This depends a lot on the lens and its coatings; exceeding 12 stops is tough.
     
  21. Rodeo Joe - you are not understanding correctly what 32 bits are for. A couple of things require it:
    1) A low-level value such as RGB=2,2,2 requires 2 bits from your sensor's A/D converter. But if you take ten exposures of the same scene and then average them (with or without normalization), you would end up with something like RGB=2.5454,2.4856454,2.3134323. You need a new representation to store such numbers.
    2) Not only is there some added bit depth and precision, but in this dogma, ALL RGB values are assigned an equal number of bits. This is because all light values are treated equally in ABSOLUTE magnitude space. It is a different way of working entirely. 32 bits are needed because of the astronomical range between HDR black and HDR white. Remember our black cat on the surface of the sun.
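    The "equal number of bits at every magnitude" point is easy to demonstrate numerically - a small numpy illustration, not tied to any particular HDR file format:

```python
import numpy as np

# Floating point assigns (roughly) the same RELATIVE precision to every
# magnitude: the gap to the next representable float32 value, divided
# by the value itself, stays near 2**-23 whether the value is large or
# tiny. A fixed-point code, by contrast, has a constant ABSOLUTE step,
# so its relative precision collapses in the deep shadows.
for v in (1000.0, 1.0, 0.001):
    rel_step = np.spacing(np.float32(v)) / v   # relative quantization step
    print(v, rel_step)
```

    All three relative steps come out within a factor of two of each other, despite the values spanning six orders of magnitude.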
    I hope you will investigate the facts a little further before labeling the technology as "smoke and mirrors".
     
  22. Sem, anything you can do with fake bracketing can be done just as well if not better without it. Nikon NX will deliver all the bits that are available.
     
  23. Nikon NX will deliver all the bits that are available, but not at once ;) It has a DR window of around 8 stops that you can shift up or down - but if you have more in your NEF and you want it all, you'll have to do HDR from multiple conversions at different EC. You can test that even if you lower the contrast a lot, it still blows either highlights or shadows or some of both, even though the overall DR becomes smaller. The U-point stuff does not let you change the exposure compensation value locally.
    Mind that this is not really fake; it's just that NX does not care much for high DR, that's all. Of course, expect no miracles - what you can achieve by this compared to single-exposure NX is not so much; for example, you can bring out the shadows the same but compress rather than blow the highlights. For anything more you need multiple exposures. What I consider fake is if you start from a JPG or a NEF that has less than 8 stops.
     
  24. Sem, I'm not sure what you're experiencing. If you capture to a 16-bit TIF file, Nikon NX will deliver all 12 or 14 bits depending upon what you have your camera set to record.
     
  25. Luke, I'm not the first person to notice this. NX does /have access/ to all info in the NEF. But it is not designed for anything near HDR processing - I guess the intent of Nikon is to provide the most pleasant tones from moderate-DR images (those made in decent lighting). It seems that getting all input DR out /at once/ was not considered the most important objective.
    Some camera review sites make people think that the DR is one of the most important things on Earth and that the camera manufacturers are competing to achieve the highest; in fact it seems that the manufacturers mostly have a "healthy" attitude towards it and that they're actually not really doing their best to help folks easily make best pics in harsh light (within current sensor capabilities).
    It is no wonder this phenomenon has not achieved wide attention among serious photographers. Most of them will take care of proper lighting first. If this is not feasible (landscape...), multiple exposures and HDR are the best alternative. Pseudo-HDR is left for the remaining cases where multiple exposures are not an option (motion etc.), but a loss in quality compared to true HDR is inevitable: you still cannot cover a "serious" DR, you have more noise in the shadows, and the colours in the shadows are not as good.
    You need a well-exposed sample to see this; you're likely to have them if you use UniWB, or if you try to make use of the highlights headroom otherwise. It needs to be around 10 stops and hardly recoverable at both ends (shadows/highlights) by applying substantial exposure compensation (positive/negative) in NX. This is best verified using Rawnalyze. The catch is when trying to recover both shadows and highlights at once. This should go by lowering the contrast substantially; however, this doesn't work in NX: the histogram does shrink sufficiently, but it is still blown on one or the other end.
     
