Why no automatic exposure to the right w/o blown highlights?

Discussion in 'Casual Photo Conversations' started by jacob_ha, Mar 24, 2009.

  1. Hi everyone,
    why can't current Nikon DSLRs (to my knowledge) be set to automatically use the full dynamic range of the sensor (by exposing to the right) while avoiding blown highlights? Many photographers will tell you to always set your camera to do just that in order to achieve the best file quality and then adjust exposure in post processing. If this is true, wouldn't it be a great feature to have the camera do this for you, e.g., automatically choose the correct amount of exposure compensation? What am I missing? Thanks.
     
  2. Why? I don't know but the D300 does it better than the D70 did.
     
  3. What you're missing is "Active D-Lighting" - a feature on my D300 that I'm just starting to truly appreciate. Seems to do a remarkable job of keeping those highlights under control. Whether and when to use it (and how aggressively - "normal" vs. "high") is taking some time to build into my mental workflow while out shooting... but so far I've seen some dramatic improvements in how I'm recording scenes in tricky, contrasty light when I don't have time to chimp and study the histogram.
     
  4. .
    Jacob, you seem to be saying that the old film capture criteria of zone previsualization is inappropriate for the digital Raw capture world where all of that zone placement stuff can be done after capture, and instead, you want the camera to capture everything possible in an endless dynamic range without regard for photographer's eventual criteria for zone placement on screen or on print output. Have I got it?
    I'm not saying that's good or bad, just different than it used to be with film.
    Also, you seem to be saying that your camera does not already automatically do that, and you wonder why. Have I got it?
    I think they all try; it's just that some capture a wider dynamic range signal over noise than others (you get what you pay for?). What dynamic range and signal to noise ratio do you want that you are not getting?
    .
     
  5. Jacob, my best guess at this would be that shooting conditions vary so widely that it's better for the photographer to decide when to use exposure compensation to push the histogram to the right. The D300 does a superb job of metering but some situations, like a lot of bright snow, still "fool" the camera and are better adjusted individually. Would we all use exposure compensation for all scenes? I don't think we would and I don't think most of us would want the camera to do it for us. Could the camera's electronics be designed to recognize when to adjust it and do it? I don't know. Interesting to think about, though.
    Dick
     
  6. The matrix meter has (was it?) 1005 elements; the imaging sensor has 10 to 24 million. Thus a highlight may appear not blown according to the matrix meter while it is blown in the actual data recorded by the sensor. Therefore, to do ETTR, the camera would need to be in live view mode and adjust the exposure so that it satisfies the criteria just at the moment you take the actual picture. I don't see how ETTR could be done without iterating the exposure, which is slow, so it may not be realistic in a fast-changing situation.
    In any case a lot of people would prefer the image to look decent out of the camera with straight raw conversion, so that they don't have to adjust every single image. There may also be situations where a highlight should be blown to improve other parts of the image. I don't do ETTR at all, I aim to produce an exposure which requires minimal post-processing, since life is short.
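    To make the iteration problem concrete, here's a minimal sketch (Python, purely illustrative; capture_raw is a made-up callback, and no camera exposes anything like it) of what such a loop would have to do on the live view stream:

        import numpy as np

        RAW_MAX = 4095       # assume a 12-bit sensor for illustration
        CLIP_LIMIT = 0.001   # tolerate 0.1% clipped pixels (speculars)

        def auto_ettr_ev(capture_raw, ev=0.0, step=1/3, max_frames=8):
            # capture_raw(ev) is a hypothetical callback that meters one
            # live view frame at the given exposure compensation and
            # returns the raw pixel values as a numpy array
            for _ in range(max_frames):
                frame = capture_raw(ev)
                if np.mean(frame >= RAW_MAX) > CLIP_LIMIT:
                    return ev - step   # just crossed the wall: back off
                ev += step             # headroom remains: push right
            return ev                  # gave up after max_frames cycles

    Every trip around that loop costs a metering frame, which is exactly the slowness objection above.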
     
  7. .
    Jacob, more to the point, my Raw "developer" software offers +/- 2 stops of exposure compensation after the fact.
    If this is what you are after when trying to print the tone/chrome/zone placement and dynamic range and signal to noise ratio that you are after, then maybe your inquiry is not so much "why don't cameras blah-blah-blah" as much as you're really asking, "how do I find blah-blah-blah in my Raw developing software?".
    My point is that once we have Raw capture in our years and years of archives, camera exposure controls and smarts no longer matter, and we're gonna revisit our Raw image files themselves over the years and re-"develop" them to our newest criteria and savvy.
    My Raw "developer" permits that. What Raw "developer" are you using, and what printer/ink/paper is your target under what lighting? This is your target, yes?
    .
     
  8. Jacob is correct: ideally you would expose your picture so that the histogram is pushed to the right without clipping the highlights. The resulting RAW file might look pretty bad as captured, but processing that RAW file will yield the highest quality print. It would be nice if a camera could perform that "push to the right" automatically - but only for RAW files!
    I practice that type of shooting by checking the histogram after every shot and compensating exposure accordingly.
     
  9. .
    So, Juergen and Jacob, am I correct in assessing your comments that zone placement previsualization a la Ansel Adams is dead, and Raw capture is merely all about grabbing the absolutely most original scene dynamic range over noise as possible, and then we can whimsically place tone/chrome/zones in the output, according to our artistic satisfaction, as a separate step after capture?
    And anyone who tweaks exposure during capture is shortcutting the Raw capture accuracy process for their own convenience, but they are compromising "total Raw capture accuracy" of their stored Raw?
    Do we have two different processes here to hit the same initial output target? The old film-way compromises revisiting the Raw after capture with as much flexibility as the new histogram-way because too many film-like decisions were made during capture, and therefore the resulting Raw is compromised, compared to a maximized Raw that can be uncompromisingly reinterpreted on demand any time after capture?
    "Let the snow fall where it may -- adjust it to your satisfaction later in the lightroom"?
    .
     
  10. Peter: yes, if you use expose-to-the-right (ETTR) then the zone placement is done in the raw processor, not in the camera. It is similar to the old saw, "Expose for the shadows, develop for the highlights" (or was it the other way around? I could never remember). So when you expose, you optimize for maximum dynamic range, to keep as many options open for later as possible. When you process, you "place" the zones, realizing of course that you can alter the tone curve as needed.
    There's even a raw processor called LightZone that lets you think in terms of zones when you process.
    The problem with ETTR is that it then gets out of hand and becomes another technical cult. All sorts of processes are discussed on how to really get the maximum dynamic range, and are you getting the maximum dynamic range, and so on and on. Like many other technical discussions in photography, it has an allure that leads photographers into discussing it rather than practicing it.
     
  11. I didn't read all the other replies, but I will say this.
    It is quite acceptable to include specular highlights in photographs. If the algorithm always preserved detail in the brightest regions of the image, then we would lose too much in the shadows. In my experience, the Nikon metering does a great job of balancing these competing desires. If it's not quite right for a given situation, I just dial in some exposure compensation or go to manual exposure.
     
  12. If your Nikon system automatically meters for optimal dynamic range I'd encourage its use together with a check on the histogram. Clipped highlights are out of workable information range, and unless they're specular highlights, should be avoided. To others posting on this thread in favor of intentionally overexposing RAW capture just to rescue it in post processing, are you the ones who love to spend lots of time on the computer, in order to justify shooting RAW?
     
  13. .
    Earlier: "... To others posting on this thread in favor of intentionally overexposing RAW capture just to rescue it in post processing, are you the ones who love to spend lots of time on the computer, in order to justify shooting RAW? ..."​
    Howard, I don't read anyone as advocating that anyone should carelessly or otherwise mis-expose any capture.
    I see our exploration here as whether or not to previsualize our end-zones (so to speak) for immediate Raw capture as we did for immediate film capture, or instead to switch our capture thinking to NOT previsualize our end-zones, and instead capture maximum dynamic range over noise for immediate Raw, and then later use our Raw development software for expressing our output previsualization end-zones.
    In other words, capture snow with detail because the camera can, but then later we can develop with washed out snow if that was our previsualization intention.
    There is no right or wrong here, only choices.
    If we want specular highlights or snow without detail, do we do that during capture as we did with film, or do we now let digital capture get all the detail in Raw because it can, then wash out that detail after capture to satisfy our original artistic expression whims?
    There seems to be a voice calling for "because the camera CAN capture everything, we SHOULD let the camera capture everything". Hence the remark that if we don't want to PRINT everything, then we'd tune out zones after capture -- a new photographic opportunity provided only by modern digital Raw capture.
    I'm not expressing a vote one way or the other, only identifying our choices -- to be more aware.
    I think I've got it. I think Kuryan expressed it well, also. What do others think, and more importantly, what do others DO?
    .
     
  14. Do you really want to use the "expose to the right" rule on EVERY photo?
    How about a photo of large piece of navy blue fabric. If you "expose to the right" it's going to turn from navy blue to sky blue. Is that what you want? How about a photo of a dark-skinned lady in a long black gown against a dark background. Do you want her gown to look light gray and her skin to turn beige?
    "Expose to the right" is a technique. As with any technique, you have to understand its limitations so you'll know when to apply it and when to use another approach. If photography were about blindly following rules, we could program robots to go out and take our images while we sleep in and enjoy a leisurely breakfast.
     
  15. .
    Dang robots stole my copyright while I was sleeping! =8^o
    .
     
  16. [[To others posting on this thread in favor of intentionally overexposing RAW capture just to rescue it in post processing, are you the ones who love to spend lots of time on the computer, in order to justify shooting RAW?]]
    It must take a lot of work to so completely not understand the subject, the discussion and how it applies to photography.
    It doesn't take a lot of work to be a troll though, and that was demonstrated very well.
    Bravo.
     
  17. .
    Now, now ... I think Howard just falls into the category of:
    "digital photography okay,
    -- computer geek life not okay"​
    ... and I GET that!
    The thread really begs us to become aware of the additional challenge of becoming a slave to the technology ... or becoming master of it.
    Without exploratory discussions where we find everyone's fears, we're never gonna find how to climb out from under the technobabble and new edicts (ALWAYS EXPOSE TO THE RIGHT) and learn how we would decide for ourselves, and why. Hence, I can see reasons to ignore the histogram ETTR rule, and instead expose in ways that others would see as too far to the left or too far to the right -- but that's where I want and see my photographic communication!
    Harvey, I imagine you'd like everything after the shutter release action to be out of your thoughts, to simply accurately reflect the decisions you made before shutter release, and to not have to revisit and tweak your captures afterward, right?
    Others, including myself, I imagine we wonder if there might be an advantage to having an "accurate", unflavored Raw capture that does not reflect artistic exposure decisions yet, and then only later handle those artistic decisions after capture in post processing, whenever we want to, including making a variety of opposing decisions we could never have tried years ago when stuck with film that could only be developed one way, and one way only.
    Harvey, I think you'd see that post processing "demand" as painful and unrewarding. Right?
    So, in response to Jacob's opening question, we have at least two camps:
    - all artistic decisions happen before shutter release, all subsequent processing and print-out steps just accept and presume defaults, any reinterpretations that cannot get what they want from the original capture require reshooting
    Versus:
    - all capture decisions are for broadest technical capture accuracy, and then artistic decision making can come after capture, whimsically, on demand, whenever, differently with each revisiting ... though "defaults" might be okay, also.​
    Have I got it? Yes, no, maybe? Other?
    .
     
  18. How about a photo of large piece of navy blue fabric. If you "expose to the right" it's going to turn from navy blue to sky blue. Is that what you want? How about a photo of a dark-skinned lady in a long black gown against a dark background. Do you want her gown to look light gray and her skin to turn beige?​
    They won't "look" anything because it's just the raw image that's exposed that way. They'll "look" like whatever you choose to make them "look" in the raw processor. Default raw processor settings may well make them "look" very light, but you adjust the controls until they "look" they way you want them to.
    How is this any different from zone placement in chrome exposures? If you shoot chrome and decide to place the black gown in Zone V, it will look medium grey. That's your choice - who am I to say you absolutely must place it in Zone II? All that ETTR does is defer the zone placement to the processing stage rather than the exposure stage.
    By choosing ETTR, you can decide to place the dark gown in Zone II and you will not get posterization. If you had chosen an exposure that favored a "left" histogram, placing in Zone II might cause blotchy shadows.
     
  19. Oh, and about always using ETTR? I don't. When I'm doing street shooting, I don't, because it slows me down. I just accept what the matrix meter decides. I do use ETTR when I'm shooting from a tripod and I have the time, because it improves shadow noise.
    Another way to think of ETTR is that by increasing exposure, it's basically equivalent to decreasing ISO. That's why most ETTR practitioners only use it on the base ISO setting. It doesn't make sense to use it on a higher ISO, because you can achieve the same result by decreasing the ISO.
     
  20. One more thing and I promise I'll shut up. To answer, or at least address, the OP's question: I agree with the many other posts that have pointed out an autoexposure system that always suppresses blown highlights won't be optimal. You have to let some highlights blow. It's a case-by-case decision.
     
  21. I think some posters are making this more difficult than it needs to be. I agree with the OP and have suggested this sort of thing a number of times before. It should be a relatively simple process to have a setting which allows you to capture ETTR or non-ETTR (i.e. situation normal) for those who whine that they don't want to do ETTR all the time (or those who have no understanding of ETTR - such as the blue fabric poster). On the matter of specular highlights, you could have a menu setting which allows the user to set a percentage cutoff for highlight clipping. That is, say, 1% of the scene clipped, or whatever. You could also simply get the in-camera processing to jpg to scale back the exposure by the appropriate amount so your jpgs wouldn't be washed out.
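    Something like the following sketch is all the firmware would need (Python standing in for firmware; the function names and the 1% default are mine, not any manufacturer's):

        import numpy as np

        def clip_ok(raw, raw_max=4095, cutoff=0.01):
            # accept the exposure if at most 'cutoff' of the scene clips,
            # e.g. 1% to leave room for specular highlights
            return np.mean(raw >= raw_max) <= cutoff

        def jpg_preview(raw, pushed_ev):
            # scale the linear data back down by however far the camera
            # pushed right, so the in-camera jpg isn't washed out
            return raw / (2.0 ** pushed_ev)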
     
  22. They won't "look" anything because it's just the raw image that's exposed that way. They'll "look" like whatever you choose to make them "look" in the raw processor. Default raw processor settings may well make them "look" very light, but you adjust the controls until they "look" they way you want them to.​
    It's a great technique to use when you need it, but it's not an approach that I have the time or desire to apply on EVERY exposure. Who has that kind of time? In most cases you can "get it right" in the camera.
    Further, digital sensors are not all-powerful. You might have to overexpose some highlights in order to save some shadows. By following ETTR as a rule, there will be situations where you LOSE shadow detail because you refused to clip a highlight or two.
    How is this any different from zone placement in chrome exposures? If you shoot chrome and decide to place the black gown in Zone V, it will look medium grey. That's your choice - who am I to say you absolutely must place it in Zone II? All that ETTR does is defer the zone placement to the processing stage rather than the exposure stage.​
    I think it's more similar to zone placement on black and white film, for two reasons. First, as you know, positive film has such a narrow latitude that you pretty much have to expose it where you want it. It's not really practical to overexpose slide film by two stops in order to capture shadow detail. You'll either need to bracket exposures or use split ND filters in order to register detail in a high-contrast scene. Second, the narrow latitude of positive film is as much a tool as it is a limitation. Black shadow areas look perfectly beautiful in a chrome; there's no posterization effect to worry about!
    By choosing ETTR, you can decide to place the dark gown in Zone II and you will not get posterization. If you had chosen an exposure that favored a "left" histogram, placing in Zone II might cause blotchy shadows.​
    I guess I'll have to admit some ignorance here, but when you darken the image in the RAW file are you not losing some bits in the process?
    It's too bad that Rembrandt didn't practice ETTR. He could have eliminated those distracting shadows! :)
     
  23. I guess I'll have to admit some ignorance here, but when you darken the image in the RAW file are you not losing some bits in the process?​
    Not really. You may end up with 'empty' bits in the highlight end of the image, but this is what you would have had if you had exposed "normally" instead. ETTR when done correctly will give you the exact same image but with far less noise in the shadow regions.
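    A toy demonstration of the idea (synthetic Poisson "photon counts", shot noise only, numbers invented): push the same shadow tone up two stops, divide it back down in linear space, and the rendered tone is identical while the noise is halved.

        import numpy as np

        rng = np.random.default_rng(0)
        tone = 50                                    # linear shadow value
        normal = rng.poisson(tone, 100_000)
        ettr = rng.poisson(tone * 4, 100_000) / 4.0  # +2 stops, pulled down

        print(normal.mean(), ettr.mean())  # both ~50: same rendered tone
        print(normal.std(), ettr.std())    # ~7.1 vs ~3.5: half the noise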
     
  24. None of this is strictly relevant to the Nikon Forum and the thread is showing symptoms of turning into yet another endlessly circuitous argument in which respondents debate each other rather than replying to the original poster. This discussion is being transferred to the Casual Conversations Forum.
     
  25. If this is true, wouldn't it be a great feature to have the camera do this for you, e.g., automatically chose the correct amount of exposure compensation? What am I missing?​
    As Dan Brown already noted, if there are specular highlights (e.g. bright reflections off glass or other polished surfaces) or actual light sources within the frame, then you would seriously underexpose the rest of the image by adjusting the exposure to not blow those highlights.
     
  26. I think this is why we film refugees who have been dragged kicking and screaming into digital, wish for the good old days!
     
  27. .
    "Getting it right " in camera is a photographer's artistic decision, and cannot be automated as an option "ETTR Exposure To The Right" rule since no programming foresight can anticipate what light coming through the lens is subject light the photographer wants, and what light is miscible and can be allowed to saturate. Is UV and IR important or not? How much? Does the photographer want detail in the water's reflection of the sun, or detail in the water's reflection of the sky? Hence the paucity 1,000:1 or 100,000:1 dynamic range sensors to match everyday scenes. If we're stuck with recording scenes with today's limited dynamic range sensors (~5-10:1), then we still have to be there as photographers making artistic decisions.
    "ETTR when done correctly " can't be done "correctly"! ETTR can't be locked down as exactly only one way of doing ETTR, considering in-camera live histograms are JPG based, and who knows what metering variables versus sensor variables don't match, not just number of points used in metering versus points in the sensor, but also differing response curves, and so on. And if specular highlights are "empty", does one ETTR specialist discard them while another ETTR specialist includes them? ETTR is "just" another artistic photographer tool to interpret and apply by experience of one's own camera and metering and choices. Hey, we're back to artistic choices again!
    "ETTR defers the zone placement to the processing stage rather than the exposure stage " is pretty much the promise and problem here, and still, the photographer must choose their compromise:
    - capture highlight detail and bring up shadows and noise later,
    - capture shadow detail and fake the highlight detail later.
    Decisions, decisions. ARTISTIC decisions, and that's why the copyright goes to the PHOTOGRAPHER, not to the gear or the gear designers.
    "Film refugees dragged kicking and screaming into digital, wish for the good old days " -- what's wrong with turning our camera around, slapping on our http://www.expodisc.com/ and setting our exposure for our interpretation of the incident light and our assessment of the scene, then turning the camera around, taking off the ExpoDisc, and shooting? Digital has nothing to do with it EXCEPT the misuse of immediately available little tiny JPG UFHOs Unidentified Flying HistOgrams.
    Photographer -- it's a tough job, but somebody's gotta do it!
    .
    PS - "transferred to the Casual Conversations Forum " -- cool tool!
    "Admin" -- it's a tough job, but somebody's gotta do it!
    Thanks, Lex.
    .
     
  28. "ETTR when done correctly " can't be done "correctly"!​
    Of course it can, and it IS done correctly numerous times a day by photographers around the world. The principle of ETTR is this: 'Overexpose' a scene by X stops over what you the photographer would consider a 'correct' exposure, and then in post-processing, reduce the exposure by X stops to 'normalize' the image. All this worry about tone curves and gamma and whatnot mentioned by posters above is irrelevant. As long as you have a capable raw converter which works directly on the raw sensor data in a linear fashion (e.g. Lightroom or DCRaw; but not Canon's DPP), your 'normalized' image will look exactly the same as your 'correctly' exposed image, except that it will have significantly better shadow noise and detail.
    ETTR can't be locked down as exactly only one way of doing ETTR, considering in-camera live histograms are JPG based, and who knows what metering variables versus sensor variables don't match, not just number of points used in metering versus points in the sensor, but also differing response curves, and so on. And if specular highlights are "empty", does one ETTR specialist discard them while another ETTR specialist includes them? ETTR is "just" another artistic photographer tool to interpret and apply by experience of one's own camera and metering and choices.​
    No argument there. That's the beauty of digital however. Have an automatic ETTR setting on your camera with some user defined preset parameters. If you don't like the ETTR it spits out at you, then you can do it yourself via the lcd histogram. But I would be willing to bet that the camera processor and the exposure sensor would do a much better job of it than the camera operator trying to do the same thing on possibly a luminosity histogram of a gamma encoded, white balanced, JPG no less!
     
  29. To the ORIGINAL POST:

    Nikon DSLRs already do this.

    This is what the blinking highlights in image review are all about. All you have to do is resort to chimping once in a while.
    [​IMG]
    Chimpers
     
  30. .
    Okay, for our archives, let's define the word "correct":
    ... and it seems we agree regarding ETTR even if we think we disagree -- since there is no ONE "correct" way, then all ways are equally correct, and all ways are equally incorrect, and therefore, the use of the word "correct" is useless in this thread when trying to define how to use ETTR Expose To The Right "the correct" way -- unless we mean "according to your own artistic choices and whims", in which case, "correct" is not an absolute term. After all, what does saying "use ETTR correctly according to your own criteria" mean other than "there is NO 'correct' way to use ETTR, but pick a way that serves you appropriately and don't get in your own way following rules you don't understand or that you misunderstand, rules that then work against your photographic goals". Argh!?
    Define ETTR by Bernie West: "... The principle of ETTR is this: 'Overexpose' a scene by X stops over what you the photographer would consider a 'correct' exposure, and then in post-processing, reduce the exposure by X stops to 'normalize' the image ..." Got it. Whatever PLUS you did in-camera over camera-recommended exposure, do an equivalent MINUS in Raw development. Uh-huh.
    ----------
    Define "ETTR" by everyone else:
    John Shaw http://www.johnshawphoto.com/tips/workflow_outline.html
    JPEG: Centered histogram; no clipping.
    RAW: Histogram weighted to the right side; no clipping.​
    ... hmmm. Does "weighted to the right" mean ETTR?
    ----------
    Andrew Rodney : http://www.digitalphotopro.com/technique/camera-technique/exposing-for-raw.html "... You must [ MUST! YOU HAVE NO CHOICE, RESISTANCE IS FUTILE!] attempt to fit the dynamic range of the scene within the dynamic range the capture device can record ... Expose to place as much data within this linear-encoded RAW image without losing highlight values you wish to reproduce [ = PERSONAL WHIM/ARTISTIC CHOICE ] ... ETTR presents a few problems, one being that the LCD camera preview, including the histogram and clipping indicators, isn't based on the linear RAW data. Instead, this preview is based on the rendered gamma-corrected JPEG your camera is set to produce, even if you don't save that JPEG and only shoot a RAW file! If your goal is to produce the best possible exposure for RAW, using the ETTR technique, the feedback on the LCD could steer you in the wrong direction ... " So Rodney has yet to incorporate artistic whim in the calculation, instead using the camera as a presumed scientific recordation device, not an artistic expression device.
    ----------
    Rags Gardner http://www.rags-int-inc.com/PhotoTechStuff/ETTR/ "... ETTR ... promoted ... as a replacement for traditional exposure metering ... premise is that you can validate camera metering by simply reading the histogram in the camera’s preview window ... based on some basic misunderstandings about digital photographic technology.
    - the premise that each bit level in a digitally encoded image represents an exposure stop
    - the premise that all digital cameras capture light in a perfectly linear fashion
    - the premise that the histogram represents the raw image data captured by the camera
    ... briefly address each of these [misunderstandings] ... "
    See also: http://www.rags-int-inc.com/PhotoTechStuff/
    Oh, you all can Google search for more and more and more on ETTR than it makes any sense to quote here!
    Me? I've been reading and re-reading the 30-or-more pages that make up a small book here, by Emil Martinec : http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/ I won't even try to pull a quote.
    It's really fascinating, and I got the point in exploring the scientific versus artistic use of our cameras . If we are not on the same page here, it may look like we disagree when in fact, we're just on different pages, and probably agree, even if we use different exposure techniques to accomplish our own personal whimsical artistic -- and scientific -- photographic goals.
    .
     
  31. Speaking of blown highlights, I find it exceedingly funny or silly when I hear about digital photographers avoiding blown highlights at any cost, and for all their compositions. They will see another's great landscape, but pooh-pooh it because 1/10th of a cloud is blown... how ridiculous!
    This policy of "no clipping" is at best foolish.
    The dirty secret is: blown highlights are a reasonable compromise in order to get the subject exposed properly, provided the blown highlights are not distracting and not taking away from other elements of the composition. You've got to be judicious about it, and never, never anal.
    Often, a good comp is ruined because the photographer was too anal about making sure not even one pixel is blown. Often these types show underexposed, flat images, lacking detail in the shadows.
    If you're going to shoot digital, then accept the fact right up front that exposure is going to be compromised. What part of it is going to "pay"? The shadows? The highlights? A little of both? Until the DSLR makers provide us much wider DR, this is a compromise we all have to make.
     
  32. Peter....
    I said
    what you the photographer would consider a 'correct' exposure​
    You said
    the use of the word "correct" is useless in this thread when trying to define how to use ETTR Expose To The Right "the correct" way -- unless we mean [aha!] "according your your own artistic choices and whims"​
    We are in agreement. You are making this more complicated than it needs to be. The question was about a setting on the camera to have the camera do ETTR automatically. It would be easy to implement, and could be turned off if one doesn't like the results, therefore, why NOT have it? Simple, really.
     
  33. .
    The reason ETTR cannot be automated is because ETTR requires choices only an informed decision maker can make, because YOUR ETTR is not MY ETTR is not the next person's ETTR, and so on. There is no "correct" ETTR, so if we agree that there is no "correct" ETTR, then there is no "one" ETTR that any so-called photographic scientist can program into a camera system. G'luck tryin' though.
    Anyway, for blown highlights, I often fill them with a small percent noise so when printed, people think there's detail there. Works for me.
    .
     
    The reason ETTR cannot be automated is because ETTR requires choices only an informed decision maker can make, because YOUR ETTR is not MY ETTR is not the next person's ETTR, and so on. There is no "correct" ETTR, so if we agree that there is no "correct" ETTR, then there is no "one" ETTR that any so-called photographic scientist can program into a camera system. G'luck tryin' though.​
    The point is the individual can set parameters through the menu to customize the camera's implementation of ETTR. It's still an artistic decision because you can review the results straight after.
    Anyway, the real point that you are missing is this: the user can't make an informed decision about ETTR from looking at a histogram (often only luminosity) of a jpg on a tiny lcd monitor. The jpg is white balanced (the major reason why jpgs blow highlights that aren't blown in the raw data), and the blinkies on the screen are usually a green-weighted luminosity reading (i.e. you could be totally blowing the red channel, but the blinkies won't show it). So attempting to do ETTR this way is basically just guesswork and experience. And this isn't any more artistic or accurate than having the camera processor do the assessment on the LINEAR raw data.
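    The red channel point is easy to demonstrate with one made-up pixel and standard Rec. 709 luma weights (an assumption; cameras vary in how they weight the blinkies):

        red, green, blue = 255.0, 180.0, 60.0   # red channel fully clipped

        luma = 0.2126 * red + 0.7152 * green + 0.0722 * blue
        print(luma)         # ~187 of 255: luminance blinkies stay quiet
        print(red >= 255)   # True: the red channel is blown anyway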
     
  35. But Bernie, parameters on a menu often mean the processing is for the ENTIRE image. This is why JPG and in-camera settings like sharpening, contrast, saturation, noise reduction SUCK. It is big bang. All or nothing. Bad.
    Peter nailed it. It is far better to let the human being decide how far to the right to expose. And a trained human can do this far "better" than any camera ever made. This is the same argument for going with raw over jpg... raw means the human decides and NOT the machine.
    And I realize the in-cam histogram is based on the teeny-tiny jpg gen'd for the purpose of the LCD, but it is a good approximation, and over time one can "calibrate" their perception for a particular camera model to ascertain how blown it really is.
     
  36. .
    Earlier: "... It is far better to let the human being decide how far to the right to expose ... "​
    I don't think manual photographer's decisions are "better" than automation ever could be per se, I just think ETTR can't be predictively or accurately automated to everyone's or anyone's criteria or satisfaction -- we have too many differing and competing definitions of ETTR right here in this thread to consider any one of them "the one".
    .
     
    Peter nailed it. It is far better to let the human being decide how far to the right to expose. And a trained human can do this far "better" than any camera ever made.​
    The point I am trying to make is that the human can't make an informed decision because they don't have access to the linear sensor data. The camera does. Anything you do to get the "best" ETTR for yourself will be at best a 'guess'. Therefore, I say you have little to lose letting the camera have a go at it itself.
     
  38. Disagree Bernie... the in-camera histo is more than enough to ascertain good exposure, even if the jpg image on the back LCD it's based upon is not 100% representative of the raw image.
    Peter, the human-made manual choices are exceedingly better than the machine's.
    How can a machine know the elements of a composition? It doesn't, and it never will.
    See what I mean?
     
  39. .
    Dan, you don't mistakenly think we disagree, do you? I'm the one suggesting that an auto-ETTR setting on a camera would mean nothing without a detailed explanation in the owner's instruction manual as to which of the many available ETTR choices that camera's programmers chose.
    More important to me would be COMPLETE and ACCURATE information provided to the photographer so they could decide for themselves, and sadly, the data we now get is so approximate, and unique model to model, that all we are learning is the quirks of one model, and we cannot take that experience to our next camera. Oh well.
    However, the camera marketers do disagree, and want us to believe there is an image database inside our cameras that new scenes get compared to, so really, we don't have to think, and we don't have to be afraid to buy a DSLR.
    I won't share all the details of my historical exploration of fuzzy logic patents as applied to other cameras since the mid 1990s that successfully used metering pattern dynamic range information, and focus point information, and lens focus distance information to come up with a fuzzy solution on the fly, also incorporating high frequency movement in the auto focus video stream to adjust shutter speed for subject movement, and low frequency movement in the auto focus video stream to adjust shutter speed to avoid camera shake, while already incorporating lens focal length in the equation -- dynamic programming that anticipated non-specific groups of complex subject situations has been around for years!
    On a pixel level, the baton has been passed on to Apical Imaging, UK, see
    http://www.apical-imaging.com/autoexposure
    and browse their web site for some amazingly efficient chip-level exposure toys, some as small as 90 kb!
    .
     
  40. .
    Earlier: "... human made manual choices are exceedingly better then the machine's ..."​
    This is where I disagree in principle, though I agree in fact, sadly, because I find many cameras to be designed by marketers, not photographic engineers.
    MY joy in learning any new camera is in my (sometimes arduous but always rewarding) reverse-engineering of the mind of the photographic engineers who brought their genius and skill to me through the mechanical and electronic programming choices in their camera designs. I marvel as I finally figure out the benefits of first one feature, then another, and eventually there is no camera, no gear here, just seamless photography, vision recording and playback.
    Some call it ergonomics, but it's more than just putting a few controls at hand, er finger. It's also including and coordinating controls whose features and benefits make immediate sense to photographic situations. On my camera, for instance, I've set the back thumb dial for plus/minus compensation and the front forefinger dial for program shift, essentially giving me aperture or shutter priority and exposure compensation at my finger tips on demand without moving my eye from the viewfinder or using my other hand. Then "click".
    And so on to the next visual experience to record and playback. Hence my call for total system consideration, rather than one feature or trick like ETTR.
    .
     
  41. Peter, do you really want a machine to make decisions on saturation, sharpening, contrast, noise abatement, curves, levels, B&W conversion, exposure compensation? If you do, then never shoot raw.

    Clearly I am not talking about ergonomics here. Remember, machines are not creative. Only humans are.

    What if you want those tweaks to be surgically applied? How does the camera know what and where the elements of a composition are? How does the camera know that you want the model's right and NOT left cheek saturated? How does the camera know that you want to sharpen just the eyes and teeth and not the hair? I can go on and on with millions of "what ifs". A camera often applies tweaks "big bang" over the whole comp.
    As to ETTR, well that is one of the most important "tricks" one can do with DSLR's. You minimize it too much. Until DSLRs with 3-4 more stops of DR are available to us, ETTR is your best friend (assuming you shoot raw of course). I would never want the camera to figure out exposure beyond standard, or in other words, I would never want the camera to ETTR for me automatically. This is not to suggest that I find no use for metering modes...but when it comes to adjustment of exposure, it is far better for the human to do this.
     
  42. How's this for an option? Learn how to use an exposure meter, either the one in the camera and/or an incident meter.
    An exposure meter tells you more information than a histogram and in skilled hands it is faster and more accurate when it comes to determining the exposure/dynamic range of your subject.
     
  43. How's this for an option? Learn how to use an exposure meter, either the one in the camera and/or an incident meter.​
    I'm with G.V. on this one. Expose the photo how YOU want it to look, not how the camera deems best. If we all let our cameras pick exposure 100% of the time, people looking at photos would think there was no such thing as white snow.
     
    Disagree Bernie... the in-camera histo is more than enough to ascertain good exposure, even if the jpg image on the back LCD it's based upon is not 100% representative of the raw image.​
    I don't think you know how unrepresentative the jpg is of the sensor data. If you did, you would understand that "creative" considerations in the application of ETTR come a distant second to "technical" considerations.
     
  45. .
    Dan, I think we disagree about different things, and you have misunderstood me; perhaps I have not been clear, let alone succinct.
    1 - I never suggested that I think any camera can (or should) make (or take credit for) artistic choices. However, I am suggesting that any camera's automated choices be well documented, and or turn-off-able. No camera has ETTR documentation, and ETTR itself is not unambiguously documented "out there" as a science, but, rather, ETTR is an ersatz "feel good" target of DnR/SNRs (pronounced "dinner sinners": Dynamic Range over Signal to Noise Ratio maximizers) who can't agree with each other how to get there. ETTR has nothing to do with tone/chrome/zone placement in the final print or display image, nor does it assist in knowledgeably managing tone/chrome/zone placement in each system module getting there.
    2 - I don't underestimate the general principles and usefulness of ETTR-type decisions IF that is supportive of a photographer's goals. However I also don't overestimate any camera's ability to accurately support any photographer's intended use of ETTR as assessed on any in-camera histogram. None do. Each is whimsical, and experience gained in surmounting ETTR to one's own personal standards on one camera may or may not be experience that is transferable to the next camera.
    Great thread, Jacob. This is a subject that needs careful and exhaustive scrutinizing. Google should lead many web searchers to photo.net because of this thread. I also thank everyone for their patience as curmudgeons wake up and try to get verbal at all, let alone try to bring clarity to our sharing what we know in ways that are useful and referential.
    .
     
  46. The main problem with ETTR is you can't consistently fit or accurately gauge the dynamic range of any given scene against the capabilities of any given camera sensor by pushing exposure. All you have to go by is the flashing blinkies on the LCD. And sometimes they're too small to see on the screen, so that by the time the image is examined in the raw converter, that converter's interpretation of the data (ACR in particular) has blown out more data than realized.
    I've been trying to ETTR shooting outdoors and it's just a big PITA readjusting exposure to gain what little benefit ETTR brings. It slows me down considerably, to the point that I've lost quite a few shots, especially those of white birds or other animals that move in brightly lit scenes, futzing with the exposure adjustments.
    I think what I'll do is take the advice given to me here about a year ago and just set the camera to Auto Exposure/or Av @ f8 and metering to Center Weighted and just shoot. Even though my camera's metering tends to underexpose, I don't see any of those 4096 levels, half of which are supposedly captured in the first stop, when editing the image, whether I ETTR or not.
    Frankly, I get better shots making sure the in-camera histogram shows the highlight tapered end point just off and away from the right by 1/8th, dividing the entire graph into eight sections. This is of course allowing for speculars to spike, but even that is hit or miss, especially with clouds and chrome metal work on cars. Some you want blown but some you want retained, and it's damn near impossible to tell how far you've gone viewing the LCD, with or without flashing blinkies and histogram.
    And practicing until familiar with the DSLR's exposure latitudes doesn't work either, because of the linear nature of the sensor. One scene seems to fit nicely using one exposure setting going by what's on the LCD preview, blinkies and histogram. Turn just slightly, changing the makeup of the scene within the frame, and BANG! all of a sudden you've got blown highlights or severe underexposure with the same exposure.
     
  47. .
    Earlier: "... white birds or other animals that move in brightly lit scenes futzing with the exposure adjusts ... "​
    =8^o THAT's the problem - keep those birds and other animals from futzing with your camera, and then you'll be okay! PITA or PETA, you choose! I'll have mine on wry. ;-)
    .
     
  48. I don't think you know how unrepresentative the jpg is of the sensor data. If you did, you would understand that "creative" considerations in the application of ETTR come a distant second to "technical" considerations.​
    Actually I do know very well how the histogram and the corresponding raw image correlate.

    You can learn this over time. It's about learning one's camera. How the jpg based histo corresponds to the actual raw, even if that histo is not based on that raw.

    In application, and practice, ETTR's creative aspects come first, over the technical. And because one is shooting raw, one usually has sufficient wiggle room during raw conversion and post processing.
    Tom, thousands use ETTR all the time, and see the differences and benefits over just going with the camera's own metering. The reason you can't see it is because you are too new to it, have not mastered it, and perhaps need to fine tune your post processing procedures. ETTR means more details in the shadows, yet the highlights don't have to be sacrificed.
     
  49. News Flash!

    Nikon announces new generation of DSLRs that not only automatically set ETTR photos without blown hilights, but also go out AND TAKE THE PHOTOS FOR YOU!

    Here is a sneak peek at the new camera!
    [​IMG]
     
  50. Dan,
    I respectfully disagree with you, and my name is Tim, not Tom. I've been researching digital imaging and photo restoration for ten years, studying, shooting and processing raw a little over a year. I'm not a newbie and do know my way around making an image look good in processing. I base my conclusions about ETTR on pure observation shooting real world scenes and processing in ACR and other raw converters. One of the main problems shooting ETTR is not knowing how a raw converter will interpret blown detail.
    ACR is one of these converters that interprets the data differently from other converters, because its odd truncated previews don't really show the effect of editing all those levels. I'm assuming it renders previews this way for the sake of speed, because a competing raw converter actually renders the sensor's linear data pixel for pixel but is painfully slow at showing all those levels while editing.
    Another cause of not being able to see all these levels is that the wider the dynamic range of the scene captured, the fewer levels there are available to be distributed in rendering this amount of detail, which brings me back to my original point about not being able to gauge the dynamic range of a scene well enough to know how far to push exposure to the right. It's just not that simple determining this out in the field and knowing for sure what data has been captured exposing this way.
    If you've ever underexposed a wide dynamic range scene, or any scene for that matter, and increased exposure gradually in 1/3 stop increments, and watched the right end point of the in-camera histogram gradually move closer and closer to the right but abruptly and unexpectedly spike to the right with the next 1/3 stop increase, you have seen the linear behavior of a sensor. And you'll never know just by looking at a scene what exposure setting will cause this abrupt jump toward full saturation. You can try to calibrate your exposure for this, but it's like predicting the position of the gauge on a dwell and tach meter. The photon count in outdoor sunlight in any given scene is impossible to gauge and predict for.
    I've included some screenshots of a scene I shot using ETTR whose dynamic range was beyond the 12bit A/D converter processor of the camera. I posted a topic a while back on this subject over at Luminous Landscape and was told by a very knowledgeable member using a raw processing analyzer that I exposed this image quite well.
    The first screenshot is of rock detail and the lack of levels demonstrated by posterization using ACR's curve tool. Note that ACR's mandatory noise reduction has nothing to do with the posterization. The second is the same rock detail using another raw converter, Raw Developer, which shows a pixel-for-pixel preview and a curve tool which DOES show more levels making up the detail in the rock than ACR. It's still not the amount of levels you'd expect in the first stop of brightness for such a scene, which causes me to conclude that the dynamic range of that scene contributed to the lack of levels seen in the preview. It's for this reason I don't trust ETTR shooting outdoors.
    00StWt-119941584.jpg
     
  51. Here's the entire image to give an idea of the dynamic range of this scene which was determined by Raw Analyzer to be around 8 stops. Examining the original Pentax K100D PEF shows people very faintly in the blackness of that cave which supports the dynamic range claim.
    00StXK-119945584.jpg
     
  52. Here's an 8 bit minilab jpeg scan of a Colorado scene taken with an $80 Olympus P&S on Kodak HD 400 negative film, with a similar curve applied in Photoshop to bring out the same level of detail as the rock in the shot of the cave. THAT'S an 8 BIT data jpeg preview, not the 16 bit data ProPhotoRGB preview seen in the raw converter screenshots!
    To be more clear, please note the dynamic range of the Colorado scene isn't as wide as the cave image's, and all video previews are in 8 bit anyway, but that still doesn't explain why I'd get posterization in the rocks. I welcome any correction or explanation as to how I'm wrong or misunderstanding this.
    00StY5-119951684.jpg
     
  53. Tim, I think you're making way too much out of this. I've been processing raw with Canon DSLRs going on 8 years. And let me tell you, I know I have a great comp when the raw is flat, drab, lacking punch, lacking contrast... that means I got the most information possible from the sensor, and the ONLY way to do this is to ETTR with judicious blowing of highlights on elements of the comp that matter the least or not at all. And it is these types of raws, which I described here, that really come alive with vigor and punch during the post processing phase.
    A master violin player can place his fingers on the neck in PRECISE positions. Every time, without error in tone. And he does this WITHOUT frets, as a guitar has.
    Raw and the jpg histo is analogous to a fretless violin. When I shoot raw, the histo based on the corresponding jpg is helpful because over many years, I have learned to know how far beyond the right most edge I can safely go. Same for the left most edge.
    In digital the worst thing anyone can do is intentionally under-expose. Doing this might be nice for the highlights, but kiss the shadows goodbye. And since there is exceedingly more information in the right-most stops of the histo, it is far better to over-expose with judicious highlight clipping.
    I shoot raw ETTR and I very often see an improvement in details over conventional shooting.
    I think you need to approach this not from a technical angle (quantitative) but rather from a qualitative angle, just like the violin master that plays his fretless instrument. Don't get lost in the physics, the diagrams...
    The histo is not perfectly accurate, however one can calibrate one's camera workflow to work with it, and to astounding results, to be sure.
    Shooting ETTR for weddings means I can often get detail in the bride's white white dress, yet see the fibers of the groom's black black tux. You can not do this unless you ETTR, and only if it is done right, with proper histo interpretation.
    With Raw one must never shoot to the image on the LCD because it lies, although the histo back there is exceedingly helpful. The other thing one must never do when shooting raw is get the exposure "perfect". Often if the raw pre-mastered image is flat, drab, blah, then the exposure probably was done right. In other words, when shooting raw it is often not good to get the exposure "perfect" at the time of shutter release... shooting jpg the opposite is true.
     
  54. MOREOVER, Tim, don't make the mistake of correlating dynamic range with bit depth. There is no correlation, other than that bit depth simply determines how thin or thick each slice of the DR pie is. In other words, the pie is the dynamic range, and the size of this pie does not change when the bit depth changes. When bit depth changes, the size of each slice changes, but the DR is constant.
    For example, for 8 bits/channel, the darkest tone is value 0. Same for 12, 14, 16, 500 bit depth. The brightest tone for 8 bits/channel is value 255, and for 12 bit it is 4095, yet tone 255 in 8 bits is equal to tone 4,095 in 12 bits. Again, bit depth does not define dynamic range. And think about it: the sensor is an analog instrument that outputs an analog signal, and a particular DR, to the A/D converter. This data is not even digitized until the A/D does its job, but before it gets to the A/D converter, the DR has already been defined. In other words, DR is an analog term and not a digital term.
     
  55. .
    Earlier: "... One of the main problems shooting ETTR is not knowing how a raw converter will interpret blown detail ... "​
    Ahhh, there's another problem. ETTR "should" be about the UNINTERPRETED Raw, not the interpreted Raw. I thought the whole point of ETTR was to maximize storing the scene dynamic range above noise into the Raw file itself, and so ETTR has nothing to do with subsequent interpreters. Does anyone think ETTR is aimed at satisfying different or specific Raw interpreters? If so, that opens up yet another whole random mess of ambiguous targets for ETTR: "I use Bibble, so I ETTR this way ..." versus "I use DxO so I have to ETTR that way ..." versus "I use ..." and so on. OMFG NOOOOOooooooo!
    Amazing thread, this -- Luminous Landscape opened a can of worms when proposing ETTR! I wonder what the next fashion fad will be in "new" photographic techniques? April IS just around the corner, get ready ...
    .
     
  56. .
    Earlier: "... a scene ... whose dynamic range was beyond the 12bit A/D converter processor of the camera ... "​
    Another misapprehension.
    Bit depth renders curves/tone-transitions smoothly, or not, depending on the number of bits.
    Dynamic range is unrelated to bit depth, and a 2-bit image can have more dynamic range than a 16 bit image. "Can."
    Try it -- make a 2 bit image with black and white parts, and make a 16 bit image with gray parts only.
    See?
    Bit depth and dynamic range are unrelated.
    Otherwise, we're hammering out this ETTR thing quite well, wouldn't you say? =8^o
    .
     
  57. I hate to be the one that is responsible for turning this thread into a poo-storm, but I am going to have to do it. Captured dynamic range is definitely related to A/D converter bit depth. Before flying off the handle, please carefully consider what I am about to say:
    When a scene contains a DR (dynamic range) greater than the sensor can handle, the MAXIMUM number of stops that can be captured by the camera and converted to an image (or raw) file is the bit depth of the ADC. That is, a 14 bit ADC can theoretically represent a scene DR of 14 stops. In reality, the bottom stops are full of noise and it is impossible to distinguish the signal from the noise. So in reality, you end up with something like 8-10 stops of recordable DR. The reason this is so is because of the linear nature of the sensor and the ADC. I don't have the energy to go through all the details, and in reality, you mightn't even care. So if you want more details let me know and I will discuss them. But if you really want to know all the nitty gritty, you can check this thread out. Last I looked it was just short of 1000 posts! (hehehe:). Towards the end is where I give some actual examples using values. Cheers.
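    Here's the arithmetic in miniature (a rough sketch of the argument, nothing camera-specific):

        bits = 14
        full_scale = 2 ** bits               # 16384 code values, linear ADC
        for stop in range(1, bits + 1):
            codes = full_scale // 2 ** stop  # codes left for this stop down
            print(f"stop {stop} below saturation: {codes} code values")
        # stop 1 gets 8192 codes; stop 14 gets exactly 1 -- past that, a
        # linear converter has no codes left, whatever the sensor could see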
     
  58. .
    Though a 2-bit AD converter (Sanyo, right?) should still produce a black and white image of whatever dynamic range is presentable in the final display medium, and a 39.7-bit AD converter (Leica, right?) of a graycard image will still result in a presentation image with a low dynamic range, I think I understand your point. I also think we are starting to wander waay off on different points (starting to?), and ETTR is no longer on the original poster's mind (Jacob may have left photography altogether by now). I'm gonna read the reference:
    ... but that's LAST YEAR'S thread, so how up-to-date could it be? =8^o.
    And I'm not in it, so how relevant could it be? ;-)
    (Hurry up April!)
    .
    PS - How'd you COUNT the number of posts in the other thread?
    .
     
  59. (Jacob may have left photography altogether by now)​
    LOL.
    Though a 2-bit AD converter (Sanyo, right?) should still produce a black and white image of whatever dynamic range is presentable in the final display medium​
    It will produce a black and white image, but it will still only capture 2 stops of scene dynamic range.
    PS - How'd you COUNT the number of posts in the other thread?​
    check here towards the bottom of the page. Just checking now I see that it has 899 posts. I was confusing it with the other 'marriage ending thread' I was in which got to 990-odd ;)
    The two common 'threads' you will see between those two threads are: Mauro Franic and Rishi Sanyal. I'm not sure if they've popped up in this thread yet, but if you ever come across them, let 'em know I hold them responsible ;)
     
  60. .
So, Bernie, d'you think saving as 8-bit JPG is unrelated to AD-conversion dynamic range, number-of-stops wise?
I imagine, regardless, that the digital bit level exceeds the analog captured image's dynamic range in stops, so even an 8-bit JPG CAN hold a 5, 6, 7, or even 8-stop dynamic range, right?
Help us zero in on what we are talking about, if all bit-level digital images -- 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, and so on (TIF, by the way, CAN encode any bit level, but no one uses the non-"standard" encodings available) -- can produce a bright white and a dark black, and maybe more or fewer gray steps in-between.
Just because an image is 2-bit does not mean it has a 2-stop recordation. Dynamic range is kind of up to the presentation medium, and if the output black is rather light, say only 1/2 the intensity of the presented white -- or conversely, if the output white is only twice as bright as the black -- then, yes, it's a one-stop dynamic range, regardless of the bit depth of the recordation.
But a recorded image is no more limited in its dynamic range by the test tools that measure it than a real-life scene is. Just because I use one meter versus another does not limit my photographic subject's dynamic range accordingly -- it remains the same even if I never measure it!
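The presentation-medium arithmetic is just a ratio, by the way -- a quick sketch with invented display numbers:

    from math import log2

    # Hypothetical output media: (white luminance, black luminance), arbitrary units.
    media = {
        "newsprint":    (50, 10),
        "glossy print": (90, 0.9),
        "good monitor": (300, 0.3),
    }
    for name, (white, black) in media.items():
        print(f"{name}: {log2(white / black):.1f} stops of output dynamic range")
    # ~2.3, ~6.6, and ~10 stops -- the same recorded file lands in whichever
    # range the medium offers, whatever the file's bit depth.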
    ----------
    By the way, I wonder what's the photo.net thread post-count record?
    .
     
  61. HOLY JESUS CHRIST ON A CRACKER, BERNIE, THAT IS A "MARRIAGE ENDING" THREAD! I had no idea. Couldn't finish downloading it.
Guess I shouldn't have posted the film shot. It wasn't meant to start a film vs digital flame war, and I certainly don't want this thread going 1000 posts. Just to be clear about the film shot's dynamic range, the tree foliage shadows not shown in that Colorado image plugged to black. But to be even clearer, the point I was trying to make was the placing/shifting of that scene's dynamic range toward the highlights within 255 RGB levels.
When shooting ETTR with DSLRs, the wider the dynamic range, the less wiggle room you have shifting highlights toward the right to get them to fit within 255 gray levels. This is why I find ETTR difficult and impractical to put into practice on a consistent basis: it's impossible to know or judge the size of the dynamic range shooting outdoors because it's constantly changing.
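To put my "wiggle room" point in numbers (all values invented):

    from math import log2

    clip_level = 4095          # 12-bit raw white point (idealized)
    brightest_in_scene = 3000  # hypothetical brightest raw value as metered

    headroom = log2(clip_level / brightest_in_scene)
    print(f"room to push right: {headroom:.2f} stops")  # ~0.45 stops
    # Outdoors that margin changes shot to shot, which is exactly why
    # I find ETTR impractical to judge consistently in the field.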
All that's been said so far are compelling arguments -- some I disagree with, and some that make no sense -- but no one so far has explained what's happening in those screenshots, which is typical in discussions of this sort. Despite all that's been said either for or against ETTR, for me the techno jargon just hasn't panned out. I just don't see 4096 levels of editable detail. Those screenshots show it and prove it.
And according to Adobe ACR engineers, all raw data is interpretive as soon as it passes through the demosaicing algorithms in use in all competing raw converters. And there's no getting past demosaicing when editing a raw image, whether you have 12, 14 or 39.7 bits of captured sensor data. It's the interpretation of mapping those levels and showing it in a preview that I have a problem with when editing.
     
So, Bernie, d'you think saving as 8-bit JPG is unrelated to AD-conversion dynamic range, number-of-stops wise?​
An 8 bit jpg could contain anything from 0 stops to N stops (where N is the bit-depth of the ADC). This is where people get cranky about what I am saying. The final image format fits the common description of dynamic range, that is: two ladders (e.g. an 8 bit tiff and a 16 bit tiff), both the same length, but one has more steps (bits) than the other. They both cover the same dynamic range, but one contains finer tonality. So, once the ADC has encoded the scene, it can be represented by any number of bits after the fact. But the bit-depth of the ADC limits the total number of stops of scene dynamic range that can be encoded.
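The ladder picture in toy code, if it helps -- same endpoints, different step counts:

    import numpy as np

    # One normalized tonal range 0.0..1.0 (the ladder's length),
    # quantized to 8 and 16 bits (the number of rungs).
    tones = np.linspace(0.0, 1.0, 5)

    as_8bit  = np.round(tones * 255).astype(int)
    as_16bit = np.round(tones * 65535).astype(int)

    print(as_8bit)   # -> [0 64 128 191 255]
    print(as_16bit)  # -> [0 16384 32768 49151 65535]
    # Both span the identical range; the 16-bit version just has finer steps.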
From memory of that thread (ha! I'm not going back to read through all of that again!), the point you raise about the 2-bit adc was a contentious one. I can't remember what we resolved about that, so I will have to have a bit of a think on it. By the way, here are the calculations I did in that thread. These were based on real numbers I got out of my 5D raw files:
    So if we incorporate the noise signal into the output value from the ADC we get something like this:
    input : output
    stop 12 : 3700 (3573 signal + 127 noise)
    -1EV : 1914 (1787 signal + 127 noise)
    -2EV : 1020 (893 signal + 127 noise)
    -3EV : 574 (447 signal + 127 noise)
    -4EV : 350 (223 signal + 127 noise)
    -5EV : 239 (112 signal + 127 noise)
    -6EV : 183 (56 signal + 127 noise)
    -7EV : 155 (28 signal + 127 noise)
    -8EV : 141 (14 signal + 127 noise)
    -9EV : 134 (7 signal + 127 noise)
    -10EV : 130 (3 signal + 127 noise)
    -11EV : 129 (2 signal + 127 noise)
    -12EV : 128 (1 signal + 127 noise)

Now I know this is highly simplified and there are types of noise present other than read noise, but I am guessing read noise is the major one. So you can see, 12 stops of data are recorded, but clearly the lower stops are awash with noise (this isn't something we didn't already know). And this goes back to my point about usable dynamic range. Clearly, in some of the lower stops it will be impossible to distinguish signal from noise. They technically have been captured, but they are unusable.​
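For anyone who wants to poke at those numbers, the whole table is a few lines of Python (same assumptions as above: 3573 counts at clipping, a flat read-noise floor of 127 counts, read noise the only noise source):

    SIGNAL_AT_CLIP = 3573  # counts in the brightest stop (my 5D values above)
    READ_NOISE = 127       # constant noise floor in counts

    for ev in range(13):
        signal = int(SIGNAL_AT_CLIP / 2 ** ev + 0.5)  # each stop down halves it
        label = "stop 12" if ev == 0 else f"-{ev}EV"
        print(f"{label} : {signal + READ_NOISE} ({signal} signal + {READ_NOISE} noise)")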
     
but no one so far has explained what's happening in those screenshots, which is typical in discussions of this sort.​
Tim, I'm not sure exactly what you are describing. Are you saying that the ACR preview of your current rendered raw settings is showing posterization, but the actual raw file isn't posterized?
On the issue of ETTR and outdoors, I find that usually by the time I struggle out of the house, the dynamic range of most scenes is well beyond the sensor anyway. So there's not much opportunity to 'push' the exposure to the right, as the scene already fills the whole histogram. Maybe if I could get out of bed early enough I might find the light more amenable to ETTR :)
     
  64. Bernie,
If you examine the ACR preview I posted, it shows that there aren't a lot of tonal levels making up the majority of the texture in the rock. Now compare that with the 8 bit minilab neg scan and notice how fine the transitions are in the clouds. This crop section of the neg scan makes up about a 1/4 inch wide area of the actual negative, with the clouds very far in the distance, while the rock is much closer to the lens and resolved in far greater detail at 3000x2000 in the DSLR capture. The minilab scan of the clouds is about the same resolution, maybe even less, considering the negative is full frame and the DSLR's APS-C sensor is smaller.
Don't you find this at odds with the claims of 4096 levels being recorded by the DSLR sensor? Something doesn't visually add up here. That rock detail should have far more differentiating tonal levels making up the texture of the rock than the neg scan shows, but it doesn't.
     
Tim... Is the posterization visible in the converted and saved tiff? I am just wondering, because I know in my version of Photoshop (PS7) the curves dialogue shows a lower resolution preview. So I can often get posterization in the preview dialogue, but once the curve adjustment is accepted, it renders un-posterized at full resolution.

The other point to make is that virtually none of the DSLRs today capture the full number of levels available. E.g. my 5D only captures about 3690-odd levels. That is, the sensor saturates at a voltage short of the upper voltage of the ADC. When the raw is rendered to a tiff, the raw converter clips the raw file to saturate at the appropriate level (3690-odd in the 5D).
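In code terms the converter is doing something like this (a sketch using my 5D's approximate white point; other bodies will differ, and the raw samples here are invented):

    import numpy as np

    SENSOR_WHITE = 3690  # raw level where the 5D's sensor saturates (approx.)

    raw = np.array([120, 2048, 3690, 3800])  # hypothetical raw samples
    clipped = np.minimum(raw, SENSOR_WHITE)  # treat 3690 as "white", not 4095
    normalized = clipped / SENSOR_WHITE      # rescale so saturation = 1.0
    print(normalized)                        # ~[0.033 0.555 1.0 1.0]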
     
  66. Bernie,
The posterization even shows in the converted tiff version, and even more so when applying an additional curve adjustment in Photoshop to get more differentiating tonal levels -- I just get more posterization. In fact, when I rendered the same cave image in Raw Developer, which shows a better, more detailed representation of the rock texture without ACR's default noise suppression (which I hope you can see in the posted RD screenshot), I get the same posterization converting to tiff and applying more curve edits in Photoshop.
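You can see why the extra curve digs the hole deeper with a quick Python sketch (toy 8-bit data, a made-up curve standing in for my edits):

    import numpy as np

    levels = np.arange(256)  # all 256 possible 8-bit input levels
    # A steep brightening curve, applied after the data is already 8-bit:
    curved = np.round(255 * (levels / 255) ** 0.4).astype(np.uint8)

    print(len(np.unique(levels)))  # 256 distinct input levels
    print(len(np.unique(curved)))  # fewer distinct outputs, with gaps between them
    # The steep section spreads neighboring inputs apart, leaving empty codes --
    # the combing/posterization that shows up in the edited tiff.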
    I've been trying to come up with an analogy that explains a little of what's going on in layman's terms, no complicated math to obscure things. It goes like this.
A DSLR sensor acts just like a mirror. A mirror bounces photons back to our eyes in extremely fine microscopic detail off the fine grains of silver painted on the back of the glass. It has a finer resolution and, so to speak, a much broader ability to reflect back a wider dynamic range and bit depth than a DSLR sensor, which doesn't reflect photons but captures them in photocells/pixel wells/photosites (whatever the term used), counts the photons, measures the voltages, converts the readings to ones and zeros in the ADC, and then hands them on to another round of interpretation in the converter software.
Keep in mind photocells/pixel squares are not the same as microscopic grains of silver. We don't know the true size and shape of these photocells because the pixel grid system isn't a physical construct like microscopic grains of silver; it is a mathematical interpretation of voltages taken from PHOTON COUNTING.
I believe where things fall apart, or at least come into question, is the quality and fineness of this interpretation through the integrity of the ADC. Can anyone visually test the integrity and quality of an ADC in what it does with these photons captured and counted in a sensor's photosites and converted to digital information through its hardware? A digital sensor is just silicon, and that's where the physicality stops. The rest is Bayer filtration and demosaicing interpretation.
With negative or positive film, the fine grains of silver and dye reacting together that form the lights and darks are already locked in chemically, without ADC interpretation. However, film's dynamic range combined with the scanner's is smaller, and doesn't tax the ADC voltage conversion and Bayer filtration that gets turned into ones and zeros -- allowing more levels and resolve to be packed into a smaller range of tones in a 255 RGB gray level, 8 bit video editing environment.
I think DSLRs' expanded ability to capture a wider dynamic range is not truly being matched by the capabilities of the software and video editing systems in use today, and the only culprit must be the quality of the ADC or how the software treats/interprets the converted ADC data.
    Just my observation.
     
Bernie, the death of your argument is that you correlate DR with the bit-depth of the A/D chip. It's like you're trying to convert ounces to square feet. Apples and oranges, to be sure.
    If you have a sensor that produces a range of tones (DR) of X, and you run that signal through 5 different A/D converters, each having a different bit-depth, the DR is constant.
    DR is NOT a digital range. It is an analog range, and is defined by an analog instrument: The Sensor.
    And bit depth does NOT correlate to stops. I really don't know how you make this tie, but it is like converting square miles to temperature, frankly.
     
  68. Bernie,
I went back and re-edited two versions of this cave image in tiff format to bring out more of these levels in the rock. The following two posted images will be screenshots: the first is ACR with added tweaks using its curve tool, sharpening turned off, exposure reduced from -2.00 to -2.5, and a simple Sharpen filter and additional curve added in CS3.
The second will be the Raw Developer tiff version with only a curve edit added to match the same density appearance as ACR's version, which will follow in the next post.
    00SuTi-120361784.jpg
     
And here's the Raw Developer version. See if this is the number of levels that should be seen in a supposedly high-bit capture.
    00SuTl-120363684.jpg
     
  70. Dan,
Think of a DSLR sensor as a stamp of a latent image provided by the lens, collecting photons in a pattern we discern as reality. Reality contains a ton of data beyond the capability of the electronics of a DSLR sensor to capture all of it and render it faithfully to pixels.
By the time this latent image of photons goes through processing, it is difficult to determine if the sensor is throwing away data through its own limitations or if the AD converter is throwing away data through the electronics.
Data IS getting thrown away. You can't deny this. Just compare the cloud detail in the jpeg -- clouds far away from the dime-size lens on the P&S camera I used to capture them on a 1/4 inch space of a negative film frame -- to the level of detail in the rock that is much closer to the gigantic lens of a DSLR.
Explain the differences in the appearance of the detail between the film and DSLR shots. It doesn't make sense.
     
Tim, to be honest, I have also been disappointed with the detail available in dslr highlights. Given that about half the captured image data is sourced from the brightest stop in the scene, we really should get very detailed highlights. Two possibilities come to mind:

1. Are our raw converters TRULY working on the linear raw data when we pull the exposure slider back? It's my understanding that ACR/LR and DCRAW do, but I know Canon's DPP seems to do some non-linear things. But maybe ACR isn't either. I guess the easy way to test would be to output a truly linear raw in dcraw, and then normalize it in Photoshop and see if it contains any more detail in the highlights than the ACR conversion (a rough sketch of that test follows this list). Actually, I just thought of another point. You know how I said that modern dslrs clip at a level short of the full bit-depth -- well, sometimes they clip TOO short. Each unit is different, and I have seen the guys on Luminous Landscape posting that they can get clipping when they shouldn't be clipping. I believe Guillermo convinced Dave Coffin to put the Saturation Clipping switch into the dcraw code to take account of this behaviour.

    2. Is some sort of sensor blooming or photosite overflow happening at the highlights? This might explain why detail is lacking in these areas.
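Here's a rough sketch of the test in point 1, driving dcraw from Python (-4 for linear 16-bit output, -T for TIFF, -S to override the saturation level; the file name and the 3690 value are placeholders, not specs for any particular body):

    import subprocess

    RAW = "IMG_0001.CR2"  # placeholder file name

    # Truly linear 16-bit output, no gamma, written as TIFF.
    subprocess.run(["dcraw", "-4", "-T", RAW], check=True)

    # Same again, telling dcraw this body's real saturation point in case
    # it clips "too short" (3690 is just an example value).
    subprocess.run(["dcraw", "-4", "-T", "-S", "3690", RAW], check=True)

    # Then normalize both TIFFs and compare highlight detail against the
    # ACR conversion of the same raw.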
     
  72. And bit depth does NOT correlate to stops.​
It does, and this is the key to my argument. The sensor and the AD converter are LINEAR devices. That is, double the input and you double the output. This is the key. What is a one stop increase in brightness? A DOUBLING of light input. It is as simple as that. If you expose so the brightest thing in a scene just clips the sensor/adc, and you work back from there in stops (i.e. HALVING the input), calculating inputs and outputs like I did in my example above, you will see they are definitely linked.
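The linearity claim in one loop -- double the ADC input and you consume exactly one more bit:

    value = 1
    while value < 2 ** 14:  # 14-bit ADC: codes 0..16383
        print(f"input {value:5d} -> bits used: {value.bit_length()}")
        value *= 2          # one stop more light = double the linear code
    # 1, 2, 4, ..., 8192: fourteen stops, fourteen bits; one more doubling clips.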
     
1. Are our raw converters TRULY working on the linear raw data when we pull the exposure slider back? It's my understanding that ACR/LR and DCRAW do, but I know Canon's DPP seems to do some non-linear things. But maybe ACR isn't either. I guess the easy way to test would be to output a truly linear raw in dcraw, and then normalize it in Photoshop and see if it contains any more detail in the highlights than the ACR conversion.​
Seeing that I had to pull ACR's exposure slider to the left to -2.5 on those previous shots is an indication that I pushed for the highest possible highlight detail without clipping. But I went and zeroed out all settings in ACR and applied -0.60 to the Exposure slider so all possible levels would be accessed by the curve adjustments, which was the only edit I did to bring out distinction between levels, as shown below.
    00Sud9-120423584.jpg
     
And here's Raw Developer's version, which is loosely based on DCraw set to Linear Output. All input profiles are disengaged, as well as sharpening, and a Highlight Recovery was applied to pull back the histogram to look the same as ACR's.
The curve edits shown are the internal base curve applied to the linear preview. It shows a little more differentiation between levels than what's shown in ACR, but note that the curve points span the entire length of the tonal curve, not just the first stop of brightness where half of the 4096 levels are supposed to reside.
    00SudL-120425584.jpg
     
  75. Bernie, doubling the light brightness has no correlation whatsoever to bit depth.
For example, the newest sensors that work with 14 bit A/D chips have little or no better DR than older sensors that work with 12 bit A/D converters. If a camera provides wider DR, it is only because its sensor provides the wider DR, and not the A/D converter.
For example, it is theoretically possible for a sensor of DR X to feed its output to a 12 bit A/D converter, OR feed its output to a 14 or 16 bit converter, yet in every single case the DR is exactly the same range of brightness.
In other words, you could take an old Canon EOS 20D, rip out its 12 bit A/D converter and replace it with a 14 or 16 bit A/D converter, and doing so would cause no change in DR. None, zip.
Or you can take sensor (A) that feeds a 12 bit A/D converter, and sensor (B) that feeds a 16 bit A/D converter, and yet the camera with sensor (A) could provide wider DR than the camera with sensor (B), even though (B) feeds an A/D converter having more bit depth.
    See what I mean?
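Here's that thought experiment in toy code -- one analog sensor range, three converters (all numbers invented):

    from math import log2

    # One sensor with fixed analog behaviour (units: electrons, invented).
    FULL_WELL = 40000    # saturation
    NOISE_FLOOR = 25     # darkest distinguishable signal

    sensor_dr_stops = log2(FULL_WELL / NOISE_FLOOR)

    for adc_bits in (12, 14, 16):
        # The converter only renames the same analog interval with more codes;
        # the brightest and darkest real signals it sees are unchanged.
        print(f"{adc_bits}-bit ADC: sensor DR still {sensor_dr_stops:.1f} stops")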
     
Bernie, doubling the light brightness has no correlation whatsoever to bit depth. For example, the newest sensors that work with 14 bit A/D chips have little or no better DR than older sensors that work with 12 bit A/D converters. If a camera provides wider DR, it is only because its sensor provides the wider DR, and not the A/D converter.​
    Dan... I didn't really want to get sucked back into another one of these arguments. I've already endured a 900 post thread on this topic, and I don't really want to do it again. But I'll try and give a few explanations.
The newer 14-bit ADCs do have a little benefit over the old 12-bit ADCs, but you are right that it is not 2 stops. The reason for this is that the dynamic range of dslrs today is limited mostly by NOISE. A higher bit ADC still has to deal with this noise. The amount of noise usually won't change much unless there has been a radical improvement in design. So if we are mainly talking about more bits as the improvement, then the newer ADC still has to deal with the same amount of noise. Now it can chop up these lower levels into more packages, but the majority of the noisiest levels remain just as noisy as they were with a lower bit ADC. At some point the smaller packages pay a dividend and return a distinguishable signal.
Now, you didn't read carefully enough what I initially wrote -- this is why I asked for a careful reading. Perhaps I should have been more clear too. The sensor/ADC COMBINATION is what limits the MAXIMUM number of stops that can be recorded; whichever of the two is more limiting sets that maximum. As I stated, in modern DSLRs the sensor clips before the ADC, and this is another reason why you won't get 14 stops out of a 14-bit ADC. But if you could take a sensor that saturates higher than the ADC, then you would be limited to the number of stops equivalent to the bit-depth of the ADC.
And once again, the number of bits does relate to the number of stops recordable, through the DIRECT CORRELATION between how both change: double the brightness and you increase one stop; double the ADC input and you increase the bits used by one.
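And to show what I mean about the noise floor eating the extra bits (invented electron counts again):

    from math import log2

    FULL_WELL_E = 40000  # electrons at saturation (invented)
    READ_NOISE_E = 25    # electrons of read noise (invented)

    for bits in (12, 14):
        max_code = 2 ** bits - 1
        gain = FULL_WELL_E / max_code          # electrons per ADC code
        noise_codes = READ_NOISE_E / gain      # noise floor expressed in codes
        usable = log2(max_code / noise_codes)  # stops clear of the noise floor
        print(f"{bits}-bit: noise floor ~{noise_codes:.1f} codes, "
              f"~{usable:.1f} usable stops")
    # Both land near log2(40000/25) ~ 10.6 stops: the extra two bits mostly
    # slice the same noisy bottom stops into smaller packages.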
     
