sparkie Posted December 14, 2005

Can anyone explain why my in-camera histograms look different from the histograms in DPP? I shoot RAW almost exclusively. One previous poster mentioned that the in-camera histograms are based on the JPEG, with less dynamic range. Is this true? If so, what's the point of having an in-camera histogram that doesn't reflect the format you are shooting? Is there any way of getting accurate histogram parity between the two, i.e. from the in-camera LCD to DPP?

Below are some unanswered questions from a previous thread. Got any answers for these as well? Thanks for any light you might be able to shed:

> "Expose to the right" is a rule-of-thumb fix for the fact that the camera histogram is based on the JPEG settings. Compared to the RAW capability, the JPEG has somewhat less dynamic range, thus the histogram shows a limited view of your actual range if you're shooting RAW.

Kirk, is that why my in-camera histograms always look different from the histograms in my RAW conversion programs (such as DPP and CS ACR)?

> Wasted range on the highlight end loses much more capacity than waste on the shadow end, so "exposing to the right" -- overexposing (according to the histogram) -- makes use of the RAW highlight capacity that the histogram does not show you.

> This can be dangerous to do blindly, however, unless your shots are rather bland. The camera histogram is so physically small that in many cases (low-key portraits, especially), small but important tones may not register at all. Because you don't even see those tones on the histogram, you may push them completely off the right of the scale. Intelligent use of an exposure meter is better than depending on shoving the histogram curve to the right side of the screen.

Yeah, I had a gut feeling about this too.
Sometimes the shots I take look fine visually on the LCD screen, yet the histogram tells me there's room to the right to "push", especially in a shot of a high-contrast scene, such as a black circular graduated pattern on a white wall. In that case, is pushing to the right any use if the scene looks correctly exposed but most of the histogram sits in the centre?
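For what it's worth, the "room to the right" can be put in numbers: if the brightest pixel sits at value v on a 0-255 scale, the remaining headroom before clipping is log2(255/v) stops. A minimal sketch in Python (the function name and the sample value 180 are hypothetical; note also that JPEG values are gamma-encoded, so this understates the true linear headroom of the RAW file -- it's only a rough indicator):

```python
import math

def headroom_stops(brightest_value, clip_level=255):
    """Stops of exposure you could add before the brightest
    pixel reaches the clipping level (linear values assumed)."""
    if brightest_value <= 0:
        raise ValueError("brightest pixel value must be positive")
    return math.log2(clip_level / brightest_value)

# A mid-heavy histogram whose brightest tone sits at 180 of 255
stops = headroom_stops(180)
print(f"{stops:.2f} stops of headroom")  # roughly 0.5 stops
```

A histogram already touching the right edge gives headroom of zero, which matches the intuition that there is nothing left to push.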
chiswick_john Posted December 14, 2005

The histogram represents the in-camera JPEG and not the RAW data, sadly. Usually, setting the contrast parameter to minimum gives you a good idea of the dynamic range, even though the shot looks flat -- but you have the viewfinder to look at what you are shooting for making those decisions. With my 5D and 1Ds2, any clipping of the highlights seen this way means lost data, but with my 20D there was still a little that could be recovered in ACR. This doesn't mean the 20D had more dynamic range -- it just had a different parameter setup. You need the big screen and three-channel histogram of the 5D to see the clipping of small areas.
fourfa Posted December 14, 2005

If it's a question of whether the histogram represents the RAW or JPEG output, just take some RAW+JPEG shots and compare both.
ted_marcus1 Posted December 14, 2005

Two factors make the in-camera histogram only a rough approximation. First, the camera uses the embedded JPEG "thumbnail" for both the histogram and the LCD display (it's much faster that way, given the limited processing power in the camera). Second, the size and resolution of the LCD screen necessarily limit the resolution of the histogram. You'll see its broad outlines, but not small details.

In practice, I don't think the rough histogram is a problem. It's good enough to tell you whether the exposure is correct. Combined with the flashing indication of clipped highlights on the thumbnail image (very helpful for "exposing to the right"), it's more than adequate, and probably the best you can do with a small LCD. The only real improvement would be a split histogram that shows each color channel separately, to spot clipping that might not be readily apparent on a combined RGB histogram. But to get that, you need to spend thousands of dollars on a professional DSLR.
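Ted's point about LCD resolution can be made concrete: when a few megapixels of data are binned into a tiny on-screen graph, a small but real highlight population can round to a bar zero pixels tall. A minimal sketch with NumPy (the pixel counts, bin counts, and 30-pixel display height are all made-up illustrative values, not any camera's actual numbers):

```python
import numpy as np

rng = np.random.default_rng(0)
# Roughly 6 MP of midtones, plus a small but important highlight
# (say, a catchlight) sitting just below clipping
pixels = np.concatenate([
    rng.integers(80, 160, size=6_000_000),
    np.full(500, 250),
])

# Full-resolution histogram: the highlight population is clearly there
full, _ = np.histogram(pixels, bins=256, range=(0, 256))

# LCD-sized histogram: fewer bins, and bar heights quantised to a
# display only a few dozen pixels tall
coarse, _ = np.histogram(pixels, bins=64, range=(0, 256))
heights = (coarse / coarse.max() * 30).astype(int)  # 30 px tall display

print(full[250])     # 500 pixels -- real, recoverable highlight detail
print(heights[62])   # 0 -- that spike rounds to zero pixels on screen
```

The data never left the file; it simply cannot be drawn at that scale, which is exactly why small tones can vanish from the camera's graph.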
beauh44 Posted December 14, 2006

I believe a lot of cameras use the data from one channel (green, maybe? I dunno) instead of a composite of all three, in addition to it being a JPEG histogram. They're really not very accurate, but sometimes it works out. You may get the "blinkies" when the camera thinks you've overexposed something when in fact only one channel is blown. Photoshop's RAW converter may be able to get detail back when you slide the exposure slider to a negative number.
andrew robertson Posted December 14, 2005

Beau is right on this one, but I always thought the in-camera histogram represented an average of the R, G, and B histograms. Higher-end EOS bodies offer per-channel histogram readouts. But having review set to On (Info), with blown highlights set to blink, should save you from blowing out data that appears good in the histogram.
rdkirk Posted December 15, 2005

Most (if not all) camera histograms present a "luminance" view, which is a kind of green-biased average. It can frequently miss red clipping, which isn't a problem in sensor design THEORY because nature overall doesn't have that much red in it... unless your specialty is portraits, in which case most human skin has a lot of red. When you look at an RGB histogram of a portrait, you can see how often the red channel runs farther right than the other channels; that's also why skin with any greater-than-normal redness can blotch up pretty quickly without your ever understanding the cause.

When you pull the image into DPP and look at the RAW histogram, you'll see some extra tones represented, simply because the graph is physically larger. You can optimize the JPEG settings to give you a histogram closer to what you're getting with RAW (lower the contrast); that sacrifices the appearance on the LCD, but it's probably a worthwhile tradeoff. The physically tiny size of the histogram still means you can miss smaller -- but important -- tones completely.
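The green bias is easy to see in numbers. A weighted average in the style of the Rec.601 luma formula (one common choice; actual camera weights are unpublished and vary) gives green nearly 60% of the vote, so a fully clipped red channel in a skin tone can still graph well short of 255. The pixel values below are hypothetical:

```python
def rec601_luminance(r, g, b):
    """Green-weighted average, similar in spirit to what a camera's
    'luminance' histogram graphs (actual weights vary by maker)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# A hypothetical over-red skin tone: red fully clipped, G and B fine
r, g, b = 255, 180, 150
lum = rec601_luminance(r, g, b)

print(round(lum))                # 199 -- well short of 255
print("red clipped:", r >= 255)  # True, yet the luminance view never shows it
```

That blown red channel is exactly the blotchy-skin failure mode described above, and only a per-channel histogram would have flagged it.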
jonglass Posted December 15, 2005

I also wish we knew what data is used for the histogram -- one channel, all three, or just the luminance. For me, though, the most important part is that blinking highlight warning. Personally, I wish I knew whether it blinks when only one channel is blown. In my experience with three cameras (Canon A60, Nikon CP 990, and my D30), the red channel seems most prone to hitting the crucial 255 level (out of 256 shades of gray, 255 is the highest and 0 the lowest -- a value of 255 means you have hit 100% and no data is left, like clear film on a slide). I don't know why that is. My hunch would be that green, getting twice as many sensor sites as red and blue, is the least likely to blow out -- but why red most frequently?

Now, as to the reduced dynamic range of JPEG versus RAW. I've read in more than one place (disclaimer: I'm not an expert, but I did stay at a Holiday Inn Express a couple weeks ago -- and yes, I did take a shower there) that JPEG images encode the data logarithmically, while RAW data is stored linearly. The practical meaning is that much of the dynamic range captured in RAW is contained in the JPEG, only with the least significant bits thrown out in the act of compressing that range down to 256 levels. The practical advantage of RAW is that you can use other means _of your own choosing_ to later pick and pack the bits you want, whereas the JPEG converter in the camera is limited to pre-determined formulae. As I said, I'm no expert in this, but from all I've seen in my experience, this seems to hold true. In other words, you must eventually throw something out when converting to a printable (8-bit) image. The advantage of RAW is that you get to pick which bits, and how.

This is a fascinating discussion to me, across the threads. I sure hope that somebody who truly understands all this stuff will comment.
;-) In the meantime, here is one article on the subject. I've added the anchor for RAW conversion to the URL: http://www.normankoren.com/digital_tonality.html#Raw_conversion

And some other articles, just to add confusion to the subject. ;-)
http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html#linear

And of course, the requisite Ken Rockwell article: http://www.kenrockwell.com/tech/raw.htm

And a refutation: http://www.prime-junta.net/pont/How_to/o_RAW_workflow/_RAW_workflow.html

-Jon
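Jon's linear-versus-logarithmic point can be sketched in numbers. In a linear RAW file, each stop down from clipping has half as many code values as the stop above it, while a gamma curve redistributes values more evenly before the squeeze into 8 bits. A rough illustration (a 12-bit RAW and a simple 1/2.2 power curve are assumed for simplicity; real sRGB uses a slightly different piecewise curve):

```python
# Linear 12-bit RAW: each stop below clipping gets half the levels
levels = 4096
for stop in range(1, 6):
    in_stop = levels // 2**stop
    print(f"stop {stop} below clipping: {in_stop} levels")
# 2048, 1024, 512, 256, 128 -- the shadows get very few values

# Gamma encoding spreads values more evenly before the 8-bit squeeze
def to_8bit(linear):
    """Map a 0.0-1.0 linear value through a ~1/2.2 power curve."""
    return round(255 * linear ** (1 / 2.2))

# Half and quarter brightness land far above the 128 and 64 that a
# straight linear mapping would give them
print(to_8bit(1.0), to_8bit(0.5), to_8bit(0.25))
```

This is also why exposing to the right matters: the tones you place in the top stop of the linear RAW file carry the most levels, and it is your converter, not the camera's fixed formula, that decides how they get packed into the final 8 bits.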
awindsor Posted December 15, 2005

In the classic Bayer (Kodak) algorithm, the green channel is used for luminance at all sites (interpolated between green neighbours at red and blue sites). I doubt that even in-camera processing still uses this algorithm; it behaves badly at high-contrast boundaries. There are other simple algorithms that use average luminance information -- in that case you get something like 0.2R + 0.7G + 0.1B for the luminance. In all cases the green channel is weighted most heavily. More advanced algorithms use more sophisticated interpolation and discount outliers when computing luminance, which helps at high-contrast boundaries.

There are lots of possible demosaic algorithms, which is why different RAW converters can yield different results. As far as I know, the algorithm used for in-camera processing in the DIGIC II processor has never been released. In fact, I don't even know whether the arrays are truly Bayer or not -- though that must be known, since everyone is writing demosaic algorithms; I suspect they are, from the observed moiré.

I sometimes expose to the right in high-contrast scenes where I know I will be raising the level of the shadows. If the contrast is bad enough and I have a tripod, I take two exposures; otherwise two different RAW conversions will do the trick. Lowering exposure in conversion doesn't increase noise, so as long as you captured the highlights you are fine (even if they look hopelessly washed out). To get the shadows you may have to raise the exposure, and the less you have to do this the better -- hence expose to the right.

Exposing to the right is always correct from the perspective of capturing more bits of information, but it is not necessary for every scene.
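The classic interpolation described above -- green at a red or blue site taken as the plain average of its four green neighbours -- is simple enough to sketch. An RGGB layout and made-up sensor values are assumed here purely for illustration; real converters use far smarter, edge-aware schemes:

```python
import numpy as np

def interp_green(mosaic, y, x):
    """Classic Bayer fill-in: average the four green neighbours of a
    red or blue site. This smears detail at high-contrast boundaries,
    which is why modern demosaic algorithms discount outliers."""
    return (mosaic[y - 1, x] + mosaic[y + 1, x]
            + mosaic[y, x - 1] + mosaic[y, x + 1]) / 4.0

# Made-up RGGB mosaic values (row 0: R G R G, row 1: G B G B, ...)
mosaic = np.array([
    [ 10, 200,  12, 210],
    [150,  90, 160,  95],
    [ 14, 220,  16, 230],
    [170, 100, 180, 105],
], dtype=float)

# Green at the blue site (1, 1), from its four green neighbours
print(interp_green(mosaic, 1, 1))   # (200 + 220 + 150 + 160) / 4 = 182.5
```

If one of those four neighbours sat across a hard edge, the plain average would pull the result toward the wrong side -- the "acts badly at high-contrast boundaries" failure, and the reason outlier-rejecting algorithms exist.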