
Exposure latitude and reciprocity failure on the 10D, and are all digital sensors (different brands and models) the same in this regard?


linden_l


Hi, just curious about a couple of things with digital cameras. Being an amateur, I don't know how to run a simple test, so if someone knows, please share the knowledge with me. Most chrome slide films have an exposure latitude of about 5 stops. Does this apply to digital sensors as well? If not, is it narrower or wider? In addition, do digital sensors also suffer from the reciprocity failure found with film? Thanks.


Most digital cameras have a latitude of about 5 stops, approximately the same as slide film. Since sensors are something completely different from film, I don't think they suffer from reciprocity failure (at least I've never heard of it...).

Here is an interesting article from Luminous Landscape about exposure with digital cameras: Expose (to the) Right (http://luminous-landscape.com/tutorials/expose-right.shtml)


From the Expose to the Right article:

 

"Because of this difference, setting up the exposure using an 18% gray card (as is typically done with film) does not work so well with a digital camera. You will get better results if you set your exposure such that the whitest white in the scene comes close to, but not quite reaching, the full digital scale (255 for 8-bit capture, 65535 for 16-bit capture). Base the exposure on the highlight for a digital camera, and a mid-tone (e.g. 18% gray card) for a film camera."

 

 

So, if on film, I set exposure for an 18% card to the middle of my meter, where do I put a white card or a white highlight on the scale? How many stops over the middle?
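As a rough sanity check (my own arithmetic, not from the article), the spacing between a mid-tone card and a white card follows directly from their reflectances, since each stop is a doubling of light:

```python
import math

# Stops between an 18% gray card and a typical white card.
# The 95% reflectance figure for "white" is an assumption; real papers vary.
gray = 0.18
white = 0.95

stops_above_middle = math.log2(white / gray)
print(f"white sits about {stops_above_middle:.1f} stops above middle gray")
```

With these numbers, white meters about 2.4 stops above middle gray, which is why the article suggests placing the highlight just short of clipping rather than metering off a gray card.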


Do slides really have an exposure latitude of 5 stops?

Maybe 800 ASA film does, but all the slide films I've worked with in recent years, around 100 ASA, have a maximum of +/- 1 stop of latitude. And if you go down to 50 ASA, it gets even smaller. Colour negative films have +/- 2 stops, B&W I think has +/- 4 stops (I think!), and I have read that digital can reach +/- 6 (???).

But what is very clear is that slides have no more than +/- 1.

If that isn't the case, you have destroyed my life, because I have thrown away tons of slides over exposure errors of +/- 2!


The term latitude is being misused here. Latitude means that you can shoot a picture at a variety of different exposure values and they'll all turn out OK (not identical, but all usable; the definition of usable would be a personal preference). Chromes certainly do not have 5 stops of latitude; you can prove this by shooting the same scene at whatever you feel a correct exposure is, then 2.5 stops under and 2.5 stops over. The last two shots will almost certainly be unusable. Most colour negative films, which are far more forgiving, don't even have 5 stops of latitude.

What we're looking at here is how many stops there are between the deepest shadow and the brightest highlight. Anything outside that range renders as either pure black or pure white. It varies from film to film, but most people suggest that slides can handle about 5 stops and colour negative films about 7. I don't shoot digital, but most sources usually say that digital cameras have about the same range as slide film.

I'm not sure if digital sensors suffer reciprocity problems; someone with digital experience will have to answer that one. One problem they DO tend to have with long exposures is noise. If anything, that's worse than reciprocity failure. Given the relevant information, you can adjust your exposure and maybe use a filter to correct for reciprocity failure, but you can't do that to get rid of noise. You'll have to try to clean it up in the digital darkroom.


The dynamic range (if that's what you mean) of the 10D's sensor at ISO 100 has been experimentally measured by me and others to be approximately 6 stops (give or take a third to a half stop). This is comparable to colour slide film, but much narrower than print films.

 

CCD/CMOS sensors do not suffer from reciprocity effects, neither in effective speed nor in colour shifts. They do suffer from increased digital noise at longer exposures, though. Some cameras are better than others at suppressing noise.


That Luminous Landscape article, if correct, is absolutely crucial.

I've just done a quick test and taken a handful of pictures out of my bedroom window of my back garden and the houses beyond - a classic English early autumn scene: fading lawns, trees beginning to get quite golden, all under a blue sky with puffy clouds scudding along. I found that I had to dial in between 2/3 and 1 stop of compensation to shift the histogram sufficiently to the right. That was shooting raw and examining the histogram in-camera; I don't have the tools yet to process raw images on the computer. At +1 stop there were a few strips in one image that were blown, but none in the other - the histogram dropped nicely to the origin point right at the right-hand end of the graph (no pure whites in that shot). At +2/3 of a stop the first image wasn't blown but there was also a small amount of space beyond the end of the histogram plot, and the second shot showed very clear space beyond the right-hand end of the plot.

This is quite a surprise to me - I'd previously left the camera on 'Evaluative Metering' with no compensation dialled in. What are other people doing? And do you find that different types of shots require different settings?

Oh, I'm using a D60 set to ISO 100.
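For what it's worth, that in-camera judgement can be mimicked numerically. Here's a small sketch (the function name and thresholds are my own invention, not anything a camera exposes) that looks at an 8-bit luminance histogram and estimates how much expose-to-the-right headroom is left:

```python
import math

def ettr_advice(pixels, clip_level=255, clip_fraction=0.001):
    """Rough expose-to-the-right suggestion from 8-bit luminance values.

    clip_level and clip_fraction are illustrative assumptions, not
    calibrated figures from any real camera.
    """
    clipped = sum(1 for p in pixels if p >= clip_level)
    brightest = max(pixels)
    if clipped / len(pixels) > clip_fraction:
        return "reduce exposure: highlights are blowing out"
    # Each empty stop at the right end wastes half the available levels.
    wasted_stops = math.log2(clip_level / max(brightest, 1))
    if wasted_stops > 1 / 3:
        return f"add about {wasted_stops:.1f} stops of exposure"
    return "exposure is about right"
```

For a frame whose brightest pixel is only 100 of 255, this suggests adding roughly 1.4 stops, which is in the same ballpark as the 2/3 to 1 stop found by eye above.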


Coming at this from a machine vision background (where all my work was done in 8-bit gray scale), I seem to recall that every f-stop change in brightness is a doubling of captured photons. If the sensor is simply counting photons, given a nominal 8-bit A-to-D, then you can say that the light output varies from 0 to 255, or 2^8 - 1. Now, most 8-bit A-to-Ds don't really produce a true, noiseless 8-bit conversion. It's really more like 6 1/2 bits given +/- 1 1/2 bits of noise. So, you really only have about 64 levels (2^6) to work with. I'm also ignoring the fact that in my case we were working with TV-level signals, where the black and white levels were not strictly those of the A-to-D, among other things.

 

Anyway, 2^6 levels would, in my mind, equate to 6 f-stops of range. And that's just for a crappy 8-bit (with +/- 1 1/2 bits of noise) conversion. What am I missing here?

 

To further make myself look foolish, I'll put forth the notion that if you have a sensor tied to a nominal 12-bit A-to-D (and let's assume the same +/- 1 1/2 bits of noise, just for the sake of example), then you would have at least 10 bits of range, equating to 10 f-stops.

 

Obviously something is wrong here. Perhaps putting color into the equation wrecks these assumptions, by summing in additional noise for each color channel. I dunno.

 

And just to get the conversation really moving: I've thought about how you can supposedly 'adjust' the exposure +/- 2 stops after the fact with a RAW capture. TANSTAAFL, you know. I would assume that there really is a 14-bit A-to-D tied to the sensor array, and normally the extra bits are 'guard bits', so to speak, so you get a truly noise-free 12-bit sample. Then you could theoretically extract a +/- 2-stop exposure change from a RAW file, if it actually contained all 14 bits, perhaps at the expense of increased noise. Since I've not seen any noise creep into such 'after the fact' exposure changes, perhaps the sensor array harbors the equivalent of 16-bit A-to-Ds and we are normally presented with only a 12-bit sample, especially for compressed JPEG frames.

 

I've not really delved into trying to find any discussions on the web specifically about this, so I may be running over old, trampled ground. If so, I'd appreciate a pointer or two, since I'm truly curious about the actual science behind why camera manufacturers claim only 5 or 6 f-stops of dynamic range, and about what kind of black-box trick is done to change exposure 'after the fact' on RAW images.
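One way to frame what the bit-counting argument leaves out: on a linear sensor, usable dynamic range is set by the ratio of full scale to the noise floor, not by the ADC word length alone. A toy calculation (the noise figures below are made-up illustrations, not 10D measurements):

```python
import math

def dynamic_range_stops(adc_bits, noise_floor_lsb):
    """Stops between full scale and the noise floor of a linear ADC."""
    full_scale = 2 ** adc_bits - 1
    return math.log2(full_scale / noise_floor_lsb)

# An 8-bit conversion with ~3 LSB of noise gives about 6.4 stops.
# A 12-bit converter only helps if the analog noise doesn't grow with
# it; at 16 LSB of noise it still delivers only about 8 stops.
print(round(dynamic_range_stops(8, 3), 1))    # 6.4
print(round(dynamic_range_stops(12, 16), 1))  # 8.0
```

So a deeper converter does not automatically buy more stops: the sensor's own read noise and dark current set the floor, which would explain manufacturers quoting 5 or 6 stops despite 12-bit ADCs.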


Hey Dave, your analysis is (I believe) correct. The Luminous Landscape article defines the "base stop" as containing 128, or 2^7, values. The 12-bit sensor is giving the additional 5 stops. From this I assume that standard 24-bit (8 bits per colour) output must do some sort of interpolation to compress the image. Alternatively, it's literally still 256 shades from a linear sensor response, which makes the G3 in non-raw mode a 1-stop camera ;).

I'm willing to accept whatever plausible definition people give as long as it is clear. That said, my understanding (I forget where I saw this, but I seem to recall) is that eyes don't see photons linearly either.

Actually I found an interesting undergraduate psychology site on this: http://ww2.coastal.edu/kingw/psyc450/logs.html. Like sound, apparently the eye perceives "stops", i.e. the log of the image intensity. So extending your reasoning, why not have a file format that is logarithmic? That would in fact "fix" the issue Luminous Landscape addresses and keep the "shades" per stop uniform. (A logarithmic "monitor" would also be very cool!)

-avi


Actually I think the math works out more like this: the sensor only captures maybe 6 stops reliably. On the other hand, if there were only one bit per stop, the lowest stop would be pure binary, the next 2-tone, the next 4, then 8, then 16... With 8-bit A/D conversion you're a little better off, with 4 times as many steps at each stop, but 4 notches per channel in the bottom stop can still hardly be called usable. Now with a 12-bit converter you are much better off: with about 64 steps in the bottom stop you can actually record something in the shadows and have it NOT be posterized, and in the top stop you have a whole lot of steps to work with. In the end, each stop has as much bit depth as all the preceding stops combined.
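The halving described above is easy to tabulate. A quick sketch (my own illustration, assuming a purely linear 12-bit capture spanning 6 stops):

```python
def levels_per_stop(bits=12, stops=6):
    """Count the code values in each stop of a linear capture."""
    full = 2 ** bits  # 4096 code values for 12 bits
    counts = []
    for s in range(stops):
        top = full >> s           # upper bound of this stop
        bottom = full >> (s + 1)  # one stop darker
        counts.append(top - bottom)
    return counts  # brightest stop first

print(levels_per_stop())  # [2048, 1024, 512, 256, 128, 64]
```

The brightest stop gets half of all code values, and each stop down gets half again, which is exactly why the shadows posterize first.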

I should continue: the reason we don't run into this constantly, and the reason the G3 is not a 1-stop camera in non-raw mode, is gamma. Way back in the early days of computer graphics they realized this, and this is where the curse of gamma came from. The 8-bit value recorded in a PROCESSED image is actually a mapping of that 12-bit raw integer from the A/D converter by the equation V = I^(1/gamma), with both values treated as fractions from 0 to 1. For example, the bottom stop of that 6-stop range (really the bottom 3 bits, holding 7 steps and zero - I oopsed on my math above slightly, sorry) ends up having about 96 levels under a 2.2 gamma. Admittedly 2.2 is a little extreme for this case, and the top stops suffer a bit, but nowhere near to the extent the bottom ones did at linear gamma.
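A rough check of the direction of that effect (my own numbers, assuming a pure power-law encode rather than a real camera tone curve): count how many 8-bit output codes the bottom stop of a 6-stop range lands on, linear versus gamma-encoded.

```python
# The bottom stop of a 6-stop range spans input 0 .. 1/64 of full scale.
# Linear encode: it gets only about 4 of the 256 output codes.
# Gamma 1/2.2 encode: the same input range is stretched over many more.
linear_levels = round((1 / 64) * 255)
gamma_levels = round((1 / 64) ** (1 / 2.2) * 255)

print(linear_levels, gamma_levels)
```

The exact count depends on the assumed bit depths and curve, but the gamma encode hands the bottom stop roughly ten times as many output codes as a linear one, which is the point being made above.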

Yes, I was simplifying quite a bit in assuming that having only 2 bits, etc. would give a usable image. It's much more reasonable to assume that you need some sort of 'base' range of values, say at least 64 or 128, to record a usable luminosity scale. Extra bits above that give additional fine steps or wider dynamic range, depending on the brightness range of the recorded scene.

 

Also, in my machine vision work, we normally assumed a gamma of 1.0 (linear), and only adjusted the gamma tables on a per-camera basis to account for variances between camera sensors and fine-tune them. (We sometimes had up to four cameras hooked to the same A/D, muxing the inputs in sequence, to give multiple views.) We were using Sony industrial cameras (XC-75) for the purpose, and were mostly concerned with pixel jitter and repeatability (within 1/10 pixel).

 

In that case, doubling the light would double the value returned from a particular pixel. We found that lighting was the most crucial aspect of machine vision. If the subject was over- or under-lit, or you got specular highlights, it made the vision problem much more difficult. We were looking for things like high-contrast edges to find the centroid of various shapes, and had to deal with things like small voids along solder edges, or poor masking allowing ragged edges, etc. Having an excellent exposure of the scene was most of the battle, it seemed. This was for applications like wire, die, and tab bonders, and die, wafer, lead frame, and IC package inspection.

 

You might say I'm kind of 'backing into' color digital photography: having shot with a film SLR for about two decades and worked with digital gray scale machine vision for half that time, I'm finally getting into digital color.


