
Dynamic range comparison D4, D800, 5DMkIII


paulie_smith1


<p>The data are explained in the links below. An ideal sensor is one which records every photon and adds no noise of its own (in the electronics), so the only noise in the images is due to photon noise (SNR = sqrt(N) where N = number of photons hitting the photosite). It's not something that can actually be built, but it shows the theoretical limit beyond which no sensor can go.</p>
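<p>To make the "ideal sensor" notion a bit more concrete, here is a minimal sketch of what SNR = sqrt(N) implies at a few photon counts; the full-well figure is an assumed example for illustration, not a measured value for any of these cameras:</p>

```python
import numpy as np

# Rough sketch of the "ideal sensor" idea: the only noise is photon (shot)
# noise, so SNR = sqrt(N) for N photons collected. The full-well value below
# is an illustrative assumption, not a measured figure for any real camera.
full_well = 60_000                      # hypothetical photoelectron capacity of one photosite

photons = np.array([10, 100, 1_000, 10_000, full_well])
snr = np.sqrt(photons)                  # ideal-sensor SNR
snr_db = 20 * np.log10(snr)             # the same figure expressed in dB

for n, s, db in zip(photons, snr, snr_db):
    print(f"N = {n:>6d} photons -> SNR = {s:7.1f} ({db:5.1f} dB)")

print(f"ideal dynamic range ~ {np.log2(full_well):.1f} stops (full well down to 1 photon)")
```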

<blockquote>

<p>It would be nice to know what and who the source is, and how the test was measured, for those that care...</p>

</blockquote>

<p>The data were compiled by Bill Claff. Clicking on the links will take you to his site. I have worked with Bill and can vouch for the fact that his data are carefully acquired and generally quite conservative. He has no commercial connection to any camera maker.</p>


<blockquote>

<p>"...so the only noise in the images is due to photon noise (SNR = sqrt(N) where N = number of photons hitting the photosite)."</p>

</blockquote>

<p>I'm not following how a signal-to-noise ratio can be referenced back to the signal. Of course there's increasing randomness of photon distribution as the luminous flux level drops, but how can this be classed as "noise" when it's the actual signal itself that's discontinuous? So we need to specify a timeframe for the photon sampling. I also see no account taken of the fact that every pixel is actually a summing of 4 photosites (assuming Bayer patterning), but then some photons will be rejected because of their frequency. And what of the variation in photon energy with frequency? What illuminating spectral distribution has been assumed? What filtration banding and efficiency is assumed for the Bayer patterning? Taking these necessary considerations into account probably digs a big hole in those "perfect sensor response" calculations.</p>

<p>On a purely practical note, low light photography rarely takes place under equi-energy or full-spectrum illumination, and so the illuminant quality will have a big effect on the SNR too.</p>


<p><em>I'm not following how a signal-to-noise ratio can be referenced back to the signal. Of course there's increasing randomness of photon distribution as the luminous flux level drops, but how can this be classed as "noise" when it's the actual signal itself that's discontinuous?</em></p>

<p>Well, you could look at photography as a measurement of the reflectance of the subject. In that case the fluctuations in the number of photons detected can be considered "noise". This is called "photon (shot) noise" in the optics literature.</p>

<p><em>So we need to specify a timeframe for the photon sampling. </em></p>

<p>Yes, that is the exposure time.</p>
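<p>For anyone who wants to see the statistics, here is a small simulation, assuming an arbitrary photon arrival rate, showing that the count collected over an exposure fluctuates by roughly sqrt(N), so the SNR grows with the square root of the exposure time:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon arrivals are well modelled as a Poisson process: over an exposure of
# length t the count has mean rate*t and standard deviation sqrt(rate*t), so
# the "noise" here is just the shot-to-shot fluctuation of the signal itself.
# The rate below is an arbitrary illustrative number, not a measured value.
rate = 500.0                                     # hypothetical photons per photosite per second

for t in (0.001, 0.01, 0.1, 1.0):                # exposure times in seconds
    counts = rng.poisson(rate * t, size=100_000) # simulate many identical photosites
    snr = counts.mean() / counts.std()
    print(f"t = {t:5.3f} s: mean N = {counts.mean():8.1f}, "
          f"simulated SNR = {snr:6.2f}, sqrt(N) = {np.sqrt(rate * t):6.2f}")
```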

<p><em>I also see no account taken of the fact that every pixel is actually a summing of 4 photosites (assuming Bayer patterning), but then some photons will be rejected because of their frequency. </em></p>

<p>I think ideally the analysis should be carried out separately for each wavelength (say, every 5-10nm) to get an idea of how well the sensor detects different colours and how well it can separate them. But doing this kind of testing is quite a lot of work and requires some equipment.</p>

<p><em>And what of the variation in photon energy with frequency?</em></p>

<p>Quantum efficiency in a real sensor is a function of the wavelength of light, but again I think people assume QE = 1 irrespective of wavelength when discussing an "ideal" sensor.</p>
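<p>As a toy illustration of what such a per-wavelength comparison could look like, using a made-up flat spectrum and a made-up green-channel QE curve rather than any real measurement:</p>

```python
import numpy as np

# Per-wavelength sketch: an "ideal" sensor (QE = 1 at every wavelength) vs. a
# hypothetical real green channel. Both the flat incident spectrum and the
# Gaussian-ish QE curve are invented illustrative numbers, not measured data.
wavelengths = np.arange(400, 701, 10)                          # nm, in 10 nm bins
incident = np.full(wavelengths.shape, 1_000.0)                 # photons per bin (flat spectrum)

qe_green = 0.55 * np.exp(-((wavelengths - 530) / 60.0) ** 2)   # assumed green-channel QE curve

ideal_e = incident.sum()                    # QE = 1: every photon converted to an electron
green_e = (incident * qe_green).sum()       # wavelength-dependent losses

print(f"ideal sensor:  {ideal_e:8.0f} e-,  photon-noise SNR = {np.sqrt(ideal_e):6.1f}")
print(f"green channel: {green_e:8.0f} e-,  photon-noise SNR = {np.sqrt(green_e):6.1f}")
```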

<p><em>Taking these necessary considerations into account probably digs a big hole in those "perfect sensor response" calculations.</em></p>

<p>It's a simplified "ideal sensor"; that's why it's called "ideal" ;-)</p>

<p><em>Low light photography rarely takes place under equi-energy or full-spectrum illumination, and so the illuminant quality will have a big effect on the SNR too.</em></p>

<p>Right, at least if you attempt to get accurate colors. But as you get into more detailed analysis, you have to publish more and more data, and then it becomes more difficult for the reader to grasp which differences are important. With a simplified analysis there is always the risk that someone takes it too seriously and doesn't consider the limitations of the analysis performed.</p>

<p>What will be interesting to see is how well the D800 sensor copes in indoor available artificial light vs. the D4, and how the images look after color correction. This is not something that can be easily deduced from the data produced by the online test sites. But at some point it's best to just try out the cameras and see how they do in real world use. ;-)</p>

 


<p>Hmmm! These "idealised" versus measured sensor responses also give no indication of the cluster size of photosites measured. In order for camera (and lens) flare not to be a significant factor in dynamic range figures, the measurement size (illuminated area) has to be as small as possible. OTOH, an average photosite response requires as large a sample as possible. I'm sure that if a tiny cluster of, say, 16 photosites was measured (with all other photosites in total darkness), you'd get a different result than if you illuminated the whole sensor with the same nominal luminous flux. It would be even worse if you used a real lens with an image circle that considerably exceeded the frame size.</p>

<p>Measurement methodology is everything, and unless the details of the measurement and any weightings applied are fully specified, it really is pretty much a pointless exercise. DXOmark please note!</p>


<p>I'm a little confused. I assume I'm reading this graph correctly in that the D700 stops returning reliable 24-bit data (8 bits per channel) at around ISO 800. Is it really only returning 8 shades of intensity at ISO 25600? (Although I'm guessing that, averaged out over a lot of pixels, this looks better.) Again, if the D4 is only returning four shades per pixel at ISO 204800, that's more of a marketing number than I'd thought (although averaged over the increased pixel count it may suffice in a disaster). That said, I failed to work out from the linked pages whether SNR is per pixel or per fixed fraction of the sensor (I assume something like the latter, given that the DX crop versions of the sensors differ).</p>

<p>Interesting that we only seem to get 9 bits of useful data out of the D700 at best; it suggests that 14-bit RAW mode really is superfluous, except to give a more accurate average over multiple samples. It's another reason to want a D800, although it would be interesting to see similar results for medium format sensors.</p>
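<p>For what it's worth, the rough arithmetic behind that reading looks like this; the dynamic-range figures below are placeholders for illustration, not numbers taken from the linked charts:</p>

```python
import math

# Back-of-the-envelope version of the reading above: D stops of photographic
# dynamic range correspond to roughly 2**D distinguishable tonal levels between
# the noise floor and clipping, i.e. about D useful bits per pixel. The DR
# figures below are placeholders, not values from any published measurement.
example_dr_stops = {"ISO 100": 11.5, "ISO 800": 9.0, "ISO 25600": 3.0}

for iso, stops in example_dr_stops.items():
    levels = 2 ** stops
    print(f"{iso:>9}: {stops:4.1f} stops -> ~{levels:7.0f} levels "
          f"(~{math.ceil(stops)} useful bits)")
```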

