12 bit vs. 14 bit

Discussion in 'Nikon' started by joe_cormier, Feb 27, 2011.

  1. I have read that when shooting for HDR imagery, 14-bit produces a greater tonal range than
    12-bit. If so, why not shoot in 14-bit all the time? I have a Nikon D300S. What are the disadvantages
    of shooting in 14-bit?
    Thanks as always,
    Joe
     
  2. Slightly bigger NEF (RAW) files, and the time to process the image in camera increases. It makes no difference if you record JPEG files only.
     
  3. I generally shoot (with D300) in 14-bit, for that extra bit of latitude. But if I'm doing action stuff, I go to 12-bit for the speed gain. In practical terms, the 12-bit files are rarely distinguishable from the 14-bit flavor. But if you're dealing with highly contrasty scenes, every bit of dynamic range can help you in post.
     
  4. I've pixel peeped at 12 vs. 14 bit.
    To my eyes, I can't see a difference.
    Agreed, with heavy PP editing, 14 bit may give you a slight edge.
    Then again, if I am that concerned over dynamic range, I'll use a medium format w/ digital back.
     
  5. Use 14 bit for landscapes, where you will be crafting your image in post.
    14 bit = more info in the file = more tunability in PP.
    I have found that things like your blue skies will appear smoother if you actually "pixel peep."
    If you're not selling very large prints or don't have people taking a microscope to your images, then you won't
    see a difference.
     
  6. In digital image processing, a given dynamic range requires a corresponding bit depth. So if you have 13 stops of DR, you will need at least 13 bits.
    It does not work the other way around: if you increase the bit depth above the DR, you just have useless bits (the noise is higher than the extra bit levels).
    According to DxOMark, the D300S has a dynamic range just above 12 bits at base ISO, so it could make sense to use 14 bits, but only at base ISO (100-200).
    The answer might be different for other cameras; as an example, the D7000 is closer to 14 stops of DR (13.9 according to DxOMark) at base ISO, so there it definitely makes a difference.
    At present, it makes no sense in any camera to use 14 bit at high ISO.
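    The stops-to-bits rule above can be sketched in a few lines (a minimal illustration, assuming linear RAW encoding; the DR figures are the DxOMark numbers quoted in this thread, rounded):

```python
import math

def min_bits_for_dr(stops):
    """Minimum bit depth needed to represent a given dynamic range in
    linear (RAW-style) encoding: each stop doubles the signal, so n
    stops span a 2**n : 1 ratio and need at least n bits."""
    return math.ceil(stops)

# Illustrative DR figures from the thread:
print(min_bits_for_dr(12.2))   # D300S at base ISO -> 13, so 14-bit is not wasted there
print(min_bits_for_dr(9.44))   # D700 at ISO 3200 -> 10, so 12 bits already suffice
```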
     
  7. What are the disadvantages of shooting in 14 bit?​
    Larger image files/less card capacity, longer write times to the camera's memory card, longer in-camera operations like long-exposure noise reduction, fewer shots before the buffer fills when burst shooting, and some operations take longer in post.
    I shoot 14-bit lossless almost exclusively. The only exception is burst shooting, if I need the extra shots before the buffer overflows. I'd rather have too much data than not enough. If it saves a few shots in a year, IMO it's worth any bother. YMMV.
     
  8. ShunCheung

    ShunCheung Moderator

    12-bit vs. 14-bit capture is like cutting a pie into 12 pieces or 14 pieces; it is still the same pie or same image but now you have finer increments. My experience with 14-bit is that it may give you some subtle advantages such as slightly better shadow details in some occasions. In most situations it is hard to notice any difference.
    On the D3, D700, and D7000, I always shoot 14 bit just in case I can use the subtle advantage. On the D300/D300S, it drops to a maximum of 2.5 frames/sec in 14-bit mode. Since the D300 is my wildlife camera, I always shoot at 12 bit to get 6 to 8 frames per second for action photography. (The same applies to the D3X.)
     
  9. I wish someone could post some images shot in 12 and 14 bit and actually show the supposed 14 bit advantage. When I first bought my D300 in November, 2007, I did a test with 12 and 14 bit and even when I pulled the curve way up and way down, could see no difference at all. Perhaps there is someone here who has done a better test than that? Most of the time I don't need rapid fire so I'd shoot 14 bit if I thought there was a reason to.
     
  10. The bit depth has nothing to do with dynamic range. Black is black and white is white. The greater the bit depth, the more steps in between. The additional two bits give 4 times as many steps. You probably won't see any difference unless you do a lot of editing which affects the color or contrast, where more bits show less tendency toward posterization.
    The effect on file size (RAW only) and processing time is minimal. You paid for this feature, why not use it?
     
  11. Shun, I have to say that the pie analogy is a bit specious in my view. To say that all bit encoding schemes cover a "fixed pie" range of ZERO to 0dB (full scale) is a little misleading. The zero case I would argue is a special case indicating no output signal. Having "no pie" is not to have anything whatsoever.

    Consider two pies, one cut in half, and the other in quarters. It would be more fair to say that in the half-pie case, the pie dynamic range is from one half to one whole pie, and in the quarter-pie case, to say that the range is from one quarter to one whole pie. The lower bound on signal should be ONE and not ZERO.

    In the end, the good reasons for using 14-bit capture are that bits 13/14 make good dither bits at the least, and good archival bits. Advances in DSP may yield better signal extraction in the future. For those reasons, I'm not in the business of throwing any data away.
     
  12. The bit depth has nothing to do with dynamic range. Black is black and white is white. The greater the bit depth, the more steps in between​
    Sorry, but it doesn't work that way in RAW capture. Every stop of dynamic range is defined by doubling the light intensity from the previous f-stop. For a given DR you need a corresponding bit depth to represent it, at least in the linear scale in which most digital cameras record RAW.
     
  13. ShunCheung

    ShunCheung Moderator

    I should not have said "12-bit vs. 14-bit capture is like cutting a pie into 12 pieces or 14 pieces." Instead, using 12 bits means cutting it into 2 to the power 12 = 4096 pieces while 14 bits means cutting it into 16384 pieces.
    Regardless of how you cut it, you still have the original pie. You do not gain any dynamic range by cutting it into finer pieces. Cutting it into 4096 pieces is more than fine enough for by far the majority of the cases.
    My philosophy is to keep as much of the original information as possible. That is why I shoot either uncompressed RAW or lossless compressed RAW and keep as many bits as possible, unless doing so causes problems. On the D300/D300S, 14-bit mode means dropping to 2.5 frames/sec, which is unacceptable to me. On the D7000, lossless compressed RAW requires 2 seconds to save 1 image, leading to frequent buffer-full issues.
    That is why on the D300, I shoot 12-bit lossless compressed RAW while on the D7000, I shoot 14-bit lossy compressed RAW. On the D3 and D700, I shoot 14-bit lossless compressed RAW.
     
  14. You do not gain any dynamic range by cutting it into finer pieces​
    Yes and no. If your sensor + electronics is capable of a DR of (n) stops, then you need (n) bits to be able to use all of that DR.
    If you increase the bit depth above that (n), then your assertion applies: you don't get any additional DR (like cutting into smaller pieces).
    Now, if you decrease the bit depth below that (n), you do lose DR.
    If the abstract (n) is confusing, let's say you have a camera capable of 12 stops of DR. Then you need 12-bit depth to capture all of it. If you use 14 or 16 bits, you gain nothing other than a bigger file. If you use 10 bits, you don't have 12 stops of DR anymore; you get 10.
    All this applies to linear RAW; after you perform gamma encoding the game changes.
     
  15. ShunCheung

    ShunCheung Moderator

    Then you need 12-bit depth to capture all of it. If you use 14 or 16 bits, you gain nothing other than a bigger file. If you use 10 bits, you don't have 12 stops of DR anymore; you get 10.​
    Francisco, you lost me there. Why does bit depth have a 1-to-1 relationship with the number of stops in DR? Even with 12 bits, there are 4096 tiny steps from top to bottom; that is plenty.
     
  16. It is because of the linear capture. If we agree that for every increase of one f-stop the intensity of light has doubled, then we can represent it as in the following table:
    Every f-stop represents a doubling of light intensity.
    The light intensity scale can be represented as powers of 2.
    The exponent of the power of 2 is the same as the bit depth.
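    The table (originally posted as an image) just enumerates powers of two; a minimal sketch that regenerates it:

```python
# Linear capture: f-stop n corresponds to a relative light intensity
# of 2**n -- the same power of 2 that one more bit of depth can hold.
for stop in range(0, 15):
    print(f"f-stop {stop:2d}: relative intensity 2^{stop} = {2 ** stop}")
```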
     
  17. Shun, you are brushing aside the "zero" problem, regardless of how many slices you posit.
    Dynamic range proceeds downwards from 0dB FS (full scale). As long as you keep halving light values, and until you run out of photons to halve, you are achieving more stops of dynamic range. Zero is a special case. What you are concerned with is the range from ONE to 0dB FS.
     
  18. Francisco, you are almost right, but have it backwards. The distance between zero and one is arbitrary, and zero cannot be doubled to become anything but zero. It's a special case.
     
  19. One more comment: the fact that most of the values are in the upper bits is the argument used by the proponents of ETTR (expose to the right).
    The reason it is possible to get banding in the shadows is the few available levels there. Noise (acting as dither) helps here.
    Again, this is valid in the linear state; when you perform gamma encoding the game changes. It is like giving more levels to the shadows and taking them away from the highlights.
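    The "most values are in the upper bits" point can be made concrete. In a 12-bit linear file, each stop down from clipping has half as many levels as the stop above it (a sketch of the arithmetic, not measured camera data):

```python
# Levels available inside each stop of a 12-bit linear file.
total = 2 ** 12                    # 4096 levels overall
for stop in range(1, 7):
    levels = total // 2 ** stop    # levels within the stop `stop` stops below clipping
    print(f"stop -{stop}: {levels} levels")
# The brightest stop alone holds 2048 of the 4096 levels; six stops
# down only 64 remain, which is where shadow banding can appear.
```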
     
  20. Francisco, you are almost right, but have it backwards. The distance between zero and one is arbitrary, and zero cannot be doubled to become anything but zero. It's a special case.​
    I did not use zero (0) in the light intensity scale, I used it in the fstop scale as a starting point of reference.
     
  21. ShunCheung

    ShunCheung Moderator

    Francisco, a 14-stop dynamic range merely means the difference from the brightest to darkest. It is not necessary to be able to represent every little step in the middle. You can still have a fine looking image without those details.
    Francisco, I got it. I understand your explanation. Thank you.
     
  22. You are right about the linearity and correlation of bits with stops in terms of sensor data.
     
  23. ShunCheung

    ShunCheung Moderator

    But back to the original question: the D300/D300S does not even have 12 stops of dynamic range. Therefore, the DR argument is moot.
    Additionally, as far as I know, the D300 and D3X's sensors can only capture 12 bits. The extra two bits are merely calculated. That is why those DSLRs' frame rates slow down dramatically in 14-bit mode.
    The D3, D700, D3S and D7000 have sensors that can capture 14 bits natively.
     
  24. Francisco, a 14-stop dynamic range merely means the difference from the brightest to darkest. It is not necessary to be able to represent every little step in the middle. You can still have a fine looking image without those details.​
    This would be equivalent to DR compression, but common digital cameras don't work that way.
    I completely agree that you can have fine-looking images without those details.
     
  25. The D300 and D3x capture using a sub-MHz read clock in 14-bit mode, which slows the process down quite a bit, but makes for fewer read errors. They both produce more than 12 good bits. Whether bits 13/14 are any better than 'good dither' bits is up for debate.
     
    But back to the original question: the D300/D300S does not even have 12 stops of dynamic range. Therefore, the DR argument is moot.
    Additionally, as far as I know, the D300 and D3X's sensors can only capture 12 bits. The extra two bits are merely calculated. That is why those DSLRs' frame rates slow down dramatically in 14-bit mode.​
    The case of the D300/D300S is an interesting one. There are different guesses about how those cameras handle the 14-bit conversion, as the in-sensor A/D converters are 12 bit.
    Since Nikon does not disclose how they do it, we cannot be sure. Anyway, the guess that IMHO is the most likely is that it uses a proven technique of performing multiple A/D conversions (in this case 4) of the same analog sample (that's why the speed is reduced by almost 4 times).
    The idea would be to average those 4 samples, with the end result of some reduction in read noise.
    To avoid increasing processing time even more, the division is not performed. The 4 samples are just added, and that's why you go from 12 to 14 bits (four 12-bit values sum to a number of up to almost 16384, the range of 14 bits).
    So in this case, what you really have is a 12-bit significant number with less noise, not a true 14 bits.
    Anyway, let me emphasize that this is just a guess about how the Nikon D300/D300S gets the 14 bits. It may be a completely different approach.
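    That multiple-conversion guess can be simulated in a few lines. Everything here is hypothetical, matching the speculation above rather than any documented Nikon behavior: the 4x read, the summing, and the read-noise level are all assumptions.

```python
import random

def read_12bit(true_level, read_noise_sigma=2.0):
    """One simulated 12-bit A/D conversion with Gaussian read noise
    (the noise level is an arbitrary assumption for illustration)."""
    sample = round(true_level + random.gauss(0.0, read_noise_sigma))
    return min(max(sample, 0), 4095)      # clamp to the 12-bit range

def read_14bit_summed(true_level):
    """Four 12-bit reads of the same analog sample, summed rather than
    averaged: the result spans roughly a 14-bit range (4 * 4095 = 16380)
    while the relative read noise drops by about sqrt(4) = 2."""
    return sum(read_12bit(true_level) for _ in range(4))

random.seed(1)
print(read_14bit_summed(1000))   # close to 4 * 1000: 12 significant bits, scaled up
```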
     
  27. Interestingly, dpreview.com measured a higher dynamic range from the D300 than from the D700...
     
  28. According to DxOMark data, what happens with the D700 is that at low ISO the read noise is higher than the shot noise, so the camera is not able to take full advantage of its sensor's capabilities.
    For a comparison of DR between the D300S and D700, select Dynamic Range on DxOMark; you can see how the DR vs. ISO curve of the D300S is linear, while the curve of the D700 only becomes linear after ISO 800.
    If the D700 had lower read noise, one could extrapolate the values and it could reach almost 14 stops of DR at base ISO.
    Anyway, even for the 12.15 stops of DR at base ISO of the D700, you need at least 13 bits.
    From these graphs it should become evident that a bit depth of 14 bits makes sense only at low ISO. (E.g., the D700 at ISO 3200 has a DR of 9.44 stops; not even 12 bits are needed.)
     
  29. One other issue with 14-bit on the D300 (and I'm sure the S too) is that the shutter lag just about doubles from 45ms or so to 85ms. That is even worse to me than 2.5fps.
     
    Sorry, but it doesn't work that way in RAW capture. Every stop of dynamic range is defined by doubling the light intensity from the previous f-stop.
    I think you meant to say that each bit represents a doubling of intensity. Regardless, your understanding of image encoding is completely false. You continue to build arguments based on a false assumption.
     
  31. One other issue with 14-bit on the D300 (and I'm sure the S too) is that the shutter lag just about doubles from 45ms or so to 85ms. That is even worse to me than 2.5fps.​
    Yes, it's very disturbing, which is another reason I don't use it. 12 bits is enough for me in any case; it sure beats 8 bits!
     
  32. I think you meant to say that each bit represents a doubling of intensity. Regardless, your understanding of image encoding is completely false. You continue to build arguments based on a false assumption.​
    So you don't agree that for 14 f-stops of DR you need at least 14 bits in linear (RAW) capture?
    Note: bit depth will not increase DR, but it can limit it.
     
  33. Edward, I was also wondering why you thought each bit did not represent a doubling of intensity. The numbers coming off the sensor are linear, so why wouldn't that be the case for those numbers?
     
  34. And what is bit depth, in English?
     
  35. And what is bit depth, in English?​
    It is the number of bits used to quantify the signal. In the case of Nikon models like the D3 series, D300 series, and many others, it can be either 12 or 14 bits for RAW (you select it via the menus).
    The maximum number that can be represented with 12 bits is 4095 (2^12 - 1), and 16383 for 14 bits (2^14 - 1).
    JPEG uses 8 bits, but the issues discussed here (about the relation of dynamic range and minimum number of bits) don't apply directly, since it is a processed format where a nonlinear curve has been applied.
     
  36. Per pixel?
     
  37. ShunCheung

    ShunCheung Moderator

    Right 12 bits or 14 bits per pixel. Please keep in mind that we are talking about Nikon DSLRs whose sensors have the Bayer pattern so that each pixel has only 1 color channel.
     
  38. 1 bit = either black or white, either 0 or 1 (2 tones)
    2 bits = black, 2 shades of gray, and white (4 tones)
    3 bits = black, 6 shades of gray, and white (8 tones)
    4 bits = black, white, and 14 shades of gray (16 tones)
    5 bits = 32 tones
    6 bits = 64 tones
    7 bits = 128 tones
    8 bits = 256 tones (expressed as 0-255)
    9 bits = 512 tones
    10 bits = 1,024 tones
    11 bits = 2,048 tones
    12 bits = 4,096 tones
    13 bits = 8,192 tones
    14 bits = 16,384 tones
    15 bits = 32,768 tones
    16 bits = 65,536 tones
    Now here is the thing to understand, if it has not been mentioned earlier: every photo site, or pixel as you are calling them, in a sensor is actually monotone. It can only see/record a shade of gray. The color is made by capping each photo site with a red, green, or blue filter. 99 times out of 100 the ratio is 1:2:1, i.e. 1 red, 2 green, and 1 blue. This 1:2:1 ratio has been found to pretty well match the sensitivity of the healthy young human visual system to color.
    Color for each point (photosite/pixel) is created by an algorithm that looks at the values of a pixel and the pixels that surround it and makes an estimate of what the other two color values should be. This ordering of filters and interpolation of color values is called, respectively, the Bayer matrix and the Bayer demosaicing algorithm. It is called "Bayer" because if you think about this too much for long periods of time you will get a massive headache.
    So this means that any color in a 14-bit-per-channel image is made up of a combination of any of 16,384 x 16,384 x 16,384 tones.
    More bits are important for three main reasons:
    • the more samples, the smoother the rendition of a range of tones or colors
    • more fine detail in the highlights can be recorded as discrete values by the camera's visual system (light-recording sensor, analog-to-digital conversion, and other signal processing and encoding processes), since data is denser in the highlights than in the shadows
    • the more data (after raw processing) post-processing programs like Photoshop have at their disposal as headroom to absorb rounding errors in the millions of calculations necessary to create the final photograph
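    The tone counts in the list above, and the per-pixel color count, are just powers of two; a quick check:

```python
# Tones per channel at each bit depth, as in the list above.
for bits in (1, 8, 12, 14, 16):
    print(f"{bits:2d} bits = {2 ** bits:,} tones per channel")

# After Bayer demosaicing, each output pixel carries three such channels.
colors_14bit = (2 ** 14) ** 3
print(f"14-bit color: {colors_14bit:,} possible combinations per pixel")
```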
     
  39. I shoot a D300S at 14-bit uncompressed RAW NEFs, not because I know squat about what you're talking about (yet), but indeed the tonal range is stretched out, facilitating more editing latitude. I do appreciate your (Ellis) effort, et al., and will continue to follow this most interesting thread, one that Nikon ignores.
    Some minor pix.
     
  40. Pat, you do have more editing latitude, but it is not because the end points are stretched out (by which I think you mean further apart): absolute detail-less black and absolute detail-less white are still absolutes. It is because the gradations between the two absolutes, the sampling of the continuous range of tone, become ever finer, creating a better illusion of continuous tone and color, even after the (usually) unintentional damage done by the processes involved.
     
  41. Pat You do have more editing latitude, but it is not because the end points are stretched out , by which I think you mean further apart,
    On reflection, I see that I am wrong about this. A 14-bit capture obviously has a longer tonal scale than a 12-bit capture: there is more distance between absolute black and absolutely specular no-detail white. It is just that the extra dynamic range is concentrated mostly in the highlights where, as I said before, there is a lot more data/information.
     
  42. I want to SEE the evidence with my eyes. I don't want to just see NUMBERS. I don't see any advantage until I can see PROOF.
    Anyone?
     
  43. ShunCheung

    ShunCheung Moderator

    I want to SEE the evidence with my eyes. I don't want to just see NUMBERS. I don't see any advantage until I can see PROOF.
    Anyone?​
    Dave, how about run some A/B comparisons to convince yourself?
    When can we expect your results?
     
  44. Shun, I'm slammed in school right now, just have enough time to take a break and read Photo.net forums :)
    I figured there are plenty of people here who say they shoot in 14 bit and notice a difference who could share their results.
    I did do a test with my D300 in 2007 shortly after purchasing it, shooting at various ISOs in a dimly lit room. At least in the shadows, there is absolutely no difference between 12 and 14 bit whatsoever! I didn't test it with highlights at ISO 200 outside, though. Hadn't thought of that. Will do a test next summer while hiking near Mt. Rainier. Lots of highlight information on that mountain in the sunlight.
     
  45. ShunCheung

    ShunCheung Moderator

    Dave, the problem is that over the web, we are severely limited by the 8-bit JPEG images we can post. There is no way you can show the difference between 12-bit RAW and 14-bit RAW with 8-bit JPEGs. Therefore, you pretty much have to do some experiments yourself.
    My background is shooting slide film with 5 stops of dynamic range. Having 7 or 8 stops of dynamic range is already a luxury. I think you have to look pretty hard to find subjects with 14 stops of dynamic range to see a difference.
     
  46. This is a good read:
    http://www.earthboundlight.com/phototips/nikon-d300-d3-14-bit-versus-12-bit.html
     
