
12 bit vs. 14 bit


joe_cormier


I generally shoot (with the D300) in 14-bit, for that extra bit of latitude. But if I'm doing action stuff, I go to 12-bit for the speed gain. In practical terms, the 12-bit files are rarely distinguishable from the 14-bit flavor. But if you're dealing with highly contrasty scenes, every bit of dynamic range can help you in post.

Use 14-bit for landscapes, where you will be crafting your image in post.

14 bit = more info in the file = more tunability in PP.

I have found that things like blue skies will appear smoother if you actually "pixel peep."

If you're not selling very large prints or don't have people taking a microscope to your images, then you won't see a difference.


In digital image processing, for a given dynamic range you need a corresponding bit depth. So if you have 13 stops of DR, you will need at least 13 bits.

It does not work the other way around: if you increase the bit depth above the DR, you just have useless bits (the noise is higher than the extra bit levels).

According to DXOMark, the D300s has a dynamic range just above 12 stops at base ISO, so it could make sense to use 14 bits, but only at base ISO (100-200).

The answer might be different for other cameras; for example, the D7000 is closer to 14 stops of DR (13.9 according to DXOMark) at base ISO, so there it definitely makes a difference.

At present, it makes no sense in any camera to use 14 bits at high ISO.
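As a rough illustration of that stops-to-bits argument, here is a minimal Python sketch. The 12.2 figure is only a stand-in for "just above 12," and real sensors add read noise that blurs the bottom stops, so treat this as a toy model.

```python
import math

def min_bits_for_stops(stops: float) -> int:
    """Minimum bit depth needed to hold `stops` of DR in a linear encoding.

    In linear RAW the brightest stop spans half of all code values, the next
    stop a quarter, and so on; after `stops` halvings there must still be at
    least one code value left, so you need ceil(stops) bits.
    """
    return math.ceil(stops)

# Approximate DR figures quoted above (12.2 is an assumed stand-in).
for camera, dr in (("D300s", 12.2), ("D7000", 13.9)):
    print(f"{camera}: {dr} stops of DR -> at least {min_bits_for_stops(dr)} bits (linear)")
```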

 


"What are the disadvantages of shooting in 14 bit?"

Larger image files/less card capacity, longer write times to the camera's memory card, longer in-camera operations like long-exposure noise reduction, fewer shots before the buffer fills when burst shooting, and some operations take longer in post.
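To put a rough number on the file-size point, here is a back-of-the-envelope sketch only; it assumes the D300's roughly 12.3 MP sensor, counts raw Bayer data alone, and ignores compression and metadata, so real NEF sizes will differ.

```python
# Raw Bayer data only: pixels * bits per pixel, no compression or metadata.
pixels = 12.3e6                  # approximate D300 pixel count (assumption)

for bits in (12, 14):
    megabytes = pixels * bits / 8 / 1e6
    print(f"{bits}-bit: roughly {megabytes:.1f} MB of raw sensor data per frame")
```

Before compression, that works out to roughly a 17% difference in size between the two modes.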

I shoot 14-bit lossless almost exclusively. The only exception is for burst shooting, if I need the extra shots before the buffer overflows. I'd rather have too much data than not enough. If it saves a few shots in a year, IMO it's worth any bother. YMMV.


12-bit vs. 14-bit capture is like cutting a pie into 12 pieces or 14 pieces; it is still the same pie, or the same image, but now you have finer increments. My experience with 14-bit is that it may give you some subtle advantages, such as slightly better shadow detail on some occasions. In most situations it is hard to notice any difference.

On the D3, D700, and D7000, I always shoot 14-bit just in case I can use the subtle advantage. The D300/D300S drops to a maximum of 2.5 frames/sec in 14-bit mode. Since the D300 is my wildlife camera, I always shoot it at 12-bit to get 6 to 8 frames per second for action photography. (The same applies to the D3X.)


I wish someone could post some images shot in 12- and 14-bit and actually show the supposed 14-bit advantage. When I first bought my D300 in November 2007, I did a test with 12- and 14-bit and, even when I pulled the curve way up and way down, could see no difference at all. Perhaps there is someone here who has done a better test than that? Most of the time I don't need rapid fire, so I'd shoot 14-bit if I thought there was a reason to.

The bit depth has nothing to do with dynamic range. Black is black and white is white. The greater the bit depth, the more steps in between. The additional two bits give four times as many steps. You probably won't see any difference unless you do a lot of editing that affects the color or contrast, where the extra steps mean less tendency to show posterization.

The effect on file size (RAW only) and processing time is minimal. You paid for this feature; why not use it?


Shun, I have to say that the pie analogy is a bit specious in my view. To say that all bit-encoding schemes cover a "fixed pie" range of ZERO to 0 dB (full scale) is a little misleading. The zero case, I would argue, is a special case indicating no output signal. Having "no pie" is not to have anything whatsoever.

Consider two pies, one cut in half, and the other in quarters. It would be more fair to say that in the half-pie case the dynamic range is from one half to one whole pie, and in the quarter-pie case the range is from one quarter to one whole pie. The lower bound on signal should be ONE and not ZERO.

In the end, the good reasons for using 14-bit capture are that bits 13 and 14 make good dither bits at the least, and good archival bits. Advances in DSP may yield better signal extraction in the future. For those reasons, I'm not in the business of throwing any data away.


"The bit depth has nothing to do with dynamic range. Black is black and white is white. The greater the bit depth, the more steps in between."

Sorry, but it doesn't work that way in RAW capture. Every stop of dynamic range is defined by doubling the light intensity from the previous f-stop. For a given DR you need a corresponding bit depth to represent it, at least on a linear scale, which is how most digital cameras record RAW.

 


I should not have said "12-bit vs. 14-bit capture is like cutting a pie into 12 pieces or 14 pieces." Instead, using 12 bits means cutting it into 2 to the power 12 = 4096 pieces, while 14 bits means cutting it into 16384 pieces.

Regardless of how you cut it, you still have the original pie. You do not gain any dynamic range by cutting it into finer pieces. Cutting it into 4096 pieces is more than fine for by far the majority of cases.

My philosophy is to keep as much of the original information as possible. That is why I shoot either uncompressed RAW or lossless compressed RAW and keep as many bits as possible, unless doing so causes problems. On the D300/D300S, 14-bit mode means dropping to 2.5 frames/sec, which is unacceptable to me. On the D7000, lossless compressed RAW requires 2 seconds to save 1 image, leading to frequent buffer-full issues.

That is why on the D300 I shoot 12-bit lossless compressed RAW, while on the D7000 I shoot 14-bit lossy compressed RAW. On the D3 and D700, I shoot 14-bit lossless compressed RAW.


"You do not gain any dynamic range by cutting it into finer pieces."

Yes and no. If your sensor and electronics are capable of a DR of n stops, then you need n bits to be able to use all of that DR.

If you increase the bit depth above that number n, then your assertion applies: you don't get any additional DR (it is like cutting into smaller pieces).

Now, if you decrease the bit depth below that number n, you do lose DR.

If the variable is confusing, let's say you have a camera capable of 12 stops of DR. Then you need 12 bits of depth to capture all of it. If you use 14 or 16 bits you gain nothing other than a bigger file. If you use 10 bits, you no longer have 12 stops of DR; you get 10.

All of this applies to linear RAW; after you perform gamma encoding, the game changes.
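A tiny numeric sketch of that claim (idealized and noise-free, so an illustration rather than a measurement): quantize a linear signal spanning 12 stops at different bit depths and count how many stops still land at or above one code value.

```python
# Idealized: a noise-free linear signal with full scale at the top code value.
def surviving_stops(sensor_stops: int, bits: int) -> int:
    full_scale = 2 ** bits - 1
    # A stop "survives" if, after halving the signal that many times,
    # it still maps to at least one code value above black.
    return sum(1 for stop in range(sensor_stops)
               if full_scale / 2 ** stop >= 1.0)

for bits in (10, 12, 14):
    print(f"{bits}-bit file: {surviving_stops(12, bits)} of 12 stops survive")
```

In this toy model, 10 bits hold only 10 of the 12 stops, 12 bits hold all of them, and 14 bits add nothing further, which is the point being made above.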


"Then you need 12 bits of depth to capture all of it. If you use 14 or 16 bits you gain nothing other than a bigger file. If you use 10 bits, you no longer have 12 stops of DR; you get 10."

Francisco, you lost me there. Why does bit depth have a 1-to-1 relationship with the number of stops of DR? Even with 12 bits, there are 4096 tiny steps from top to bottom; that is plenty.


It is because of the linear capture. If we agree that for every increase of one f-stop the intensity of light is doubled, then we could represent it as in the following table:

Every f-stop represents a doubling of light intensity.

The light intensity scale can be represented as powers of 2.

The exponent of the power of 2 is the same as the bit depth.

[Attached image: table relating f-stops, light intensity as powers of 2, and bit depth]
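The relationship that table describes can be sketched for a 12-bit linear file like this (a reconstruction of the idea, not the original attachment): each stop down the scale covers only half as many raw code values as the stop above it.

```python
bits = 12                                  # linear 12-bit RAW
for stop in range(bits, 0, -1):            # from the brightest stop down
    lo, hi = 2 ** (stop - 1), 2 ** stop    # code-value range covered by this stop
    print(f"stop {stop:>2}: raw values {lo:>4}..{hi - 1:>4}  ({hi - lo} levels)")
```

The top stop alone takes 2048 of the 4096 values, while the bottom stop is left with a single level.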


Shun, you are brushing aside the "zero" problem, regardless of how many slices you posit.

Dynamic range proceeds downwards from 0 dB FS (full scale). As long as you keep halving light values, and until you run out of photons to halve, you are achieving more stops of dynamic range. Zero is a special case. What you are concerned with is the range from ONE to 0 dB FS.


One more comment: the fact that most of the levels are devoted to the upper stops is the argument used by proponents of ETTR (expose to the right).

The reason it is possible to get banding in the shadows is the small number of levels available there. Noise helps here.

Again, this is valid in the linear state; when you perform gamma encoding, the game changes. It is like giving more levels to the shadows and taking them away from the highlights.
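A rough sketch of that redistribution, using a plain 1/2.2 power curve as a stand-in for whatever tone curve a converter actually applies (real curves differ in detail): it counts how many 8-bit output codes each stop receives, linear versus gamma-encoded.

```python
CODES = 255          # full scale of an 8-bit output
GAMMA = 2.2          # stand-in encoding gamma (assumption)

def codes_in_stop(stop_from_top: int, gamma: float) -> float:
    hi = 2.0 ** -stop_from_top      # top of this stop as a fraction of full scale
    lo = hi / 2.0                   # bottom of this stop
    return CODES * (hi ** (1 / gamma) - lo ** (1 / gamma))

for stop in range(8):
    linear = codes_in_stop(stop, 1.0)
    encoded = codes_in_stop(stop, GAMMA)
    print(f"stop {stop + 1} from the top: linear {linear:6.1f} codes, "
          f"gamma {encoded:5.1f} codes")
```

With these assumptions, the top stop drops from about 128 codes to about 69, while the eighth stop down grows from about 1 code to about 8, which is the "giving more levels to the shadows" effect described above.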


"Francisco, you are almost right, but have it backwards. The distance between zero and one is arbitrary, and zero cannot be doubled to become anything but zero. It's a special case."

I did not use zero (0) in the light intensity scale; I used it in the f-stop scale as a starting point of reference.


(Struck out:) Francisco, a 14-stop dynamic range merely means the difference from the brightest to darkest. It is not necessary to be able to represent every little step in the middle. You can still have a fine-looking image without those details.

Francisco, I got it. I understand your explanation. Thank you.


But back to the original question: the D300/D300S does not even have 12 stops of dynamic range. Therefore, the DR argument is moot.

Additionally, as far as I know, the D300's and D3X's sensors can only capture 12 bits; the extra two bits are merely calculated. That is why those DSLRs' frame rates slow down dramatically in 14-bit mode.

The D3, D700, D3S, and D7000 have sensors that can capture 14 bits natively.


"Francisco, a 14-stop dynamic range merely means the difference from the brightest to darkest. It is not necessary to be able to represent every little step in the middle. You can still have a fine-looking image without those details."

That would be equivalent to DR compression, but common digital cameras don't work that way.

I completely agree that you can have fine-looking images without those details.

