
12bit v. 14bit


w_j1


Is there really a difference that can be seen by the human eye? I am buying a new camera, and money really isn't an issue. I currently use two 5Ds, have NO problems, and love the full frame with my L lenses. The question is a new 1Ds Mark II with the 16.7MP full frame, or the new 1D Mark III with 10.2MP and the 1.3x crop. Speed is NOT an issue; I have never run into a situation at my 5D's 3 or 4fps rate where I said "man, I wish it was just a little faster." So the big issue for me is picture quality: will I really be able to tell a big difference between the 10.2MP 14-bit and the 16.7MP 12-bit? Thanks.


Digital works in a way that a lot of people don't understand: a linear 12-bit file gives you 4096 possible levels, but of these, 2048 are used in the brightest stop, 1024 are used to describe the next stop down, 512 describe the third stop down, and so on and so forth.

 

When you get down into the shadow areas you don't actually have a lot of bits left to give you shadow detail, and as luck would have it, that's the very region our eyes are most sensitive to.

 

So, to stop short of writing a book here: the extra 2 bits potentially let you see four times as much shadow detail without posterisation.
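The halving pattern above is easy to sketch numerically. A small Python illustration (the function name and stop range are mine, not from the thread):

```python
def levels_in_stop(bits: int, stop: int) -> int:
    """Levels a linear ADC devotes to the Nth stop below clipping."""
    return 2 ** bits // 2 ** stop

for bits in (12, 14):
    row = [levels_in_stop(bits, s) for s in range(1, 7)]
    print(f"{bits}-bit, stops 1-6 below clipping: {row}")
```

Six stops down, a 12-bit file has only 64 levels left while a 14-bit file still has 256, which is where the extra shadow headroom comes from.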

 

Can you see it? Yes you can.

 

Cheers,

 

Colin


To put it in numbers:

 

12 bit = 4,096 levels of gradation (tones)

 

14 bit = 16,384 levels of gradation (tones)

 

Whether you can see the difference in tonality between the two images, only YOU can tell. I think that with a properly exposed picture, printed accordingly, most people should see the difference, however subtle. Having that many more levels of gradation is a great thing. Imagine when we'll have true 16-bit A/D conversion!


Of course you have to remember that JPEGs are 8-bit by definition and only contain information on 256 intensity levels for each color.

 

You probably can't detect more than 256 intensity levels by eye anyway!

 

Do 14-bits give you more than 12-bits? On some images you may see a difference, especially in shadow detail after suitable processing of the image and conversion to a JPEG. Other images may look identical.

 

The bottom line is that it's better to have more bits, but there is a limit to how many you need, or how many are useful given a certain level of sensor noise. As in most things, it's a case of diminishing returns: 8 is good, 10 is better, 12 is a bit better than that, 14 is a little better than 12, 16 might be very slightly better than 14 in some special situations, and 18 probably offers no real advantage over 16.
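The diminishing returns can be put in numbers. A rough sketch (the half-step error model is a standard idealization for a linear quantizer, not something from this thread):

```python
def max_quant_error(bits: int) -> float:
    """Worst-case quantization error, as a fraction of full scale."""
    return 0.5 / 2 ** bits  # half of one code step

for bits in (8, 10, 12, 14, 16):
    print(f"{bits}-bit: +/- {max_quant_error(bits):.1e} of full scale")
```

Each 2 extra bits shrink the worst-case error by 4x, but once that error falls below the sensor's own noise, the extra precision is just digitizing noise.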


Yes, if you get the exposure wrong and have to do a lot of manipulation, more bits help, as long as they are bits which contain information, not just noise. 12 good bits are certainly better than 8, and 4 bits would be horrible. However, that doesn't mean you'd get as much improvement again by adding 4 more bits and going to 16.

OK, I understand the opinions, but really I would like to know which you guys would choose. Given the choice of either the Mark III 10.2MP or the Mark II 16.7MP, if someone said you can pick up either box and it's yours, what would the choice be?

hmmmm. The 1Ds-II is all about image quality. The 1D-III is all about speed... but it is newer.

 

And they both are weather-sealed with top-notch AF systems (although the 1D-III's is superior, until you hit the AI servo bug).

 

I honestly would want to compare images from both before deciding. The 1D-III should clearly be better than a 5D (the 5D sensor is essentially the same generation as the 1D-II's, just bigger), but I do not know how the 1D-III would compare to the 1Ds-II sensor. Is it possible they are the same sensor? I suspect the pixel density is similar, no? (I have not run the math.)
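Running that pixel-density math is a quick back-of-envelope check. The pixel counts and sensor widths below are the commonly published specs; treat them as approximate inputs of mine, not figures from this thread:

```python
def pitch_um(width_mm: float, px_across: int) -> float:
    """Approximate pixel pitch in microns along the sensor's width."""
    return width_mm * 1000.0 / px_across

# 1Ds Mark II: 4992 px across a ~36.0 mm full-frame sensor
# 1D Mark III: 3888 px across a ~28.7 mm APS-H sensor
print(f"1Ds-II pitch: {pitch_um(36.0, 4992):.2f} um")  # ~7.2 um
print(f"1D-III pitch: {pitch_um(28.7, 3888):.2f} um")  # ~7.4 um
```

So the pixel pitches really are similar, even though the different dimensions and pixel counts mean they cannot literally be the same sensor.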

 

I have no CLUE how 14bit vs 12bit affects image quality on a *practical* level.


Why do people always relate bits to stops? We're not seriously thinking we get 12 stops in a 30D shot or 14 stops in a 40D shot, are we?

 

The more bits a cam uses, the more levels it can use to "describe" each dot in the image, but how many stops does it "slice" up?

 

Can it really physically capture more stops, or can it use thinner slices to describe the same number of stops as before?


No, increased bit depth doesn't necessarily capture a greater dynamic range. Dynamic range (the height of the staircase) is independent of bit depth (the number of stairs). You can have tiny stairs or tall stairs, all making the same-sized staircase.

 

People use the term stop, I think, generally to refer to a doubling of light or signal. Often the 12-bit vs 14-bit debate uses "stop" to describe the fact that, within any given stop of the dynamic range, more distinct levels of signal can be recorded.


In fact, when it comes to bit depth in a sensor, an extra bit corresponds exactly to an extra stop of sensitivity: the numbers in the raw files are proportional to the number of electrons captured, which is proportional to the number of photons landing on the site.

 

However, in low-light situations and at the top of the ISO range, I remain skeptical that the extra two bits will gain you anything, and that is exactly where you would be most inclined to need them.

 

If you read carefully through the experiments and data on this page (and the surrounding ones):

http://www.clarkvision.com/imagedetail/evaluation-1d2/

 

you'll see that for the 1D2, and the other models tested, at the highest ISO levels the camera is counting every single electron released by an incident photon. That's a function of the photosite size and the intensity of the light: putting a 14-bit converter where previously you had a 12-bit one won't let you count those individual electrons with any more accuracy, and if in the dimmest areas only a single electron was there to be counted, you can't improve the image with more bits.

 

Where a 14-bit converter should show an improvement (disregarding any accompanying improvement in the quantum conversion of photons to electrons) is in the highlights: at the same maximum amplification, where a 12-bit converter would max out at 4096 electrons' worth of photons, a 14-bit converter will count four times as many, a two-stop improvement, before hitting the end stop.
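One way to frame this noise-versus-bits argument: usable dynamic range is capped by whichever is smaller, what the ADC can encode (roughly one stop per bit for a linear encoding) or what the sensor itself delivers. A sketch with hypothetical full-well and read-noise numbers (my example figures, not measured values from the thread):

```python
import math

def dr_stops(full_well_e: float, read_noise_e: float, adc_bits: int) -> float:
    """Usable dynamic range in stops: min of sensor DR and ADC DR."""
    sensor_dr = math.log2(full_well_e / read_noise_e)
    return min(sensor_dr, float(adc_bits))

# Hypothetical: 50,000 e- full well, 4 e- read noise -> sensor DR ~13.6 stops
print(dr_stops(50_000, 4, 12))  # 12.0: ADC-limited
print(dr_stops(50_000, 4, 14))  # ~13.6: now sensor-limited
```

At high ISO the noise term swamps the extra precision, which matches the skepticism above about high-ISO gains; the 14-bit converter mostly pays off near base ISO.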


  • 7 months later...
The 14-bit sensor allows me to get more tonal gradations in my RAW workflow with Lightroom. The Recovery feature lets me bring back highlight detail you wouldn't think possible. My images look a lot more like film now. And once we get 16-bit sensors, film will look even less appealing to many professionals.

Many of these theoretical estimates of the stops covered by x bits assume that 0 electrons corresponds to a reading of 0. Well, it does, but then so do several electrons, corresponding to the noise floor that allows for read noise, ADC conversion noise, etc. So the staircase, instead of starting at ground level, starts from the mezzanine, and the "correspondence" between stops and bits is not exact.

 

Extra bits do help whenever there is a large number of transformations of the RAW data, by reducing the cumulative impact of rounding errors. This is most beneficial for shadow areas, where there are few significant bits to begin with. At a minimum, pixel values will pass through transformations for de-mosaicing, white balance, perhaps noise reduction, and certainly some sharpening. Don't assume that extra bits are all about added highlight detail.
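That rounding-error argument can be sketched directly. A toy model (the gain values and shadow tone below are arbitrary choices of mine): quantize a deep-shadow value once at capture, push it through a chain of multiplicative edits with re-quantization after each step, and compare against exact arithmetic.

```python
def edit_chain(value: float, bits: int, gains: list) -> float:
    """Quantize, apply each gain with re-rounding, return final value."""
    levels = 2 ** bits
    q = round(value * levels)       # capture-time quantization
    for g in gains:
        q = round(q * g)            # white balance, curves, etc.
    return q / levels

gains = [1.18, 0.91, 1.07, 0.96]    # arbitrary example adjustments
shadow = 0.004                      # a deep-shadow tone
exact = shadow
for g in gains:
    exact *= g
for bits in (12, 14):
    err = abs(edit_chain(shadow, bits, gains) - exact)
    print(f"{bits}-bit error after 4 edits: {err:.2e}")
```

For these inputs the 14-bit pipeline lands several times closer to the exact result. Real raw converters do their math at higher internal precision, but the capture-time quantization still limits what is in the shadows to begin with.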

