
8 bit to 16 bit conversion



So last night I was at home playing with Photoshop, trying to understand what happens when an 8-bit image is heavily edited.

 

This was my test method:

 

512x512 document, 8-bit grayscale.

 

Drew a diagonal gradient from black to white.

Created a brightness/contrast adjustment layer at -90 contrast.

Created another brightness/contrast adjustment layer at +90 contrast.

 

The resulting image showed the loss of range, and the stepping was very obvious.
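The stepping can be simulated outside Photoshop. The sketch below (Python/NumPy) uses a simple linear contrast mapping as a rough stand-in for Photoshop's -90/+90 sliders (the real curves differ) and rounds to 8-bit integer levels after each step, which is what working in 8-bit mode effectively does:

```python
import numpy as np

# Linear contrast about mid-gray; an illustrative approximation of
# Photoshop's brightness/contrast sliders, not the actual curves.
def contrast(img, k):
    return np.clip((img - 128.0) * k + 128.0, 0, 255)

ramp = np.linspace(0, 255, 512)          # smooth black-to-white gradient

# 8-bit pipeline: values are rounded to 256 levels after each step
low  = np.round(contrast(ramp, 0.1))     # strong contrast reduction
high = np.round(contrast(low, 10.0))     # re-expand the range

print(len(np.unique(high)))   # only ~27 distinct levels -> visible stepping
```

The compression squeezes the gradient into a couple dozen integer levels, and the re-expansion spreads those few levels back across the full range, which is exactly the banding described above.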

 

Flattened the image, and then converted to 16-bit.

 

 

Now, the strange part: there is now a full gradient, with no stepping! It looks like the conversion to a greater bit depth has increased the range.

 

So my question is: does the 8-bit to 16-bit conversion interpolate bits?

 

The other strange thing:

 

Using the same version of Photoshop at work, I don't get the same effect; the stepping is still apparent in the converted 16-bit file! Is there an option somewhere that enables bit-depth interpolation?


It may be that if you still have the layers (i.e., the image is not flattened) when you convert the image to 16 bits, Photoshop recomputes the result by reapplying the layers in 16-bit mode, thus giving you a better gradient. Did you flatten the image in the second case before converting to 16 bits?


 

Kevin,

 

What screen display bit depth are you viewing the files at? Is it different for each display, and is THAT what you are seeing?

 

Also, just try opening the smooth file on the other computer to see whether it also displays smoothly.

 

Also, try printing! What a surprise you may get there!

 

I often apply a sub-pixel Gaussian blur when converting from 8-bit to 16-bit to help eliminate "picket fencing" in the histogram while keeping the image smooth yet detailed enough for re-sharpening, and to spread in-between colors and tones so they take advantage of the additional bit depth. Real pictures do have smooth gradients like your test image, and they also often have a vastly reduced color count, sometimes "only" 50,000 colors, even though the bit depth could carry much more than that.
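A minimal sketch of that idea (NumPy only; the 3-tap kernel is an illustrative choice, not a true sub-pixel Gaussian or Photoshop's filter): blurring the quantized 8-bit values in floating point before scaling up creates in-between tones that fill the histogram gaps.

```python
import numpy as np

row8 = np.linspace(0, 255, 512).astype(np.uint8)   # quantized gradient

# Small blur kernel standing in for a sub-pixel Gaussian
kernel = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(row8.astype(np.float64), kernel, mode="same")

# Scale 0..255 up to 0..65535; the blurred fractions land between
# the original 256 levels, filling the "picket fence" gaps
row16 = np.round(blurred * 257).astype(np.uint16)

print(len(np.unique(row8)))    # 256 source levels
print(len(np.unique(row16)))   # many more in-between levels
```

Without the blur, every 16-bit value would be an exact multiple of 257 and the histogram would show the same 256 spikes as the 8-bit original.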

 

Regarding 15 bits of image and 1 bit of noise: please tell us more in support of your "1-bit of noise" comment. Do you mean a 15-bit Photoshop image does... what on opening in a "true" 16-bit program? Does it spread out to 16 bits, or stay "compressed" while the subsequent program adds a bit of non-image non-information? I'd think that, if anything, a 15-bit image would extrapolate out to 16 bits just as an 8-bit image does, and there are not 8 additional bits of noise in a 16-bit image converted from 8-bit, so why do you think there is one bit of noise in a 15-bit image, especially if it never "lands" in a 16-bit world? Where is your comment coming from? Thanks in advance for a deeper explanation and links or whatever supports your "1-bit of noise" contention.


Converting an 8-bit file to 16 bits has a dramatic effect on the appearance of the histogram during editing. It remains smooth, as though you were editing an original 16-bit file. I think this represents a reduction in round-off (or truncation) errors during editing. In any case, the histogram represents only the high byte of a 16-bit file.

 

The conversion process does not add noise in my experience. It simply adds a lower byte of zeros beneath the original byte containing the image data.
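If the conversion really does just pad a zero low byte (one plausible scheme; Photoshop's actual 15+1-bit mapping is discussed further down the thread), no new tonal levels appear, which can be checked directly:

```python
import numpy as np

v8 = np.arange(256, dtype=np.uint16)   # every possible 8-bit level

v16 = v8 << 8        # pad a zero low byte: 0, 256, 512, ...

print(v16[:3])                 # [  0 256 512]
print(len(np.unique(v16)))     # still only 256 distinct levels
```

The 256 levels are merely spaced 256 apart in the larger range; the extra precision only matters once subsequent edits start producing in-between values.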


OK, I have figured out what happened.

 

I must have converted to 16-bit before I flattened the image.

 

I have just tested this. I took a 16-bit grayscale image with good smooth transitions, converted it to 8-bit, and added a -95 contrast layer and a +95 contrast layer; the limited bit depth then displayed as very jagged.

 

I then converted to 16-bit with the adjustment layers intact, and suddenly the smooth transitions came back, with none of the stepping/jaggedness.

 

I then flattened this and pasted it onto the original 16-bit image (to which I had also applied the contrast adjustment layers before flattening), and the only difference was a little noise...

 

This is CS1.

 

The noise makes me think that it's the 1 bit of noise we discussed earlier...

 

So this makes me think there is little real need to save anything in 16-bit: as long as all adjustments are done on layers and the file is converted to 16-bit before flattening, there is no huge gain in capturing/scanning at 16-bit...


Correct. As your tests have shown, if the original data is 8-bit, the only benefit of converting to 16-bit is for editing (involving significant changes in dynamic range).
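That editing benefit can be sketched numerically (Python/NumPy; the linear contrast function is a hypothetical stand-in for Photoshop's sliders, and the -95/+95 round trip is approximated with k = 0.05 and 20). The only difference between the two pipelines is the precision at which the intermediate result is stored:

```python
import numpy as np

def contrast(img, k):
    return np.clip((img - 128.0) * k + 128.0, 0, 255)

def quantize(img, levels):
    # Store the intermediate result at a given number of tonal levels
    step = 255.0 / (levels - 1)
    return np.round(img / step) * step

ramp = np.linspace(0, 255, 512)

# Same round trip, intermediate stored at 8-bit (256 levels)
# versus 15+1-bit (32769 levels) precision:
out8  = np.round(contrast(quantize(contrast(ramp, 0.05), 256),   20.0))
out16 = np.round(contrast(quantize(contrast(ramp, 0.05), 32769), 20.0))

print(len(np.unique(out8)))    # around a dozen levels: heavy banding
print(len(np.unique(out16)))   # close to the full 256 output levels
```

Rounding the compressed intermediate to 8-bit destroys most of the levels before they can be re-expanded; the high-precision intermediate preserves them, matching what the adjustment-layer test showed.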

 

Regarding noise, I'm not sure what you are seeing in your last test, but it would have nothing to do with this 15-bit + 1-bit noise. Noise in the last bit of a 16-bit colour channel would be absolutely imperceptible, not just because of the limits of human vision, but because the video card and monitor never display such information (it would require a 48-bit display system). Again, 16-bit channels (48 bits total) are only for editing. You never see that precision in a single displayed image.


Also keep in mind that many (most?) 16-bit sources are actually less than that. My Canon Rebel XT's raw files have 12-bit depth, and my scanner uses 14 bits. I think that's pretty typical for these devices. They need 16 bits for storage, so there's no interpolation (the extra bits just start out as zero). There's certainly a greater range of color and headroom for editing than starting out with 8 bits, even though it's not truly 16 bits. Every little bit helps.
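As a sketch of that storage (the left-justified layout here is an assumption for illustration; real raw formats vary in how they pack samples):

```python
raw12 = 0xABC            # a 12-bit sensor sample, range 0..4095

stored = raw12 << 4      # left-justified in a 16-bit word, low bits zero

print(hex(stored))       # 0xabc0
assert stored >> 4 == raw12   # nothing interpolated, nothing lost
```

The padding bits carry no image information; they simply give later edits finer values to land on.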

Well, I really can't tell the difference between the two, as long as the 8-bit image is manipulated in 16-bit mode. That's the whole point of starting this thread: I doubt anybody can tell the difference on an RGB or grayscale file. (I get around the limited monitor display bit depth by stretching the histogram afterward.)

 

Try the method yourself and see the results.


<ul><i>Regarding 15-bits of image and 1-bit of noise - please tell us more in support of your "1-bit of noise" comment.<br>

[...]<br>

Where is your comment coming from?</i> (Peter Blaise Monohan)</ul>

It's been discussed for years by many of the Adobe folks on various sites, from <a href="http://blogs.adobe.com/jnack/2005/10/photoshop_in_th.html">John Nack</a>, Senior Product Manager of Photoshop to <a href="http://www.24help.info/adobe-photoshop/295471-16-bit-mode-6.html?pp=10">Chris Cox</a> (Photoshop developer since 1996), and various non-Adobe people (<a href="http://www.ukonehome.com/CS-16-bit-channels-are-really-15-bit-759343.html">Jeff Schewe</a>, <a href="http://www.northlight-images.co.uk/article_pages/16_bit_black_and_white.html">Keith Cooper</a>, Andrew Rodney, et al.)<p>

Essentially, in the earlier versions, 16-bit was handled as a signed value--data ranging from -32768 to 32767--and only the non-negative values (0-32768 [or so]) were used. Supposedly--I haven't tested it--in CS2, 16-bit uses an <i>unsigned</i> 16-bit word rather than a <i>signed</i> one.<p>
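A sketch of the commonly described 8-bit to 15+1-bit scaling (the exact rounding behavior is an assumption; Adobe hasn't published the code):

```python
def to_ps16(v8):
    # Map 8-bit 0..255 onto Photoshop's 0..32768 high-bit range,
    # where white is 32768 rather than the full-16-bit 65535
    return round(v8 * 32768 / 255)

assert to_ps16(0) == 0
assert to_ps16(255) == 32768
print(to_ps16(128))   # 16448, just above the 16384 range midpoint
```

Under this mapping there are 32769 possible values, which is why the representation is described as 15 bits plus one extra value rather than true 16-bit.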

<ul><i>Correct. As your tests have shown, if the original data is in 8bit, the only benefit of converting to 16bit would be for editing (involving significant changes in dynamic range).</i> (Karl Martin)</ul>

Or black and white, where converting to 16-bit before converting to B/W can help avoid banding after edits which wouldn't cause any visible issues in a color file.<p>

 

<b>Andrew</b>: Can you verify that CS2 is "full" 16-bit?


From: Marc Pawliger

Subject: RE: 16 bit and the info palette

 

Message:

The high-bit representation in Photoshop has always been "15+1" bits: 32767 (which is the total number of values that can be represented by 15 bits of precision) + 1. This requires 16 bits of data to represent and so is called "16 bit". It is not an arbitrary decision on how to display this data; it is displaying an exact representation of the exact data Photoshop is using, just as 0-255 is displayed for 8-bit files.



 

Some say that the "extra" 1 bit over 15 is added as noise by Photoshop to the image file. Noise? Huh? When I have an image file of any bit depth and open or convert it, doesn't the opening or converting program merely stretch or compress the existing data to its own bit depth and range? In other words, if my original file has 0-255 values, and I open it in Photoshop and get it stretched to -32767 to 32768 values, why would someone think there's also 1 bit of noise added just for the fun of it? When I capture in 12-bit or 14-bit and save or open in 16-bit, same/same, right? No "noise", just one range of number values stretching out to a larger range.

 

Thanks for the references to other sources, but since I can't start a dialog with the first attempt *, I suppose we'll have to continue to bring relevant information here.

 

Thanks for your insight.

