
Are 16 bit scans = 8 bit scans converted to 16 bit in CS2?



<p>Example images.... Image number one.</p>

<p>This demonstrates how the destruction of image data happens during the reduction of contrast in 8 bit mode. Each image started out as a simple blue to green gradient. The first three slices were created in 8 bit mode; the last one was created in 16 bit mode. Each of them had the same two adjustments applied: output levels 120/140, followed by input levels 120/1/140. The first slice stayed in 8 bit mode the entire time. The second slice was converted to 16 bit mode after the initial output levels adjustment. The third slice was converted to 16 bit before any adjustments. The fourth slice was native 16 bit the entire way through.</p>

<p>The destruction happens during the compression of image data (output levels 120/140). It is made visible by the re-expansion of that data with the input levels adjustment.</p>

<p>I could post an example of just the output levels adjustment, but all the images would look the same - flat, grey, no contrast.</p>

[Attached image: 00TlUK-148265584.jpg]
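For anyone who wants to see the numbers behind this rather than the picture, here is a minimal numpy sketch of the same round trip (not Photoshop itself, and the 8-to-16 bit scaling here is only an approximation of what Photoshop does internally): compress a gradient with output levels 120/140, re-expand it with input levels 120/140, and count how many distinct tones survive in 8 bit versus 16 bit.

```python
import numpy as np

gradient = np.linspace(0, 255, 1024)              # idealized smooth gradient

def output_levels(x, lo, hi):
    return x / 255.0 * (hi - lo) + lo             # squeeze the full range into [lo, hi]

def input_levels(x, lo, hi):
    return np.clip((x - lo) / (hi - lo), 0, 1) * 255.0   # stretch [lo, hi] back out

# Slice 1: both adjustments rounded to 8-bit values
a = np.round(output_levels(gradient, 120, 140)).astype(np.uint8)
a = np.round(input_levels(a.astype(float), 120, 140)).astype(np.uint8)

# Slice 4: both adjustments rounded to 16-bit values (8-bit levels scaled by 257 here;
# Photoshop's internal scaling differs, but the effect is the same)
b = np.round(output_levels(gradient, 120, 140) * 257).astype(np.uint16)
b = np.round(input_levels(b / 257.0, 120, 140) * 257).astype(np.uint16)

print(len(np.unique(a)))   # ~21 distinct tones left -> visible banding
print(len(np.unique(b)))   # ~1000 distinct tones -> still a smooth gradient
```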



<p>Example image number two....</p>

<p>This shows how upconverting from 8 bit to 16 bit does not prevent banding/posterization. The images all started out as a simple blue to green gradient. All of them had just one edit applied: input levels 120/1/140. The first slice started out as 8 bit data and was edited in 8 bit. The second slice started out as 8 bit data, was upconverted to 16 bit, and then had the levels adjustment applied. The third slice was native 16 bit and was edited in 16 bit.</p>

<p>This test is different from the first one because it takes the original blue/green gradient and enhances the contrast from the starting point. The first test just reduces contrast and then puts it back where it started.</p>

[Attached image: 00TlUT-148267584.jpg]
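A rough numpy equivalent of this second test (again just a sketch, not Photoshop): apply the same 120/140 input levels stretch to an 8 bit gradient, to the same 8 bit gradient upconverted to 16 bit, and to a native 16 bit gradient, then count the surviving tones.

```python
import numpy as np

def stretch(x, lo, hi, maxval):
    # input levels lo/1/hi: map [lo, hi] onto the full output range
    return np.round(np.clip((x - lo) / (hi - lo), 0, 1) * maxval)

ramp  = np.linspace(0, 1, 65536)        # idealized continuous gradient
g8    = np.round(ramp * 255)            # captured in 8 bit
g8_up = np.round(ramp * 255) * 257      # 8 bit upconverted to 16 bit afterwards
g16   = np.round(ramp * 65535)          # captured natively in 16 bit

print(len(np.unique(stretch(g8,    120,       140,       255))))     # ~21 tones: bands
print(len(np.unique(stretch(g8_up, 120 * 257, 140 * 257, 65535))))   # still ~21 tones: bands
print(len(np.unique(stretch(g16,   120 * 257, 140 * 257, 65535))))   # thousands of tones: smooth
```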


<blockquote>...but there is only one problem that everyone keeps coming back to - posterization. That's what this whole discussion is about, banding in smooth tonal transitions. There isn't any other issue that we're trying to fix.</blockquote>

 

I don't know about that. I thought it was about image quality in general. But if we want to limit it to posterization then that's ok. I thought Frans' test images way above proved that banding could be reduced by converting up to higher bit depth, but seeing Jeff's post, the improvement may be down to the application of dithering. Why are your results different? Did you turn dithering off? If so, then that answers the question. Converting up to higher bit depth doesn't really matter. It is the application of a certain amount of noise which smooths gradients out. Is that the right assessment to make?
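To make the dithering question concrete, here is a small hypothetical sketch (plain numpy, not Photoshop's own dither) of why adding sub-level noise before quantizing hides banding:

```python
import numpy as np

rng = np.random.default_rng(0)
ramp = np.linspace(120, 140, 4096)       # a gentle, low-contrast gradient

plain    = np.round(ramp)                                          # hard quantization: 21 flat steps
dithered = np.round(ramp + rng.uniform(-0.5, 0.5, ramp.size))      # noise scatters the step edges

# Both arrays hold roughly the same 21 integer levels, but in 'dithered' the levels
# are interleaved, so after a later contrast stretch the eye averages them back into
# a ramp instead of seeing solid bands.
print(len(np.unique(plain)), len(np.unique(dithered)))
```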


I've just done a bit of checking of my PS setup, and I have the dithering checkbox checked. But when I up-convert an 8 bit image to 16 bits, no noise is added at all. I have checked this with Guillermo's Histogrammar: every level except every 256th one is empty. I only have PS 7, so I don't know whether things have changed since then.
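For anyone without Histogrammar, a quick way to do the same check outside Photoshop might look like this (the file name, and the use of Pillow and numpy, are my assumptions, not part of the original test):

```python
import numpy as np
from PIL import Image

# Hypothetical file: an 8-bit scan saved as 16-bit with no further editing.
img = np.array(Image.open("upconverted_16bit.tif"))

levels = np.unique(img)
print(len(levels))            # 256 or fewer if the conversion added no noise/dither
print(np.diff(levels[:10]))   # occupied levels sit ~256-257 apart; everything in between is empty
```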

 

Here is a post I made earlier in the year on this issue, along with the attached image. Considering that I haven't changed my settings, I am assuming that the dithering checkbox was checked then too. Note that I used a polarizer (on this wide angle shot... tsk, tsk, tsk), which has accentuated the gradient in the sky. But the sky wasn't blown in any channel in the original.

 

<blockquote><p>Here's an example of 8 vs 16 bit. The 8 bit version shows banding in the sky. The 16 bit image was actually an 8 bit JPEG that was converted to 16 bit for the editing phase, and then converted back to 8 bits. The reason I didn't work in 16 bit the whole way is that with my version of PS (PS7) I couldn't work out how to add a layer copy of the image itself (which is the method I used to give the image that slightly bleached look). This shows that 16 bit editing can be useful for images which start out as 8 bit files.</p>

 

Now I guess there is more than one way to skin a cat, and Patrick could probably do this another way, perhaps (or perhaps not) with less banding. But it does definitely show the value of 16 bit editing.

 

By the way, the method was: layer copy -> desaturate -> multiply blend mode -> levels (for brightness and colour imbalance) -> hue/saturation (admittedly a rather large value of 35).</blockquote>

[Attached image: 00Tld8-148333684.jpg]
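As a rough illustration of where the tonal compression in that recipe comes from, here is a numpy sketch of just the layer copy -> desaturate -> multiply part (the levels and hue/saturation steps are left out, and the luminance weights are an assumption on my part):

```python
import numpy as np

def multiply_desaturated(rgb):
    """rgb: float array in [0, 1], shape (..., 3)."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])   # desaturated (grayscale) copy of the layer
    return rgb * luma[..., None]                   # multiply blend darkens the midtones

# Darkening midtones like this squeezes many input tones onto fewer output values.
# In 8 bit those collisions are permanent, so the later levels and saturation moves
# re-expose them as banding; in 16 bit the tones stay distinct.
```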


<p>I had dithering off for my examples. I think that dithering on/off could make a difference, but more as a mitigation of the banding caused by tonal adjustments. I don't think it is the primary factor; it is more likely a secondary one. I'll have to try my tests again with dithering on later tonight to see if there is a difference.</p>

<blockquote>

<p>I thought Frans' test images way above proved that banding could be reduced by converting up to higher bit depth, but seeing Jeff's post, the improvement may be down to the application of dithering. Why are your results different? Did you turn dithering off? If so, then that answers the question.</p>

</blockquote>

<p>Both Frans and I are getting the same results from our tests; I just added some additional tests with more variations. His three test images are the same test, with the same result, as the first example image I posted, except that he didn't do what I did in the second slice (convert to 16 bit halfway through the editing process, after the compression but before the expansion). His first image is equivalent to the fourth slice in my image, his second image is equivalent to the third slice, and his third image is equivalent to the first slice.</p>

<p>Looking at your example, I'm guessing the top image is the 8 bit one and the bottom image is the 16 bit one? I can see a bit of a difference, but I think the JPEG compression from uploading the image may be interfering.</p>

<p>The edits you did on the image are a good example of how complicated things can be in Photoshop - adding and removing saturation, adding contrast through blending modes, and the fact that saturation and contrast are intertwined by their very nature. Your steps introduce both tonal compression and tonal expansion, a subtle version of what Frans' test and my first example are doing. It certainly shows that when image edits get complicated it is better for the image to be in 16 bit.</p>


<p><strong>THE DITHER EXPERIMENT RESULTS ARE IN!</strong></p>

<p>As promised, I ran some more tests. I applied the previously described editing to a 16 bit file, then to the same file converted to 8 bits, and then to the same file converted to 8 bits and back to 16 bits. I did this with both Use Dither on and Use Dither off. When inspected visually, all the resulting images look like the ones I posted previously. I then applied Andrew's suggested analytical tool (which compares two images to see whether they are identical) to the pairs of 16-to-8 bit and 16-to-8-to-16 bit images (with and without Use Dither) and <strong>guess what? Both sets show absolutely no difference; both comparison images come out a solid 128-value gray</strong>. So, at least for my Photoshop CS version 8.0, dither doesn't make a difference. How about that, Andrew?</p>

<p><strong>Conclusions</strong>:<br />1) Capturing at a higher bit depth is best<br />2) There is an advantage to converting an 8 bit file to 16 bits before editing<br />3) For Photoshop CS, version 8.0, dither doesn't make a difference</p>
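For reference, a rough script equivalent of that comparison (the file names are placeholders, and I'm assuming the images were exported as 8 bit for the check): a subtract with an offset of 128 leaves identical pixels sitting exactly on 128 gray.

```python
import numpy as np
from PIL import Image

a = np.array(Image.open("edited_with_dither.png"), dtype=np.int16)
b = np.array(Image.open("edited_without_dither.png"), dtype=np.int16)

diff = np.clip(a - b + 128, 0, 255).astype(np.uint8)   # subtract with an offset of 128
print(np.unique(diff))   # a lone [128] means the two versions are pixel-for-pixel identical
```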


<blockquote>

<p>I'll have to try my tests again with dithering on later tonight to see if there is a difference.</p>

 

</blockquote>

<p>I repeated the same tests with dither turned back on. No real difference. I think dither is a very subtle issue compared to the scale of the edits we are doing in these tests.</p>


<p>Sheldon,</p>

<p>I forgot to ask: do you use Photoshop and if so, what version (like CS2, version 9.1)? If your Photoshop is different from mine (CS version 8.0) could you do me a favor and tell me what exactly the help function says about Using dither (Help>Photoshop Help>Search: Using dither). Mine says: "<strong>The Use Dither (8-bit/channel images) option controls whether to dither colors when converting 8-bit-per-channel images between color spaces. This option is available only when the Color Settings dialog box is in Advanced Mode</strong>. When the Use Dither option is selected, Photoshop mixes colors in the destination color space to simulate a missing color that existed in the source space. Although dithering helps to reduce the blocky or banded appearance of an image, it may also result in larger file sizes when images are compressed for Web use."<br>

I added the emphasis in bold because I think that's the crucial part of the description.</p>

<p>Thanks in advance!</p>


Frans (and Sheldon)... get your hands on <a href="http://www.guillermoluijk.com/software/histogrammar/index.htm">this</a> piece of software for showing 16 bit histograms. Convert your 8-bit image up to 16 bits with dithering on and see if it adds any noise to the image. I have Photoshop 7, and it certainly doesn't add any noise.

 

So as far as I am concerned, on PS7, you will maintain better image quality when editing an 8-bit file by first converting it up to 16-bits. Dithering plays no part in this, and I presume it can only be attributed to reduced rounding errors of the larger bit-depth.
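A tiny worked example of the rounding-error point (a single made-up midtone value put through the same squeeze-and-stretch as the earlier tests, using plain Python rather than Photoshop):

```python
v = 183                                              # an arbitrary midtone value

squeezed_8  = round(v / 255 * 20 + 120)              # output levels 120/140, stored in 8 bit
restored_8  = round((squeezed_8 - 120) / 20 * 255)   # input levels 120/140: comes back as 178

squeezed_16 = round((v / 255 * 20 + 120) * 257)      # same squeeze, stored on a 16-bit scale
restored_16 = round((squeezed_16 / 257 - 120) / 20 * 255)   # comes back as 183

print(restored_8, restored_16)                       # 178 183
```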


<p>I didn't try the Apply Image tool... I was just looking for seat-of-the-pants differences. I'm using CS4. The help function for CS4 regarding dither says pretty much the same thing as you quoted.</p>

 

<blockquote>

<p>You will maintain better image quality when editing an 8-bit file by first converting it up to 16-bits. Dithering plays no part in this, and I presume it can only be attributed to reduced rounding errors of the larger bit-depth.</p>

 

</blockquote>

<p>Agreed, except I think the benefit is not because of the enhanced mathematical precision of 16 bit, but because there are simply more available tones and more space for existing tones to be pushed around without being lost. We might be talking about the same thing here, just saying it in different ways.</p>


<blockquote>Agreed, except I think the benefit is not because of the enhanced mathematical precision of 16 bit, but because there are simply more available tones and more space for existing tones to be pushed around without being lost. We might be talking about the same thing here, just saying it in different ways.</blockquote>

 

Yep, I think so.

 

<p>Frans... on that link, click on one of the two links in the top right of that page. They say something like "Histogrammer v1.1" and "Actualizer v 1.2". These are the download links; I think they both link to the same program. The program shows you 16-bit histograms, unlike Photoshop, and allows you to zoom right in on the histogram. To load your image into the program you need to click the button with the three dots (...) on it. There's a whole lot of other stuff in there that I haven't bothered working out yet.</p>


<p>Thought it might be interesting to apply the same editing that I used for the blue sky with clouds detail image to a gray wedge. The upper image is after editing the 16 bit version, the middle image is after editing the 16-to-8-to-16 bit version, and the lower image is after editing the 16-to-8 bit version. Again, it made no difference whether I checked or unchecked the Use Dither box in Color Settings. While nobody creates and prints gray wedges just for the sake of it (at least I hope nobody does), these images clearly show the same kind of results as the actual blue sky with clouds detail image.</p>

[Attached image: 00ToCN-149767584.jpg]

<p>Well, I have just about read the whole thread. Thank you to everyone who participated. May I make an analogy? Processing digital sound is analogous to processing digital "light": when processing 16 bit digital sound, Adobe Audition allows the processing to take place in 32 bit floating point, for the simple reason that accumulated quantization errors are reduced in the final product. Obviously, working in 16 bit photography is ideal; upconverting from 8 bit prior to processing makes sense.</p>
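To echo the audio point with pixels, here is a small numpy sketch (my own illustration, not anything Audition or Photoshop actually does) of accumulated quantization error: repeatedly lower and raise the contrast of a ramp, once rounding to integer levels after every step and once carrying full precision throughout.

```python
import numpy as np

ramp = np.arange(256, dtype=float)         # one value per 8-bit level
x_int, x_float = ramp.copy(), ramp.copy()

for _ in range(5):
    for scale in (0.9, 1 / 0.9):           # contrast down, then back up
        x_int   = np.round(x_int * scale)  # quantize after every step, like repeated 8-bit edits
        x_float = x_float * scale          # keep full precision, quantize only at the end

print(len(np.unique(x_int)))               # noticeably fewer than 256: merged tones never come back
print(len(np.unique(np.round(x_float))))   # 256: nothing lost
```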
