
Are 16 bit scans = 8 bit scans converted to 16 bit in CS2?



The homework (certainly the histogram and gradient on the above site) is flawed. It's data padding. And as yet, neither of you has illustrated any benefits in actual images (or even defined the so-called benefits). But if this makes you feel better, by all means. If you're going to disagree, you need far better science than that old gradient trick. As I pointed out, applying a Gaussian blur will produce the same results. Now let's see this with an actual image!

Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)



Further, I'll point out some useful thoughts on the subject from someone who's actually applying a scientific mindset (an actual color scientist), Bruce Lindbloom:

http://www.brucelindbloom.com/index.html?DanMargulis.html

You can skip all the controversy about Margulis and move directly to points III and V:

 

> I think they may well be correct that in some circumstances, performing color corrections in 16-bit will yield higher quality results than performing the same corrections in 8-bit. *However, this conclusion is true only in cases where all of the original 16-bit data is retained (i.e. the "extra" data has not been discarded).* I am quite certain that none of those holding to the 16-bit advocates' position is discarding the extra data like Dan is doing in his tests.
>
> There is some question about the "extra data" contained in 16-bit images. *Does it contain "valid" or "bogus" information?* I think we should have a method for making this determination, or even further, to be able to measure how much of the extra data is valid and how much is bogus.
>
> Another very important aspect of this test has to do with what the 8-bit or 16-bit numbers represent; that is, are they linear with respect to intensity, or do they represent some companded form, such as gamma 2.2 or L*? This has a strong bearing on how the error differences appear to the eye (which is what the test is all about).
>
> I also think the role of noise should be investigated. Images from scanners and digital cameras have noise, while most computer-generated images do not. I suspect this is the reason for Dan's first Condition, although he never actually says that. Furthermore, converting a 16-bit image to 8 bits in Photoshop introduces noise into the image, as do transformations through profiles and mode changes. In the context of color correction, noise helps "break up" banding (dithering) that may otherwise occur, but this comes at the expense…

Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


I did not define the benefits? Really? Let me quote from my previous posts: "There is an advantage to converting an 8 bit file to 16 bits before drastic editing: rounding-off errors in the various editing steps will be smaller for the 16 bit file, even if it originally started out as an 8 bit file." and "... the image will have more levels, fewer missing values in the histogram, and there will be less posterization. Whether or not you will see this depends on the subject matter; blue sky and other large areas with little change in color/tonality are notorious for posterization."

Since apparently you are not going to do some simple tests yourself, let me do them for you. I took part of a sky shot from a 16 bit image and applied Output Levels 120/140 and then Input Levels 120/1/140. Here's the result:

[Attached image: 00TkY6-147811584.jpg]


Then I converted the same image to 8 bits and applied the same Levels. Conclusion: there is an advantage to converting an 8 bit file to 16 bits before editing. Rounding-off errors in the various editing steps will be smaller for the 16 bit file, even if it originally started out as an 8 bit file, and the image will have more levels, fewer missing values in the histogram, and less posterization. Whether or not you will see this depends on the subject matter; blue sky and other large areas with little change in color/tonality are notorious for posterization.

[Attached image: 00TkYC-147811984.jpg]
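For readers who want to reproduce this compress-then-stretch test numerically, here is a minimal Python/NumPy sketch. It is an editor's illustration, not Photoshop's actual Levels math: it assumes plain linear scaling with gamma 1 and round-to-nearest quantization after every step, with a synthetic ramp standing in for the sky; all function names are illustrative. It simply counts how many distinct tones survive the round trip at each bit depth.

```python
import numpy as np

def levels_output(x, lo, hi):
    # Output Levels lo/hi: compress the full 0..1 range into [lo, hi]
    return lo + x * (hi - lo)

def levels_input(x, lo, hi):
    # Input Levels lo/1/hi (gamma 1): stretch [lo, hi] back out to 0..1
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def quantize(x, bits):
    # Snap values to the integer grid of the given bit depth
    n = 2 ** bits - 1
    return np.round(x * n) / n

ramp = np.linspace(0.0, 1.0, 4096)   # smooth gradient standing in for the sky
lo, hi = 120 / 255, 140 / 255

for bits in (8, 16):
    x = quantize(ramp, bits)                        # the starting document
    x = quantize(levels_output(x, lo, hi), bits)    # Output Levels 120/140
    x = quantize(levels_input(x, lo, hi), bits)     # Input Levels 120/1/140
    survivors = len(np.unique(np.round(x * 255)))
    print(f"{bits}-bit edit: {survivors} of 256 display levels survive")
```

Under these assumptions the 8-bit pass retains only the 21 levels that fit between 120 and 140, while the 16-bit pass keeps enough intermediate values to rebuild an essentially smooth gradient, which matches the pair of images above.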

> Here's the result:

Go on, what am I supposed to be seeing in terms of a benefit?

Further, how does this dismiss what Lindbloom states about noise and actual data?

Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


No, because I can smooth out the results as well, using various image processing techniques like selective blurring or smart noise addition, to effectively produce the same data padding (number smoothing). Just as I can fix a histogram, if that's what you use to define image quality, by applying similar data padding with a Gaussian blur.

Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


> No, because I can smooth out the results as well, using various image processing techniques like selective blurring or smart noise addition, to effectively produce the same data padding (number smoothing). Just as I can fix a histogram, if that's what you use to define image quality, by applying similar data padding with a Gaussian blur.

Sorry, but those arguments don't hold any water. First, while smoothing may make the sky and clouds look more acceptable, it will also result in loss of detail in the rest of the image, even if tedious, time-consuming masks are applied. Converting to 16 bits before editing makes any such manipulations unnecessary. Second, you will notice that I don't use the histogram to define image quality, but the resulting images themselves. This is what you said just a few posts ago: "*Now let's see this with an actual image!*" and I showed you *some actual images.*


Yes, to your credit, you did (was dither on or off?).

I submit you're simply padding data here. Would altering the data using my blur/noise approach bring back the original appearance, albeit with more work? Yes; I've successfully done this in Photoshop (in this case, it's quite easy to load a luminosity mask to leave the clouds, which are not banding, alone). There's no real data being produced going from 8-bit to 16-bit; it's, again, data padding. But I will admit that you have demonstrated that if you were stuck with an 8-bit document and had to pull such a ridiculous set of curves, the net result is superior (and, as you point out, faster) than not. So I stand corrected (you did as I asked and empirically demonstrated the advantages).

The histogram comment was directed at the URL above, which isn't an effective demonstration; it's quite easy to fix histograms.

As to the effect of dither, and for a pretty interesting demonstration of 16-bit advantages, this page is quite interesting:

http://mike.russell-home.net/tmp/erpy/

Conducting the tests twice, with and without dither, is an eye-opener (and is why such tests must be done with dither off to see the actual effects of such conversions on the data).
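To make the dither-on/dither-off comparison concrete, here is a rough sketch of what dithered down-conversion does, assuming the simplest possible scheme: uniform noise of one quantization step added before rounding. Photoshop's actual dither algorithm isn't publicly documented, so this only illustrates the principle, not Adobe's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_8bit(x16, dither=False):
    # Quantize 16-bit values (0..65535) down to 8-bit (0..255)
    x = x16 / 257.0                                    # rescale to 0..255
    if dither:
        x += rng.uniform(-0.5, 0.5, size=x.shape)      # one step of noise
    return np.clip(np.round(x), 0, 255).astype(np.uint8)

# A shallow 16-bit ramp that straddles just two 8-bit levels:
ramp16 = np.linspace(25443.0, 25700.0, 1000)

print(np.unique(to_8bit(ramp16)))               # dither off: two hard bands
print(np.unique(to_8bit(ramp16, dither=True)))  # dither on: the edge dissolves
```

With dither off, the ramp collapses into two flat bands with a visible edge; with dither on, the same neighboring levels interleave randomly near the boundary, which is exactly why such tests must be run with dither off to see what the conversion itself does to the data.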

Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


> was dither on or off?

Dither only comes into play when you convert an 8 bit image to a different color space, so no, dither was not on, since I didn't convert to a different color space.

> I submit you're simply padding data here. Would altering the data using my blur/noise approach bring back the original appearance, albeit with more work? Yes; I've successfully done this in Photoshop (in this case, it's quite easy to load a luminosity mask to leave the clouds, which are not banding, alone). There's no real data being produced going from 8-bit to 16-bit; it's, again, data padding.

No padding of data at all. The fundamental issue here is that rounding-off errors in the various editing steps will be smaller for the 16 bit file, resulting in less loss of levels and less posterization, as I have clearly stated and demonstrated. It is also important to realize that altering data with blur/noise will reduce the splotchy look, but it will also cause loss of detail, and it will never correct for rounding-off errors, which occur at a dramatically higher level in an 8 bit file as compared to a 16 bit file, as I have clearly demonstrated.

> it's quite easy to fix histograms.

Fixing histograms for histograms' sake is meaningless. Histograms are tools; what counts is the final image. Adding blur/noise may make the histogram look better, but it doesn't undo the rounding-off damage caused by editing at a lower bit depth.


> Dither only comes into play when you convert an 8 bit image to a different color space, so no, dither was not on, since I didn't convert to a different color space.

Nope, it absolutely is affecting these conversions. Try the test images referenced in the 16-bit URL above and you'll see a significant difference with dither on or off. You have to close the document after resetting the preferences. Further, since you like the writings of Bruce Fraser:

> BTW, you can turn off the noise in the 16-bit to 8-bit conversion -- it's the "Use dither (8-bit/channel images)" checkbox in Advanced Color Settings. I can see no reason to do so in real-world imaging, but it does let you see what's coming from the high-bit data itself and what's coming from the dither.

You wrote:

> No padding of data at all.

Sure it is. You've got 8 bits of data, you convert to 16-bit, and it's interpolated data; you yourself said above to stick with the original high-bit data if available.

 

> Fixing histograms for histograms' sake is meaningless. Histograms are tools; what counts is the final image.

I agree, as does Bruce. In the same post quoted above:

 

> Histograms are a lousy way to evaluate the efficacy of anything unless you understand what you're looking at. Comparing histograms of the same move done on an 8-bit file and on a 16-bit file tells you something, certainly, but what it tells you isn't particularly related to image quality. Gaps in histograms don't necessarily indicate a problem. They do, however, indicate a potential problem should you need to further differentiate the tones on each side of the gap. You'll have a lot more freedom to do so without introducing posterization and banding if you have some data in between than if you don't. That's really all the histogram tells you. Looking at histograms in isolation doesn't tell you anything useful. If you want to get rid of gaps in the histogram, a 40-pixel-radius Gaussian blur does so very effectively. It also gets rid of the image...


Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


> Nope, it absolutely is affecting these conversions. Try the test images referenced in the 16-bit URL above and you'll see a significant difference with dither on or off. You have to close the document after resetting the preferences.

Here's what Adobe says: "The Use Dither (8-bit/channel images) option controls whether to dither colors when converting 8-bit-per-channel images between color spaces." Since I'm not converting the file (using Image/Mode/Convert to Profile) but up- or down-converting using Image/Mode/8 Bits/Channel or 16 Bits/Channel, I would think that dither wouldn't come into play. But assuming you would not be satisfied with that answer (funny how I have to go to great lengths to prove my opinions, even to the point where I have to repeat tests that you refuse to do, in spite of your own strong opinions), I repeated my tests with Use Dither (8-bit/channel images) checked in Edit/Color Settings, and guess what? No difference. The edited 8 bit image is just as bad whether Use Dither was checked or not.

> Bruce Fraser: BTW, you can turn off the noise in the 16-bit to 8-bit conversion -- it's the "Use dither (8-bit/channel images)" checkbox in Advanced Color Settings.

Adobe says it only works on 8-bit files when you convert to a different color space. They don't say it works on 16-bit files, and they don't say it works when you convert from 8 to 16 bit or 16 to 8 bit in, presumably, the same working space. My tests seem to confirm what Adobe says.

> Sure it is. You've got 8 bits of data, you convert to 16-bit, and it's interpolated data; you yourself said above to stick with the original high-bit data if available.

What kind of twisted logic is this? Agreeing that you should start out with high-bit data when it's available doesn't mean that data is otherwise "padded". Once again, *converting 8 bit files to 16 bit files before editing drastically reduces rounding-off errors during editing, resulting in less loss of levels and fewer errors in levels, hence less posterization*.

Glad to see that you, Bruce (God rest his soul), and I agree on what histograms tell us.


Some further thoughts on "padding the data". When you convert an 8 bit file to 16 bits, nothing happens to the original 8 bits: they become the lower 8 bits in the new 16 bit file and keep the same values (0 or 1). The highest 8 bits in the 16 bit file all start out as 0. So far, no "padding". Now, when you edit, and R, G and B values are recalculated based on whatever editing step you execute, the rounding-off error in the 16 bit file is way, way smaller than in the 8 bit file, by a factor of 256. That's why you cause less damage to a 16 bit file. No data is "padded", but errors are way smaller. "Padding the data" is therefore an inappropriate term for this case; it could, for instance, rightfully be applied to upressing a file, when pixels are created and R, G and B values are "created" that didn't exist in the original image.

Well, I think the key is knowing exactly how the 8 bits are distributed within the 16 bits. I think Andrew is assuming that the 8 bits of data are merely the lower 8 bits of the 16, in which case any calculation done on the data would see no benefit until the value exceeded what 8 bits can hold. This seems logical if that were the way the bits are distributed.

However, what I'm pretty sure is happening is that the 8 bit values are translated into different values that are then stored within the 16 bits. That is, the value is spread throughout the 16 bits and not just the low-order bits. This makes sense in that, for both 8 and 16 bits, the high and low values represent the same range of color; the 16 bit file can just represent more hues within that range. The advantage, then, is that rounding errors do not occur as often as editing is done. Since each editing action can introduce some rounding error when working with integer values, and since 8 bits have fewer whole numbers, 8 bit editing accumulates more rounding error. There are two caveats to this advantage. First, precision will be lost when you convert (i.e. translate) back to 8 bit. Second, I think newer editors do not store the pixels as integer values while the image is open, so rounding error is not really a problem.

I think CS4, and definitely Lightroom, do 'non-destructive' editing, in that they store the commands in history and re-apply them as needed from the original data, rather than transforming the integer pixel data with each command. Many editors may also keep the data as floating-point values rather than integers to avoid rounding errors until the file is saved. What this means is that there is much less accumulation of rounding errors, so less impact on IQ. Still, storing the image back to 8 bit will introduce some rounding error. To fully realize the benefit, you would have to stay in 16 bit.

In general, the net answer is that it depends on what your image editor is doing. With CS4 and Lightroom you may see no value or only a small amount, and maybe even a problem with going back and forth between 8 and 16 bit. If your editor works with integer data, then it definitely will benefit from going to 16 bit and *staying* there. I don't think 'scientific tests' really come into play to prove or disprove this, as there are too many variables to reliably compare IQ (for example, display color spaces). However, simple math shows that 16 bits has more integer values than 8 bits, and that 16 bit is not just 8 bit with 8 zeros added on the end. Think about it: if that were true, then white would come at the middle value and not the end value. For example, white is 255, 255, 255 in 8 bit and 65535, 65535, 65535 in 16 bit. The 8 bit value is transformed to a 16 bit value; more than likely the transformation is lossless going to 16 bit, but it will definitely introduce rounding errors going back, just because there are not enough whole numbers to map 16 bits' worth of values down to 8 bits.
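A quick sketch of that round trip, assuming the common full-range encoding in which up-conversion multiplies by 65535/255 = 257 exactly (Photoshop's internal 16-bit scale differs, as a later post in this thread observes, but the principle is the same): going up is lossless, while arbitrary edited 16-bit values collapse on the way back down. The helper names are illustrative.

```python
import numpy as np

def to16(x8):
    # Up-convert 0..255 -> 0..65535; 65535/255 == 257 exactly, so lossless
    return x8.astype(np.uint16) * 257

def to8(x16):
    # Down-convert 0..65535 -> 0..255, rounding to nearest
    return np.round(x16 / 257.0).astype(np.uint8)

x8 = np.arange(256, dtype=np.uint8)
assert np.array_equal(to8(to16(x8)), x8)   # untouched data survives the round trip

# But edited 16-bit values collapse: ~257 distinct 16-bit codes per 8-bit code.
edited = np.arange(25443, 25701, dtype=np.uint16)   # 258 distinct 16-bit values
print(len(np.unique(to8(edited))))                  # -> 2 (only 99 and 100 remain)
```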

For what the OP is asking: definitely get 16 bit scan software, as you are otherwise losing data. You will also avoid the initial transformation to 16 bit later. Whether you will actually see the difference is another issue; you may not today, but as tech evolves you may see it later as you upgrade PC hardware.

So, Frans, if you change your editing software you may want to re-test your workflow. It may not be valid anymore.

 


*Correction to my previous post.*

Some further thoughts on "padding the data". When you convert an 8 bit file to 16 bits, all values of the 8 bit file get multiplied by a factor of 128.5 and rounded up (at least in my image editor, Photoshop CS version 8.0). This means that there are "open spaces" of about 128 values, or multiples thereof, between the various R, G or B values within the image. Those "open spaces" are not populated, since no additional pixels are created, so no "padding" (interpolating values) has occurred; the old 8 bit data set has been recalculated by multiplying by 128.5 and rounding up. Now, when you edit, and R, G and B values are recalculated based on whatever editing step you execute, the rounding-off error in the 16 bit file is way, way smaller than in the 8 bit file, because there are 16 bits available instead of 8. That's why you cause less damage to a 16 bit file. No data is "padded", but errors are way smaller. "Padding the data" is therefore an inappropriate term for this case; it could, for instance, rightfully be applied to upressing a file, when new pixels are created with "created" R, G and B values that didn't exist in the original image.
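For what it's worth, the measured factor of 128.5 is consistent with reports that Photoshop's "16-bit" mode actually uses a 0..32768 scale internally (15 bits plus one level), since 32768/255 ≈ 128.5. Here is a toy check of those "open spaces", assuming that encoding and round-to-nearest (Frans reports rounding up; either way the gaps are about 128):

```python
# Assumed encoding: Photoshop 16-bit reportedly spans 0..32768, so the
# 8-bit -> 16-bit scale factor is 32768 / 255 ≈ 128.5. Unverified assumption.
vals8 = [199, 200, 201]
vals16 = [round(v * 32768 / 255) for v in vals8]
print(vals16)                                       # [25572, 25700, 25829]
print([b - a for a, b in zip(vals16, vals16[1:])])  # steps of 128-129 between neighbors
```

These are the same 25572 and 25700 figures that turn up in a later post in this thread.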


Do the tests again. At least with a modern version of Photoshop (I'm using CS4), 16-bit to 8-bit conversions are absolutely not the same with dither on versus dither off.

Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


Okay, I've read through this thread several times, spent a lot of time thinking about this, and did another set of tests to confirm my thinking. Much of what I found supports Andrew's position.

*Try this test.....* Create a long gradient between two closely related, highly saturated colors (I turned dither off in the gradient tool and in the color settings). Create two images in 8 bit (one up-converted to 16 bit after the creation of the gradient) and one image natively in 16 bit. Apply a very strong contrast-enhancing Levels adjustment, something like input levels 120,1,140. The 8 bit file will show banding in the transition area, the 8 bit gradient up-converted to 16 bit will show the same amount of banding, and the 16 bit image will remain smooth.

However, do the first test from the beginning of the thread, where you compress (output 120/140) and then stretch (input 120/140) the image, and the 8 bit file will show banding, the 8 bit file converted to 16 bit will not show banding, and the 16 bit file will not show banding.

*So, why do two different tests have totally different outcomes? Here's my understanding...*

Imagine that you have an 8 bit image that contains very closely related tonal transitions. There are a limited number of values with which to describe those transitions. It might be that every bit value is used to account for the smooth transition in the image. So, the image would have tones at 198,198,198 next to tones at 199,199,199, next to tones at 200,200,200, and so on. (I'm using grey for simplicity's sake, but this holds true for color as well.)

If you apply any edits that stretch out the tonal relationships (i.e. adding contrast), then you will create gaps between those original 8 bit tones. *This holds true regardless of whether you edit in 8 bit or 16 bit.* For example, 200,200,200 in 8 bit equates to 25700,25700,25700. The closest related 8 bit tone of 199,199,199 equates to 25572,25572,25572. So, there are 128 tones of 16 bit data that go unused between the two tonal values. If you stretch the image far enough, what was 199,199,199 next to 200,200,200 will now be 190,190,190 next to 210,210,210, resulting in visible banding. There is no way to get around that relative gap between tones, not even by doing the edit after converting from 8 bit to 16 bit.

Now, take that same 8 bit image and compress the tonal relationships (i.e. reduce contrast) while still in 8 bit mode, and you start to lose image data. What were two closely related tones become merged into a single tone, and so on. There is no in-between for 199,199,199 and 200,200,200, so the data has to become one or the other. Do the same edit after converting to 16 bit mode, and all of a sudden there is plenty of room for finer transitions between the original 199,199,199 and 200,200,200. There are 128 tones between the two in 16 bit mode, so you can compress severely while still maintaining discrete tonal relationships. You can then pull most of the data right back out by adding the contrast back in. This is why the first test I ran across works so well: it shows how forgiving 16 bit mode is when you are compressing tonal values.

I believe the question of rounding errors is largely a red herring. The 8 bit data gets converted to positional markers on the 16 bit scale that represent the same relationships between tones, and as data gets stretched through the addition of contrast, the relative gaps between tones become larger, even if they are being calculated on the 16 bit scale. The math and rounding errors are insignificant (can you really see the difference between 25690,25690,25690 and 25700,25700,25700?). The real question is whether the edits you do are compressing tones or stretching tones.
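Here is a small sketch of that stretch-versus-compress distinction, under the same simplified linear-Levels assumptions as the earlier sketch (function names illustrative, round-to-nearest at each step). Stretching opens the same gaps whether or not you up-convert first; compressing merges tones in 8 bit that 16 bit keeps distinct.

```python
import numpy as np

def stretch(x, lo=120/255, hi=140/255):
    # Input Levels 120/1/140: add contrast, clipping outside the window
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def compress(x, lo=120/255, hi=140/255):
    # Output Levels 120/140: reduce contrast
    return lo + x * (hi - lo)

def quantize(x, bits):
    # Snap values to the integer grid of the given bit depth
    n = 2 ** bits - 1
    return np.round(x * n) / n

ramp8 = quantize(np.linspace(0.0, 1.0, 4096), 8)   # a gradient born in 8 bit

# Stretching: up-converting first doesn't help; the 8-bit gaps just widen.
print(len(np.unique(quantize(stretch(ramp8), 8))))    # 21 tones, edited in 8 bit
print(len(np.unique(quantize(stretch(ramp8), 16))))   # still 21, edited in 16 bit

# Compressing: 16-bit precision keeps tones distinct that 8 bit merges.
print(len(np.unique(quantize(compress(ramp8), 8))))   # 21 surviving tones
print(len(np.unique(quantize(compress(ramp8), 16))))  # all 256 tones survive
```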

So, here's my summary of the issue.

*1) 8 bit data edited in 8 bit. Worst.*
You lose data when you compress the image (reduce contrast). You risk posterization when you stretch the image (add contrast).

*2) 8 bit data edited in 16 bit. Better.*
You don't lose much, if any, data when you compress the image (reduce contrast). However, you still risk posterization when you increase contrast (from the original state, as opposed to reducing contrast and then adding it back, as in the first test in the thread).

*3) 16 bit data edited in 16 bit. Best.*
Obviously.

My final takeaway after testing this a whole bunch... it's DARN hard to see any of these differences in real-world images when doing realistic real-world edits.

Don't sweat the little stuff. :-)



> Do the tests again. At least with a modern version of Photoshop (I'm using CS4), 16-bit to 8-bit conversions are absolutely not the same with dither on versus dither off.

1) Since I don't have a later version than CS version 8.0 and you have CS4, why don't you do the tests?

2) You also need to explain how dither or no dither in your version of Photoshop impacts the results, since according to you it works very differently from my version. My hunch is that dither blurs the lines between tonality differences and thus has an effect similar to adding noise: the histogram would look better, but the image would suffer a loss of sharpness.

Furthermore, I don't see how an effect of dither when *down-converting* from 16 to 8 bit has any impact on the case we have been discussing here, which is what happens when you up-convert from 8 to 16 bit and the resulting improvements as compared to staying in 8 bits.


> Much of what I found supports Andrew's position.

You need to make a better distinction between the issues at hand, what your findings are, and how those agree or disagree with Andrew's position.

Everybody seems to agree that the higher the bit depth of the original image, the better the quality and the lower the posterization of the edited image, and your findings agree with that. The other issue that you have not addressed is how the final image differs if you start out with an 8 bit image and edit with or without first up-converting to 16 bits.

> Don't sweat the little stuff. :-)

It all depends on whether you can afford to not sweat the little stuff. Many people work with 8 bit images, and many people have posterization issues that more than likely would go away if they first up-converted their image to 16 bits.


> The other issue that you have not addressed is how the final image differs if you start out with an 8 bit image and edit with or without first up-converting to 16 bits.

Re-read what I wrote. The entire post was about that issue: how the image responds when it is native 8 bit and edited in 8 bit, when it is native 8 bit and edited in 16 bit, and when it is native 16 bit. The answer is that it depends on what *type* of edits you are doing to the image.


> Many people work with 8 bit images, and many people have posterization issues that more than likely would go away if they first up-converted their image to 16 bits.

That's the key difference of opinion in this thread. I don't believe that up-converting to 16 bit solves the most common cause of posterization, which is trying to add contrast to an image. Up-converting is certainly preferable, but for some operations it has essentially zero benefit.


> It all depends on whether you can afford to not sweat the little stuff.

My comment was meant much more broadly... not particularly limited to questions of 8 bit and 16 bit. :-)


