Are 16 bit scans = 8 bit scans converted to 16 bit in CS2?

Discussion in 'Digital Darkroom' started by david_simonds, Jun 22, 2009.

  1. Friends, I have an Eversmart Pro II which I use to scan 6x6cm and 4x5 chromes. The Oxygen software scans in 8 bit, though there is, I understand, a plugin that converts to 16 bit. My question is whether an 8 bit scan that is converted to 16 bit in CS2 responds to color management in the same way as a file originally scanned in 16 bit. FWIW, I do most all the color adjustments prescan with very modest adjustments in CS2. Since I am scanning 4x5 chromes, the 16 bit scans would be twice the size of already processor choking files in 8 bit.
    Thanks,
    David
     
  2. I think there is a theoretical loss to having been in 8 bit then upconverted into 16 bit. The numeric stair steps between tonal or color changes are coarser in 8 bit, and putting it into 16 bit doesn't smooth them out automatically.
    However, most of the drastic tonal demands take place in the editing process (curves, levels, etc) so as long as you're editing in the 16 bit world you should be okay.
    It's not like a native 8 bit 4x5 scan is likely to show significant banding/posterizing in the raw scan. It's only when you push around an 8 bit file that it would have problems.
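    A quick way to see the stair-step point numerically; this is just a sketch in Python/NumPy with a synthetic ramp, not tied to any particular scanner or software:

        import numpy as np

        # A smooth ramp quantized to 8 bits: at most 256 distinct tonal steps.
        ramp_8 = np.round(np.linspace(0, 255, 4096)).astype(np.uint8)

        # Promote it to a 16-bit scale (0..65535) by scaling and rounding.
        ramp_16 = np.round(ramp_8.astype(np.float64) * 65535 / 255).astype(np.uint16)

        # The numbers get bigger, but no new in-between tones appear.
        print(len(np.unique(ramp_8)))    # 256
        print(len(np.unique(ramp_16)))   # still 256: the stair steps are only relabeled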
     
  3. Spearhead

    Spearhead Moderator

    Your scanner produces a 14 bit output that is mapped onto 16 bits (two bits of zero). The software is discarding 6 of those bits. If you convert back to 16 bits, you now have 8 bits of data and 8 zeros. Get different software for the scanning.
    The above post is wrong, you are pushing around 8 bits even in 16 bit mode if you threw away the other 6 bits of data.
     
  4. The above post is wrong, you are pushing around 8 bits even in 16 bit mode if you threw away the other 6 bits of data.​
    Not so fast, Jeff. There is an advantage to converting an 8 bit file to 16 bits before drastic editing: rounding off errors in the various editing steps will be smaller for the 16 bit file, even if originally it started out as an 8 bit file.
     
  5. Spearhead

    Spearhead Moderator

    You get an extra bit at most. Rounding errors aren't going out very far. There's really no reason to zero out the last 6 bits in the original file.
     
  6. Actually, the practical difference between starting with an 8 bit file and editing it in 16 bit mode vs starting with a 16 bit file is not as drastic as you suggest. Try it yourself in Photoshop, use this article as a guideline for how to massively compress then stretch an image in 2 steps using the levels tool.
    http://www.photoshopessentials.com/essentials/16-bit/page-2.php
    Create 3 new image files in PS and add the same gradient (color or B&W) in each. One file should start out in 16 bit with a full 16 bit gradient. The other two files should start out in 8 bit with an 8 bit gradient. One of the 8 bit files stays in 8 bit, the other gets changed to 16 bits to complete the edits (even though it's still only an 8 bit gradient). If you don't want to use a gradient, then just grab any color image from a RAW file - render one copy in 16 bit and two copies in 8 bit.
    The result is that the 8 bit file gets massively posterized, the 16 bit file still is a smooth gradient, and the 8 bit gradient edited in 16 bit mode looks pretty much like the 16 bit gradient. I know that this is somewhat of a contrived test to show the difference between 8 bit and 16 bit, but I can't imagine a scenario where you would be pushing a file more than this in PS.
    A couple of side notes... an 8 bit file is not a 16 bit file with 8 bits of extra zeros. If it were, it would take up the same amount of disk space as a 16 bit file. A 16 bit file takes up more disk space because it uses greater precision to describe the number of tonal transitions between pure white and pure black (65,536 steps between white and black vs 256 for 8 bit). When downconverting an image from 14 or 16 bits to 8 bits all that is happening is the RGB color values are each getting rounded off to the nearest integer between 0 and 255. It's not a question of throwing away the last 6 bits, it's a question of converting to a less precise scale of color measurement/description. However, an 8 bit image can still display 16.8 million colors - more than the human eye can see.
    Anyhow, not trying to be disagreeable. I agree with everything you have recommended - Scan in the highest bit file possible (16 bits), get new software if your current software limits you to 8 bits. There is no reason to throw away 14 bit data and save the raw scan as an 8 bit file.
    I scan my film as 16 bit TIF's in ProPhoto RGB color space.
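    For anyone who wants to try the comparison without building gradient files by hand, here is a minimal NumPy sketch of the same idea. It simulates the Levels moves directly (output then input, gamma 1), assumes Photoshop's 16-bit mode uses a 0..32768 scale, and uses 120/140 as the crush/stretch points; the exact numbers are illustrative, not taken from the article:

        import numpy as np

        def output_levels(img, lo, hi, top):
            # Output Levels: squeeze the whole 0..top range into lo..hi, rounding to the integer grid.
            return np.round(lo + img / top * (hi - lo))

        def input_levels(img, lo, hi, top):
            # Input Levels (gamma 1): stretch lo..hi back out to 0..top.
            return np.round(np.clip((img - lo) / (hi - lo), 0, 1) * top)

        ramp = np.linspace(0.0, 1.0, 4096)             # an ideal smooth gradient

        native16 = np.round(ramp * 32768)              # gradient built directly on the 16-bit scale
        native8  = np.round(ramp * 255)                # gradient built on the 8-bit scale
        promoted = np.round(native8 / 255 * 32768)     # the 8-bit gradient promoted to 16-bit

        def crush_then_stretch(img, top):
            lo, hi = 120 / 255 * top, 140 / 255 * top  # the same Levels move scaled to either range
            return input_levels(output_levels(img, lo, hi, top), lo, hi, top)

        for name, img, top in [("native 16-bit", native16, 32768),
                               ("native 8-bit, edited in 8-bit", native8, 255),
                               ("8-bit gradient, edited in 16-bit", promoted, 32768)]:
            print(name, "->", len(np.unique(crush_then_stretch(img, top))), "distinct tones left")

    Run as written, the native 8-bit copy collapses to roughly 21 tones, the promoted copy keeps its original 256 or so, and the native 16-bit ramp keeps a couple of thousand, which is the same pattern described above.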
     
  7. You want high bit data from the get-go (not sampled up, which buys you nothing). IF indeed the device provides more than 8 bits per color, you absolutely want that data! We now have output devices (at least on the Mac OS) that can send 16-bit data to the print driver.
    http://www.digitalphotopro.com/gear/imaging-tech/the-bit-depth-decision.html
     
  8. sampled up buys you nothing​
    I have to disagree. As Sheldon and I have maintained, converting 8 bit to 16 bit has an advantage. Of course it is best to not throw any data out to begin with.
     
  9. I have to disagree. As Sheldon and I have maintained, converting 8 bit to 16 bit has an advantage. Of course it is best to not throw any data out to begin with.​
    Fine. Do you have something empirical to demonstrate this (and done without dither on, which affects all things in such comparisons)?
     
  10. Fine. Do you have something empirical to demonstrate this (and done without dither on, which affects all things in such comparisons)?​
    I'm not going to regurgitate what others have demonstrated so effectively. Study the material that Sheldon referenced and the related material in Real World Photoshop and do some tests yourself. You'll be amazed what you can learn if you really try.
     
  11. I'm not going to regurgitate what others have demonstrated so effectively. Study the material that Sheldon referenced and the related material in Real World Photoshop and do some tests yourself. You'll be amazed what you can learn if you really try.​
    Then your answer is no. FWIW, Bruce was a dear friend and business partner, I've got every version of the book published. Maybe you can point out where padding values with interpolated data is such a benefit.
    As for Histograms themselves, a very easy way to make them look improved (smooth) is to apply a nice dose of Gaussian Blur. Of course, this hoses the image, but heck, at least your Histogram looks nicer....
     
  12. Then your answer is no.​
    I've run my own tests and you may want to do the same before claiming that there are no benefits to sampling up before editing.
     
  13. I've run my own tests and you may want to do the same before claiming that there are no benefits to sampling up before editing.​
    You have something empirical to demonstrate this??? Or this is going to be a religious debate? If you've got the science, I'm all ears.
     
  14. You have something empirical to demonstrate this??? Or this is going to be a religious debate? If you've got the science, I'm all ears.​
    I'm not going to hold your hand. You're a big guy, you can try it out for yourself, but please do so before claiming that there are no benefits.
     
  15. I'm going to call this one a difference in semantics...
    In some situations (like my "example" referenced above) where you take 8 bit data, crush it, then stretch it, there's a huge difference in whether you edit in 8 bit or 16 bit, and not much difference whether you start with 8 bit or 16 bit data.
    However if you take a big long gradient that covers a more narrow range of tones (ie. blue to light blue) then stretch the heck out of it, the 8 bit file is much more likely to show banding than the 16 bit file. The 8 bit file may only start with 20 numerical steps with which to describe the transition between colors, regardless of whether you are editing in 16 bit. The native 16 bit file has thousands of numerical values with which to describe the color transitions and when it is stretched it will fare much better.
    So, it depends on the image and what you are doing with the image as to whether there is a noticeable or practical difference.
    FWIW, I defer to Andrew on these things. He has written more and knows more about Photoshop and color management than I ever will.
     
  16. I'm not going to hold your hand. You're a big guy, you can try it out for yourself, but please do so before claiming that there are no benefits.​
    OK, religious debate. In science, on the other hand, there's something called peer review, where someone such as yourself, if confident in your tests (which obviously you're not), could submit them to others to provide actual evidence of your theories. This doesn't seem to be something you can provide, so I'll stick with my original comments above, which you disagreed with but provided nothing to back up your disagreement. I think we're done here, gang.
     
  17. OK, religious debate. In science, on the other hand, there's something called peer review, where someone such as yourself, if confident in your tests (which obviously you're not), could submit them to others to provide actual evidence of your theories. This doesn't seem to be something you can provide, so I'll stick with my original comments above, which you disagreed with but provided nothing to back up your disagreement. I think we're done here, gang.​
    Spend a couple of minutes, run some tests and you'll have all the actual evidence you so desperately seem to look for. I ran these tests many years ago, now it's your turn.
     
  18. In research, if someone states what others have shown to be true and what they believe, it is not up to them to prove they are correct; it is up to those who challenge them to submit the tests and data to prove them wrong.
    He did not state it was "his" theory, but what he had read and believes to be true.

    Why don't you, Andrew, submit evidence that he or the website quoted is incorrect?

    In science we accept the hypothesis until it is proven to be wrong.

    Just saying he is wrong is not proof, nor ANY different from what you call a "religious" belief, especially when it looks like several others are disagreeing with your "religious belief or debate" and you have proven nothing.
    You do not even quote any source material for your beliefs, but just insist that you are right.
    It does not even appear that you have read the page quoted.
    http://www.photoshopessentials.com/essentials/16-bit/page-2.php
     
  19. Andrew, the science is pretty obvious; they've already shown how it works. Just think about it a little bit and I think you'll get it. Perhaps you're misreading what they've said. When you convert an 8 bit file to 16 bit you don't instantly get more range, but you get the possibility for more range. So when you edit the file that has been upsampled, there are more discrete values to work with while editing. If you're not doing much editing then you're right, upsampling buys you nothing, but if you are editing a lot you have a larger working space with more colors to work with. When applying curves and other adjustments, this allows there to be more in-between tones, which makes posterization and blocking artifacts less likely. Of course, starting out at higher depths will produce better results.
    In response to the original question, upsampling will help if you edit a lot, but since you said you use the scanner settings mostly, I'd agree with everyone else who said use the highest you can get.
     

  20. In research, if someone states what others have shown to be true and what they believe, it is not up to them to prove they are correct; it is up to those who challenge them to submit the tests and data to prove them wrong. He did not state it was "his" theory, but what he had read and found to be true.
    Why don't you, Andrew, submit evidence that he is incorrect?​

    Hold his hand? Sure. And I disagree that he's not responsible to prove his point!
    1. First step. Download DNG document from my iDisk* in the folder called ProPhotovsAdobeRGB files
    2. Next, open in ACR using default settings described in the DNG (there's no reason to mess with the rendering for these tests but you can if you so desire). Set the workflow options for ProPhoto RGB, 16-bit data.
    3. Next, go into the Photoshop color settings, uncheck the Dither check box (this is CRITICAL for providing apples to apples results in conversions).
    4. You now have a 16-bit document from Raw. Duplicate that image (Image>Duplicate). Convert that copy to 8-bit. Now back to 16-bit for the "data padding".
    5. Pull a nice curve; I've provided one in the above folder with the DNG. Do the same on the "real" 16-bit document.
    6. Convert the 16-bit document to 8-bit for the next step (analysis using Apply Image, which only works with images of the same bit depth). Shouldn't matter; you applied the curve on the so-called better 16-bit data.
    7. Set whichever image isn't listed as the target as the source. Set the Channel as RGB. Set the Blending to Subtract, with an Opacity of 100, a Scale of 1, and an Offset of 128.
    If the scans/test images were truly identical, every pixel in the image would be a solid level 128 gray. Pixels that aren't level 128 gray are different by the amount they depart from 128 gray. You can use Levels to exaggerate the difference, which makes patterns easier to see.
    What do you see? Are the two equal, identical? No. If indeed the so-called benefits of converting 8-bit to 16-bit provided the same results, we'd see it. And more importantly, how does this prove there's a benefit? That's the $64,000 question! When you convert from 8-bit to 16-bit, you're padding data with nothing new in terms of actual data, unlike the data from a high bit capture. So it IS up to those who say there's a benefit to PROVE somehow that doing this conversion is beneficial (I've proven they are not equal/identical).
    I said converting to 16-bit buys you nothing, Frans says there is. Where's the proof? Again, I'm all ears but Frans apparently doesn't wish to provide a means to empirically prove this. And we haven't even defined "benefits" in this workflow which would kind of be useful no?
    As to Jamie's point about range, that's got nothing to do with bit depth. More bit depth doesn't equate to dynamic range. It's simply the number of steps in the staircase, not the overall height of the staircase.
    *My public iDisk:
    thedigitaldog
    Name (lower case): public. Password (lower case): public.
    Public folder Password is "public" (note the first letter is NOT capitalized).
    To go there via a web browser, use this URL:
    http://idisk.mac.com/thedigitaldog-Public
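    A rough NumPy analogue of the Apply Image subtract check described in step 7 above, for anyone who wants to see why a solid level 128 means "identical"; the arrays here are random stand-in data, not the DNG from the iDisk:

        import numpy as np

        def apply_image_subtract(target, source, offset=128):
            # Mimic Apply Image > Subtract with Scale 1, Offset 128: identical pixels land
            # exactly on level 128, and any difference shows up as a departure from 128.
            diff = target.astype(np.int32) - source.astype(np.int32) + offset
            return np.clip(diff, 0, 255).astype(np.uint8)

        rng = np.random.default_rng(0)
        a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)      # stand-in 8-bit image

        print(np.unique(apply_image_subtract(a, a.copy())))          # [128]: the two versions match

        b = np.clip(a.astype(np.int32) + 1, 0, 255).astype(np.uint8) # a copy nudged by one level
        print(np.unique(apply_image_subtract(a, b)))                 # mostly 127: differences stand out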
     
  21. Andrew,
    I didn't mean dynamic range, I meant the range of tones available, or as you say, the number of stairs. The fact that your experiment shows that there IS a difference between the two files shows that there is more data for your editing program to work with.
    Who's to say what's better? It depends what you want, but with 16 bit there are more colors to use. Upsampling itself doesn't gain you anything, just like upsizing your pixel dimensions won't, unless you do some processing to make use of the extra. By expanding your color palette you have more colors to play with. If you just upscale and then print, it's pointless. True, upsampling is not as good as a high bit depth scan, but if you're making use of the extra colors it can help.
    Just like making a file larger (in terms of pixel dimensions) can help for making large prints if you properly sharpen and blur the larger file. It's not as good as having higher resolution to start with, but it's better than using the unsharpened low res file to make a large print.
    It all depends on what you do with the extra headroom once you have it. There's no reason to make the file larger unless you're actually going to make use of the extra data.
     
  22. The fact that your experiment shows that there IS a difference between the two files shows that there is more data for your editing program to work with.​
    No, it only shows they are not the same. There's nothing yet to suggest there's "more real data," "better data," etc. Someone (Frans) still needs to provide proof of concept here. What's apparent is, they are not the same.
    Who's to say what's better?​
    Frans (or anyone else who's yet to prove its "better").
     
  23. Frans​
    I don't feel the need to prove anything; I did my homework already many years ago and you can either believe me, continue to claim that I'm wrong without supporting data or run some tests yourself. For those interested, here's the explanation: when you start with more levels there will be less damage done with editing (particularly those editing steps that require rounding off and most editing steps require rounding off) and the resulting image will have more levels, less missing values in the histogram and there will be less posterization. Whether or not you will see this depends on the subject matter; blue sky and other large areas with little change in color/tonality are notorious for posterization. Many years ago I did some simple tests, similar to the ones described in the article Sheldon referred to and the 8 bit file that was first converted to 16 bit before editing showed less posterization in said areas than the 8 bit file that was not converted but received the same editing steps.
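    A small sketch of the rounding-off argument, assuming an editor that rounds back to an integer grid after every move and that Photoshop's 16-bit mode is a 0..32768 scale; the three gamma moves are arbitrary stand-ins for "editing steps":

        import numpy as np

        tones = np.arange(256)                            # every tone of an 8-bit original

        def chain_edits(values, top):
            # Apply a few successive curve-like moves, rounding to the integer grid after
            # each one, which is where the per-step rounding error comes from.
            v = values.astype(np.float64)
            for gamma in (0.8, 1.3, 0.9):
                v = np.round((v / top) ** gamma * top)
            return v / top                                # normalize for comparison

        edited_8  = chain_edits(tones, 255)                             # edited on the 8-bit grid
        edited_16 = chain_edits(np.round(tones / 255 * 32768), 32768)   # promoted to 16-bit first

        exact = tones / 255
        for gamma in (0.8, 1.3, 0.9):
            exact = exact ** gamma                        # the same moves with no rounding at all

        print(np.abs(edited_8  - exact).max())   # accumulated drift when every step rounds to 8 bits
        print(np.abs(edited_16 - exact).max())   # far smaller drift after promotion to the 16-bit grid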
     
  24. For those interested, here's the explanation: when you start with more levels there will be less damage done with editing (particularly those editing steps that require rounding off and most editing steps require rounding off) and the resulting image will have more levels, less missing values in the histogram and there will be less posterization.​
    Now I wonder why I purchased that 5DMII to replace my 5D. All that extra real resolution. Just interpolate the data. Here's the explanation: when you start with more (interpolated) pixels, there's more data and less damage done and the resulting image will have more pixels.
    The original comment was, interpolated bit depth buys you nothing. I'm still waiting for proof, on an actual image, with steps that illustrate there is any advantage in doing this.
     
  25. Now I wonder why I purchased that 5DMII to replace my 5D. All that extra real resolution. Just interpolate the data. Here's the explanation: when you start with more (interpolated) pixels, there's more data and less damage done and the resulting image will have more pixels.​
    Nobody claimed that interpolating is just as good as capturing more data to begin with.
    The original comment was, interpolated bit depth buys you nothing. I'm still waiting for proof, on an actual image, with steps that illustrate there is any advantage in doing this.​
    I and others have maintained that interpolation before editing results in less damage, less posterization. Sheldon posted a link to an article on that subject. I have described the tests I did many years ago and the results. If you don't believe others that clearly have done their homework and reported on the results, then maybe you should stop arguing and start conducting some tests of your own.
     
  26. The homework (certainly the Histogram and gradient on the above site) is flawed. It's data padding. And as yet, neither of you have illustrated any benefits in actual images (or even defined the so-called benefits). But if this makes you feel better, by all means. But if you're going to disagree, you need far better science than that old gradient trick. As I pointed out, applying a Gaussian blur will produce the same results. Now let's see this with an actual image!
     
  27. Further, I'll point out some useful thoughts on the subject from someone that's actually using a scientific mindset (an actual color scientist), Bruce Lindbloom.
    http://www.brucelindbloom.com/index.html?DanMargulis.html
    You can skip all the controversy about Margulis and move directly to points III and V:
    I think they may well be correct that in some circumstances, performing color corrections in 16-bit will yield higher quality results than performing the same corrections in 8-bit. However, this conclusion is true only in cases where all of the original 16-bit data is retained (i.e. the "extra" data has not been discarded). I am quite certain that none of those holding to the 16-bit advocates position is discarding the extra data like Dan is doing in his tests.
    There is some question about the "extra data" contained in 16-bit images. Does it contain "valid" or "bogus" information? I think we should have a method for making this determination, or even further, to be able to measure how much of the extra data is valid and how much is bogus.
    Another very important aspect of this test has to do with what the 8-bit or 16-bit numbers represent, that is, are they linear with respect to intensity or do they represent some companded form, such as gamma 2.2 or L*? This has a strong bearing on how the error differences appear to the eye (which is what the test is all about).
    I also think the role of noise should be investigated. Images from scanners and digital cameras have noise, while most computer generated images do not. I suspect this is the reason for Dan's first Condition, although he never actually says that. Furthermore, converting a 16-bit image to 8-bits in Photoshop introduces noise into the image, as do transformations through profiles and mode changes. In the context of color correction, noise helps "break up" banding (dithering) that may otherwise occur, but this comes at the expense​
     
  28. Oops. Pilot error.
     
  29. I did not define the benefits? Really? Let me quote from my previous posts: "There is an advantage to converting an 8 bit file to 16 bits before drastic editing: rounding off errors in the various editing steps will be smaller for the 16 bit file, even if originally it started out as an 8 bit file." and "... image will have more levels, less missing values in the histogram and there will be less posterization. Whether or not you will see this depends on the subject matter; blue sky and other large areas with little change in color/tonality are notorious for posterization."
    Since apparently you are not going to do some simple tests yourself, let me do it for you. I took part of a sky shot from a 16 bit image and applied Output Levels 120/140 and then Input Levels 120/1/140. Here's the result:
    [Attached image: 00TkY6-147811584.jpg]
     
  30. Here's the same image converted to 8 bits and then back to 16 bits and applied the same Levels. Here's the result:
    [Attached image: 00TkYB-147811784.jpg]
     
  31. Then I converted the same image to 8 bits and applied the same Levels. Conclusion: there is an advantage to converting an 8 bit file to 16 bits before editing: rounding off errors in the various editing steps will be smaller for the 16 bit file, even if originally it started out as an 8 bit file and the image will have more levels, less missing values in the histogram and there will be less posterization. Whether or not you will see this depends on the subject matter; blue sky and other large areas with little change in color/tonality are notorious for posterization.
    [Attached image: 00TkYC-147811984.jpg]
     
  32. Here's the result:​
    Go on, what am I supposed to be seeing in terms of a benefit?
    Further, how does this dismiss what Lindbloom states about noise and actual data?
     
  33. Further, I'll point out some useful thoughts on the subject from someone that's actually using a scientific mindset.​
    Are my above examples sufficiently thoughtful and scientific?
     
  34. No, because I can smooth out the results as well using various image processing techniques like selective blurring or smart noise addition to effectively produce the same data padding (number smoothing). Just as I can fix a histogram, if that's what you use to define image quality, by applying similar data padding using a Gaussian blur.
     
  35. No, because I can smooth out the results as well using various image processing techniques like selective blurring or smart noise addition to effectively produce the same data padding (number smoothing). Just as I can fix a histogram, if that's what you use to define image quality, by applying similar data padding using a Gaussian blur.​
    Sorry, but those arguments don't hold any water. First, while smoothing may make the sky and clouds look more acceptable, it will also result in loss of detail in the rest of the image, even if tedious, time-consuming masks are applied. Converting to 16 bits before editing makes any such manipulations unnecessary. Second, you will notice that I don't use the histogram to define image quality, but the resulting images themselves. This is what you said just a few posts ago: "Now let's see this with an actual image!" and I showed you some actual images.
     
  36. Yes, to your credit, you did (was dither on or off?).
    I submit you're simply padding data here. Would altering the data using my blur/noise bring back the original appearance, albeit with more work? Yes, I've successfully done this in Photoshop (in this case, it's quite easy to load a luminosity mask to leave the clouds, which are not banding, alone). There's no real data being produced going from 8-bit to 16-bit; it's, again, data padding. But I will admit that you have demonstrated that if you were stuck with an 8-bit document and had to pull such a ridiculous set of curves, the net result is superior (and, as you point out, faster) than not. So I stand corrected (you did as I asked and empirically demonstrated the advantages).
    The Histogram comment was directed at the above URL, which isn't an effective demonstration; it's quite easy to fix histograms.
    As to the effect of dither and a pretty interesting demonstration of 16-bit advantages, this page is quite interesting:
    http://mike.russell-home.net/tmp/erpy/
    Conducting the tests twice, with and without dither is an eye opener (and why such tests must be done with Dither off to see the actual effects of such conversions on the data).
     
  37. was dither on or off​
    Dither only comes into play when you convert an 8 bit image to a different color space, so no, dither was not on since I didn't convert to a different color space.
    I submit you're simply padding data here. Would altering the data using my blur/noise bring back the original appearance, albeit with more work? Yes, I've successfully done this in Photoshop (in this case, it's quite easy to load a luminosity mask to leave the clouds, which are not banding, alone). There's no real data being produced going from 8-bit to 16-bit; it's, again, data padding.​
    No padding of data at all. The fundamental issue here is that rounding off errors in the various editing steps will be smaller for the 16 bit file, resulting in less loss of levels, less posterization as I have clearly stated and demonstrated. Also important to realize is that altering data with blur/noise will reduce the splotchy look but it will also cause loss of detail and will never correct for rounding off errors that occur at a dramatically higher level in an 8 bit file as compared to a 16 bit file, as I have clearly demonstrated.
    it's quite easy to fix histograms.​
    Fixing histograms for histograms' sake is meaningless. Histograms are tools; what counts is the final image. Adding blur/noise may make the histogram look better, but it doesn't undo rounding off damage caused by editing at a lower bit level.
     
  38. Dither only comes into play when you convert an 8 bit image to a different color space, so no, dither was not on since I didn't convert to a different color space.​
    Nope, it absolutely is affecting these conversions. Try the test images referenced above in the 16-bit URL and you'll see a significant difference with Dither on or off. You have to close the document after resetting the preferences. Further, since you like the writings of Bruce Fraser:
    BTW, you can turn off the noise in the 16-bit to 8-bit conversion -- it's the "Use dither (8-bit/channel images)" checkbox in Advanced Color Settings. I can see no reason to do so in real-world imaging, but it does let you see what's coming from the high-bit data itself and what's coming from the dither.​
    You wrote:
    No padding of data at all.​
    Sure it is. You've got 8-bits of data, you convert to 16-bit, it's interpolated data, and you yourself said above to stick with the original high bit data if available.
    Fixing histograms for histograms' sake is meaningless. Histograms are tools; what counts is the final image.​
    I agree, as does Bruce. In the same post quoted above:
    Histograms are a lousy way to evaluate the efficacy of anything unless you understand what you're looking at. Comparing histograms on the same move done on an 8-bit file and on a 16-bit file tells you something, certainly, but what it tells you isn't particularly related to image quality. Gaps in histograms don't necessarily indicate a problem. They do, however, indicate a potential problem should you need to further differentiate the tones on each side of the gap. You'll have a lot more freedom to do so without introducing posterization and banding if you have some data in between than if you don't. That's really all the histogram tells you. Looking at histograms in isolation doesn't tell you anything useful. If you want to get rid of gaps in the histogram, a 40-pixel-radius gaussian blur does so very effectively. It also gets rid of the image...​
     
  39. Nope, it absolutely is affecting these conversions. Try the test images referenced above in the 16-bit URL and you'll see a significant difference with Dither on or off. You have to close the document after resetting the preferences.​
    Here's what Adobe says: "The Use Dither (8-bit/channel images) option controls whether to dither colors when converting 8-bit-per-channel images between color spaces." Since I'm not converting the file using Image/Mode/Convert to Profile but up- or down-convert using Image/Mode/8 Bits/Channel or 16 Bits/Channel, I would think that dither wouldn't come into play. But assuming you would not be satisfied with that answer (funny how I have to go to great lengths in proving my opinions, even to the point where I have to repeat tests that you refuse to do, in spite of your own strong opinions), I repeated my tests with Use Dither (8-bit/channel images) checked in Edit/Color Settings and guess what? No difference. The edited 8 bit image is just as bad whether Use Dither was checked or not.
    Bruce Fraser: BTW, you can turn off the noise in the 16-bit to 8-bit conversion -- it's the "Use dither (8-bit/channel images)" checkbox in Advanced Color Settings.​
    Adobe says it only works on 8-bit files when you convert to a different color space. They don't say it works on 16-bit files and they don't say it works when you convert from 8 to 16 or 16 to 8 bit in, presumably, the same work space. My tests seem to confirm what Adobe says.
    Sure it is. You've got 8-bits of data, you convert to 16-bit, it's interpolated data, and you yourself said above to stick with the original high bit data if available.​
    What kind of twisted logic is this? The fact that I agree you should start out with high bit data if available doesn't therefore mean that "data is padded". Once again, converting 8 bit files to 16 bit files before editing drastically reduces rounding off errors during editing, resulting in less loss of levels, fewer errors in levels, and hence less posterization.
    Glad to see that you, Bruce (God rest his soul) and I agree on what histograms tell us.
     
  40. Some further thoughts on "padding the data". When you convert an 8 bit file to 16 bits, nothing happens to the original 8 bits - they become the lower 8 bits in the new 16 bit file and have the same values (0 or 1). The 8 highest bits in the 16 bit file are all 0 to start with. So far, no "padding". Now when you edit and R, G and B values are recalculated based on what editing step you execute, the rounding off error in the 16 bit file is way, way smaller than in the 8 bit file by a factor of 256. That's why you cause less damage to a 16 bit file. No data is "padded", but errors are way smaller. "Padding the data" is therefore an inappropriate term for this case, but for instance could rightfully be applied when upressing a file, when pixels are created and R, G and B values are "created" that didn't exist in the original image.
     
  41. Well, I think the key is knowing exactly how the 8 bits are distributed within the 16 bits. I think Andrew is assuming that the 8 bits of data would merely be the lower 8 bits of the 16 bits, and therefore any calculations done on the data would not see a benefit until the value exceeded what 8 bits can hold. This seems logical if that were the way the bits are distributed.
    However, what I'm pretty sure is happening is that the 8 bits are translated into a different value that is then stored within the 16 bits. That is, the value is spread throughout the 16 bits and not just the low order bits. This makes sense in that in both 8 and 16 bit the high and low values represent the same range of color. The 16 bit file can just represent more hues within that range. I think the advantage then is that rounding errors do not occur as often as editing is done. Since each editing action can introduce some rounding error when working with integer values, and since 8 bit has fewer whole numbers, it gets more rounding errors. Two issues with this being an advantage. First, clipping will be done if you convert (i.e. translate) back to 8 bit. Next, I think newer editors do not store the pixels as integer values while the image is open, so rounding error is not really a problem.
    I think CS4 and definitely Lightroom do 'non-destructive' editing in that they store the commands in history and re-apply them as needed from the original data, rather than transforming the integer pixel data with each command. However, many editors may have kept the data as floating point values rather than integer to avoid rounding errors until the file is saved anyway. What this means is that there is much less accumulation of rounding errors, so less impact to IQ. Still, storing the image back to 8 bit will introduce some rounding error. To fully realize the benefit, you would have to stay 16 bit.
    In general, the net answer is that it depends on what your image editor is doing. With CS4 and Lightroom you may not see any value, or a small amount, and maybe even a problem with going back and forth between 8 and 16 bit. If your editor works with integer data, then it definitely will benefit from going to 16 bit and *STAYING* there. I don't think that 'scientific tests' really come into play to disprove or approve, as there are too many variables to reliably compare IQ. For example, display color spaces. However, simple math shows that 16 bit has more integer values than 8 bit. And, that 16 bit is not just 8 bit with 8 zeros added on the end. Think about it: if that were true, then white would come in the middle value and not the end value. For example, white is 255, 255, 255 in 8 bit, and 65535, 65535, 65535 in 16 bit. The 8 bit value is transformed to a 16 bit value; more than likely the transformation is lossless going to 16 bit, but it will definitely introduce rounding errors going back, just because there are not enough whole numbers to map 16 bits' worth of whole numbers down to 8 bits.
    For what the OP is asking, definitely get 16 bit scan software, as you are losing data. You will also avoid the initial transformation to 16 bit later. Whether you will actually see the difference is another issue; you may not today, but as tech evolves you may see it later as you upgrade PC hardware.
    So, Frans, if you change your editing software you may want to re-test your workflow. It may not be valid anymore.
     
  42. Correction to my previous post.
    Some further thoughts on "padding the data". When you convert an 8 bit file to 16 bits, all values of the 8 bit file get multiplied by a factor of 128.5 and rounded up (at least for my image editor, Photoshop CS version 8.0). This means that there are "open spaces" with a value of about 128, or multiples thereof, between the various R, G or B values within the image. Those "open spaces" are not populated since there are no additional pixels created, so no "padding" (interpolating values) has occurred; the old 8 bit data set has been recalculated by multiplying by 128.5 and rounding up. Now when you edit and R, G and B values are recalculated based on what editing step you execute, the rounding off error in the 16 bit file is way, way smaller than in the 8 bit file because there are 16 bits available instead of 8. That's why you cause less damage to a 16 bit file. No data is "padded", but errors are way smaller. "Padding the data" is therefore an inappropriate term for this case, but for instance could rightfully be applied when upressing a file, when new pixels are created with "created" R, G and B values that didn't exist in the original image.
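    A quick numeric check of the scaling described above; this assumes Photoshop's "16-bit" mode really uses a 0..32768 scale, which makes the factor 32768/255, roughly the 128.5 observed:

        # 8-bit -> 16-bit promotion and back, assuming a 0..32768 internal scale.
        v8   = list(range(256))
        v16  = [round(v * 32768 / 255) for v in v8]     # promote: scale and round
        back = [round(v * 255 / 32768) for v in v16]    # demote: scale and round the other way

        print(v16[:4])       # [0, 129, 257, 386]: steps of about 128.5, with nothing new in between
        print(back == v8)    # True: the promotion itself is lossless, it only relabels the 256 steps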
     
  43. Matt,
    Your post contains many assumptions about newer editors for which you seem to offer no proof. I would welcome inputs on this from people that can speak from solid knowledge, not mere speculation. Actual examples similar to the ones I posted would also be very welcome.
     
  44. Do the tests again. At least with a modern version of Photoshop (I'm using CS4), 16-bit to 8-bit conversions are absolutely not the same with Dither on versus Dither off.
     
  45. Okay, I've read through this thread several times, spent a lot of time thinking on this, and did another set of tests to confirm my thinking. Much of what I found supports Andrew's position.
    Try this test..... Create a long gradient with two closely related, highly saturated colors (I turned dither off on the gradient tool and in the color settings). Do two images in 8 bit (one upconverted to 16 bit after the creation of the gradient), one image natively in 16 bit. Apply a very strong contrast enhancing levels adjustment, something like input levels 120,1,140. The 8 bit file will show banding in the transition area. The 8 bit gradient upconverted to 16 bit will show the same amount of banding, and the 16 bit image will remain smooth.
    However, do the first test from the beginning of the thread where you compress (output 120/140) then stretch (input 120/140) the image, and the 8 bit file will show banding, the 8 bit converted to 16 bit will not show banding, and the 16 bit file will not show banding.
    So, why do two different tests have totally different outcomes? Here's my understanding...
    Imagine that you have an 8 bit image that contains very closely related tonal transitions. There are a limited number of values with which to describe those transitions. It might be that every bit value is used to account for the smooth transition in the image. So, the image would have tones at 198,198,198 that are next to tones that are 199,199,199 that are next to tones that are 200,200,200, and so on. (I'm using grey for simplicity sake, but this holds true for color as well).
    If you apply any edits that stretch out the tonal relationships (ie. adding contrast), then you will create gaps between those original 8 bit tones. This holds true regardless of whether you edit in 8 bit or 16 bit. For example 200,200,200 in 8 bit equates to 25700,25700,25700. The closest related 8 bit tone of 199,199,199 will equate to 25572,25572,25572. So, there are 128 tones of 16 bit data that go unused between the two tonal values. If you stretch the image far enough, what was 199,199,199 next to 200,200,200 will now be 190,190,190 next to 210,210,210 - resulting in visible banding. There is no way to get around that relative gap between tones, not even by doing the edit after converting from 8 bit to 16 bit.
    Now, take that same 8 bit image and compress the tonal relationships (ie. reduce contrast) while still in 8 bit mode and you start to lose image data. What were two closely related tones will become merged into a single tone, and so on. There is no in-between for 199,199,199 and 200,200,200 so the data has to become one or the other. Do the same edit after converting to 16 bit mode, and all of a sudden there is plenty of room for finer transitions between the original 199,199,199 and 200,200,200. There are 128 tones between the two in 16 bit mode, so you can compress it severely while still maintaining discrete tonal relationships. You can then go on to pull most of the data right back out by adding the contrast back in. This is why the first test I ran across works so well. It shows how forgiving 16 bit mode is when you are compressing tonal values.
    I believe the question of rounding errors is largely a red herring. The 8 bit data will get converted to positional markers on the 16 bit scale that represent the same relationship between tones and as data gets stretched through the addition of contrast, the relative gaps between tones will become larger, even if they are being calculated on the 16 bit scale. The math and rounding errors are insignificant (can you really see the difference between 25690,25690,25690 and 25700,25700,25700??). The real question is whether the edits you do are compressing tones or stretching tones.
    So, here's my summary of the issue.
    1) 8 bit data edited in 8 bit. Worst
    You lose data when you compress the image (reduce contrast). You risk posterization when you stretch the image (add contrast).
    2) 8 bit data edited in 16 bit. Better.
    You don't lose much, if any, data when you compress the image (reduce contrast). However, you will still risk posterization when you increase contrast (from the original state, not reduce contrast then add it back like the first test in the thread).
    3) 16 bit data edited in 16 bit. Best.
    Obviously.
    My final takeaway after testing this a whole bunch.... it's DARN hard to see any of these differences in real world images when doing realistic real world edits.
    Don't sweat the little stuff. :)
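    For anyone who wants to see the compress-versus-stretch distinction numerically, here is a minimal sketch. It assumes a gamma-1 Levels move and a 0..32768 scale for 16-bit; the 120/140 endpoints follow the examples earlier in the thread:

        import numpy as np

        def levels(img, in_lo, in_hi, out_lo, out_hi):
            # A gamma-1 Levels move: remap in_lo..in_hi onto out_lo..out_hi and round to integers.
            x = np.clip((img - in_lo) / (in_hi - in_lo), 0, 1)
            return np.round(out_lo + x * (out_hi - out_lo))

        ramp8  = np.arange(256)                          # every tone of a smooth 8-bit gradient
        ramp16 = np.round(ramp8 / 255 * 32768)           # the same gradient promoted to 16-bit
        lo16, hi16 = round(120 / 255 * 32768), round(140 / 255 * 32768)   # 120 and 140, rescaled

        # Stretch only (add contrast): gaps open up no matter which grid you compute on.
        print(np.unique(levels(ramp8,  120,  140,  0, 255)).size,
              np.unique(levels(ramp16, lo16, hi16, 0, 32768)).size)       # about 21 tones either way

        # Compress only (reduce contrast): the 8-bit grid merges tones, the 16-bit grid does not.
        print(np.unique(levels(ramp8,  0, 255,   120,  140)).size,
              np.unique(levels(ramp16, 0, 32768, lo16, hi16)).size)       # 21 versus all 256 tones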
     
  46. Do the tests again. At least with a modern version of Photoshop (I'm using CS4), 16-bit to 8-bit conversions are absolutely not the same with Dither on versus Dither off.​
    1) Since I don't have a later version than CS version 8.0 and you have CS4, why don't you do the tests?
    2) You also need to explain how dither or no dither in your version of Photoshop impacts the results, since according to you it works very differently from my version. My hunch is that dither will blur the lines between tonality differences and thus would have a similar effect as adding noise and that would cause the histogram to look better but the image would suffer from a loss of sharpness.
    Furthermore, I don't see how an impact of dither when down-converting from 16 to 8 bit has any bearing on the case we have been discussing here, which is what happens when you up-convert from 8 to 16 bit and the resulting improvements as compared to staying in 8 bits.
     
  47. Much of what I found supports Andrew's position.​
    You need to make some better distinction between the issues at hand, what your findings are and how those agree or disagree with Andrew's position.
    Everybody seems to agree that the higher the bit depth of the original image, the better the quality and the lower the posterization of the edited image, and your findings agree with that. The other issue that you have not addressed is how the final image differs if you start out with an 8 bit image and edit with or without first up-converting to 16 bits.
    Don't sweat the little stuff. :)
    It all depends if you can afford to not sweat the little stuff. Many people work with 8 bit images and many people have posterization issues that more than likely would go away if they first up-converted their image to 16 bits.
     
  48. The other issue that you have not addressed is how the final image differs if you start out with an 8 bit image and edit with or without first up-converting to 16 bits.​
    Re-read what I wrote. The entire post was about that issue - how the image responds when it is native 8 bit and edited in 8 bit, when it is native 8 bit and edited in 16 bit, and when it is native 16 bit. The answer is that it depends on what type of edits you are doing to the image.
    Many people work with 8 bit images and many people have posterization issues that more than likely would go away if they first up-converted their image to 16 bits.​
    That's the key difference of opinion in this thread. I don't believe that upconverting to 16 bit solves the most common reason for posterization - when you try to add contrast to an image. Upconverting is certainly preferable, but for some operations it has essentially zero benefit.
    It all depends if you can afford to not sweat the little stuff.​
    My comment was meant much more broadly... not particularly limited to questions of 8 bit and 16 bit. :)
     
  49. The point in question here seems to be primarily concerning the difference between 8 bit edits and 8-bit to 16-bit edits. No one is talking about the difference between native 8-bit vs 16-bit edits (other than Dan Margulis, I suppose). There is no need to concern oneself with "dithering" going from 16-bits to 8-bits, as that is not what Frans et al. are talking about. We need only start with an 8 bit file and do two comparative edits - one in 8 bit and one upconverted to 16-bits. Like Frans, I have done this test before and got similar results to his shown above. It is beyond question that converting an 8-bit image to 16-bits first and then editing will result in less image degradation than staying in 8-bits all the way.

    So the moral of this story is... If you are stuck with 8 bit output (which the OP is at present), then it is best to convert to 16-bits before doing any major edits, the reason being that you LOSE NOTHING, but stand to GAIN (comparatively) something.
     
  50. 1) Since I don't have a later version than CS version 8.0 and you have CS4, why don't you do the tests?​
    I did, and it proves that dither does affect the conversions. Plus, the post of Bruce's above is from Feb 2002, so it seems pretty clear to me that whatever version was out 7 years ago behaved the same way.
     
  51. I don't believe that upconverting to 16 bit solves the most common reason for posterization - when you try to add contrast to an image.​
    Applying Levels 120/1/140 is adding contrast to the max and as I have shown with my examples there is a huge difference between the 8 and 8 to 16 bit image.
     
  52. I did, and it proves that dither does affect the conversions​
    Care to share examples with and without dither with us and explain how it matters since all the indications from Adobe are that dither doesn't come into play when you convert a file from 8 to 16 bits or vice versa?
    the post of Bruce's above is from Feb 2002, so it seems pretty clear to me that whatever version was out 7 years ago behaved the same way.​
    My CS version is from even later than that, 2003, and the accompanying help information is pretty clear in stating that dither only comes into play when converting an 8 bit file to a different work space, as I already have mentioned. Besides, the only places in the CS menus where you can check or uncheck dither are when you want to convert to a different work space, not when you up- or down-convert between 8 and 16 bits.
     
  53. Care to share examples with and without dither with us and explain how it matters since all the indications from Adobe are that dither doesn't come into play when you convert a file from 8 to 16 bits or vice versa?​
    The method is described above using Apply Image on both conversions. The net result is NOT a single level, but rather several on either side, which proves there's a difference: the result of noise being applied in the conversion. It matters because dither IS added in the conversion, and I think Bruce above explains why one would wish to test without this affecting the data.
     
  54. The method is described above using Apply Image on both conversions. The net result is NOT a single level, but rather several on either side, which proves there's a difference: the result of noise being applied in the conversion.​
    Did I mention I don't have CS4? I thought I did! Why not post the resulting images, similar to what I did so we can all learn from your wisdom?
    It matters because dither IS added in the conversion and I think Bruce above explains why one would wish to test without this affecting the data.​
    So show me with examples.
    It's also interesting to note that a discussion on how editing an 8 bit version versus the upconverted 16 bits version of the same image may get you different results, is now overshadowed by the issue of dither or not while it is unclear if dither even comes into play.
     
  55. Did I mention I don't have CS4? I thought I did! Why not post the resulting images, similar to what I did so we can all learn from your wisdom?​
    Apply Image has been in Photoshop since version 1.0.7 if memory serves me (it's been 19 years since I installed that build).
    It's also interesting to note that a discussion on how editing an 8 bit version versus the upconverted 16 bits version of the same image may get you different results, is now overshadowed by the issue of dither or not while it is unclear if dither even comes into play.​
    It comes into play because it introduces an apples to oranges comparison of the data handling! And it's in response to your post here, which is incorrect:
    Adobe says it only works on 8-bit files when you convert to a different color space. They don't say it works on 16-bit files and they don't say it works when you convert from 8 to 16 or 16 to 8 bit in, presumably, the same work space. My tests seem to confirm what Adobe says.​
    Here is the same document converted with and without Dither:
    [Attached image]
    IF the conversions were identical, there would be a single level (level 128). By moving the Levels sliders as I already specified above, it's easy to see the noise introduced by dither INTO the conversion, which DOES affect the science.
    This is applied with all color space conversions (even trips to Lab) as well as conversions from 16-bit to 8-bit.
    I don't know what tests you did to "prove" that dither does nothing here, so again, I suggest you test this using the instructions above.
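    A loose simulation of what dither in a 16-bit to 8-bit conversion does, and why two such conversions can't come out pixel-for-pixel identical; the noise model here (uniform, up to half an 8-bit step) is an assumption for illustration, not Photoshop's actual implementation:

        import numpy as np

        rng = np.random.default_rng(0)

        # A 16-bit (0..32768 scale) sky-like gradient covering a narrow tonal range.
        hi16 = np.linspace(0.55, 0.62, 100_000) * 32768

        plain    = np.round(hi16 / 32768 * 255).astype(np.uint8)    # straight 16-bit -> 8-bit rounding
        dithered = np.round(hi16 / 32768 * 255
                            + rng.uniform(-0.5, 0.5, hi16.size)).astype(np.uint8)   # same, with noise

        print(np.unique(plain).size, np.unique(dithered).size)   # a similar handful of output levels...
        print(np.array_equal(plain, dithered))                   # ...but they no longer agree pixel for
                                                                 # pixel, which is why Apply Image shows
                                                                 # more than a single level 128 with
                                                                 # Dither on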
     
  56. I'm no expert mediator, but I'm guessing we're not going to get you two to send each other Christmas cards!
     
  57. Applying Levels 120/1/140 is adding contrast to the max and as I have shown with my examples there is a huge difference between the 8 and 8 to 16 bit image.​
    You are overlooking that you first did a massive reduction in contrast by applying output levels 120/140 before doing the 120/1/140 input levels adjustment. You didn't add contrast, you restored the original contrast that had been removed by the output levels adjustment. Nobody edits images that way in the real world, taking away contrast and then putting it back. Typically they will either want to reduce contrast or enhance it - not do one then the other.
    What your example images show, and what the test in the link I originally posted shows, is the advantage of 16 bit when doing contrast reductions.
    If you do just the contrast enhancement (input levels 120/1/140) without first reducing contrast you'll get exactly the opposite results. The 16 bit image will be better, and the 8 bit image will look worse regardless of whether it was edited in 8 bit or 16 bit. (Of course, they will all look like crap because no one would need to make that strong of an adjustment on a real world image.)
     
  58. Apply Image has been in Photoshop since version 1.0.7 if memory serves me​
    Could you please explain why Apply Image would be needed to convert from 8 to 16 bits? I use Image>Mode>16 Bits/Channel and, as I have mentioned already, dither doesn't seem to be an option when I use that. What is overabundantly clear from my posted examples is that whether dither was at play or not, editing an 8 bit image that has first been converted to 16 bits causes way less damage than the same editing applied to the unconverted 8 bit image. It's not that I don't want to understand if and how dither plays a role or not, but my examples are hard to reason away, don't you think?
    And it's in response to your post here, which is incorrect​
    Well, look for yourself what Adobe says for my version of Photoshop: "The Use Dither (8-bit/channel images) option controls whether to dither colors when converting 8-bit-per-channel images between color spaces. This option is available only when the Color Settings dialog box is in Advanced Mode." If that's incorrect, then you'd better use your connections with Adobe to set them straight.
     
  59. Could you please explain why Apply Image would be needed to convert from 8 to 16 bits?​
    It absolutely is not needed. It's an analytical tool. It proves there's a difference using dither and not using dither. Read the posts above about this tool.
    I use Image>Mode>16 Bits/Channel and, as I have mentioned already, dither doesn't seem to be an option when I use that.​
    Dither is accessed in Color Settings. You are under the incorrect impression it plays no role in bit depth conversions and it does, Apply Image proves this. Did you read Bruce's quote?
    Well, look for yourself what Adobe says for my version of Photoshop:​
    Well what they say is true. And it also affects conversions from 16-bit as Bruce and I pointed out and as I've illustrated using the Apply Image test you've yet to try. What they say in the small info type in Color Settings is not incorrect. But it is incomplete. If everything they say here and elsewhere in the app was complete, there would be little need for books on the subject.
    Maybe you'll also notice I asked about dither in my FIRST post because it does affect the testing!
     
  60. You are overlooking that you first did a massive reduction in contrast by applying output levels 120/140 before doing the 120/1/140 input levels adjustment.​
    It doesn't matter. You overdo the editing to clearly show the difference. You could apply only some amount of Input Levels adjustment and you would still see a difference, but less dramatic. These tests are not designed to tell you to what extent the issue is going to show up in a particular image with a particular set of editing steps. They are designed to illustrate the underlying problems, what you may notice in your images if they show up, and how to avoid/reduce problems in the first place by upconverting an 8 bit image to 16 bits before editing.
     
  61. It absolutely is not needed. It's an analytical tool. It proves there's a difference using dither and not using dither.​
    OK, that helps a lot to clarify things. No, I did not apply this analytical tool; I visually compared the results between the edited 8 bit files, one edited with the Color Settings Use Dither on and the other off and I didn't see any difference. Using the analytical tool may very well have shown a difference but from my visual observations I can say that any impact of dither is very, very small compared to the overall results.
    So, it looks like first converting an 8 bit image to 16 bits before editing results in less posterization while dither may have a secondary, much smaller impact.
     
  62. It doesn't matter. You overdo the editing to clearly show the difference. You could apply only some amount of Input Levels adjustment and you would still see a difference, but less dramatic. These tests are not designed to tell you to what extent the issue is going to show up in a particular image with a particular set of editing steps. They are designed to illustrate the underlying problems, what you may notice in your images if they show up, and how to avoid/reduce problems in the first place by upconverting an 8 bit image to 16 bits before editing.​
    This comment tells me that you haven't read what I wrote above and/or haven't fully digested it. Re-read it again, then try the tests - because it absolutely does matter.
    The banding you see in your example does not arise from the addition of contrast. The banding happens because you took the nice smooth gradient blue sky image and compressed what were 256 discrete values (full 8 bit data) into just 20 discrete values by changing the output levels to 120/140. What was black was converted to 120, what was white was converted to 140. Many of the closely related tones in the sky get converted to the same value (which can only lie on the integers between 120 and 140). This compression of data that takes place in 8 bit mode is the destructive edit.
    When you put the contrast back in by doing input levels 120/1/140, all you are doing is making visible the damage that you did in the first step by stretching the values back to roughly their original position, except there are now far fewer tones to work with.
    Don't believe me? Try this.... compress the data first in 8 bit mode by setting output levels 120/140, then convert to 16 bit, then undo the compression by setting input levels 120/1/140. The file will still look like crap, even though you did the addition of contrast step in 16 bit mode.
    I'll restate it again...
    Adding contrast to an 8 bit image = no practical difference between editing in 8 bit or 16 bit.
    Removing contrast from an 8 bit image = big difference between editing in 8 bit or 16 bit.
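    To make that concrete without Photoshop, here is a rough numeric sketch of the three orderings (my own toy code, using a 0-65535 scale as a stand-in for 16 bit and a simple multiply-by-257 upconversion - Photoshop's internals differ, but the arithmetic point is the same):

        import numpy as np

        grad8 = np.arange(256, dtype=np.float64)            # smooth 8 bit gradient, 0..255

        def output_levels(v, lo, hi, maxval):
            # Output Levels lo/hi: squeeze the full range into [lo, hi]
            return np.round(lo + v / maxval * (hi - lo))

        def input_levels(v, lo, hi, maxval):
            # Input Levels lo/1.0/hi: stretch [lo, hi] back out to the full range
            return np.round(np.clip((v - lo) / (hi - lo), 0, 1) * maxval)

        # stay in 8 bit the whole time
        a = input_levels(output_levels(grad8, 120, 140, 255), 120, 140, 255)

        # compress in 8 bit, THEN convert to 16 bit, then re-expand
        b = output_levels(grad8, 120, 140, 255) * 257
        b = input_levels(b, 120 * 257, 140 * 257, 65535)

        # convert to 16 bit first, then compress and re-expand
        c = grad8 * 257
        c = input_levels(output_levels(c, 120 * 257, 140 * 257, 65535),
                         120 * 257, 140 * 257, 65535)

        print(len(np.unique(a)), len(np.unique(b)), len(np.unique(c)))
        # roughly 21, 21 and 256 distinct tones: the damage is done by the
        # compression that happens while the data is still 8 bit

    Converting to 16 bit after the compression step recovers nothing; converting before it keeps every tone.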
     
  63. Adding contrast to an 8 bit image = no practical difference between editing in 8 bit or 16 bit.
    Removing contrast from an 8 bit image = big difference between editing in 8 bit or 16 bit
    Indeed! As Bruce points out in his Real World Camera Raw, the two moves (compressing and expanding) have vastly different but equally profound effects on the data. With tonal compression, levels are lost, though differently than with clipping. If you lighten the midtones without moving the clipping point, the levels in between are compressed: pixels that were at differing values now have the same value. With tonal expansion, you don't lose the data per se; you stretch the data over a wider range, which often produces banding in smooth gradients. The gaps between the values are expanded too far.
     
  64. Andrew - that's a good point about the midtone slider!
    I can see how almost any edit involving a change in the tone curve (without adjusting the clipping points) will result in both a localized increase in contrast in one range and a decrease in contrast in another range. Even moving the midtone slider left or right will result in compression in the highlights and expansion in the shadows, or vice versa (there's a rough sketch of this at the end of this post).
    So, pretty much all image edits in the real world contain both addition and reduction of contrast. There would be portions of the image that would benefit from being upconverted from 8 to 16 bit, and portions where there would be no benefit.
    The problem areas are the ones where you want a net increase in contrast above the original 8 bit baseline - which can lead to posterization. The upconvert to 16 bit doesn't help there; it doesn't create more tonal steps, regardless of how much you compress and uncompress the data. In addition, the loss of data that happens during compression is only visible if you stretch out the data again. In the real world of image editing, you would have to create competing image edits of reducing contrast then adding it back for the benefit of the 8 bit to 16 bit conversion to really show. That is poor Photoshop technique, IMHO, if you are trying to do quality image editing, regardless of whether you're working in 8 bit or 16 bit.
    So, now I've come full circle.... I think that the test referenced in that first link I posted is contrived and not realistic. It doesn't represent how we actually work in PS, and it is also designed to heavily emphasize the one area where upconverting 8 bit to 16 bit actually makes a difference.
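    Here is the midtone slider sketch I mentioned above (my own toy code, treating the slider as a plain gamma curve, which is close enough for the argument):

        import numpy as np

        v = np.arange(256)                                   # 8 bit ramp
        gamma = 1.8                                          # a made-up midtone lift
        out = np.round((v / 255) ** (1 / gamma) * 255).astype(int)

        shadows = out[v <= 64]
        highlights = out[v >= 192]
        print(np.ptp(shadows) + 1, "output slots for", len(shadows), "shadow inputs")
        print(len(np.unique(highlights)), "distinct tones left from", len(highlights), "highlight inputs")
        # the shadows get stretched over a wider range (gaps open up) while the
        # highlights get squeezed (different inputs collapse onto the same tone)

    One move, both kinds of damage at once.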
     
  65. And then there's this just in from Chris Cox:
    Andrew;

    If you have an 8 bit source and need to apply extreme adjustments – then there might be some advantage to converting to 16 bit and dithering when converting back to 8 bit.
    In general, starting with an 8 bit image – you’ve already lost most of your detail.

    Dithering only happens when converting DOWN depths. There is no point (or purpose) in dithering when converting from 8 to 16, or 16 to 32.
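    A rough sketch of what Chris is describing (my own toy code - the amount and type of noise Photoshop actually adds isn't documented, so the uniform noise here is just a stand-in):

        import numpy as np

        rng = np.random.default_rng(0)
        ramp16 = np.linspace(120 * 257, 140 * 257, 10_000)        # a shallow 16 bit gradient

        plain = np.round(ramp16 / 257)                            # straight 16 -> 8 bit
        dithered = np.round((ramp16 + rng.uniform(-257, 257, ramp16.size)) / 257)

        def longest_run(x):
            # length of the longest run of identical adjacent pixels
            breaks = np.flatnonzero(np.diff(x) != 0)
            edges = np.concatenate(([-1], breaks, [len(x) - 1]))
            return int(np.diff(edges).max())

        print(longest_run(plain), longest_run(dithered))
        # plain: bands hundreds of pixels wide; dithered: short noisy runs the eye
        # averages away. Going UP from 8 to 16 bit there is nothing comparable to
        # smooth, which is Chris's point.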
     
  66. This has been a bizarre debate, which seems to have turned into a witch-hunt against Frans instead of answering the OP's question. The OP question concerns the difference between 8-bit upconverted to 16-bit edits vs native 16-bit edits. No one in this thread has suggested that the former could match or even be preferable to the latter. This we all agree on (except Dan Margulis, as I noted). Frans stated (correctly, imo) that if you are stuck with 8-bit output, which the OP is at present, then there can be some advantage to upconverting to 16-bit before editing as opposed to staying in 8-bit all the way. Now why Andrew is going on about converting from 16-bit to 8-bit I've got no idea. We are talking about the situation where the OP is stuck with an 8-bit output to begin with. And Sheldon seems to be wandering around all over the place popping up on both sides of the argument depending on which way the wind is blowing. Why Sheldon are you going on about the difference between contrast addition and removal? The OP's question doesn't concern only one or the other. The discussion here is talking about editing in general. Editing in general shifts pixel values around. The point that Frans, and now myself, are making is that there are precision errors when moving data around. 16-bit depth is more precise than 8-bit. Simple as that.

    Let's take an extreme example: 3-bit vs 16-bit. 3 bits can encode 8 levels, and 16 bits can encode 65,536 levels. Let's say your editing requires reducing all pixel values to 2/3 of their original values. Taking the number of levels in each bit depth as a round-number stand-in for the top value, see what happens:

    8*2/3=5.333; 65536*2/3=43690.666

    So, a simple error calculation associated with the 3-bit rounding (i.e. 5.333 becomes integer 5) is 0.333/8 = 0.042; the 16-bit rounding error is 0.666/65536 = 0.0000102. You can see that the error associated with rounding in the larger bit depth is much less than in the smaller bit depth. This is what Frans is saying. Now whether that makes a visible difference in real world edits is another question. It probably doesn't usually, but I have found with skies in particular (especially when you have blown the blue channel at capture) it doesn't take much to make them band.
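    If you want to check the precision argument without trusting my arithmetic, here is a small sketch (my own toy code; the 2/3 edit and the random tone values are arbitrary):

        import numpy as np

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 1, 100_000)                   # "true" tones on a 0..1 scale
        for bits in (3, 8, 16):
            levels = 2**bits - 1
            q = np.round(x * levels * 2 / 3) / levels    # scale to 2/3, then quantise
            print(bits, "bit: mean abs rounding error", np.abs(q - x * 2 / 3).mean())
        # the error is always about a quarter of one quantisation step, so it halves
        # with every extra bit of depth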
     
  67. Let's take an extreme example: 3-bit vs 16-bit. 3 bits can encode 8 levels, and 16 bits can encode 65,536 levels. Let's say your editing requires reducing all pixel values to 2/3 of their original values. Taking the number of levels in each bit depth as a round-number stand-in for the top value, see what happens:​
    Be careful here, the math doesn't automatically produce the results. For example, with 24 bit color, the math tells us we can define 16.7 million colors. One could suggest that 24 bit color therefore provides a lot more colors than fewer bits per color would. But if you capture a scene of, say, a gray card in 24 bit color, you're far from having 16.7 million colors. The math theoretically allows the definition of that many colors, but that hardly suggests all 24 bit files contain that many colors (clearly they don't). If you assume that a 16-bit document made from an 8-bit document has this many tone values, I think you'd better prove it. The math might be using a higher precision, but that doesn't mean it has that degree of tonal data to work with.
    You have a white or black document in 3-bit and then in 16-bit. Are you saying one has 8 levels while the other has 65,000-odd levels?
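    You can count it yourself; a quick sketch (my own toy code, assuming a straight multiply-by-257 upconversion):

        import numpy as np

        img8 = np.random.randint(0, 256, (1000, 1000)).astype(np.uint16)
        img16 = img8 * 257                               # 8 bit data in a 16 bit container

        print(len(np.unique(img8)))                      # at most 256 distinct tones
        print(len(np.unique(img16)))                     # still at most 256: the container
                                                         # got bigger, the data did not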
     
  68. No. That's why I said "...then there can be some advantage to upconverting to 16-bit before editing as opposed to staying in 8-bit all the way".
    The way I look at it is this. You lose nothing (as far as I know) from converting from 8-bit to 16-bit to do editing. But there is the chance of gaining something (in comparison with staying in 8 bit all the way) by doing it this way. So why not do it? Of course, as we all agree, the best option is to capture the highest bit depth to begin with, but if you're stuck with low bit depth, then it could make sense to move up to a higher bit depth for editing.
     
  69. With regard to the original post.... If I recall, the Oxygen software does have the ability to capture 16 bit scans. Scitex trumpeted the idea of creating archive scans in 16 bit space back in the late '90s. I would just poke around in the settings to find it.
    What I do remember from my Scitex was that the transparency scans were so good that a little dust clean up was all that was required.
     
  70. And Sheldon seems to be wandering around all over the place popping up on both sides of the argument depending on which way the wind is blowing.​
    Well, I'll certainly concede that I've refined my understanding of this issue over the course of this thread and that I don't agree with what I wrote in my first two posts (esp my second post).
    Why Sheldon are you going on about the difference between contrast addition and removal? The OP's question doesn't concern only one or the other. The discussion here is talking about editing in general. Editing in general shifts pixel values around.​
    This is the biggie. Pushing pixels around is fine, and we all agree on the obvious (8 bit good, 8 bit edited in 16 bit better, 16 bit best), but there is only one problem that everyone keeps coming back to - posterization. That's what this whole discussion is about, banding in smooth tonal transitions. There isn't any other issue that we're trying to fix.
    What causes posterization? Not having enough tones across a gradient to create the appearance of a smooth transition. You get that from trying to stretch data that isn't there.
    But my 8 bit file looks fine, no banding. That's right, it is fine, until you try to stretch those tones across a wider range (adding contrast).
    Well, I'll just convert up to 16 bit, that will solve my problem. NO, it won't!
    Sorry for the digression, but I hope you get my point. The OP's question was fundamentally this - Is it okay to just upconvert my 8 bit file to 16 bit and edit, or should I be more worried about the bit depth at the time of capture? My answer is that the bit depth at the time of capture is MUCH, MUCH more important than what bit depth you edit in. My recent comments in this thread have all been about the why behind that.
    The reason I go on about contrast addition/subtraction is this... The problem of banding/posterization results from adding contrast to an image, which is equivalent to stretching tones across a wider range. When you start in 8 bit, you gain nothing from editing in 16 bit that contributes to solving that problem (unless you are doing unusual edits that compress then uncompress tonal ranges). You only get a practical benefit from capturing at a higher bit depth.
    So, my argument is not against upconverting to 16 bit for editing. Obviously it is better and it takes no effort. My argument is against the illusion that upconverting to 16 bit solves the fundamental problem of posterization/banding - it doesn't. It may be useful in other areas but it fails to fix the one thing that really matters.
    The takeaway is this... put your effort into starting out with a higher bit depth image if you care at all about any of this.
    The point that Frans, and now myself, are making is that there are precision errors when moving data around. 16-bit depth is more precise than 8-bit. Simple as that.​
    You are way off track with this line of thinking. It may be true but it is a red herring to this discussion and doesn't matter in the practical world.
    To use your own example against you.... Convert both 8 bit data and 16 bit data to 2/3 of their original values. In the 8 bit world, 200,200,200 becomes 133,133,133. In the 16 bit world, 25700,25700,25700 becomes 17133,17133,17133. Take the 8 bit result and put it on the 16 bit scale and you have 17090,17090,17090. The difference between the two due to rounding error is 43 levels on the 16 bit scale - or, in 8 bit terms, about 1/3 of one level - which is basically not visible.
    None of the rounding errors of the 8 bit space, though they may exist, make any significant difference when we are talking about the cause of banding/posterization. The fundamental question at hand is this - what happens when you compress and stretch tonal values, and what benefit do you get from capturing/editing in 8 bit vs 16 bit?
    Anyhow, sorry to go back and forth on all these posts, I hate argumentative threads. But I'm interested in this issue (I scan and edit a lot of 4x5 film) and the more I look into it, the more I realize that there's only one real answer.... capture at a higher bit depth.
     
  71. Dither only comes into play when you convert an 8 bit image to a different color space, so no, dither was not on since I didn't convert to a different color space.​
    Uh no... wrong answer. If you have Use Dither selected as an option in the Advanced Color Settings, a low level noise dither will be applied on _ANY_ transform, whether it's RGB>CMYK, RGB>RGB, 8 bit to 16 bit or 16 bit to 8 bit. It does not surprise me that adding noise helps reduce banding... which is EXACTLY what Use Dither was put in place for, to cut down banding. So if you start in 8 bit and convert to 16 bit, you had noise added by virtue of the conversion - if you have that option on (it's on by default).
    What you are seeing is the expected benefit of adding a bit of noise when doing color and tone adjustments. You are _NOT_ getting the full benefit of working in 16 bit from the very beginning, however. Jumping into 16 bit from 8 bit will only ever provide a tiny benefit (and most of that is from adding the noise in the conversion).
     
  72. I have been away for only half a day and see what happens? Total chaos. Everybody is voicing a different opinion and NOBODY is posting any examples as proof of what they are saying. I'm in and out of a 4 day event, but when I have the time I'll try out Andrew's analytical tool and report back. I'll show the results of Photoshop CS version 8.0 drastic edits on 16 bit, 8 bit, and 8 bit converted to 16 bit images, with and without dither.
     
  73. Example images.... Image number one.
    This demonstrates how the destruction of image data happens during the reduction of contrast in 8 bit mode. Each image started out as a simple blue to green gradient. The first three slices were created in 8 bit mode, the last one was created in 16 bit mode. Each one of them had the same 2 adjustments applied - output levels 120/140 followed by input levels 120/1/140. The first slice stayed in 8 bit mode the entire time. The second slice was converted to 16 bit mode after the initial output levels adjustment. The third slice was converted to 16 bit before any adjustments. The fourth slice was native 16 bits the entire way through.
    The destruction happens during the compression of image data (output levels 120/140). It is made visible by the re-expansion of that data with the input levels adjustment.
    I could post an example of just the output levels adjustment, but all the images would look the same - flat, grey, no contrast.
    00TlUK-148265584.jpg
     
  74. Example image number two....
    This shows how up converting from 8 bit to 16 bit does not prevent banding/posterization. The images all started out as a simple blue to green gradient. All images have had just one edit applied to them, input levels 120/1/140. The first slice started out as 8 bit data and was edited in 8 bit. The second slice started out as 8 bit data, was upconverted to 16 bit, then had the levels adjustment applied. The third slice was native 16 bit and was edited in 16 bit.
    This test is different than the first test because it is taking the original blue/green gradient and enhancing the contrast from the starting point. The first test just reduces contrast then puts it back to the same place where it started.
    00TlUT-148267584.jpg
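    For anyone who wants to reproduce the tone counts behind this example without Photoshop, here is a rough sketch (my own toy code, using the same 0-65535 stand-in for the 16 bit scale as before):

        import numpy as np

        def stretch(v, lo, hi, maxval):
            # Input Levels lo/1.0/hi: expand [lo, hi] to the full range
            return np.round(np.clip((v - lo) / (hi - lo), 0, 1) * maxval)

        grad8 = np.arange(256)
        grad16 = np.arange(65536)                                   # stand-in for native 16 bit data

        a = stretch(grad8, 120, 140, 255)                           # slice 1: 8 bit all the way
        b = stretch(grad8 * 257, 120 * 257, 140 * 257, 65535)       # slice 2: upconverted first
        c = stretch(grad16, 120 * 257, 140 * 257, 65535)            # slice 3: native 16 bit

        print(len(np.unique(a)), len(np.unique(b)), len(np.unique(c)))
        # slices 1 and 2 end up with the same couple of dozen tones; only the native
        # 16 bit data has enough real tones to survive the stretch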
     
  75. ...but there is only one problem that everyone keeps coming back to - posterization. That's what this whole discussion is about, banding in smooth tonal transitions. There isn't any other issue that we're trying to fix.​
    I don't know about that. I thought it was about image quality in general, but if we want to limit it to posterization then that's OK. I thought Frans' test images way above proved that banding could be reduced by converting up to higher bit depth, but seeing Jeff's post, the improvement may be down to the application of dithering. Why are your results different? Did you turn dithering off? If so, then that answers the question: converting up to a higher bit depth doesn't really matter; it is the application of a certain amount of noise that smooths gradients out. Is that the right assessment to make?
     
  76. I've just done a bit of checking of my PS setup, and I have the dithering checkbox checked. But when I up-convert an 8 bit image to 16 bits, there is no addition of any noise. I have checked this with Guillermo's Histogrammer, and all but every 256th level is empty. I only have PS 7, so I don't know if things may have changed since then.
    Here is a post I made earlier in the year on this issue and the attached image. Considering that I haven't changed my settings, I am assuming that the dithering checkbox was checked then too. Note that I used a polarizer (on this wide angle shot... tsk, tsk, tsk), which has accentuated the gradient in the sky. But the sky wasn't blown in any channel in the original.
    Here's an example of 8 vs 16 bit. The 8 bit version shows banding in the sky. The 16 bit image was actually an 8 bit jpg that was converted to a 16 bit image for the editing phase, and then converted back to 8 bits. The reason I didn't work in 16 bit the whole way is that with my version of PS (PS7) I couldn't work out how to add a layer copy of the image itself (which is the method I used to give the image that sort of slightly bleached look). This shows that 16 bit editing can be useful for images which start out as 8 bit files.
    Now I guess there is more than one way to skin a cat, and Patrick could probably do this another way, perhaps (or perhaps not) with less banding. But it does definitely show the value of 16 bit editing.
    By the way, the method was: layer copy -> desaturate -> multiply blend mode -> levels (for brightness and colour imbalance) -> hue/saturation (admittedly a rather large value of 35).​
    00Tld8-148333684.jpg
     
  77. I had dithering off for my examples. I think that dithering on/off would make a difference, but more as a mitigation of the banding caused by tonal adjustments. I don't think it is the primary factor; more likely a secondary one. I'll have to try my tests again with dithering on later tonight to see if there is a difference.
    I thought Frans' test images way above proved that banding could be reduced by converting up to higher bit depth, but seeing Jeff's post, the improvement may be down to the application of dithering. Why are your results different? Did you turn dithering off? If so, then that answers the question.​
    Both Frans and I are getting the same results from our tests; I just added some additional tests with more variations. His three test images are the same test, with the same result, as the first example image I posted, except that he didn't do what I did in the second slice (convert to 16 bit halfway through the editing process, after compression but before expansion). His first image is equivalent to the 4th slice in my image, his second image is equivalent to the third slice, and his third image is equivalent to the first slice.
    Looking at your example I'm guessing the top image is the 8 bit one and the bottom image is the 16 bit one? I can see a bit of a difference, but I think that the overall jpg compression from uploading the image may be interfering.
    The edits you did on the image are a good example of how complicated things can be in Photoshop - adding and removing saturation, adding contrast through blending modes, and the fact that saturation and contrast are intertwined by their very nature. Your steps introduce both tonal compression and tonal expansion, a subtle version of what Frans' test and my first example are doing. It certainly shows that when image edits get complicated it is better for the image to be in 16 bit.
     
    I have learned a lot by reading all the discussion and the reference links - a topic that otherwise I would not have looked into. Thanks for the discussion.
     
  79. THE DITHER EXPERIMENT RESULTS ARE IN!
    As promised, I ran some more tests. I applied the previously described editing to a 16 bit file, then to the same file converted to 8 bits, and then to the same file converted to 8 and then back to 16 bits. I did this with both Use Dither on and Use Dither off. When visually inspected, all the resulting images look like the ones I posted previously. I then applied Andrew's suggested analytical tool (which compares two images to see if they are identical or not) to the pairs of 16 to 8 bit and 16 to 8 to 16 bit images (with and without Use Dither) and guess what? Both sets show absolutely no difference; both resulting images come out a solid 128 value gray. So, at least for my Photoshop CS, version 8.0, dither doesn't make a difference. How about that, Andrew?
    Conclusions:
    1) Capturing at a higher bit depth is best
    2) There is an advantage to converting an 8 bit file to 16 bits before editing
    3) For Photoshop CS, version 8.0, dither doesn't make a difference
     
  80. Well done Frans, I'm sure you'll sleep like a log tonight! How old are you???
     
  81. I'll have to try my tests again with dithering on later tonight to see if there is a difference.​
    I repeated the same tests with dither turned back on. No real difference. I think dither is a very subtle issue compared to the scale of the edits we are doing for these tests.
     
  82. Sheldon,
    Did you apply the Apply Image tool from Andrew to compare the results and if so, what did it show?
     
  83. Sheldon,
    I forgot to ask: do you use Photoshop and if so, what version (like CS2, version 9.1)? If your Photoshop is different from mine (CS version 8.0) could you do me a favor and tell me what exactly the help function says about Using dither (Help>Photoshop Help>Search: Using dither). Mine says: "The Use Dither (8-bit/channel images) option controls whether to dither colors when converting 8-bit-per-channel images between color spaces. This option is available only when the Color Settings dialog box is in Advanced Mode. When the Use Dither option is selected, Photoshop mixes colors in the destination color space to simulate a missing color that existed in the source space. Although dithering helps to reduce the blocky or banded appearance of an image, it may also result in larger file sizes when images are compressed for Web use."
    I added the emphasis in bold because I think that's the crucial part of the description.
    Thanks in advance!
     
  84. Frans (and Sheldon)... get your hands on this piece of software for showing 16 bit histograms. Convert your 8-bit image up to 16-bits with dithering on and see if it adds any noise to the image. I have photoshop 7, and it certainly doesn't add any noise.
    So as far as I am concerned, on PS7, you will maintain better image quality when editing an 8-bit file by first converting it up to 16-bits. Dithering plays no part in this, and I presume it can only be attributed to reduced rounding errors of the larger bit-depth.
     
  85. Sheldon and Bernie,
    Thanks for your inputs. Nice to see at least some people agree with what I believe to be true.
    Bernie,
    Can you explain what this piece of software does? My Spanish is next to non-existent and I couldn't even figure out how or where to download it.
     
  86. I didn't try the Apply Image tool... I was just looking for seat-of-the-pants differences. I'm using CS4. The help function for CS4 regarding dither says pretty much the same thing as you quoted.
    You will maintain better image quality when editing an 8-bit file by first converting it up to 16-bits. Dithering plays no part in this, and I presume it can only be attributed to reduced rounding errors of the larger bit-depth.​
    Agreed, except I think the benefit is not because of the enhanced mathematical precision of 16 bit, but because there are simply more available tones and more space for existing tones to be pushed around without being lost. We might be talking about the same thing here, just saying it in different ways.
     
  87. Agreed, except I think the benefit is not because of the enhanced mathematical precision of 16 bit, but because there are simply more available tones and more space for existing tones to be pushed around without being lost. We might be talking about the same thing here, just saying it in different ways.​
    Yep, I think so.
    Frans... on that link, click on one of the two links in the top right of that page. They say something like "Histogrammer v1.1" and "Actualizer v 1.2". These are the download links. I think they both link to the same program. The program shows you 16-bit histograms, unlike Photoshop, and allows you to zoom right in on the histogram. To load your image in the program you need to click the button with the three dots (...) on it. There's a whole lot of other stuff in there that I haven't bothered working out yet.
     
  88. Thought it might be interesting to apply the same editing that I used for the blue sky with clouds detail image to a gray wedge. The upper image is after editing the 16 bit version, the middle image after editing the 16 to 8 to 16 bit version and the lower image after editing the 16 to 8 bit version. Again, it didn't make any difference if I checked or unchecked the Use Dither box in Color Settings. While nobody creates and prints gray wedges just for the sake of it (at least I hope nobody does), these images clearly show the same kind of results as the actual blue sky with clouds detail image.
    00ToCN-149767584.jpg
     
  89. Well, I just about read the whole thread. Thank you to each one who participated. May I make an analogy? Processing digital sound is analogous to processing digital "light" - in processing 16 bit digital sound, Adobe Audition allows processing to take place in 32 bit floating point for the simple reason that accumulated quantization errors are reduced in the final product. Obviously working in 16 bit photography is ideal; upconverting from 8 bit prior to processing still makes sense.
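    To put a number on the audio analogy, here is a rough sketch of why a higher-precision working space leaves less accumulated error behind (my own toy code; the six gain values are made up):

        import numpy as np

        rng = np.random.default_rng(2)
        true = rng.uniform(0, 1, 100_000)                    # "true" tones on a 0..1 scale
        gains = (0.82, 1.19, 0.74, 1.31, 0.91, 1.15)         # six made-up tonal moves
        ideal = true * np.prod(gains)                        # what unlimited precision would give

        for levels in (255, 65535):
            v = np.round(true * levels)
            for g in gains:
                v = np.round(v * g)                          # every integer edit rounds again
            print(levels, "levels: mean error vs ideal", np.abs(v / levels - ideal).mean())
        # the leftover error is a small multiple of one quantisation step either way,
        # so the higher-precision pipeline lands a couple of hundred times closer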
     
