What's the case for gamma today?



<blockquote>

<p>We never came to a conclusion as to why when I sample down the images, the RGB values don’t change a lick.</p>

</blockquote>

<p>Only values where two or more different input pixels were downsampled into a single output pixel will change. You measured the value of a solid color, which would not be expected to change.</p>

<blockquote>

<p>Again, this has not been proven.</p>

</blockquote>

<p>If you had understood what I had written instead of asking enough irrelevant questions to demonstrate that you did not, you could have determined for yourself that it was probably correct. I am attempting to prove it, but that may not be possible without an Adobe engineer.</p>

<blockquote>

<p>Show me the exact steps and I’ll see if you are correct or not.</p>

</blockquote>

<p>Since posting what you were replying to, I have determined that the perceptual conversion to the V4 profile is the cause of the otherwise unexpected behavior. The behavior of the resizing is now explained, though the behavior of the perceptual conversion is not.</p>

<p>Not one observation is inconsistent with the claim that many image processing operations are only correctly performed in linear space.</p>

<p>The dramatically different behavior of Photoshop’s rescaling between 8 or 16 Bits/Channel and 32 Bits/Channel should be a dead giveaway that one of the two is incorrect.</p>

<p>Finally, I disagree that it is not in your best interests to come to a correct conclusion rather than championing an incorrect belief. Hypothetically, if you were to vehemently argue in favor of ideas that were completely incorrect, it might damage your reputation when that later came to light, given that plenty of information was available at the time to determine that they were incorrect.</p>



<p>One of the simplest possible test images can be found at the top of http://filmicgames.com/archives/354.</p>

<p>In this case the correct interpolation between black and white lines is a gray of 50% linear intensity. Encoded in sRGB, 50% linear intensity is approximately 187.516, which matches the 187 swatch in this image. Incorrectly interpolating in gamma space instead yields a value of 127.5, which matches the 128 swatch in this image.</p>
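<p>For anyone who wants to check the arithmetic, here is a minimal sketch (ordinary Python, not Photoshop's code) of averaging a black and a white pixel both ways, using the standard sRGB transfer curve:</p>

```python
# Average a black (0) and a white (255) pixel two ways and compare.

def srgb_to_linear(v):
    """Decode an 8-bit sRGB value to linear light in 0.0-1.0."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode linear light in 0.0-1.0 back to an 8-bit sRGB value."""
    v = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return v * 255.0

black, white = 0, 255

# Correct: decode to linear light, average, re-encode.
lin_avg = (srgb_to_linear(black) + srgb_to_linear(white)) / 2
print(linear_to_srgb(lin_avg))   # ~187.5, matching the 187 swatch

# Incorrect: average the gamma-encoded values directly.
print((black + white) / 2)       # 127.5, matching the 128 swatch
```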

<p>It is clear in the full-resolution image that the average brightness of the alternating lines is approximately the same as the 187 swatch and much brighter than the 128 swatch. In the version downsampled to 50% by Photoshop CS2, the striped areas instead match the 128 swatch, having become much darker than in the original image. This is <em>incorrect</em> behavior.</p>

<p>That <em>is</em> proof of at least a portion of my claims.</p>


<blockquote>

<p>I know it is using a linear calculation <em>because</em> the scaling is done correctly.</p>

</blockquote>

<p>So all scaling done without a linear calculation is always incorrect; that’s your point? You can prove that how? Sure sounds like the “<em>I know the ships are sailing off the earth because I saw it with my own two eyes</em>” mentality. Outside of this web site, is there some literature that backs up what you’ve just said? And so your take is that, depending on the rendering intent or the profile, the scaling may be linear or non-linear? Photoshop does both?</p>

<blockquote>

<p>This is a leading question based on the incorrect assumption that the algorithm changes.</p>

</blockquote>

<p>Well, you just stated that this has to be the difference, right? The same profile with a different rendering intent produces either the right or the wrong scaling, so what else is it?</p>

<blockquote>

<p>The algorithm does <em>not</em> change; it is the image data that is different between the two color spaces. Since the color spaces are not actually very different, I think it is the perceptual table in this profile that is altering the brightness of the Dalai Lama image but not the “Your scaling software” image.</p>

</blockquote>

<p>Then this “issue” or problem with an incorrect result is not solely based on the TRC gamma of the working space, right? Or do the incorrect results require some kind of manufactured image? It sounds like the image has to be manufactured to produce the incorrect results the author needs to illustrate his theory, and that is why the “my image sucks” image doesn’t show this behavior depending on the gamma encoding tested. The bottom line after all this is that your idea that “<em>Lots of software performs important operations incorrectly in gamma space</em>” should be retyped to say “<em>if an image is designed to produce an effect that we can say is incorrect after sampling, it will sample incorrectly</em>.” Kind of makes sense, since millions, maybe hundreds of millions, of images have been resampled in Photoshop, and only a few on the fringe seem to find that images like “your image sucks”, but not always the Dalai Lama, exhibit this “problem”, depending on all kinds of mostly insignificant combinations!</p>

<p>Yet your point continues to be “<em>Lots of software performs <strong>important</strong> operations <strong>incorrectly</strong> in gamma space.</em>” </p>

 

<blockquote>

<p>I have no way of knowing what the internal calculations are, but the input and output values are consistent with linear calculations and gamma space calculations for the respective cases. Again, the calculation is the same either way, but in one case the V4 perceptual table has <em>mangled</em> the image and in the other case the image is unchanged.</p>

</blockquote>

<p>Mangled? I see. OK. That’s your scientific term for it? Now I’m getting the idea that this is just a theological image-processing rant, not a scientific search for what’s going on here. I’m not interested in discussing theology (must be the atheist in me). You apparently have some kind of belief system in play here that’s not going to take scrutiny in any form: the earth was created in 7 days; forget the carbon dating, or the effect of two very similar profiles on the data and the result of the sampling. Time to dig out of this rabbit hole in the opposite direction!</p>

<blockquote>

<p>The image gets significantly lighter during a perceptual conversion from sRGB to sRGB</p>

</blockquote>

<p>So what? Do you understand what a Perceptual rendering intent brings to the party that a RelCol doesn’t? Do you understand that if we had the software to build a V4 sRGB profile, each software product would do so differently? Do you CARE that a V4 profile with a 2.2 TRC, not a 1.0 TRC, affects the data in a way that calls into question the theory that a linear conversion has to be taking place? Are you the least bit curious about this, or would you rather just stick to a theological idea of image processing and to the idea that “<em>Lots of software performs important operations incorrectly in gamma space</em>”?</p>

<blockquote>

<p>Again, the perceptual table does not make the calculations correct; they now have different input values other than the exact values needed to make the result gray.</p>

</blockquote>

<p>Ah, so yes, we have to <strong>build</strong> images a fixed way to introduce a result that proves <em>lots of software performs important operations incorrectly in gamma space</em>? Is that the idea? And for images that are not built this way, the results of these operations mean what?</p>

 

Author “Color Management for Photographers” & “Photoshop CC Color Management” (pluralsight.com)


<p>It is equally valid to convert to the V4 sRGB color space with absolute or relative colorimetric rendering intent. In both of those cases, the V4 color space behaves like the V2 sRGB color space with respect to scaling of any images, showing that the cause of the discrepancy is the <em>conversion</em>, not the color space.</p>

<blockquote>

<p>The same profile but with a different rendering intent used either produces the right or wrong scaling so what else is it?</p>

</blockquote>

<ul>

<li>The V2 sRGB profile results in incorrect scaling.</li>

<li>A perceptual rendering intent conversion to the V4 sRGB profile significantly alters some images, which are then also incorrectly scaled.</li>

<li>Absolute or relative colorimetric rendering intent conversions to the V4 sRGB profile do not significantly alter these particular test images, which are then incorrectly scaled.</li>

<li>If Photoshop is set to a Mode of 32 Bits/Channel, any color space should be correctly scaled. It is possible that the perceptual conversion would still cause the same changes it did before, but those changes would no longer interfere with correct scaling. (A way to test these conversions outside Photoshop is sketched below.)</li>

</ul>
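<p>For anyone who wants to reproduce the rendering-intent comparison outside Photoshop, here is a minimal sketch using Pillow’s littlecms bindings. It assumes you have downloaded the V4 profile (sRGB_v4_ICC_preference.icc from color.org); the profile file name and the test image name are placeholders:</p>

```python
# Sketch: convert the same sRGB image to the V4 sRGB profile with
# perceptual vs. relative colorimetric intent and compare the pixels.
from PIL import Image, ImageCms

im = Image.open("test_stripes.png").convert("RGB")  # placeholder image
v2 = ImageCms.createProfile("sRGB")                 # built-in V2 sRGB
v4 = ImageCms.getOpenProfile("sRGB_v4_ICC_preference.icc")

perceptual = ImageCms.profileToProfile(
    im, v2, v4, renderingIntent=ImageCms.INTENT_PERCEPTUAL)
relcol = ImageCms.profileToProfile(
    im, v2, v4, renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)

# If the perceptual table is what alters the image, the two outputs
# will differ; the colorimetric conversion should leave these test
# values essentially unchanged.
print(perceptual.getpixel((0, 0)), relcol.getpixel((0, 0)))
```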

<p>I am finished answering irrelevant questions. I would be happy to answer questions from anybody else, or to answer your questions if I believe you actually want to know the answers to them.</p>

<p>I have one final question for you:</p>

 

<ul>

<li>If an image with alternating black and white lines is resized to be small enough that the lines are no longer resolved, what should the resulting solid color be?</li>

</ul>

<p>If you understand the answer to this question, you should understand the difference between correct and incorrect scaling.</p>


<blockquote>

<p>Only values where two or more different input pixels were downsampled into a single output pixel will change. You measured the value of a solid color, which would not be expected to change.</p>

</blockquote>

<p>I’m perfectly aware that on complex images where four differing pixels are sampled into one, it’s <strong>quite likely</strong> the single pixel will have a different value. That’s kind of obvious. Your statement is clear: <em>if an image is resized to be smaller, and this is done in gamma 2.2 space, the resized image will be darker than the original image</em>. Again, SO WHAT? You are treating this as if it’s wrong and, worse, suggesting that this is a problem or points out something to do with linear vs. non-linear treatment.</p>

<blockquote>

<p>Finally, I disagree that it is not in your best interests to come to a correct conclusion rather than championing an incorrect belief.</p>

</blockquote>

<p>Of course you do. You apparently have a stake for some reason in this idea. I don’t. You and your friend may be perfectly correct, and if you can prove this to the group (something scientists do; it’s called peer review), great. So far you haven’t. You’ve expressed no solid, reproducible steps for your theory. You’ve spouted image-processing facts (facts in your mind) but haven’t proven them and, worse, haven’t done any work to uncover why my V4 profile with a 2.2 TRC doesn’t behave as the theory says it should. In the end, it doesn’t matter. Adobe isn’t going to change the product because no one has provided very good evidence that they should. Most of us are going to dismiss what you are saying because you can’t prove your points. You are the one who has to ask yourself why, based on the current evidence and your knowledge of Photoshop code and processing, you are so damn sure your belief system is sound. Maybe you care, maybe you just want to believe in something; again, I’ve got no interest in continuing a theological discussion of image processing. Your call. Get the facts (not the maybe, probably, I think), or produce a series of steps anyone here can follow to prove your point. Or let’s just agree to disagree. Makes no difference to me.</p>

<blockquote>

<p>If you had understood what I had written instead of asking enough irrelevant questions to demonstrate that you did not, you could have determined for yourself that it was probably correct.</p>

</blockquote>

<p><em>Probably</em>? Not good enough. It is or it isn’t, and once again, it’s your job to prove the point! At the time, the ships <em>probably</em> did sail off the edge of the earth. <em>Probably...</em></p>

<blockquote>

<p>Not one observation is inconsistent with the claim that many image processing operations are only correctly performed in linear space. </p>

</blockquote>

<p>Just as not one observation is consistent with the claim that many image processing operations are only correctly performed in linear space! Again, the burden of proof is on you! One operation raises serious questions about the theory at this point, and until you figure out why, it will remain a question and a burden on your theory.</p>

<blockquote>

<p>The dramatically different behavior of Photoshop’s rescaling between 8 or 16 Bits/Channel and 32 Bits/Channel should be a dead giveaway that one of the two is incorrect.</p>

</blockquote>

<p>No more than ships with three large sails fell off the earth faster than those with two. You may indeed be right, but you’ve provided no reason for this to be true. You have to do that or the behavior is moot. </p>

Author “Color Management for Photographers” & “Photoshop CC Color Management” (pluralsight.com)


<blockquote>

<p>I am finished answering irrelevant questions. I would be happy to answer questions from anybody else, or to answer your questions if I believe you actually want to know the answers to them.</p>

</blockquote>

<p>Just be sure you ask questions that produce the desired results the authors expect from their “theory”. Questions that challenge the validity of the “theory”, or questions whose answers have to be backed up with actual facts, will be ignored.</p>

Author “Color Management for Photographers” & “Photoshop CC Color Management” (pluralsight.com)


<p>Again, in case you missed it:<br>

If an image with alternating black and white lines is resized to be small enough that the lines are no longer resolved, what should the resulting solid color be?</p>

<p>Now the burden of proof is upon you to prove that you are competent.</p>


<p>No Joe, it doesn’t. The questions about the validity of YOUR theory are what’s in play here. It’s a nice try to skirt all this in my direction. You and I are done here, because you have raised more questions without answers, and more holes in your theory of the superiority of linear image processing and the incorrect work of Adobe, than you’ve answered, and now you are going to ask more questions. It’s answers we seek. If you ever get them, let us know. In the meantime, we have images to screw up in gamma-corrected space (at least according to you).<br>

So just what company do you write imaging code for as your day job? Just who is Joe C? (Anonymous posters with no info about them always raise suspicions, especially when they are so quick to call others trolls.)</p>

Author “Color Management for Photographers” & “Photoshop CC Color Management” (pluralsight.com)


<blockquote>

<p>I answered as many as I could to the best of my ability...</p>

</blockquote>

<p>Then you need some outside help in actually answering them to any degree of satisfaction on this end. Most of the answers were vague and dismissive. Until you figure out what’s really happening here and why, why the V4 profile does what it does, and why one test image produces vastly different results than another when the site YOU reference says they should produce the same results, yes, there is little point in continuing.</p>

Author “Color Management for Photographers” & “Photoshop CC Color Management” (pluralsight.com)


<p>I have as good an understanding of those things as is possible without reading Photoshop code. As you correctly point out, I have been unsuccessful in communicating those answers.</p>

<p>Can you please answer the question regarding the alternating black and white lines? It will probably be more productive than the last 10 pages of thread.</p>


<p>Not that it matters, but I just made myself a file with horizontal black and white stripes. Cut it in half, resolution-wise, a bunch of times. (All working in CS5.) The 32-bit file (well, converted to 32-bit) at super-small size (the file is now 30 pixels) looks pretty much even gray at 100%. The 16-bit file looks like a couple of stripes, though uneven. When you enlarge (just display) the 30-pixel files, the 32-bit one still looks mostly gray, though with darker bars at the top and bottom. The 16-bit one still looks like stripes, though they aren't uniformly black and white.<br>

As near as I can tell, this doesn't really prove anything, I'm afraid. The original file was stripes, so it could easily be argued that having stripes at all makes the 16-bit version more "right". The even gray display may be logically correct, but the original file did have stripes. (I'm passing on the squint test, since I wasn't allowed to do that for my driving test.)</p>


<p>Crapcrapcrap. Shouldn't write messages before I'm fully awake.<br>

Thanks to mislabeling the files, I described them exactly backwards. It is the 16-bit file that appears (mostly) gray and the 32-bit file that appears striped, though unevenly. I still believe either could be argued to "appear" correct: logically the fully-reduced file would be gray, but since the original file did have stripes, having some stripes is arguably a closer representation.<br>

It does make me glad that I don't actually reduce my photos to the point of illegibility, though I'm sure some folks would suggest that it might be an improvement to do so on any given day.</p>

 


<p>I am interested in the initial question, but the thread seems to have headed off in another direction that I could not follow. I would like to learn more about gamma. The following link, provided on page 1 of this thread, looks like another fine example from Mr Koren's web pages:<br>

<a rel="nofollow" href="http://www.normankoren.com/makingfineprints4.html#BW_testchart" target="_blank">http://www.normankoren.com/makingfineprints4.html#BW_testchart</a><br>

I was wondering if anyone could provide further links that may help me get to grips with this issue, as I have to be honest and say it does seem strange that we apply gamma more than once, but I am not in a position to debate or discuss until I know more about it.<br>

Regards, Andrew</p>


<p>What I think I have learned is that, with a few notable exceptions like Lightroom, most software we photographers use today was written in the days when gamma encoding was necessary, and it does not lend itself well to a linear workflow. Today, however, I believe (happy to be proven wrong) that a linear workflow is both possible and beneficial for us, in terms of being able to a) preserve the most information that was in the original capture, and b) work on it with minimal undesired data distortion.</p>

<p>The point I was hoping we would get to is that, again for us self-contained photographers, we really do not need to gamma-correct the underlying linear data at all, but can simply apply gamma to the version of the 16-bit data that we pass off for output: to the video card, to the printer drivers, to the JPEG encoder, etc., thereby keeping the underlying linear data whole. It looks like a linear workflow is definitely not ready for primetime in Capture NX2, my raw converter of choice.</p>
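<p>To make that concrete, here is a minimal sketch of the idea (the function and variable names are mine, for illustration only): keep the working data linear, and apply the sRGB encoding exactly once, at the output boundary.</p>

```python
import numpy as np

def encode_srgb(linear):
    """Apply the sRGB transfer curve to linear-light values in 0.0-1.0."""
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)

# All edits happen on linear data (a synthetic gradient stands in here).
img_linear = np.linspace(0.0, 1.0, 256)

# Gamma is applied only to the copy handed to an 8-bit output path
# (display, printer driver, JPEG encoder); the linear data stays whole.
img_out = np.round(encode_srgb(img_linear) * 255).astype(np.uint8)
```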

<p>As far as sources are concerned, I have found the following very interesting (the first three especially)<br>

<a href="http://graphics.stanford.edu/courses/cs178-10/applets/gamma.html">Gamma correction</a> - Stanford Applet<br>

<a href="http://www.normankoren.com/digital_tonality.html">Tonal quality and dynamic range in digital cameras</a> - Norman Koren<br>

<a href="http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html">Gamma FAQ - Frequently Asked Questions about Gamma</a> - Charles Poynton<br>

<a href="http://en.wikipedia.org/wiki/Gamma_correction">Gamma correction - Wikipedia, the free encyclopedia</a><a href="http://en.wikipedia.org/wiki/Gamma_correction">Gamma correction - Wikipedia, the free encyclopedia</a><br>

<a href="http://www.theasc.com/magazine/april05/conundrum2/page4.html">American Cinematographer: Color-Space Conundrum Part 2</a><br>

<a href="http://www.mathworks.com/matlabcentral/fx_files/7744/1/content/colorspace/doc/colorspace.html#HSV">colorspace formulas</a><br>

as well as the Burger and Russ textbooks that I mentioned in a previous post.</p>


<p>If the goal is to simulate the way image colors blur together in the eye, then the calculation should be in the <em>linear</em> XYZ space. Other linear spaces derived from XYZ space, such as one of the standard ICC working spaces converted to linear (with the gamma changed to 1 as described elsewhere in this thread), will work as well.</p>

<p>Adding or averaging values in a gamma-corrected space gives distorted results because, in such a space, 2^gamma plus 2^gamma does not equal 4^gamma.</p>
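<p>To put numbers on that (gamma = 2.2 is just the usual illustrative value):</p>

```python
g = 2.2
print(2 ** g + 2 ** g)  # ~9.19: the sum computed in gamma space
print(4 ** g)           # ~21.11: the gamma-space value of the true sum
```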

<p>Another link about gamma (sorry if it was mentioned before): http://www.all-in-one.ee/~dersch/gamma/gamma.html</p>

<p>Also note that the Exposure Tool in Photoshop works in linear gamma.</p>

<p>Linear can also give you better tones. Setting the black point in a linear space or with the Exposure Tool is IMO superior to using curves or levels in a gamma space: in linear, you get an effect that more realistically subtracts any diffuse, additive flare light from the shadows without damaging the mid-tones.</p>
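<p>A minimal sketch of the difference (the flare level of 30/255 and all names here are made up for illustration): subtract the flare estimate in linear light, and compare with a plain levels-style black point applied to the gamma-encoded values.</p>

```python
def srgb_to_linear(v):
    """Decode an 8-bit sRGB value to linear light in 0.0-1.0."""
    c = v / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode linear light in 0.0-1.0 back to an 8-bit sRGB value."""
    c = max(c, 0.0)
    v = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return round(v * 255.0)

flare = srgb_to_linear(30)  # assumed flare level, in linear light

for v in (30, 60, 128, 200):
    # Linear: subtract the flare light, then rescale to full range.
    lin = (srgb_to_linear(v) - flare) / (1.0 - flare)
    # Gamma space: levels-style black point on the encoded values.
    gam = (v - 30) / (255.0 - 30) * 255.0
    print(v, linear_to_srgb(lin), round(gam))

# The linear version pulls the flare out of the shadows while leaving
# the mid-tones nearly untouched (128 -> ~125); the gamma-space levels
# move darkens the mid-tones as well (128 -> ~111).
```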


<p>Sorry if this issue has already been dealt with, I've only had time to skim through most of the posts.</p>

<p>What nobody appears to have raised is the issue of <em>levels per density step</em>, or levels per stop. In linear space we rapidly lose tonal resolution as we move down from full white to the darker tones. That's the main use of a gamma curve: to even out the distribution of bits over the brightness range. The 65,536 levels per channel of a 16-bit space are all very well, but if we throw half of them away in the brightest stop, then that kind of defeats the object, doesn't it?</p>
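<p>The arithmetic behind “half of them in the brightest stop” is easy to check; each stop down from white in a linear encoding has half the codes of the one above it:</p>

```python
# Code values available in each stop below white, 16-bit linear data.
levels = 65536
for stop in range(1, 7):
    upper = levels >> (stop - 1)
    lower = levels >> stop
    print(f"stop {stop} below white: {upper - lower} levels")
# stop 1: 32768 levels, stop 2: 16384, stop 3: 8192, ...
```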

<p>Personally I would like camera manufacturers to introduce a (preferably controllable) non-linear amplification function <em>before</em> the A/D conversion stage. Applying a curve after digitisation is a bit (pun) daft really, but better than no curve at all. Applying an analogue log amplification stage prior to digitisation would improve signal to noise by separating digital jitter and A/D converter noise from the true sensor signal. It would also automatically increase the number of digital levels allocated to the darker tones.</p>

<p>Oh, and BTW Nikon et al, this constitutes timed and dated "prior publication" before you all go running off to your patent lawyers.</p>


<p>@Rodeo - re incorporating a hardware log amp stage, you're too late. ;-) I suggested this in a private email to Tim L a couple of days ago, and, as I recall, mused about this a couple of years ago on photo.net, for exactly the same reason that you gave. FWIW, one can make a decent approximation to a log curve with just a couple more transistors per pixel in VLSI, so it's not as far-fetched as you might think. Unfortunately, my brother has been in intensive care for the past two weeks, so I haven't had time to participate to any extent in this thread.</p>

<p>Cheers,</p>

<p>Tom M</p>


<blockquote>

<p>re incorporating a hardware log amp stage, you're too late. ;-)</p>

</blockquote>

<p><a href="http://www.cambridgeincolour.com/forums/thread6148.htm#post68053">Way </a>too late ;-)</p>

<blockquote>

<p>Applying a curve after digitisation is a bit (pun) daft really, but better than no curve at all.</p>

</blockquote>

<p>Is it? My understanding is that by applying a gamma correction to 16-bit linear data, all you are doing is shifting the existing bits around. This does not 'create' more steps, add information or reduce banding in itself. The number of steps is the same in both cases (linearly spaced in one case and exponentially spread out in the other), but it should make virtually no difference to the number of real or perceived levels per stop in the end* (don't forget, we are staying in 16 bits, not compressing down to 8, as in Jpeg). Better to keep the data linear in the working space, with sliders that give you perceptually meaningful control (i.e. logarithmic/interpolated in some cases).</p>

<p>*For 16-bit images I believe this holds theoretically true up to a contrast ratio of 655:1 (why? - hint, in Photoshop it would be half as much), more in practice, with post-processing software optimised for a linear workflow, which we know is rather rare today.</p>

<p>@Cliff: thanks for the interesting details.<br /> @Tom: sorry to hear about your brother. All the best.</p>


<p>It's about the preview and nothing else.</p>

<p>You can't edit what you can't see.</p>

<p>If the tools designed for you to see this data don't let you manipulate it in a graceful way, then it doesn't matter what state of encoding the data is in. Linear data is a bitch to edit. So what algorithm controls the preview so you can see what you're editing while placing adjustment points on the curve? If you place a curve point directly on linear data, someone has to map it so it can be seen on the preview as it relates to the placement on the curve. Try doing that directly on linear data.</p>

<p>Once you slide that slider or adjust a point on a curve and see a change in the preview whether desired or undesired all bets are off on what it's doing to the data. No one cares. And no one can prove or connect the dots in figuring out what the algorithms are doing to the data under the hood in rendering the preview we see on screen.</p>

<p>All we know is that we have a reasonable facsimile of what we saw when we tripped the shutter.</p>

<p>Again! How do you explain all the clean, noiseless, bandless shadow detail when assigning a 1.0 gamma sRGB profile to Jack Hogan's normalized, 2.2-gamma-encoded screenshot, compressed to JPEG and viewed at 100% in a web browser? This was the premise behind Jack Hogan's proof of linear over gamma-encoded processing.</p>

<p>I noticed everyone skipped over my demonstration of that point.</p>



<blockquote>

<p>Linear can also give you better tones. Setting the black point in a linear space or with the Exposure Tool is IMO superior to using curves or levels in a gamma space: in linear, you get an effect that more realistically subtracts any diffuse, additive flare light from the shadows without damaging the mid-tones.</p>

 

</blockquote>

<p>Cliff, can you prove this with a screenshot?</p>

<p>Make sure you assign your custom display profile to the screenshot and convert to sRGB for us to see what you see on your hardware calibrated display.</p>

