
Gamma encoding revisited



<p>Tim, I understand and share your frustration. While I've come a long way in understanding the why and how of gamma encoding and how to calculate and graph step sizes and rounding/quantization errors, it isn't clear to me how a purely linear workflow can be implemented for a reasonable cost, both in terms of image editing software and monitor calibration. I'm not sure it's a good idea simply to take a monitor designed to accept encoded files and decode them to a calibrated gamma of 2.2.</p>

<p>I'm not even sure if companies like Adobe and NEC are seriously looking at easily implementable linear workflows, and I don't mean having to buy both Lightroom <em><strong>and</strong></em> Photoshop <em><strong>and</strong></em> a special monitor.</p>


There is also a problem from a user-interface point of view. Gamma encoding is everywhere in the interface, we're used to it, and it corresponds to the non-linear nature of our perception. A linear histogram or curves palette is less intuitive than a gamma-corrected one. The same is true for RGB numbers. We all have a pretty good idea where [128, 128, 128] sits visually in a normal workflow and it makes a lot of sense to us, but that same tonality ends up around 56 in a linear workflow and is way over on the left of the histogram. You can keep the interface gamma corrected even if everything is linear under the hood. This is the approach Lightroom seems to take.
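A quick sketch of the arithmetic behind that 128-to-56 figure, assuming a plain 2.2 power law rather than the piecewise sRGB curve:

```python
# Sketch: convert between 8-bit gamma-encoded and 8-bit linear values,
# assuming a simple 2.2 power-law transfer curve (not the piecewise sRGB function).

def gamma_to_linear(value, gamma=2.2):
    """8-bit gamma-encoded value -> 8-bit linear value."""
    return round(((value / 255.0) ** gamma) * 255)

def linear_to_gamma(value, gamma=2.2):
    """8-bit linear value -> 8-bit gamma-encoded value."""
    return round(((value / 255.0) ** (1.0 / gamma)) * 255)

print(gamma_to_linear(128))   # -> 56: the familiar mid-grey lands far left on a linear histogram
print(linear_to_gamma(128))   # -> 186: 50% linear luminance, expressed in gamma-encoded terms
```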


<blockquote>

<p>and it corresponds to the non-linear nature of our perception...</p>

</blockquote>

<p>I have to strongly disagree: none of the data in the gamma-encoded image file is visible to our eyes until after decoding has restored it to linear.</p>

 

<blockquote>

<p>A linear histogram or curves palette is less intuitive than a gamma-corrected one. The same is true for RGB numbers. We all have a pretty good idea where [128, 128, 128] sits visually in a normal workflow...</p>

</blockquote>

<p>I suggest that this is more an issue of having gotten used to distorted data. It makes a lot more sense to me to unlearn the bad habits and use the linear system since that's what the eye will see in the end. The Lightroom approach as you describe it seems to me like a split personality.</p>


<blockquote>

<p>It makes a lot more sense to me to unlearn the bad habits and use the linear system since that's what the eye will see in the end</p>

</blockquote>

<p>Yes, but no.<br />We (eyes + brain) receive linear data (light photons), but we don't "see" linearly.<br />We "see" roughly logarithmically, or roughly "gamma encoded."<br />Our visual system is HDR.</p>


Frans, I think you misunderstood me. Data in the gamma-encoded image file IS visible: it's there on the info palette, it's on the histogram, it's visible anytime we want to look at the numbers in the file. For instance, in the current workflow a grey tone perceptually halfway between white and black is reported reasonably close to 50%. That's a useful, intuitive feature that corresponds with how we see the world. In a linear workflow 50% is a much brighter tone (about L* 75); the image may look the same, but the reported numbers will all be different. I don't see having an interface that corresponds with our perception as a bad habit, but rather good interface design. It's standard practice in many fields to plot linear data logarithmically because it makes more sense to us. At some point you simply have to deal with our non-linear perception. You can either build an interface to accommodate it or force users to learn numbers that don't correspond to their perception, but you can't make it go away.
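For anyone who wants to check the numbers, here is a rough sketch using the CIE L* formula, again assuming a plain 2.2 power law for the encoding rather than the piecewise sRGB curve:

```python
# Sketch: perceived lightness (CIE L*) of a 50% value in a linear workflow
# versus gamma-encoded 128, assuming a simple 2.2 power-law encoding.

def lstar(y):
    """CIE L* lightness from relative luminance y in [0, 1]."""
    return 116.0 * y ** (1.0 / 3.0) - 16.0 if y > 0.008856 else 903.3 * y

linear_half = 0.5                      # 50% in a linear workflow
decoded_128 = (128 / 255.0) ** 2.2     # gamma-encoded 128, decoded to linear luminance

print(round(lstar(linear_half)))   # ~76: linear 50% is a fairly bright tone
print(round(lstar(decoded_128)))   # ~54: gamma-encoded 128 sits near perceptual mid-grey
```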


<p>Yes, we have learned to live with gamma encoding and have rationalized it to the point where conventional wisdom says it's the right way, but I'm not so sure. The eye is non-linear, but that doesn't mean its non-linearity therefore applies to the digital darkroom where we do our editing.</p>

<p>Let me indulge a little. The eye is claimed to have a gamma anywhere between 1/2 and 1/3, but that's over the full input range of starlight to full sunlight. This enormous range is possible through several adjustment mechanisms, if you will: the iris (the eye's aperture), the less sensitive cone receptors in the retina and the more sensitive rod receptors, and then there is the brain. They all work together to span the large input range of light. If the input range is smaller than that, then it follows that a smaller part of the gamma curve is in play, which means that the deviations from a straight line are smaller. I suggest that the input range of what we do in the digital darkroom is way, way smaller than what the eye is capable of: our monitors and prints have a tiny dynamic range compared to what the eye can cover. I suggest that the very limited dynamic range of digital darkroom viewing conditions will operate our eyes over a nearly linear range; ergo, a pure linear workflow fits the conditions better.</p>
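One way to put rough numbers on the "smaller range, smaller deviation" reasoning is to model the eye's response as a simple power law (gamma 1/2.4 here, purely a stand-in assumption) and measure how far it departs from the straight line joining the endpoints of a given luminance range:

```python
# Sketch: maximum deviation of an assumed power-law response from a straight line,
# for luminance ranges of different contrast ratios. The narrower the range, the
# smaller the deviation, which is the reasoning step in the post above.

def max_deviation_from_line(contrast_ratio, gamma=1 / 2.4, samples=10001):
    lo, hi = 1.0 / contrast_ratio, 1.0
    response = lambda x: x ** gamma
    def chord(x):  # straight line through the two endpoints of the range
        return response(lo) + (response(hi) - response(lo)) * (x - lo) / (hi - lo)
    xs = [lo + (hi - lo) * i / (samples - 1) for i in range(samples)]
    return max(abs(response(x) - chord(x)) for x in xs)  # fraction of full output scale

for ratio in (100000, 1000, 100, 10, 2):
    print(f"{ratio:>7}:1 range  max deviation from a straight line ~ {max_deviation_from_line(ratio):.3f}")
```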


  • 10 months later...

<p>I'm also very interested in this topic, Frans & Mark. One of you guys mentioned that you think the topic of gamma is rather confusing in the literature, with even well-respected people and sites perpetuating misleading statements.</p>

<p>I'd love to see this confusion cleared up. Essentially, I think there's a lot of confusion re: what the actual output of the scene data is on the monitor/output device.</p>

<p><em><strong>Let me try to phrase my question thoroughly: </strong></em><br /> It seems people always write something like: "because cameras don't see the same way the human eye does (digital sensors record light linearly while the eyes see logarithmically), we apply a gamma curve to the raw sensor data... this has the added benefit of distributing the brightness levels across the available bits better, yada yada."</p>

<p>My question is: since CRTs/calibrated LCDs essentially apply the opposite gamma curve when outputting images,<em><strong> isn't the overall effect that the real-world data -- recorded linearly by the camera, gamma 'encoded', then gamma 'decoded' by your monitor -- is actually represented *linearly* on your monitor (if no significant image editing is done to the RAW file)?</strong></em> If so, isn't it misleading to write that gamma encoding is done to make the camera see more like your eye? Isn't the job of an instrument (camera) to record the signal linearly and display it linearly, and then your eyes apply whatever logarithmic transformation they're going to apply? Why imply that you want to make the camera 'see like your eye' when your eye is already going to 'see like your eye'... if the gamma encoding weren't undone by your monitor, that'd be like applying a double gamma encoding (assuming your eye's response is like a gamma encoding), no?<br /> <br />The exception would be the limiting case where an output device has an extremely limited dynamic range (DR), say a DR over which your eyes essentially see almost linearly. In that case, you may wish to apply some funky curves to the linear data to make it pop more... but given the formidable DR of most modern display devices (9 EV for LCDs? 15 EV for a JVC projector), it seems to me you'd want to mostly just represent the real-world data linearly (or with S-curves for contrast/pop/artistic effect). So I just find the common statement of wanting to make a digital sensor 'see like the eye' highly confusing.<br /> <br /> What do you guys think?</p>
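A sketch of the round trip the question describes, assuming a plain 2.2 power law on both the encoding and the display side:

```python
# Sketch: linear scene value -> gamma-encoded 8-bit code -> display decoding back to linear.
# Up to 8-bit rounding, the displayed value tracks the scene value, i.e. the net transfer
# from scene to screen is (approximately) linear.

GAMMA = 2.2  # assumed plain power law; real pipelines typically use the piecewise sRGB curve

def encode_8bit(linear):
    """Camera/file side: linear value in [0, 1] -> gamma-encoded 8-bit code."""
    return round((linear ** (1.0 / GAMMA)) * 255)

def decode(code):
    """Display side: gamma-encoded 8-bit code -> linear light in [0, 1]."""
    return (code / 255.0) ** GAMMA

for scene in (0.02, 0.18, 0.50, 0.90):
    code = encode_8bit(scene)
    print(f"scene {scene:.2f} -> code {code:3d} -> displayed {decode(code):.3f}")
```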


<p>Unedited linear sensor data is too dark to work with, so the Raw converter applies an extreme, shepherd's-hook-shaped normalization curve and a color description matrix that make the image viewable on a 2.2-gamma display. Because that is close to a display's native gamma, very little curve correction has to be applied to the video card, leaving the majority of the 8-bit/255-level RGB tonal distribution available for drawing the image on screen with the least amount of banding, especially in the shadows.</p>

<p>There's a bit of designed cheating that has to go on in an 8-bit gamma-encoded imaging system: it's engineered to devote the majority of the 255 levels to the tones the eyes see best (low mids to white), compared to what's left for the shadows, which the eyes don't resolve as well. At least a gamma-encoded working space allows more refined shadow tweaks, even using relatively few of the 255 levels, with editing tools written for that gamma encoding. Try editing a linear data file and you'll want to give up, because there's no way to tweak shadow detail or bring it out with definition.</p>

<p>Forcing a 1.0 linear gamma curve correction on an 8-bit video system, so that the dark, non-normalized Raw data looks normal and viewable, will cause a lot of banding in the shadows from the lower mids down to near black. The errors double up: fewer tonal levels are devoted to the shadows in the video system, fewer levels are devoted to the shadows in the linear Raw data itself, and on top of that there is more noise than software can control.</p>

<p>That's what I think.</p>
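To put rough numbers on the shadow-banding point above, here is a sketch that counts how many of the 256 codes describe tones below a given luminance when quantizing linearly versus with a 2.2 gamma (plain power law assumed; real pipelines use sRGB or similar curves):

```python
# Sketch: code allocation in the shadows for 8-bit linear versus 8-bit gamma-2.2 encoding.
# Fewer codes below a given luminance means coarser steps there, hence more visible banding.

GAMMA = 2.2

def codes_below(luminance, gamma_encoded):
    """Number of 8-bit codes whose decoded linear value falls below `luminance`."""
    decode = (lambda c: (c / 255.0) ** GAMMA) if gamma_encoded else (lambda c: c / 255.0)
    return sum(1 for c in range(256) if decode(c) < luminance)

for lum in (0.01, 0.05, 0.20):
    print(f"below {lum:.0%} luminance: linear {codes_below(lum, False):3d} codes, "
          f"gamma-encoded {codes_below(lum, True):3d} codes")
```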

