
Gamma encoding revisited



<p>I've just read through 10+ pages of Jack Hogan's post of November 2010 called "What's the case for gamma today?" and I was amazed at how Jack tried to get answers to his basic question but never seemed to get them. Many discussions on photo forums and articles on the subject of gamma encoding seem very flawed or misleading to me, so let me state what I very much believe to be the case and invite your feedback.<br>

This is what I believe to be true:<br />
a) gamma encoding was introduced in the 1940s to correct for the non-linear response of the CRTs in TVs<br />
b) gamma encoding has nothing to do with the eye's non-linear response<br />
c) LCD monitors' natural response is far more linear than CRT monitors'<br />
d) LCD monitors' calibration to gamma 1 requires far less correction than calibration to gamma 2.2<br />
e) if we started from scratch, it would make oodles of sense to process camera output linearly without gamma encoding, calibrate our LCDs to gamma 1 and let the printer drivers translate to whatever is needed to get a linear print<br />
f) since we have an enormous legacy of gamma encoded images/videos and of programs/monitors expecting gamma encoded input, it will take a revolution to make the gamma 1 scenario happen<br />
g) gamma encoding/downsampling to 8 bits makes for small files and fast transmission, but degrades images to barely acceptable, particularly in the shadow areas<br />
h) as a side-benefit, gamma encoding done correctly lowers the steps between brightness levels in the shadow areas, where our eyes are most sensitive to brightness changes, and increases step sizes in the brightest areas, where our eyes are the least sensitive<br />
i) only image files of 12 bits or more per color can be gamma encoded/downsampled to 8 bits with reasonable shadow quality; any lower bit depth of the original image file will result in nasty artifacts in shadow areas due to rounding/quantization, and the lower the original bit depth, the worse the artifacts<br>

I have spreadsheets and charts to show the step size changes and artifacts caused by gamma encoding image files of various bit depth, from 8-bit through 12-bit and at a theoretical infinite bit depth.<br>
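For instance, a quick back-of-the-envelope script along the lines of my spreadsheets (my own rough sketch, assuming a plain 2.2 power curve rather than sRGB's exact formula) shows how the gaps between neighboring shadow codes grow as the source bit depth drops:<br>

<pre>
# My sketch, not the spreadsheets themselves: gamma-encode an N-bit linear ramp
# to 8 bits (plain 2.2 power curve assumed) and look at the gaps between
# consecutive output codes in the shadows.

def encode_to_8bit(n_bits, gamma=2.2):
    levels = 2 ** n_bits
    return [round(255 * (i / (levels - 1)) ** (1 / gamma)) for i in range(levels)]

for n_bits in (8, 10, 12, 14):
    codes = encode_to_8bit(n_bits)
    shadows = sorted(set(c for c in codes if c <= 64))   # darkest quarter of the 8-bit scale
    worst_gap = max(b - a for a, b in zip(shadows, shadows[1:]))
    print(f"{n_bits}-bit source: largest gap between shadow codes = {worst_gap}")
</pre>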

Your feedback will be tremendously appreciated.</p>


<blockquote>

<p>Your feedback will be tremendously appreciated.</p>

</blockquote>

<p>Don't have a clue what to add to what you've already laid out, Frans. I don't see any evidence of added benefit, or that it's worth the hassle of changing something that's been working fine for at least a decade in digital photography.</p>

<p>BTW where the heck you been? Haven't seen any of your postings here for quite some time.</p>


<p>Tim, some people, including me, wouldn't necessarily agree that gamma encoding works fine. The benefits of no encoding/linear gamma would be greatly simplified image editing programs, less damage to image files (although that damage may not be visible to most people when working with 12 or more bits per color), easier LCD calibration with less loss of levels, and a more logical approach that would be easier to understand. Right now many people believe, in error I might add, that gamma encoding is needed because of our eyes' non-linear response and that it somehow creates a "perceptually uniform" working space. Most articles that I've been able to find reflect that erroneous belief, even some from people very respected in the field of digital photography. And not many people, it seems, have a solid understanding of the issue of modified step sizes with gamma encoding.</p>

<p>I've been busy developing some woodworking skills (turning wood objects on a lathe) and writing a book.</p>


Frans, you are completely right — there is a ton of contradictory information floating around. I'm far from an expert and I'm happy to be corrected, but I think there are a couple of subtle issues here.

 

It's my understanding that gamma encoding originally was not used to compensate for non-linear CRT response (a common misconception), but to improve performance given limited computational 'bandwidth.' Gamma encoding allows one to use lower-bit data more effectively by aligning them more closely with human perception, in effect using bits where we can perceive them rather than where we can't. So it IS related to the eye's non-linear response, just not in the way many people think. It's a process aimed at improving performance, not perception.
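To put a number on that, here's a tiny sketch of my own (assuming a plain 2.2 power curve, not the exact sRGB formula) counting how many 8-bit code values land in the darkest 1% of the linear luminance range under each encoding:

<pre>
# My own quick illustration of "bits where we can see them" (plain 2.2 power
# curve assumed): count the 8-bit code values that fall in the darkest 1% of
# the linear luminance range under linear vs. gamma 2.2 encoding.

GAMMA = 2.2

linear_codes = sum(1 for c in range(256) if c / 255 < 0.01)
gamma_codes = sum(1 for c in range(256) if (c / 255) ** GAMMA < 0.01)

print(f"8-bit linear:    {linear_codes} codes below 1% luminance")   # prints 3
print(f"8-bit gamma 2.2: {gamma_codes} codes below 1% luminance")    # prints 32
</pre>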

 

We have enough computer power these days that we could spare the memory to encode linearly, but I'm not sure it's really a simplification. You need to go either to floats or to high-bit data to keep the same quality. To me, it seems simpler to add a little exponent from time to time, especially given the unrealistic requirement to change entrenched workflows in order to make a linear encoding work globally.

 

Of course there are plenty of people working with linear data, especially in the 3D rendering field, so it's not unthinkable.


<p>It's about the look of the final image. It's always been about that from the very beginning.</p>

<p>Show me proof, in the form of an image, that the way things have been working needs to be retooled for linear encoding.</p>

<p>I have discovered one aspect of working on linear data in ACR in a gamma encoded output space (mine being the 1.8 gamma ProPhoto RGB) near the black point that's had me scratching my head. Jeff Schewe suggested this tip for gaining extra shadow definition by using combinations of the Black point adjust, Fill and the bottom slider on the Parametric curve.</p>

<p>Doing this using only Point Curve tweaks wouldn't work because for some reason the point nodes representing what I'm seeing in shadows down to the black point wouldn't respond. It's like the levels near absolute black are beyond the black point located on the Point Curve.</p>

<p>Below is a demonstration of what I'm talking about in trying to add definition to shadow detail in shaded foliage. The left side looks sort of flat and murky, while the corrected side with the added adjustments looks as if it's been sharpened, which it hasn't. I can only assume the software has been engineered to deal with this mapping mismatch between what's seen on screen and where it actually shows up on an adjustable curve.</p>
<div>[attached image: 00aDyS-455223584.jpg]</div>


<p><i> It's about the look of the final image.</i></p>

 

<p>This, of course, is a sensible point of view from a photographer or retoucher. There is another point of view — that of the developer or someone who spends time making efficient workflows. I think the point Frans is making is that we could get the same or marginally better final images with considerably less complication if we abandoned gamma encoding. Gamma encoding has always been confusing and a source of errors for developers. Of course it's only less complicated after the messy business of changing the entire world.</p>


<p>@Mark: Gamma encoding has been used since the 1940s to correct for the non-linearity of the CRTs used in TVs. Rather than add cost to millions of TVs to correct for this non-linearity, they chose to add cost to the hundreds of broadcast stations. The need to send image information in large quantities to remote locations didn't arise until decades later, and when that did happen, gamma encoding offered the side-benefit of allowing data to be compressed to 8 bits while preserving detail in the shadows to some degree.</p>
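<p>To make that original arrangement concrete, here is a toy sketch of my own (assuming an idealized 2.2 power-law CRT; real tubes varied): the broadcaster applies the inverse power once, so the phosphor output ends up proportional to scene luminance without any correction circuitry in the receiver.</p>

<pre>
# My toy sketch (idealized 2.2 power-law CRT assumed): pre-correct once at the
# broadcaster so the tube's inherent power-law response cancels out.

GAMMA = 2.2

def broadcast_encode(scene_luminance):        # applied once, at the station
    return scene_luminance ** (1 / GAMMA)

def crt_response(signal):                     # the tube's inherent non-linearity
    return signal ** GAMMA

for L in (0.01, 0.18, 0.5, 1.0):
    print(L, round(crt_response(broadcast_encode(L)), 6))   # round trip is linear
</pre>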

 


<blockquote>

<p>There is another point of view — that of the developer or someone who spends time making efficient workflows.</p>

</blockquote>

<p>Oh please, please, please... <strong>show me</strong> how this can work for creating efficient workflows. I am all for that. I just wish it were for creating efficient workflows for the person editing the image, as I just did. You don't know how much time I spend fixing things like what I demonstrated above, which can't be applied to all images. There's no one-size-fits-all approach. A developer only has to fix one thing: their software. Wow! If only I had it that easy.</p>

<p>You want to talk about efficient workflow? Try editing 1000 images to look as good as THEY SHOULD coming straight from the camera. Why don't we talk to the camera manufacturers and get them to solve this? They are the source, you know, of every digital camera image.</p>


<p>Tim: The look of the final image is one thing; how we get there is a totally different story, and a twisted story it is! Does gamma encoding allow for more or less acceptable compression to 8 bits for marginal quality images to be sent to remote locations? Absolutely. But for higher quality work, gamma encoding has only drawbacks: rounding/quantization losses when encoding the image, when editing the image and when decoding the image; more complicated image editing programs; more drastic corrections to LCD monitors to force a gamma of 2.2 on them, rather than minor corrections to their inherent gamma, which is much closer to 1 than to 2.2. Rounding/quantization losses are not a problem for most people when working with higher bit depths, but the editing program complications and the LCD issues are the real biggies here.</p>
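<p>To give a feel for the encode/decode losses I mean, here is a quick sketch of mine (assuming a plain 2.2 power curve and no editing in between, so this is the best case): push a 12-bit linear value through an 8-bit gamma encoded file and back, and see how far off it comes back.</p>

<pre>
# My sketch (plain 2.2 curve, no editing in between): 12-bit linear value ->
# 8-bit gamma encoded file -> back to 12-bit linear, then measure the damage.

GAMMA = 2.2

def round_trip(v12):
    e8 = round(255 * (v12 / 4095) ** (1 / GAMMA))    # encode into the 8-bit file
    return round(4095 * (e8 / 255) ** GAMMA)         # decode back to linear

errors = [abs(round_trip(v) - v) for v in range(4096)]
print("worst-case error:", max(errors), "out of 4095 levels")   # ~17, at the bright end
print("values that survive exactly:", sum(e == 0 for e in errors))
</pre>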

<p>I'm kind of at a loss with your editing example, as it looks to me like you have darkened the shadows in both cases, and that couldn't possibly help to get more detail in the shadows.</p>


<blockquote>

<p>I'm kind of at a loss with your editing example, as it looks to me like you have darkened the shadows in both cases, and that couldn't possibly help to get more detail in the shadows.</p>

</blockquote>

<p>So much for consistent, calibrated display standards, right? And you want to retool everything to accommodate that techno Tower of Babel?</p>

<p>On my calibrated display the right side has more definition and clarity. In fact I went back and tweaked the point curve and grabbed even more definition and clarity. If you have an easier, faster way of doing this with 1.0 gamma output, I'm all <strong>eyes. </strong>Please prove it.</p>


<p><strong>"Show me this happening in a high quality image. Prove it!"</strong><br>
<br>
Tim: I said "Rounding/quantization losses are not a problem for most people when working with higher bit depth...", but I can't say for sure that they never will be. I do know that gamma 1/2.2 encoding decreases the step size between neighboring brightness levels for the area between 0 and 25% of the full scale input, but it increases it for input values of 25% or more; at 50% it is 1.5 times the linear file step size and at 100% it is 2.2 times. Any image editing introduces more rounding/quantization errors. All this may or may not cause banding in bright gradations like blue sky. Of course I could force this to show up by using extreme image editing of a high-bit-depth file, but that wouldn't prove anything one way or the other.</p>
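<p>For anyone who wants to check those numbers rather than take my word for it, here is a small sketch of mine (pure 2.2 power curve assumed) that computes the ratio of the gamma-encoded step size to the linear step size at a given input level:</p>

<pre>
# My check of the step-size ratios quoted above (pure 2.2 power curve assumed).
# With the same number of code values, the linear-domain step represented by one
# encoded step, relative to a plain linear file's step, is d(E**g)/dE = g * E**(g-1),
# evaluated at the encoded value E that corresponds to the given linear input level.

GAMMA = 2.2

def step_ratio(linear_level, gamma=GAMMA):
    encoded = linear_level ** (1 / gamma)
    return gamma * encoded ** (gamma - 1)

for level in (0.05, 0.25, 0.5, 1.0):
    print(f"input {level:>4.0%}: step ratio = {step_ratio(level):.2f}")
# prints roughly: 5% -> 0.43, 25% -> 1.03, 50% -> 1.51, 100% -> 2.20
</pre>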

<p>Way more important are the issues of image editing program complexity and the necessary larger corrections to force an LCD monitor to a gamma of 2.2.</p>


<p><strong>"So much for consistent, calibrated display standards, right? And you want to retool everything to accommodate that techno tower of babel?"</strong><br>


A truly linear workflow would have a consistent standard of gamma 1: camera sensor, editing software, monitor calibration, printer driver. Wouldn't that be nice? And I already said that it would be darn hard to implement it everywhere.</p>

<p>On my calibrated monitor the right side of your image shows more definition and clarity in the shadows as well (as it would on a non-calibrated monitor, for that matter). What I still don't understand is why you pull the curves down in the shadow area when you want more definition in the shadows, unless you wanted improvements in that area of the shadows where you tweaked the curves to go back up again, at the expense of the deeper shadows.</p>

<p>By the way, this issue of how you propose to deal with shadows doesn't have much relevance to the thread's issue of gamma encoding, does it now?</p>


I see no reason why you couldn't do this within your own workflow if you thought it would be worth it. Spectraview will let me calibrate my monitor to 1.0 gamma, you could create a profile for a linear working space, and if you are working with raw data, the input is already linear. You would only need to gamma correct images you sent places that were expecting them—like the web.

 

Most people don't want to go that far, especially since a linear monitor would be pretty bad for other tasks that assume gamma correction. Lightroom's workflow is a nice compromise. Everything under the hood works with linear data, only adding gamma correction for the histogram, RGB readout, and output image. This avoids the rounding problems without the need to change the world.
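In pseudo-code it's something like this (my own toy sketch with made-up helper names, not anything resembling Lightroom's actual internals): keep the working data as linear floats and only apply the gamma when you produce an 8-bit preview or output file.

<pre>
# Toy sketch, not Lightroom's internals: edits operate on linear floats, and the
# gamma encode happens only at the very end, for the 8-bit preview/output.

GAMMA = 2.2

def expose(linear_values, stops):
    """Exposure is a simple multiply when the data stay linear."""
    return [min(v * 2 ** stops, 1.0) for v in linear_values]

def to_8bit_gamma(linear_values, gamma=GAMMA):
    """Deferred gamma encode for the display/output file."""
    return [round(255 * v ** (1 / gamma)) for v in linear_values]

working = [0.02, 0.18, 0.45, 0.90]     # linear, scene-referred
working = expose(working, +1.0)        # all editing happens in linear float
print(to_8bit_gamma(working))          # -> [59, 160, 243, 255]
</pre>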

 

I dabble in programming and scripting and work with color a lot. I agree that everything would be easier and less error prone if we could always count on linear data.


<p><strong>"Spectraview will let me calibrate my monitor to 1.0 gamma"</strong></p>

<p>Mark: I'm not sure if that's a good idea. It could be that the circuitry in the NEC monitors has been optimized so that calibration to 2.2 causes the least correction, and if that's the case, you could do more harm than good.</p>

<p>I definitely need to get myself educated on what's out there like Lightroom.</p>


<blockquote>

<p>By the way, this issue of how you propose to deal with shadows doesn't have much relevance to the thread's issue of gamma encoding, does it now?</p>

</blockquote>

<p>Doesn't your quote below relate to steps available for adding definition to shadow detail...</p>

<blockquote>

<p>I do know that gamma 1/2.2 encoding decreases the step size between neighboring brightness levels for the area between 0 and 25% of the full scale input, but it increases it for input values of 25% or more; at 50% it is 1.5 times the linear file step size and at 100% it is 2.2 times.</p>

</blockquote>

<p>The pull-down of the parametric curve points to an issue of where exactly the linear sensor data's black point maps to the actual preview of absolute black on my calibrated display. For some mysterious reason, using that parametric shadow curve acts like a black point kicker that I can't get using the point curve. My point about going all linear is that a linear preview may not match up to the linear data.</p>



<p><strong>"Doesn't your quote below relate to steps available for adding definition to shadow detail..."</strong></p>

<p>Tim: the gamma encoding side-benefit of having smaller steps available in the shadow areas will only be noticeable when you compare a linear 8-bit file to a gamma-encoded 8-bit file; with higher-bit-depth files you won't be able to see the difference. And since you wanted to have a discussion on how all this relates to high-bit-depth files...</p>

<p>As for your remarks about your sample image and what you did to it, they raise more questions than they answer. It seems to me that pulling the curve down for the deep shadows and back up again for the slightly lighter shadows gives more contrast to the slightly lighter shadows at the expense of the deep shadows, which would have nothing to do with possible misalignment between the image data, the preview information and your monitor.</p>

<p>BTW, how exactly does the Block Quote feature work? It's been a long time since I used it and copy and paste + bold doesn't look as nice as the quotes on a blue field.</p>


<p>My name is Tim.</p>

<p>My demonstration of linear data to preview misalignment is about how software (ACR) deals with my camera's linear data in the black point region in relation to my preview. Linear data around the black point is never consistent from image to image, depending on exposure and the amount of light in the scene.</p>

<p>The term "linear" as it relates to the preview versus the data is never precisely defined. Actual linear sensor data viewed on a gamma encoded display looks quite dark, so software like ACR applies a specifically shaped base tone curve under the hood, as all raw converters do, that attempts to normalize this dark preview. The black point region of this curve must be very precise because there are fewer bit levels, clumped up against absolute black, devoted to this region compared to lighter regions.</p>
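<p>A rough illustration of how dark "dark" is (my own numbers, assuming an idealized 2.2 display; a real raw converter's tone curve is far more elaborate than a bare power function):</p>

<pre>
# My rough numbers (idealized 2.2 display assumed): a linear middle grey sent
# straight to a gamma 2.2 display lands way down near black, which is why raw
# converters apply a normalizing tone curve before showing you a preview.

GAMMA = 2.2
mid_grey_linear = 0.18

shown_raw = mid_grey_linear ** GAMMA                          # no pre-correction
shown_encoded = (mid_grey_linear ** (1 / GAMMA)) ** GAMMA     # pre-corrected first

print(f"sent straight to the display: {shown_raw:.3f}")       # ~0.023 of max
print(f"gamma encoded first:          {shown_encoded:.3f}")   # ~0.180 of max
</pre>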

<p>If normalization were applied through an 8-bit video system by calibrating the display to 1.0 gamma, I believe there are going to be serious artifacts near this black point region, evident only in digital camera images (as opposed to CGI images such as a gray ramp made in Photoshop), due to a lack of precision between the display, the calibrators and the shadow roll-off variances of linear sensor data in the black point region. I want the precision to be applied by my Raw converter (ACR).</p>

<p>The reason I pushed the black point close to zero while lightening the lighter shadows in my sample image is to provide definition that overcomes the reduced dynamic range (flat shadows) of a print, which is the reason photographers edit digital images.</p>


<p>Tim, forgive me for being so blunt, but I have not the foggiest idea what exactly you are talking about. Are you really saying that there is a mismatch between the image data as captured by the camera, the preview information and your monitor? Is this only or more pronounced when viewing linear data? Did you instruct your ACR to not do the gamma encoding? I'm lost and would appreciate any help.</p>

<p>Sorry for making this so unclear, Frans.</p>

<p>Trying to bring out shadow definition near absolute black by placing and adjusting nodes on the point curve (the only tool that indicates what is going on, and where, as it relates to the ACR preview on my calibrated monitor) is never consistent from image to image. It's like a digital nether world. Sometimes I can get it with just the point curve and sometimes it requires using the parametric "shadow" curve.</p>

<p>This is why I indicated that there needs to be more precision in manipulating shadow-level data (to avoid the cartoonish HDR look that comes from over-cranked Fill light used to bring out shadow detail). ACR aids in applying this precision by providing the Fill, Black Point, Point Curve and Parametric curve, while at the same time taking into account the user's eyes adapting to viewing dark regions and zooming out to get an overall look at those edits. I'm deducing from these added tools, which seem like overkill, that Adobe engineers intimately understand this issue AND how it affects editing linear data in a gamma encoded output space.</p>

<p>If all this were switched to linear encoding at the level of an ICC 8-bit color-managed display calibration workflow, with the software retooled to map it accurately, it still wouldn't solve the on-and-off nonlinear nature of digital sensor data, especially in the shadows, as I just demonstrated above.</p>

<p>Now addressing Mark's point about linear encoding making it easier for developers to create better software: I don't understand what this has to do with digital photography. I'd think you'd get more insightful comments posting this topic on a software developers' site. Most of your points, mathematically speaking, are too complicated for me to see how they benefit photographers.</p>

<p>To further put my point across concerning the nonlinear nature of digital sensor data in the shadow regions, I've posted a pulled-back view of the same image crop posted above, showing the before and after appearance of the shadow definition I often struggle to get looking natural without the cartoonish HDR look. It's different from image to image.</p>

<div>[attached image: 00aED2-455439584.jpg]</div>


<blockquote>Now addressing Mark's point about linear encoding making it easier for developers to create better software: I don't understand what this has to do with digital photography.</blockquote>

 

<p>Really? That's a rather myopic view considering how much trouble you are going through to squeeze the final bit of quality out of your software. Conventions and infrastructure that are easier for people to develop around translate directly to quicker releases, fewer bugs, and cheaper products. Since software is the primary darkroom for just about all photographers these days, things that affect software development affect photographers.</p>

 

<p>The situation has improved a lot with the wide adoption of ICC profiles, but it wasn't that long ago that questions about gamma encoding made many images mystery meat and were a constant source of frustration. This was especially frustrating when you wanted to drop images into a well-calibrated video system but couldn't be sure what the intended gamma of an image was. It's still a point of frustration on the web. On top of that, it still confuses photographers when they try to understand where exposed values end up on the histogram. Find any web forum topic about exposing to the right or why 18% grey cards don't read 18% in Photoshop and you quickly see that this is confusing to almost everyone.</p>
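<p>The grey card case fits in one line (my arithmetic, assuming a plain 2.2 curve; sRGB's exact formula lands a touch higher): an 18% linear reflectance simply doesn't sit at 18% of the gamma-encoded scale.</p>

<pre>
# My arithmetic (plain 2.2 power curve assumed): where 18% linear grey lands
# on the gamma-encoded scale that Photoshop's readouts are based on.

GAMMA = 2.2
encoded = 0.18 ** (1 / GAMMA)
print(f"{encoded:.1%} of the encoded scale, about {round(encoded * 255)}/255")
# prints roughly 45.9%, about 117/255 -- which is why the card doesn't read "18%"
</pre>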

 

<p>On a different note, have you tried out Lightroom 4 yet, Tim? Fill light is gone and you now have sliders for blacks, whites, shadows, highlights, and the exposure slider is completely retooled. It's much different and you might like (I stress might) what it does for your shots. The shadows slider works on a much narrower part of the histogram than fill light, making the cartoony look a little harder to achieve.</p>


<p>Tim: thanks for your clarifications. It seems that you struggle with shadow issues way more than I do. My version of ACR doesn't have the parametric curve with its fill and black point adjustments, and judging by what you struggle with, that may be for the better in my case. It remains to be seen whether these kinds of issues would be easier or harder to address with linear data, but my guess is that linear data would result in a more straightforward approach. After all, gamma encoding is an anomaly, a band-aid to correct for the non-linearity of the extinct CRT species.</p>

<p>Mark: couldn't agree with you more on the issue of gamma encoding causing major confusion, the reason why I started this thread in the first place.</p>


<p>I've given my 2¢, so I'm done here. Hopefully others more knowledgeable than I am will contribute to move this subject along toward more meaningful information with regard to Digital Darkroom issues. Where that is, I have no idea.</p>

<p>I gave it my best shot. As usual I feel I've wasted my time because I'm no farther ahead than where I started with regard to understanding this subject and the benefits it may bring.</p>

