What's the case for gamma today?



<p>Perhaps I am starting to understand why I am here questioning the use of gamma, given today’s state-of-the-art technology. I am coming at this question in the year 2010, but gamma has been with us for well over a century. First there was film density. Then there were CRTs’ electron guns. Then, twenty years ago, came JPEG, soon followed by the first consumer digital cameras: memory was expensive, computing power was limited, and people post-processed in 8 bits. For all that time gamma was needed for one reason or another to store and process images. So gamma was ever present; people always assumed it was needed and never questioned it.</p>

<p>Fast forward to 2010. Cheap storage, cheap processing power, large linear files and 16-bit color are widely available. If you are a digital photographer with modern color-managed equipment who rolls his own (as I do), is gamma an asset or a liability in your post-processing working color space? I am more than happy to send pictures to my overseas friends via email as perceptually efficient 8-bit JPEGs, but my whole workflow revolves around 12/14/15/16-bit data, from capture to inkjet print. Every time Photoshop has to perform complex color operations on my aRGB (or ProPhoto RGB, or whatever) data it has to de-gamma it to get back to linear and then re-gamma it when finished. Not to mention when the image needs to be converted to a different color space, or printed. Each round trip to linear and back adds noise in the shadows and quantization in the highlights that may become apparent in complex post-processing jobs. But apparent or not, why not stick to a linear color space (gamma = 1) in the first place, from raw processor to output? What’s the case for gamma today, if you are like me?</p>
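The round trip described above can be sketched numerically (an illustrative NumPy simulation, not Photoshop’s actual pipeline): encode all 256 possible 8-bit linear code values with a 1/2.2 gamma, decode them back, and count how many no longer match.

```python
import numpy as np

def round_trip_errors(gamma=2.2):
    """Count 8-bit code values altered by one linear -> gamma -> linear trip."""
    codes = np.arange(256)
    linear = codes / 255.0
    encoded = np.round(255.0 * linear ** (1.0 / gamma))      # apply gamma
    decoded = np.round(255.0 * (encoded / 255.0) ** gamma)   # de-gamma
    return int(np.sum(decoded != codes))

# Even a single round trip at 8 bits merges some code values for good.
print(round_trip_errors(), "codes altered after one round trip")
```

At higher bit depths the same experiment produces errors of correspondingly smaller magnitude, which is part of what the thread goes on to debate.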


<blockquote>

<p>is gamma an asset or a liability in your post-processing working color space?</p>

</blockquote>

<p>Why does it have to be one or the other? There are linear and nonlinear, gamma-encoded devices. And technically, many do not follow the simple gamma formula, so their response should not even be called a gamma (but rather a Tone Response Curve, or TRC). Many of our devices do not behave with a linear gamma or linear TRC, so we have to adjust to the output. It's not good or bad; it just is.</p>

Author, “Color Management for Photographers” & “Photoshop CC Color Management” (pluralsight.com)


<p>Geez! Not this old premise, again. </p>

<p>See this gamma print test chart:</p>

<p>http://www.normankoren.com/makingfineprints4.html#BW_testchart</p>

<p>Note the variations in gradualness between the different gammas. My DSLR and scanner NEVER render this kind of smoothness when capturing such a gradient, whether I increase or decrease the exposure or the light source. They all need editing to mimic those gray ramps. So much for consistent linear behavior from devices. The same goes for printers.</p>

<p>Now displays are far closer to rendering a gray ramp evenly and smoothly from one model to the next out of the box than most input or output devices. And when measured, their native response is often already close to gamma 2.2, not 1.0. So my video card doesn't need to mangle its TRC curves to produce a smooth, evenly gradual 255-level RGB gray ramp.</p>

<p>I'd rather have my data already represent the response my display exhibits when I edit my images, and the only way to define that response coming from a linear Raw state is to convert to a gamma-encoded color space.</p>

<p>The noise you claim to see in a gamma-encoded environment can't be controlled on a consistent basis by implementing a linear workflow. You are dealing with far too many complicated variables up the process chain, involving electronics, A/D converters and one-size-fits-all rendering algorithms.</p>

<p>Going linear to improve image quality, efficiency and consistency is a fantasy.</p>

 


<p>@Andrew: True, but the raw data out of good-quality DSLRs is considered to be linear (i.e. proportional to the relative luminance at the scene). So why mess with gamma unless you have to? There are disadvantages to using it, and I am having trouble seeing the advantages in these days of raw-to-post-processing-to-inkjet 16-bit workflows. I really only need it when I save a low-res JPEG to send via email. In the meantime, if we work in a linear space we avoid unnecessary arithmetic and the image data remains as close to its original state as it can be. Why mess with gamma if you do not need to?</p>

<p>@Tim: To move forward we need to question the past. Clearly I am talking about maintaining data integrity, not making captures look the way we want to. In a properly color managed set-up, where an amateur photographer like me starts with raw data and prints with his own inkjet printer, why distort image data to the tune of 1/2.2 gamma while post-processing if you do not need to? Why actively decide to degrade your data (even if almost invisibly)? The monitor is not an issue: in a gamma=1 color space the software will display images properly on your 2.2 gamma (or 1.8 or 2.5 or whatever) monitor - that's what color management is there for; meanwhile you haven't performed unnecessary data degrading operations on your image.</p>

<p>So I still wonder: In a situation like mine, why use a gamma corrected color space in 2010?</p>

 


<blockquote>

<p>In a properly color managed set-up, where an amateur photographer like me starts with raw data and prints with his own inkjet printer, why distort image data to the tune of 1/2.2 gamma while post-processing if you do not need to? Why actively decide to <strong>degrade your data</strong> (even if almost invisibly)?</p>

</blockquote>

<p>How do you degrade data working in an already linear space, like say ACR/LR's 1.0-TRC ProPhoto RGB input space, on a digital camera's Raw capture? Edits in Raw converters never touch the data. You're only seeing a reasonable facsimile of your parametric instructions in the preview generated on the fly by the Raw converter, in accordance with color-managed previews.</p>

<p>Also, not all (darkish) linear previews are rendered the same after demosaicing in the Raw converters that claim to turn off gamma-adjusted, color-managed previews. I've checked this out myself: I viewed the same image in three RCs (Raw converters) that have a linear setting (basically turning off color management) and all three were different.</p>

<p>One Raw converter (Raw Developer) has a linear setting that creates a very dark rendering on a 2.2-gamma-encoded display. The problem is that you have to create a gamma-encoded ICC profile to get it to look correct for editing, because it's too dark to work on. Too much of a PITA!</p>

<p>Scanners are different in the way they capture and process sensor data, but since I don't use them anymore and shoot digital Raw directly, I'm not sure if 1.0 linear capture and processing is all that useful.</p>


<p>Can you visually prove that data is being degraded, by posting a sample comparison showing the effects of the two different encoding processes?</p>

<p>I'll bet a simple curve tweak can fix a lot of what you're seeing using standard processes already in place today without resorting to retooling everything for linear encoding.</p>


<blockquote>

<p>True, but the raw data out of good quality DSLRs is considered to be linear (i.e. proportional to the relative luminance at the scene).</p>

</blockquote>

<p>That’s scene-referred; we need to end up with output-referred. See:<br>

<a href="http://www.color.org/ICC_white_paper_20_Digital_photography_color_management_basics.pdf">Digital photography color management basics</a></p>



<p>The historic utility of gamma encoding (i.e., the log density vs. log intensity curve of photographic emulsions) was to compress the tremendous brightness range of natural scenes into a range that could be printed. The limited dynamic range of printers and displays is still an issue, so some compression of this sort is still needed, at least at the output stage.</p>

<p>That being said, Jack's proposal has some merit, at least in principle. Too many conversions back and forth between gamma and linear encoded data will, in principle, add round-off and possibly other errors to the data at each conversion. However, as Tim points out, this effect will be minimized (possibly to the level of being negligible) in high bit depth spaces, exactly the conditions Jack correctly states as making a switch to linear encoding worth considering. </p>

<p>In addition, pragmatic considerations enter this discussion. For example, while most people probably are routinely processing at 16 bpc, few stay at 32 bpc for any length of time while processing. The increases in CPU load and file size make doing everything at 32 bpc / floating point unwieldy at the present time. However, my guess is that if we did everything at 32 bpc, the round-off errors discussed above would again be negligible so there would be no impetus to make the switch to all linear processing.</p>

<p>Just my $0.02,</p>

<p>Tom M</p>


<p>I agree with Jack that gamma is unnecessary for processing. Not all operations even correctly take the gamma into account, particularly resizing images in many software packages. This can require extra steps to be done by hand.</p>

<p>Since most displays and probably at least some printers are expecting gamma encoded data, it would still need to be in a gamma space for final output. As long as video bandwidth is still at a premium, I do not see displays moving to linear encoding, either.</p>


<p>I've found there are some instances where crushed shadow detail can be corrected better by assigning a profile with an altered gamma curve that lightens the image than by using curves or levels. For some reason a mathematically constructed curve inside a simple matrix profile does a cleaner job of lifting shadow regions while maintaining clarity and definition. Trying to do it with curves forces you into a tight corner, with very few adjustment nodes to work with to get the same results.</p>

<p>This has nothing to do with linear encoding but I think the same principle of letting the math do the mapping instead of using tools designed for adjustment on a linear scale applies in this instance. </p>

<p>For example there was a thread last week where the poster was asking for editing tips on fixing a jpeg image she shot of a group of guys lit by diffused morning light and a bit underexposed. The shadows of the first row of guys kneeling down had their feet and legs almost in black. Instead of using curves or levels I just assigned 1.8 gamma ColorMatch RGB to the sRGB image and revealed much more detail and definition than I could using the tools. </p>

<p>I wonder how this happens? I'm guessing the portion of the 1.8 gamma curve was shaped in such a way that couldn't be done using curves cuz' I really gave it a go and almost gave up until I tried the method above.</p>


<blockquote>

<p>I wonder how this happens? I'm guessing the portion of the 1.8 gamma curve was shaped in such a way that couldn't be done using curves cuz' I really gave it a go and almost gave up until I tried the method above.</p>

</blockquote>

<p>The curves, at least in simple matrix profiles, are themselves very simple. A gamma curve is super simple: it's defined by one specific formula (output = input<sup>gamma</sup>) that describes a very simple curve. So I don’t know why such a curve couldn’t be produced using Curves or some other similar method.</p>
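As a sketch of that simplicity (assuming idealized pure-power TRCs; the real ColorMatch and sRGB curves differ slightly from pure 1.8 and 2.2 power laws), reinterpreting gamma-2.2-encoded data as gamma 1.8, as in the assign-profile trick described earlier, reduces to a single power law:

```python
def assign_gamma(encoded, old_gamma=2.2, new_gamma=1.8):
    """Net tone shift from re-tagging g=2.2 data as g=1.8 (idealized model).

    Decoding with the new assumption (linear = v ** new_gamma) and
    re-encoding for a display expecting old_gamma gives exponent new/old.
    """
    return encoded ** (new_gamma / old_gamma)

# A deep shadow value is lifted, which matches the effect described above:
print(assign_gamma(0.1))  # about 0.152
```

Since the net exponent 1.8/2.2 is less than 1, every tone below white is raised, with the largest relative lift in the shadows.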



<blockquote>

<p>I wonder how this happens? I'm guessing the portion of the 1.8 gamma curve was shaped in such a way that couldn't be done using curves cuz' I really gave it a go and almost gave up until I tried the method above.</p>

</blockquote>

<p>If that is really what happened, then my best guess would be that the profile was converting between the sRGB tone curve and gamma 1.8, whereas the curves could only easily convert between gamma 2.2 and gamma 1.8, which is a different transformation. A sufficiently accurate curve could replicate this, though I do not know if the curves dialog box would allow it.</p>

<p>The sRGB tone curve is linear in the shadows, followed by a gamma 2.4 curve to end up close to the gamma 2.2 curve for the remaining tones.</p>
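Written out, the standard sRGB decode formula (from IEC 61966-2-1) is exactly that piecewise shape; a quick sketch shows the linear shadow segment and how closely the curve tracks a pure 2.2 power law away from the shadows:

```python
def srgb_decode(v):
    """sRGB-encoded value in 0..1 -> linear light, per IEC 61966-2-1."""
    if v <= 0.04045:
        return v / 12.92                     # linear segment in deep shadows
    return ((v + 0.055) / 1.055) ** 2.4      # gamma-2.4 segment above the knee

# Mid-gray: the piecewise curve and a pure 2.2 power law nearly coincide.
print(srgb_decode(0.5), 0.5 ** 2.2)
```

This is why converting between the sRGB tone curve and gamma 1.8 is a slightly different transformation from converting between pure gamma 2.2 and gamma 1.8, as noted above.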


<blockquote>

<p>How do you degrade data working in an already linear space of like say ACR/LR's</p>

</blockquote>

<p>If you start in linear, why do you NEED to apply a gamma corrected color space to move to Photoshop? </p>

<blockquote>

<p>Can you visually prove that data is being degraded by posting a sample comparison showing the effects between the two different gamma encoded processes?</p>

</blockquote>

<p>Anybody can see it by performing a simplified experiment: in Photoshop, open an image with good dynamic range, convert it to 8 bits (this is the simplified part), zoom a dark portion to 100%, and apply a number of Levels adjustments with gamma alternately 0.4, 2.5, 0.4, 2.5, ... After how many round trips do you start seeing increased noise in the shadows? Answer: one.</p>
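The experiment can be approximated outside Photoshop (a crude NumPy stand-in for repeated 8-bit Levels gamma moves; Photoshop's internals will differ, and exactly where the damage lands depends on the order of the adjustments):

```python
import numpy as np

def levels_gamma(img8, gamma):
    """One Levels-style gamma adjustment, rounded back to 8-bit codes."""
    return np.round(255.0 * (img8 / 255.0) ** gamma).astype(np.uint8)

ramp = np.arange(256, dtype=np.uint8)   # a full, smooth 8-bit gradient
img = ramp.copy()
for _ in range(3):                      # three 0.4 / 2.5 round trips
    img = levels_gamma(levels_gamma(img, 0.4), 2.5)

# Since 0.4 * 2.5 = 1 this should be a no-op; 8-bit rounding disagrees.
print(np.count_nonzero(img != ramp), "of 256 codes no longer match")
```

The mismatched codes are exactly the posterization/noise the experiment above is meant to reveal.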

<blockquote>

<p>Also, not all (darkish) linear previews are rendered the same after demosaicing</p>

</blockquote>

<p>The underlying linear data represents the proper luminance at the scene: it is not 'darkish' in itself. If you do not color manage it, your monitor will distort it because of the physics of its electronics and produce less luminance than the data represents. How much less? It can be modelled by a power function with a gamma of 2.2. That's why properly color-managed software, knowing how your monitor will distort the underlying 'correct' linear data, applies a compensating 1/2.2 curve to it before passing it to the monitor's input. Different RCs look different out of the box because they use different parameters and algorithms to open the raw file, not all of which are under the control of the user.</p>
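The compensation described above is a one-line inverse (a simplified model that treats the display as a pure 2.2 power law, ignoring the full TRC a real monitor profile would encode):

```python
def display_light(signal, display_gamma=2.2):
    """A CRT-like display converts its input signal to light via a power law."""
    return signal ** display_gamma

def compensate(linear_value, display_gamma=2.2):
    """What color-managed software sends so the screen emits the intended level."""
    return linear_value ** (1.0 / display_gamma)

# Compensation followed by the display's response is a near-identity:
print(display_light(compensate(0.18)))  # middle gray comes back as ~0.18
```

The data itself stays linear; only the signal on the wire to the monitor carries the 1/2.2 pre-distortion.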

<blockquote>

<p> The problem is that you have to create a gamma encoded ICC profile to get it to look correct for editing because it's too dark to work on.</p>

</blockquote>

<p>No need. If you are working in a color-managed system, your monitor is profiled and your images are properly tagged, your software (e.g., Picture Viewer, PS, CNX2, but not non-color-managed Chrome) will make the correction for you on the fly while leaving the underlying data undisturbed.</p>


<blockquote>

<p>That’s scene referred, we need to end up with output referred.</p>

</blockquote>

<p>@Andrew: I agree with the article you linked. I am talking about something else, though: why not use, say, an 'sRGB1' instead, with the same coordinates as sRGB but a gamma of 1 (vs. effectively 1/2.2)?</p>

<blockquote>

<p>The limited dynamic range of printers and displays still is an issue, so some compression of this sort is still needed, at least at the output stage.</p>

</blockquote>

<p>@Tom: agreed. And that's my point. Why do it before it is needed? Why not leave your data alone, and perform the compression only if so requested by the output device? What are the advantages to doing it before then? I only see disadvantages.</p>


<p>@Jack - If your image has enough bit depth, sure, you leave the data linearly encoded and compress only for output. </p>

<p>IMHO, the real question is not "Why not do it?" but "Does linear encoding (as the working data space) confer any significant benefit to the user?" My contention is that it doesn't, because you must have adequate bit depth (i.e., 16 or 32 bpc) to even consider processing in a linear space; however, with such fine amplitude resolution, round-off errors at gamma-linear-gamma conversion steps also become negligible, thereby negating the benefits of linear encoding in the final product.</p>

<p>That being said, there are benefits to staying in a linear space for processing. For example, fewer lines of code are needed because fewer conversions back and forth are needed. This makes the code easier to maintain and faster to run. Whether this effect is significant to the end user is an open question.</p>

<p>Tom M</p>


 

<blockquote>

<p>The problem is that you have to create a gamma encoded ICC profile to get it to look correct for editing because it's too dark to work on.<br>

------>No need. If you are working in a color managed system, your monitor is profiled and your images are properly tagged, your software (IE, Picture Viewer, PS, CNX2 but not non-color-managed Chrome) will make the correction for you on the fly while leaving the underlying data undisturbed.</p>

</blockquote>

 

<p>Agreed. If you have a linearly encoded document and an associated profile that describes that condition, it will not look too dark; it will look fine. These images look dark when the app believes they are in a gamma-corrected space, as all ICC-aware apps will assume when presented with an untagged or incorrectly tagged doc.<br>

But at some point we have to introduce some kind of TRC to output the data (or to view it in a non-ICC-aware way, like in a web browser that doesn’t understand profiles). We have to render output-referred data.</p>



<p>I know why the linear data appears dark. However, no one has pointed out what's involved in creating an ICC profile for a digital camera from a linear image so it doesn't appear dark.</p>

<p>The RCs that allow this linear setting have to turn off color management to build an ICC profile from this linear response. I just don't want to pay for the ICC-based DC profiling package that does this. It's not cheap.</p>

<p>A normalized preview is already happening in ACR/LR anyway, without an ICC profile. But as Jack pointed out, ACR/LR don't allow tagging and/or converting to a 1.0-gamma version of sRGB, AdobeRGB or ProPhoto RGB in the Raw data editing/conversion process.</p>

<p>And Andrew's point about encoding for non-color-managed web viewing pretty much makes this scenario too cumbersome to implement, especially if you're processing and cataloging thousands of images.</p>

 

<blockquote>

<p>Anybody can see it by performing a simplified experiment: In Photoshop open an image with good dynamic range, convert it to 8 bits (this is the simplified part), zoom a dark portion to 100%, and apply a number of levels adjustments with gamma alternatively 0.4, 2.5, 0.4, 2.5, ... After how many round trips do you start seeing increased noise in the shadows? Answer: one.</p>

</blockquote>

<p>From examining my prints under a loupe the stochastic dithered pattern of my inkjet busts up and hides any noise in shadow detail. I just don't see it even in prints of ISO 800 shots where the noise clearly shows up at 100% view on the display.</p>


<p>It doesn’t appear dark if properly handled. Or to put it another way, a gamma-encoded image would look way too light if the assumption were that it was linearly encoded.<br>

You build a profile for linear data just as you would for nonlinear data; depending on the type of profile, that info has to be specified when you build it. FWIW, you can create a linearly encoded RGB working space in Photoshop by using the Custom RGB option. Just set the gamma to 1.0.</p>



<blockquote>

<p>My contention is that it doesn't because you must have adequate bit depth (ie, 16 or 32 bpc) to even consider processing in a linear space</p>

</blockquote>

<p>@Tom: Are you referring to the 'banding' issue? I believe that issue is poorly understood (perhaps by me). Follow me for a second: if you start with 12-bit linear data from your camera and do not reduce the bit depth (our case), how is gamma correcting your data going to help with perceptual uniformity? Think about it: all you are doing is shifting your existing bits around - detail isn't added in the shadows that didn't already exist in the linear data; on the other hand, you are amplifying noise in the shadows and creating quantization in the highlights, unless you add headroom in the form of more bit depth. For the same accuracy, linear is less noisy and requires fewer bits, not more. So why distort it with gamma in the first place?</p>

<p>Of course, if you are squeezing your 12 linear bits down into 8 (e.g. JPEG), gamma encoding really helps reduce banding, because it uses the extra 4 linear bits to fill in the gaps gamma creates in the shadows. But that's not our case. When we start with 12 bits and end with 12+, it does not help one bit :-)</p>
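That asymmetry is easy to count (an illustrative NumPy sketch assuming an idealized 2.2 gamma and an ideal 12-bit linear source): map every 12-bit code to 8 bits both ways and compare how many distinct codes survive in the deep shadows.

```python
import numpy as np

linear12 = np.arange(4096) / 4095.0        # every possible 12-bit linear code
shadows = linear12[linear12 <= 0.01]       # darkest 1% of scene luminance

# Distinct 8-bit codes those shadow values land on, linear vs gamma-encoded:
codes_linear = np.unique(np.round(255 * shadows))
codes_gamma = np.unique(np.round(255 * shadows ** (1 / 2.2)))
print(len(codes_linear), "linear codes vs", len(codes_gamma), "gamma codes")
```

Going to 8 bits linear collapses the darkest tones onto a handful of codes, while the 1/2.2 encode spreads them over dozens: the banding protection described above. Kept at 12+ bits, the linear data already has enough steps everywhere, which is the poster's point.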

<p>@Andrew: thanks for the suggestion about the custom RGB setting. I'll play with it.</p>


 

<blockquote>

<p>creating an ICC profile for a digital camera</p>

</blockquote>

<p>@Tim: you do not need an ICC profile for the camera. You need one for the monitor, because your system does not need to correct the linear data (it's already 'correct') - it needs to correct for the fact that your monitor will distort it, more or less according to a gamma function. If you do not have a custom profile for your monitor, the default one that comes with the OS will do.</p>

 


<blockquote>

<p>you do not need an ICC profile for the camera.</p>

</blockquote>

<p>Here I have to mostly disagree. An ICC profile is not the only method for color correcting the output from the camera, but something at least similar will be required—at minimum a color correction matrix.</p>


<p>You need the raw converter to render an image with some kind of embedded ICC profile.<br>

A "profile for a camera" is kind of a confusing and confused term (it could be a DNG profile too). The raw converter has to figure out what it wants to assume for the native color space for processing; it doesn’t have to use, or be fed, an "ICC profile". ACR and Lightroom are two examples. But what comes out the back end of the converter needs an embedded ICC profile. The raw data has no defined color space; only when the converter begins the demosaicing process does it have to assume some color space, a process that isn’t really available to provide data to build an ICC profile (one reason it's so darn difficult, or some would say unnecessary, to do).</p>



<blockquote>

<p>It doesn’t appear dark if properly handled. Or to put it another way, a gamma encoded image would look way too light if the assumption were it was linearly encoded.</p>

</blockquote>

<p>Who and what determines proper linear handling of a demosaiced Raw image, and its linearized appearance? Who sets the standard? Like I said before, each RC that claims to have a linear setting delivers a different rendering of the same image.</p>

<p>There's no ground zero for representing unmanipulated linear sensor data; it's all interpreted. Raw Developer's linear setting makes all properly exposed Raw images appear dark unless a gamma-correction profile is assigned, on top of other settings (including an additional tone curve), to give a normalized appearance.</p>

<p>ACR's settings just make it noticeably lighter than Raw Developer's rendering, but flat, low-contrast and murky. And I couldn't really pin down exact linearized ACR settings in discussions with Adobe engineer Eric Chan, because, as he mentioned, proprietary algorithms are applied (along with known, non-proprietary camera-manufacturer parameters) during the demosaicing stage and can't be turned off. He basically pointed out that by the time you see a preview of the Raw data, whether dark or light, there's quite a bit of interpretation going on during demosaicing.</p>

<p>And I've made a 1.0-gamma profile in Photoshop's Custom RGB option in Color Settings for these supposedly linearized images from these RCs. It's not very good. More work than it's worth in getting it to look right.</p>

