What's the case for gamma today?

Discussion in 'Digital Darkroom' started by jack_hogan, Nov 15, 2010.

  1. Perhaps I am starting to understand why I am here questioning the use of gamma, given today's state-of-the-art technology. I am coming at this question in the year 2010, but gamma has been around, in one guise or another, for well over a century. First there was film density. Then there were CRTs' electron guns. Then, twenty years ago, came JPEG, soon followed by the first consumer digital cameras: memory was expensive, computing power was limited and people post-processed in 8 bits. Decade after decade, gamma was needed for one reason or another to store and process images. So gamma was ever-present; people always assumed it was needed and never questioned it.
    Fast forward to 2010. Cheap storage, cheap processing power, large linear files and 16-bit color are widely available. If you are a digital photographer with modern color-managed equipment who rolls his own (as I do), is gamma an asset or a liability in your post-processing working color space? I am more than happy to send pictures to my overseas friends via email in perceptually efficient 8-bit JPEG, but my whole workflow revolves around 12/14/15/16-bit data, from capture to inkjet print. Every time Photoshop has to perform complex color operations on my aRGB (or ProPhoto RGB, or whatever) data it has to de-gamma it to get back to linear and then re-gamma it when finished. Not to mention when the image needs to be converted to a different color space, or printed. Each round trip to linear and back adds noise in the shadows and quantization in the highlights that may become apparent in complex post-processing jobs. But apparent or not, why not stick to a linear color space (gamma = 1) in the first place, from raw processor to output? What's the case for gamma today, if you are like me?
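    As a rough sketch of the round trip I mean (toy numbers of my own, not a claim about how Photoshop does it internally): de-gamma to linear, re-gamma, quantize to integer codes at each step, and count how many codes fail to survive unchanged.

```python
# Gamma round trip with quantization at each step.
import numpy as np

def gamma_round_trip(codes, bits, gamma=2.2):
    scale = 2 ** bits - 1
    linear = np.round(((codes / scale) ** gamma) * scale)       # de-gamma
    return np.round(((linear / scale) ** (1 / gamma)) * scale)  # re-gamma

for bits in (8, 15):  # 15 bits is Photoshop's "16-bit" mode
    codes = np.arange(2 ** bits)
    changed = np.count_nonzero(gamma_round_trip(codes, bits) != codes)
    print(f"{bits}-bit: {changed} of {len(codes)} codes altered")
# Low codes collapse toward black; at higher bit depth a much smaller
# fraction of codes is affected.
```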
     
  2. is gamma an asset or a liability in your post-processing working color space?​
    Why does it have to be one or the other? There are linear devices and nonlinear, gamma-encoded devices. And technically, many do not follow the simple gamma formula, so they should not even be called gamma (but rather a Tone Response Curve). Many of our devices do not behave with a linear gamma or linear TRC, so we have to adjust to the output; it's not good or bad, it just is.
     
  3. Geez! Not this old premise, again.
    See this gamma print test chart:
    http://www.normankoren.com/makingfineprints4.html#BW_testchart
    Note the variations in gradualness between the different gammas. My DSLR and scanner NEVER render this kind of smoothness when photographing such a gradient, whether I increase or decrease exposure or change the light source. They all need editing to mimic those gray ramps. So much for consistent linear behavior from devices. The same goes for printers as well.
    Now, displays are far closer to rendering a gray ramp evenly and smoothly from one model to the next, out of the box, than most input or output devices. And when you measure this native response it is often already close to 2.2 gamma, not 1.0. So my video card doesn't need to mangle its TRCs to produce a smooth and evenly graduated 255-level RGB gray ramp.
    I'd rather have my data already represent the response my display exhibits when I edit my images, and the only way to define that response, coming from a linear Raw state, is to convert to a gamma-encoded color space.
    The noise you claim to see in a gamma-encoded environment can't be controlled on a consistent basis by implementing a linear workflow. You are dealing with far too many complicated variables up the process chain, involving electronics, A/D converters and one-size-fits-all rendering algorithms.
    Going linear to improve image quality, efficiency and consistency is a fantasy.
     
  4. Sorry, I've been getting Gateway timeout server hiccups.
     
  5. @Andrew: True, but the raw data out of good-quality DSLRs is considered to be linear (i.e. proportional to the relative luminance at the scene). So why mess with gamma unless you have to? There are disadvantages to using it, and I am having trouble seeing the advantages in these days of raw-to-post-processing-to-inkjet 16-bit workflows. I really only need it when I save a low-res JPEG to send via email. In the meantime, if we can work in a linear space we are not performing unnecessary arithmetic and the image data remains as close to its original state as it can be. Why mess with gamma if you do not need to?
    @Tim: To move forward we need to question the past. Clearly I am talking about maintaining data integrity, not about making captures look the way we want to. In a properly color-managed set-up, where an amateur photographer like me starts with raw data and prints with his own inkjet printer, why distort image data to the tune of 1/2.2 gamma while post-processing if you do not need to? Why actively decide to degrade your data (even if almost invisibly)? The monitor is not an issue: in a gamma = 1 color space the software will display images properly on your 2.2-gamma (or 1.8, or 2.5, or whatever) monitor - that's what color management is there for; meanwhile you haven't performed unnecessary, data-degrading operations on your image.
    So I still wonder: In a situation like mine, why use a gamma corrected color space in 2010?
     
  6. In a properly color managed set-up, where an amateur photographer like me starts with raw data and prints with his own inkjet printer, why distort image data to the tune of 1/2.2 gamma while post-processing if you do not need to? Why actively decide to degrade your data (even if almost invisibly)?​
    How do you degrade data working in an already linear space like, say, ACR/LR's 1.0-TRC ProPhoto RGB input space, on a digital camera's Raw capture? Edits in Raw converters never touch the data. You're only seeing a reasonable facsimile of your parametric instructions in the preview, generated on the fly by the Raw converter in accordance with color-managed previews.
    Also, not all (darkish) linear previews are rendered the same after demosaicing in the Raw converters that claim to turn off gamma-adjusted, color-managed previews. I've checked this out myself: I viewed the same image in three RCs that have a linear setting (basically turning off color management) and all three were different.
    One Raw converter (Raw Developer) has a linear setting that creates a very dark rendering on a 2.2-gamma-encoded display. The problem is that you have to create a gamma-encoded ICC profile to get it to look correct for editing, because it's too dark to work on. Too much of a PITA!
    Scanners are different in the way they capture and process sensor data, but since I don't use them anymore and shoot digital Raw directly, I'm not sure if 1.0 linear capture and processing is all that useful.
     
  7. Can you visually prove that data is being degraded by posting a sample comparison showing the effects of the two different gamma-encoding processes?
    I'll bet a simple curve tweak can fix a lot of what you're seeing using standard processes already in place today without resorting to retooling everything for linear encoding.
     
  8. True, but the raw data out of good quality DSLRs is considered to be linear (i.e. proportional to the relative luminance at the scene).​
    That's scene-referred; we need to end up with output-referred. See:
    Digital photography color management basics
     
  9. The historic utility of gamma encoding (i.e., the density vs. log-exposure curve of photographic emulsions) was to compress the tremendous brightness range in natural scenes into a range that could be printed. The limited dynamic range of printers and displays is still an issue, so some compression of this sort is still needed, at least at the output stage.
    That being said, Jack's proposal has some merit, at least in principle. Too many conversions back and forth between gamma-encoded and linear data will, in principle, add round-off and possibly other errors to the data at each conversion. However, as Tim points out, this effect will be minimized (possibly to the level of being negligible) in high-bit-depth spaces, exactly the conditions Jack correctly identifies as making a switch to linear encoding worth considering.
    In addition, pragmatic considerations enter this discussion. For example, while most people probably are routinely processing at 16 bpc, few stay at 32 bpc for any length of time while processing. The increases in CPU load and file size make doing everything at 32 bpc / floating point unwieldy at the present time. However, my guess is that if we did everything at 32 bpc, the round-off errors discussed above would again be negligible, so there would be no impetus to switch to all-linear processing.
    Just my $0.02,
    Tom M
     
  10. I agree with Jack that gamma is unnecessary for processing. Not all operations even correctly take the gamma into account, particularly resizing images in many software packages. This can require extra steps to be done by hand.
    Since most displays and probably at least some printers are expecting gamma encoded data, it would still need to be in a gamma space for final output. As long as video bandwidth is still at a premium, I do not see displays moving to linear encoding, either.
     
  11. I've found there are some instances where crushed shadow detail can be corrected better by assigning a profile with an altered gamma curve that lightens the image than by using curves or levels. For some reason a mathematically constructed curve inside a simple matrix profile does a cleaner job of lifting shadow regions while maintaining clarity and definition. Trying to do it with curves forces you into a tight corner, with very few adjustment nodes to work with to get the same results.
    This has nothing to do with linear encoding, but I think the same principle applies in this instance: let the math do the mapping instead of using tools designed for adjustment on a linear scale.
    For example, there was a thread last week where the poster was asking for editing tips on fixing a JPEG image she shot of a group of guys lit by diffused morning light and a bit underexposed. The feet and legs of the first row of guys, kneeling down, were almost in black. Instead of using curves or levels I just assigned 1.8-gamma ColorMatch RGB to the sRGB image and revealed much more detail and definition than I could using the tools.
    I wonder how this happens? I'm guessing the relevant portion of the 1.8 gamma curve was shaped in a way that couldn't be done using curves, cuz' I really gave it a go and almost gave up until I tried the method above.
     
  12. I wonder how this happens? I'm guessing the portion of the 1.8 gamma curve was shaped in such a way that couldn't be done using curves cuz' I really gave it a go and almost gave up until I tried the method above.​
    The curves, at least in simple matrix profiles, are themselves very simple. A gamma curve is super simple: it's defined by one specific formula (output = input^gamma) that describes a very simple curve. So I don't know why such a curve couldn't be produced using curves or some other similar method.
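    For reference, the whole TRC of a simple matrix profile is just this power function (here with the 1.8 exponent of ColorMatch RGB plugged in):

$$ V_{\text{out}} = V_{\text{in}}^{\,\gamma}, \qquad \gamma = 1.8 $$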
     
  13. I wonder how this happens? I'm guessing the portion of the 1.8 gamma curve was shaped in such a way that couldn't be done using curves cuz' I really gave it a go and almost gave up until I tried the method above.​
    If that is really what happened, then my best guess would be that the profile was converting between the sRGB tone curve and gamma 1.8, whereas the curves could only easily convert between gamma 2.2 and gamma 1.8, which is a different transformation. A sufficiently accurate curve could replicate this, though I do not know if the curves dialog box would allow it.
    The sRGB tone curve is linear in the deepest shadows, followed by an offset gamma-2.4 segment that ends up close to an overall gamma-2.2 curve for the remaining tones.
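    A minimal sketch of that piecewise curve, using the standard IEC 61966-2-1 constants (a rough illustration, not production code):

```python
# sRGB tone curve: a linear toe near black, then an offset power
# segment with exponent 2.4 that approximates an overall gamma of ~2.2.

def srgb_encode(linear: float) -> float:
    """Linear light (0..1) -> sRGB-encoded value (0..1)."""
    if linear <= 0.0031308:
        return 12.92 * linear                      # linear toe near black
    return 1.055 * linear ** (1 / 2.4) - 0.055     # offset power segment

def srgb_decode(encoded: float) -> float:
    """sRGB-encoded value (0..1) -> linear light (0..1)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# The toe sidesteps the infinite slope a pure power curve has at zero:
for x in (0.001, 0.01, 0.18, 0.5):
    print(f"{x:5}: sRGB {srgb_encode(x):.4f}  pure 2.2 {x ** (1 / 2.2):.4f}")
```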
     
  14. How do you degrade data working in an already linear space of like say ACR/LR's​
    If you start in linear, why do you NEED to apply a gamma corrected color space to move to Photoshop?
    Can you visually prove that data is being degraded by posting a sample comparison showing the effects between the two different gamma encoded processes?​
    Anybody can see it by performing a simplified experiment: in Photoshop open an image with good dynamic range, convert it to 8 bits (this is the simplified part), zoom into a dark portion at 100%, and apply a number of Levels adjustments with gamma alternating 0.4, 2.5, 0.4, 2.5, ... After how many round trips do you start seeing increased noise in the shadows? Answer: one.
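    A rough numeric version of that experiment (a simulation of the arithmetic only, not of Photoshop itself):

```python
# Alternate Levels gamma adjustments of 0.4 and 2.5 (net exponent 1.0)
# on 8-bit data, rounding to integer codes after each step.
import numpy as np

x = np.arange(256)
x = np.round(255 * (x / 255) ** 0.4).astype(int)   # Levels gamma 0.4
x = np.round(255 * (x / 255) ** 2.5).astype(int)   # Levels gamma 2.5

print(f"distinct levels after one round trip: {len(np.unique(x))} of 256")
# Neighbouring codes merge, so smooth gradients turn into steps and the
# shadow noise gets blotchier - after a single trip.
```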
    Also, not all (darkish) linear previews are rendered the same after demosaicing​
    The underlying linear data represents the actual luminance at the scene: it is not 'darkish' in itself. If you do not color-manage it, your monitor will distort it because of the physics of its electronics and produce less luminance than the data represents. How much less? It can be modelled by a power function with a gamma of 2.2. That's why properly color-managed software, knowing how your monitor will distort the underlying 'correct' linear data, will apply a compensating 1/2.2 curve to it before passing it to the monitor's input. Different RCs look different out of the box because they use different parameters and algorithms to open the raw file, not all of which are under the control of the user.
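    In symbols (my restatement of the compensation just described), with d the linear pixel value and the monitor's native response modelled as a 2.2 power function:

$$ L_{\text{screen}} = \left( d^{\,1/2.2} \right)^{2.2} = d $$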
    The problem is that you have to create a gamma encoded ICC profile to get it to look correct for editing because it's too dark to work on.​
    No need. If you are working in a color-managed system, your monitor is profiled and your images are properly tagged, your software (e.g. Picture Viewer, PS, CNX2, but not non-color-managed Chrome) will make the correction for you on the fly while leaving the underlying data undisturbed.
     
  15. That’s scene referred, we need to end up with output referred.​
    @Andrew: I agree with the article you linked. I am talking about something else though: why not use an 'sRGB1' instead, for example, with the same coordinates as sRGB but a gamma of 1 (vs. effectively 1/2.2)?
    The limited dynamic range of printers and displays still is an issue, so some compression of this sort is still needed, at least at the output stage.​
    @Tom: agreed. And that's my point. Why do it before it is needed? Why not leave your data alone, and perform the compression only if so requested by the output device? What are the advantages to doing it before then? I only see disadvantages.
     
  16. @Jack - If your image has enough bit depth, sure, you leave the data linearly encoded and compress only for output.
    IMHO, the real question is not, "Why not do it?", but whether linear encoding (as the working data space) confers any significant benefit on the user. My contention is that it doesn't, because you must have adequate bit depth (i.e., 16 or 32 bpc) to even consider processing in a linear space; however, with such fine amplitude resolution, round-off errors at gamma-linear-gamma conversion steps also become negligible, thereby negating the benefits of linear encoding in the final product.
    That being said, there are benefits to staying in a linear space for processing. For example, fewer lines of code are needed because fewer conversions back and forth are needed. This makes the code easier to maintain and faster to run. Whether this effect is significant to the end user is an open question.
    Tom M
     
  17. The problem is that you have to create a gamma encoded ICC profile to get it to look correct for editing because it's too dark to work on.
    ------>No need. If you are working in a color managed system, your monitor is profiled and your images are properly tagged, your software (IE, Picture Viewer, PS, CNX2 but not non-color-managed Chrome) will make the correction for you on the fly while leaving the underlying data undisturbed.​
    Agreed. If you have a linearly encoded document and an associated profile that describes that condition, it will not look too dark; it will look fine. These images look dark when the app believes they are in a gamma-corrected space, as all ICC-aware apps will when presented with an untagged or incorrectly tagged doc.
    But at some point we have to introduce some kind of TRC to output the data (or to view it in a non-ICC-aware way, like in a web browser that doesn't understand profiles). We have to render output-referred data.
     
  18. I know why the linear data appears dark. However, no one has pointed out what's involved in creating an ICC profile for a digital camera from a linear image so it doesn't appear dark.
    The RCs that allow this linear setting have to turn off color management to build an ICC profile from this linear response. I just don't want to pay for the ICC-based DC profiling package that does this. It's not cheap.
    A normalized preview is already happening in ACR/LR anyway, without an ICC profile. But as Jack pointed out, ACR/LR don't allow tagging and/or converting to a 1.0-gamma version of sRGB, AdobeRGB or ProPhotoRGB in the Raw data editing/conversion process.
    And Andrew's point about encoding for non-color-managed web viewing pretty much makes this scenario too cumbersome to implement, especially if you're processing and cataloging thousands of images.
    Anybody can see it by performing a simplified experiment: In Photoshop open an image with good dynamic range, convert it to 8 bits (this is the simplified part), zoom a dark portion to 100%, and apply a number of levels adjustments with gamma alternatively 0.4, 2.5, 0.4, 2.5, ... After how many round trips do you start seeing increased noise in the shadows? Answer: one.​
    From examining my prints under a loupe the stochastic dithered pattern of my inkjet busts up and hides any noise in shadow detail. I just don't see it even in prints of ISO 800 shots where the noise clearly shows up at 100% view on the display.
     
  19. It doesn’t appear dark if properly handled. Or to put it another way, a gamma encoded image would look way too light if the assumption were it was linearly encoded.
    You build a profile for linear data just as you would for nonlinear data. Depending on the type of profile, that info has to be specified when you build it. FWIW, you can create a linearly encoded RGB working space in Photoshop by using the Custom RGB option. Just set the gamma to 1.0.
     
  20. My contention is that it doesn't because you must have adequate bit depth (ie, 16 or 32 bpc) to even consider processing in a linear space​
    @Tom: Are you referring to the 'banding' issue? I believe that issue is poorly understood (perhaps by me). Follow me for a second: if you start with 12-bit linear data from your camera, and do not reduce the bit depth (our case), how is gamma-correcting your data going to help with perceptual uniformity? Think about it: all you are doing is shifting your existing bits around - detail isn't added in the shadows that didn't already exist in the linear data; on the other hand you are amplifying noise in the shadows and creating quantization in the highlights, unless you add headroom in the form of more bit depth. For the same accuracy, linear is less noisy and requires fewer bits, not more. So why distort it with gamma in the first place?
    Of course, if you are taking your 12 linear bits and squeezing them down into 8 (e.g. JPEG), gamma encoding really helps reduce banding, because it uses the extra 4 linear bits to fill in the gaps created by gamma in the shadows. But that's not our case. When we start with 12 bits and end with 12+, it does not help one bit :)
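    A quick sketch of that trade-off (my own illustration): quantize every 12-bit linear level down to 8 bits, once staying linear and once through a 1/2.2 encode, then count the output codes left in the deep shadows.

```python
# 12-bit linear in, 8-bit out: gamma rescues shadow codes only when the
# bit depth actually drops.
import numpy as np

linear12 = np.arange(4096) / 4095.0                       # all 12-bit levels

to8_linear = np.round(linear12 * 255).astype(int)                # stay linear
to8_gamma = np.round((linear12 ** (1 / 2.2)) * 255).astype(int)  # encode first

shadows = linear12 < 0.01                                 # deepest shadows
print("8-bit codes left in the deep shadows:")
print("  linear:", len(np.unique(to8_linear[shadows])))   # a handful
print("  gamma: ", len(np.unique(to8_gamma[shadows])))    # a few dozen
# With 12 bits in and 12 or more bits out there is no such squeeze,
# which is the point made above.
```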
    @Andrew: thanks for the suggestion about the custom RGB setting. I'll play with it.
     
  21. creating an ICC profile for a digital camera​
    @Tim: you do not need an ICC profile for the camera. You need one for the monitor because your system does not need to correct the linear data (that's already 'correct') - it needs to correct for the fact that your monitor will distort it more or less according to a gamma function. If you do not have a custom profile of the monitor the default one that comes with the OS will do.
     
  22. you do not need an ICC profile for the camera.​
    Here I have to mostly disagree. An ICC profile is not the only method for color correcting the output from the camera, but something at least similar will be required—at minimum a color correction matrix.
     
  23. You need the raw converter to render an image with some kind of embedded ICC profile.
    A profile for a camera is kind of a confusing and confused term (it could be a DNG profile too). The raw converter has to figure out what it wants to assume as the native color space for processing; it doesn't have to use, nor be fed, an "ICC profile". ACR and Lightroom are two examples. But what comes out the back end of the converter needs an embedded ICC profile. The raw data has no defined color space; it's only when the converter begins the demosaicing process that it has to assume some color space, a process that isn't really available to provide data to build an ICC profile (one reason it's so darn difficult, or some would say unnecessary, to do).
     
  24. It doesn’t appear dark if properly handled. Or to put it another way, a gamma encoded image would look way too light if the assumption were it was linearly encoded.​
    Who and what determines proper linear handling of a demosaiced Raw image in determining its linearized appearance? Who sets the standard? Like I said before, each RC that claims to have a linear setting delivers a different rendering of the same image.
    There's no ground zero for representing unmanipulated linear sensor data. It's all interpreted. The RC Raw Developer's linear setting makes all properly exposed Raw images appear dark unless a gamma-correction profile is assigned on top of other settings, including an additional tone curve, to give a normalized appearance.
    ACR's settings just make it noticeably lighter than Raw Developer's rendering, but flat, low-contrast and murky. And I couldn't really pin down exact linearized ACR settings in discussions with Adobe engineer Eric Chan, because he mentioned that proprietary algorithms are applied, along with known, non-proprietary camera-manufacturer parameters, during the demosaicing stage, and they can't be turned off. He basically pointed out that by the time you see a preview of the Raw data, whether dark or light, there's quite a bit of interpretation going on during demosaicing.
    And I've made a 1.0-gamma profile in Photoshop's Custom RGB in Color Settings for these supposedly linearized images from these RCs. It's not very good. More work than it's worth to get it to look right.
     
  25. An ICC profile is not the only method for color correcting the output from the camera, but something at least similar will be required—at minimum a color correction matrix.​
  25. @Joe and Andrew: You are correct, that was poorly worded. I should have prefaced the ICC comment with 'For the purposes of this discussion...'. When I ask why gamma correction is needed in a self-contained system that starts in x bits and does not downsize bit depth, I do not just mean starting from a 12-bit raw file. The same reasoning also applies if you start with a 16-bit linear TIFF (perhaps generated by your RC of choice). Would it not be better for your data if Photoshop (or other PP software) were to receive a linear TIFF and work in linear space?
     
  26. And I've made a 1.0 gamma profile in Photoshop's CustomRGB in Color Settings for these supposedly linearized images from these RC's. It's not very good. More work than it's worth in getting it to look right.​
    @Tim: I see what you mean, but I am not talking about the relative merits of various RCs or the process of rendering a raw file. If it makes it easier to understand, start from a 16-bit linear TIFF whose data has never had gamma applied to it. That does not mean no corrections have been made to it to make it 'look' better, but it is in linear space - which, according to my understanding so far, means it has the least noisy, most accurate, densest and highest-resolution data it can have, especially after all the tweaking that was necessary to get it to look that way. Why apply gamma to it? It will only degrade it and make it worse.
    So is gamma still needed in 2010, if you are like me? Perhaps the answer is no?
     
  27. If you wish to simulate the physical world, linear-light coding is necessary.
    On the other hand, if your computation involves human perception, a nonlinear representation may be required.
     
  28. Who and what determines proper linear handling of a demosaiced Raw image in determining its linearized appearance?​
    Whoever wrote the raw converter; the data itself, however, is linear (there's no way around that).
    There's no ground zero for representing unmanipulated linear sensor data. It's all interpreted.​
    True in terms of a color space, but the data at capture - the way the photons are counted - is linear.
    The RC Raw Developer's linear setting makes all properly exposed Raw images appear dark without a gamma correction profile assigned on top of other settings including an additional tonal curve to give a normalized appearance.​
    Because they want to provide the tone curve as part of the output-referred rendering. They don't have to. If you zero out all the ACR settings (note the default brightness setting of 50), you get closer to that scene-referred "dark" look.
    There is a lot of interpretation going on during demosaicing; as I said, the RC has to assume at this point what color space the combined filters represent.
     
  29. And I've made a 1.0 gamma profile in Photoshop's CustomRGB in Color Settings for these supposedly linearized images from these RC's. It's not very good.​
    RC being Raw Converter? Why ProPhoto primaries? Do you know that's the correct set of primaries it's using?
     
  30. @Andrew and Tim: I hear you. This thread, however, is about why a self-contained photographer's working color space should be distorted by a power function with some exponent gamma, rather than left linear.
    If you wish to simulate the physical world, linear-light coding is necessary.
    On the other hand, if your computation involves human perception, a nonlinear representation may be required​
    Hello Jacopo. This is indeed the crux of the matter, and isn't the representation of a capture a simulation of the physical world? In the context of 12/14/16 bit linear data being post processed in 16 bits, when and why would a non-linear representation be required in a photographer's working color space, other than, potentially, as a very last step conversion for the sole benefit of the output device?
     
  31. This is indeed the crux of the matter, and isn't the representation of a capture a simulation of the physical world?​
    Not a simulation of the physical world, but a simulation of the perceived world.
    For example, contrast and brightness are perceptual.
    The discrete cosine transform for JPEG is performed on gamma-encoded data.
     
  32. Not a simulation of physical world, but a simulation of perceived world.​
    @Jacopo: Why would you need to present to our eyes such a simulation? Our objective is to present to our eyes the nearest facsimile we can of the relative luminance that was at the scene (which of course we will perceive virtually the same way, logarithmically). Same relative luminance (perceived brightness) and contrast = same perception. That means an overall system gamma of 1. So why raise our data to any exponent other than one, let alone 2.2?
    Jpeg is different: it is a lossy compressor, which means you are willing to throw data away in order to have smaller files - might as well encode it with something close to an effective gamma of 1/2.2 to make it more perceptually efficient. But that's not our case. We start with 12/14/16 linear data and stay in 16 bits. There is no perceptual advantage in encoding OUR data with gamma. Is there?
     
  33. jack, why not use a Raw converter that allows converting to a custom 1.0-gamma ICC profile you can make in Photoshop? As long as the image looks correct and as intended, the profile will always produce the correct preview (in color-managed apps only) with the data remaining linearly encoded.
    Andrew, RC = Raw Converter. I don't use or assume ProPhoto RGB primaries when using Raw Developer's linear setting. The data is written in sRGB, or maybe monitor RGB. When opening this Raw Developer linear TIFF, no assignment of a canned or Photoshop Custom RGB primaries profile will make an X-Rite ColorChecker test shot look correct. You have to use a more sophisticated profiling package that measures off this TIFF file to build the ICC profile.
    Wonder if there's a way to use Adobe's DNG Profile Editor with a Raw Developer TIFF?
     
  34. Andrew, RC=Raw Converter. I don't use or assume ProPhotoRGB primaries using Raw Developer's linear setting.​
    I’m confused then by this:
    And I've made a 1.0 gamma profile in Photoshop's CustomRGB in Color Settings for these supposedly linearized images from these RC's. It's not very good.​
     
  35. why not use a Raw converter that allows converting to a custom 1.0 gamma ICC profile you can make in Photoshop?​
    @Tim: yes, if that works it may be a good idea (as Andrew suggested). I am just wondering why everybody is using 1/2.2 gamma color spaces instead. Am I missing something?
     
  36. Andrew, where did I say I used ProPhoto RGB primaries when building a 1.0-gamma Custom RGB profile in Photoshop? I don't know how you can be confused.
    I don't know what else to tell you except that it's a PITA to do it this way with very little to gain going by the quality of the previews.
     
  37. Just a note about this: I started tinkering around with Raw Developer's linear setting, which turns everything off but still allows base tone curve, saturation, hue and RGB curve adjustments, setting black, neutral and white points, along with a gamma slider.
    Guess which tool reveals the most shadow detail with the least amount of noise close to black point?
    The gamma slider!
     
  38. The best example of an all linear workflow would probably be Adobe Lightroom. With no extra actions required by the user, a digital raw file and all processing steps stay at 1.0 gamma up until the images are exported.
    The displayed RGB coordinates in Lightroom are in MelissaRGB (ProPhoto primaries but sRGB tone curve), though, which I am not sure I agree with since not much uses this color space either inside or outside of Lightroom.
    I cannot confirm whether the issue still exists in Photoshop CS4 or CS5, but in the past, if an image was resized in a gamma-2.2 space, the result would be incorrect: darker than it should be. There may be more at stake with gamma than just rounding errors.
     
  39. Save us! Who gives a rats? I thought "gamma" was what killed you when an A bomb went off.
    Do I have to learn about this, now, to understand digital photography?
     
  40. No, Shadforth, you don't need to know this to be a digital photographer.
    If you think this thread digs deep into the subject of gamma, you should've been around for the discussions with Timo Autiokari...
    http://www.poynton.com/notes/Timo/Concerning_Timo.html
    ...may your eyeballs bleed and mind melt.
    There were some heated arguments on this subject of a linear workflow, going back at least a decade, between Adobe alumni engineers, digital evangelists and enthusiasts, and this guy.
    For some reason Timo's website's domain name is up for sale. Go figure. Looks like he lost the argument.
    Consider these kinds of talks similar to car-repair enthusiasts insisting on synthetic oil and titanium SplitFire spark plugs over the regular kind.
     
  41. Guess which tool reveals the most shadow detail with the least amount of noise close to black point?​
    Of course: what other tool does Photoshop have that gives you infinite amplification at the origin (and thus extremely high amplification in the dark regions of your picture)? It will amplify both the signal and the noise equally at equal starting values, but is there a lot of signal in the darkest parts of our pictures? No. There is a lot of noise, however (thermal, shot, read, amplifier, reset, etc.). These being the values closest to the origin, guess what gets amplified the most by gamma? That's why sRGB, Melissa and L*a*b* all have linearized curves near the origin. aRGB and ProPhoto do not, however. But this brings me back to my question: if there are no advantages, in our situation, to a gamma-corrected color space, then why correct it in the first place, creating unnecessary discontinuities and quantization and increasing noise in the shadows?
    I don't know what else to tell you except that it's a PITA to do it this way​
    Perhaps I am missing something. Why would it be such a PITA? All you need is to choose a working color space with a gamma of 1. Everything else stays the same. I just wonder why everybody isn't doing this in 2010. Perhaps there are good reasons. What might they be?
     
  42. @Jacopo: Why would you need to present to our eyes such a simulation? Our objective is to present to our eyes the nearest facsimile we can of the relative luminance that was at the scene (which of course we will perceive virtually the same way, logarithmically). Same relative luminance (perceived brightness) and contrast = same perception. That means an overall system gamma of 1. So why raise our data to any exponent other than one, let alone 2.2?​

    Jack, think about camera exposure. Why don't makers use a linear scale?
    You might answer that they use a logarithmic scale.
    But why?
    The answer is that they use a "perceived" linear scale, not a "physical" linear scale.

    If you can approximate a "perceived" space, in that space things behave linearly.

    JoeC wrote:
    The displayed RGB coordinates in Lightroom are in MelissaRGB (ProPhoto primaries but sRGB tone curve)​

    If I'm right, MelissaRGB is used only for building the histogram. An unlucky choice, I think.
    The displayed RGB coordinates are in the monitor's gamut.
     
  43. The year being 2010, with its cheap processing power and 16-bit (or even 32-bit) colour depth, makes absolutely no difference to the basics. Those basics being that we actually want to be able to view our pictures, and that there is no practical viewing device that shows the same contrast ratio or brightness range as we see in real life.
    Forget the claims of LCD monitor makers of greater than 1000:1 contrast ratios. That's just nonsense and the manufacturers know it! If you actually measure the brightness range with a photometer, you'll find the average monitor manages about 300:1 at best, and in a darkened room. Paper prints fare even worse, probably scraping just over a 100:1 range in normal lighting conditions, and far worse if framed behind glass. Therefore a modest 7-stop subject brightness range (128:1) needs some gamma adjustment to fit onto a paper print, and anything over 8 stops needs help to be shown on a computer monitor viewed in a darkened room. If we view the LCD display in normal room lighting, its contrast ratio drops to little better than a paper print. That's why, by default, most modern LCD displays emulate the old CRT gamma of 2.2 or thereabouts.
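    Making the stop arithmetic explicit (each stop doubles the contrast ratio):

$$ 2^{7} = 128{:}1 \ \text{(subject)}, \qquad \log_2 300 \approx 8.2 \ \text{stops (monitor)}, \qquad \log_2 100 \approx 6.6 \ \text{stops (print)} $$

    So a 7-stop subject already exceeds what a print can hold, and anything much past 8 stops exceeds even the darkened-room monitor.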
    In short, we still need gamma! As proof of this, look at the popularity of HDR techniques, which represent gamma gone mad.
     
  44. There were some heated arguments going back at least a decade on this subject of a linear workflow​
    @Tim: that's a good site; I am a fan of Poynton, probably one of the most authoritative voices around on the subject. From the link you provided, this is one of the first quotes that I ran into:
    Linear intensity coding is fine if you can afford 12 or 14 or 16 bits per component, but if you have only a limited number of bits per component - 8, say - you must code nonlinearly to get decent performance.​
    We have been able to afford 16 (OK, 15 in Photoshop) bits for a few years now. Gamma is needed to counteract the physical characteristics of your output device, or if you need to COMPRESS your data. We photographers do not want to compress our data (that's lossy compression, by the way). We want to maintain its integrity as much as we can, so we always have the best, densest, least noisy data as a base to play with. Would a sound engineer mix a new track for a Super Audio CD using an MP3-compressed version of the piece as a source, instead of the master 24-bit linear digital track? Of course not. So why do we do almost exactly that in our PP software?
    .
    @Jacopo: perception does not come into the equation until AFTER the output. I am asking why use a gamma corrected color space BEFORE the output (if ever - it depends on the type of output). Given the state of the art in 2010, IMHO it is easier and less noisy to work on linear data up to that point.
    .
    Hi Rodeo: I agree with most of what you say. But, similarly to Jacopo, you are talking about the output device (where we are often stuck with non-linear properties that need to be compensated).
    .
    I am talking about our internal working color space: for instance, why apply a power function to your linear data when you leave your raw converter to go into your favorite PP program, and why then use a gamma-corrected color space within it? Where are the benefits? This is not a rhetorical question. I am asking because perhaps there is a fault in my reasoning, and I am more than happy to change my mind if someone can come up with a good SPECIFIC reason why. Anyone?
     
  45. @Jacopo: perception does not come into the equation until AFTER the output.​
    I don't agree.
    The way software modifies data depends on the image values (gamma-encoded values are different from linear values).
    So the software has to select the better way, and following that selection it has to give you a correctly scaled slider.
    For example, there are situations where software can work in the Lab color space.
    You have no control over this. And this is beneficial, I think, as the software tries to make the best choice.
     
  46. If I'm right MelissaRGB is used only for histogram building.​
    Correct: that, and the RGB percentages. Not really useful.
    save us! Who gives a rats? I thought "gamma" was what killed you when an A bomb went off.
    Do I have to learn about this, now, to understand digital photography?​
    Some of us do want to know what's going on under the hood (just as some of us wanted to understand how analog photography worked, or mixed our own chemistry). No one is forcing you to read or attempt to comprehend the subject of gamma or TRCs. If you don't want to, don't!
    As for Timo, just about everyone besides Timo, at least those in the image-processing world (a slew of Adobe engineers), dismissed most of his ideas. So it's no wonder his site has slipped into the ether.
     
  47. And following the selection it have to give you a right scaled slider.
    For example, there are situation where software can work in Lab color space.​
    Jacopo, with regard to sliders, histograms, controls etc., I agree: ask the programmers to scale them whichever way is most perceptually intuitive. And when post-processing your linear data, feel free to apply whatever corrections, or to convert into whatever color space you feel is most appropriate (keeping in mind the PCS round-trip penalty discussed in an earlier post), to obtain the result you want. But why start off with distorted data? Why should Lightroom (which by some accounts keeps data linear until the end) need to apply gamma before passing data to Photoshop or another post-processing program? Why should any RC or PP program, unless it is required by the output? Why not pass a nice 16-bit TIFF or equivalent in a suitably sized linear color space with gamma = 1?
     
  48. Why should any RC or PP program unless it is required by the ouput? Why not pass a nice 16 bit tiff or equivalent in a suitably sized linear color space with gamma=1?​
    Do you think all apps are color-managed?
    If the answer is no, it is always "required by the output".
     
  49. All right, let me ask the question a different way (again, not a rhetorical question). In this fairly comprehensive list of color space specifications why isn't there one titled 'Modern Digital Photographer's Post Processing Working Color Space' with a gamma equal to uno? Why isn't there a single one with a gamma of one :)? Legacy? Or what?
     
  50. We have to end up with gamma-encoded data to print and view the data (certainly in non-ICC-aware applications). We have to move from scene-referred to output-referred! As long as the heavy lifting in the converter is happening with a linear TRC (which, due to the data, it has to), and as long as you render the gamma-corrected data in high bit depth (even 24-bit, depending on the output, say the web), why does anyone need to be dealing, post raw conversion, with a linear data set and linear working space? The display is gamma corrected. The printers are gamma corrected. We have to convert to make the print. We can do this directly from linear to gamma-corrected data in Lightroom or any converter that has output facilities. Post-processing is just that; it doesn't need to be (some may argue shouldn't be) in a linear color space.
     
  51. why does anyone need to be dealing, post raw conversion with a linear data set and linear working space?​
    Fair enough. However, since the data STARTS linear, the real question (and mine from the beginning) is why feel the need to convert it by 'default' into something else - unless you have to. In the meantime you can post-process to your heart's content with the least noisy, most accurate, densest, highest-'resolution' data you are going to have.
    Let me summarize the minuses, as I understand them, of converting to a gamma corrected color space off the bat:
    • Every time you apply gamma you (perhaps imperceptibly) amplify noise in the shadows and quantize the highlights.
    • Every time you convert to a different color space, you need to go through a gamma round trip
    • Many arithmetic operations on color values need to be performed on linear data, forcing a gamma round trip (if not already in a linear color space)
    • Many operations are more complex in gamma
    • More operations mean slower processing
    Now the pluses of applying gamma off the bat:
    • Maybe it will be needed sometime in the future. Maybe not.
    It seems pretty weak to me. Are there other pluses?
     
  52. However, since the data STARTS linear the real question (and mine from the beginning) is why feel the need to convert it by 'default' into something else - unless you have to.​
    Actually the data starts out as something that doesn't resemble an image. But to answer your question, which is discussed in my ICC white paper: it's because we need output-referred data.
    Every time you apply gamma you (perhaps imperceptively) amplify noise in the shadows and quantize the highlights.​
    The key word here is perhaps. It's a bit like the argument that when you edit an image you degrade the data, due to rounding errors. If your image doesn't need editing, fine. But the reason we have tools like Photoshop and the like is that we do need to edit the numbers. A bit of data loss is far preferable to an image appearance we don't like or want. Do it in high bit and the rounding errors (which are still there) are moot; we don't see them. Same with the gamma conversion you point out. We need output-referred data. If you want pristine gamma 1.0 data, fine, but if it looks awful, what good is it? If you ever hope to print it, you are going to undergo at least one conversion anyway; the gamma is going to be different, as will all the resulting values (and gamut).
    There may be minuses, but they are far less of an issue than leaving the data alone would be. We have to print the data, and we have to view it (often without the benefit of a profile that defines the 1.0 TRC nature of the data).
     
  53. If you want pristine gamma 1.0 data, fine but if it looks awful, what good is it?​
    It would be no good. However, you can make whatever changes are needed to make it look good on linear data, just as well (better, as it turns out) as on gamma encoded data - without all the negative side effects. So why convert it to start with?
    If you wrote a white paper on this subject, I would be interested to read it. Where can I find it?
    Do it in high bit, the rounding errors (which are still there) are moot, we don’t see it.​
    Maybe, maybe not. It is not just the rounding errors. However, in linear you do not have this problem, so why bring it on ourselves?
     
  54. However, you can make whatever changes are needed to make it look good on linear data, just as well (better, as it turns out) as on gamma encoded data - without all the negative side effects. So why convert it to start with?​
    What negative side effects? Again, it has to end up gamma-corrected for output to anything but the above scenario: a display, with an embedded profile, viewed within an ICC-aware app. For that one, arguably far-from-demanding output, what's the big deal? What gain will you see on-screen from this more "pristine" data?
     
  55. What negative side effects?​
    IMHO the best you can do starting with gamma encoding vs linear is break even in the end. In linear you may actually win. The downside (from a few posts up):
    • Every time you apply gamma you (perhaps imperceptibly) amplify noise in the shadows and quantize the highlights.
    • ...
    • Many arithmetic operations on color values need to be performed on linear data, forcing a gamma round trip (if not already in a linear color space)
    • Many operations are more complex in gamma
    • More operations mean slower processing
    Am I missing something?
     
  56. Am I missing something?​
    I think you are. First, bullet points such as "you always amplify noise" need to be shown and proven (I'm not saying it's never the case, or always the case; I think it needs to be shown). And as I've tried to illustrate, you have to end up with gamma-corrected data eventually, unless your only output is to the color-managed display, with an embedded profile, going to an arguably not very demanding device (a display). You have to convert for the printer output device. And it's not slower, depending on what app you do this in.
     
  57. • Every time you apply gamma you (perhaps imperceptively) amplify noise in the shadows and quantize the highlights.​
    I don't see this on the display or on any prints. Gamma is only applied once, after the conversion from linear Raw, for display and web viewing (2.2 gamma). The ROI in time and money you're advocating to keep the data in a linear state isn't worth the hassle.
    As for slower processing for complex operations when converting to 1.8 (ProPhoto RGB) or 2.2 (Adobe RGB/sRGB) gamma, I don't see this either. I clocked converting out of ACR and Raw Developer at about 6-8 seconds per image on average on a 2004 1.8 GHz G5 iMac. I'm sure it's much quicker on a 2010 system.
     
  58. amplify noise” needs to be shown and proven​
    Shown: did you try the experiment I suggested a few posts up? Did you not see noise jump out at you after the first two or three iterations? It was building up from the very first one. As far as proof is concerned, the math was done a few centuries ago, but you can prove it to yourself just by reading the equation. This is the way Burger and Burge read it in 'Digital Image Processing' (2008):
    "The tangent to the function at the origin is thus either horizontal (gamma>1), diagonal (gamma=1), or vertical (gamma<1), with no intermediate values. For gamma<1 this causes extremely high amplification of small intensity values and thus increased noise in dark image regions. Theoretically, this also means that the gamma function is generally not invertible at the origin."
    Why else would sRGB et al. linearize the function near zero? How do ProPhoto RGB or aRGB deal with it? What? They don't?
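    In symbols, the behaviour they describe (the slope of the power curve at the origin):

$$ f(x) = x^{\gamma}, \qquad f'(x) = \gamma\, x^{\gamma - 1}, \qquad \lim_{x \to 0^{+}} f'(x) = \begin{cases} 0 & \gamma > 1 \\ 1 & \gamma = 1 \\ \infty & \gamma < 1 \end{cases} $$

    An encode with gamma < 1 (e.g. 1/2.2) therefore applies unbounded gain to the smallest values, noise included - which is exactly what the linear toe in sRGB and L* is there to avoid.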
    Am I missing something? I think you are.​
    What am I missing, specifically? Again not a rhetorical question.
     
  59. Why should Lightroom (which by some accounts keeps data linear until the end) need to apply gamma before passing data to Photoshop or another post processing program?​
    It shouldn’t need to. I am not sure why Lightroom allows any color space to be used for an export but limits the choices for external editing to sRGB, Adobe RGB or ProPhoto RGB.
    I do all (well, most, occasionally I just don’t care) resizing and sharpening in either Lightroom or 16-bit gamma 1.0 space. Doing the same in a gamma 2.2 space would give incorrect results. Can anyone confirm that the latest version of Photoshop handles this automatically without extra steps?
     
  60. I don't see this on the display or on any prints.​
    All right: so, failing to find a reason to support gamma, the argument turns to 'I cannot see the difference, so I might as well pretend it is not an issue'. Go back to my first post. My question is not why linear, but why gamma. Until proven otherwise, gamma can only be worse than linear, because it is a distortion of linear data.
    Why is Lightroom, developed in the last ten years, based on a default linear space? If Photoshop were written today, as opposed to twenty years ago when JPEG and 8 bits ruled, would it be based on a default gamma-corrected color space? I haven't heard one reason in this thread that would support that view.
    .
    Thank you Joe_C. What gamma 1.0 space do you use?
     
  61. Because it's not an issue. The only time it can even be an issue is on-screen display. You simply can't have a linear data file go to an output device. I don't know why this isn't yet sinking in.
    I've got an 8-bit display path currently (because I'm on a Mac, despite the high-bit data, display panel, etc.). The largest image I can display on my 30” is 1920x1200, on a wide-gamut (97% of Adobe RGB (1998)) display. That's the most demanding output device on which I can view a gamma 1.0 image. And as Tim says, we don't see anything at all useful about viewing this data with a 1.0 TRC. Once we have to print a 30x40 on a far higher-resolution output device with a far wider gamut, using way more pixels, it's gamma corrected; there is no way around that.
    I'd think you'd be more upset that, on this high-bit linear preview of data on a display, you either have to sample it way down to fit, or live with the "data loss" (and there is a visual degradation) when you zoom out of a document that's 5000 pixels wide so it fits, full image, on that 30” display.
     
  62. What gamma 1.0 space do you use?​
    I use whatever primary colors I was using to begin with. Usually that tends to be linear sRGB, which doesn’t seem like a good name. Anybody know if there is a real name for such a color space?
    Because its not an issue. The only time it can even be an issue is on-screen display. You simple can’t have a linear data file go to an output device. I don’t know why this isn’t yet sinking in.​
    I have to disagree. Lots of software performs important operations incorrectly in gamma space. Is the latest Photoshop any better? Up through at least CS2, it did not handle this.
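    For what it's worth, a sketch of what such a "linear sRGB" space amounts to (my own construction): the standard sRGB-to-XYZ matrix (sRGB primaries, D65 white) with the tone curve left at identity, since the matrix is all that remains of the space once the TRC is gamma 1.0.

```python
# "Linear sRGB": sRGB primaries and D65 white point, gamma 1.0 TRC.
import numpy as np

RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],   # X row
    [0.2126, 0.7152, 0.0722],   # Y row (the luminance weights)
    [0.0193, 0.1192, 0.9505],   # Z row
])

def linear_srgb_to_xyz(rgb):
    """Linear sRGB floats (0..1) -> CIE XYZ. No tone curve applied."""
    return RGB_TO_XYZ @ np.asarray(rgb, dtype=float)

print(linear_srgb_to_xyz([1.0, 1.0, 1.0]))  # ~[0.950 1.000 1.089] = D65 white
```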
     
  63. You simple can’t have a linear data file go to an output device.​
    I understand this very well, as you can read in my first few posts. My question, which you are not addressing, is why move to a gamma corrected color space when already in a linear color space with said advantages (until proven otherwise).
    'You are going to have to apply gamma in the end' is not a satisfying answer in the same way that hearing 'we are all going to die in the end' will not preclude me from living a good life until then.
    Why isn't there a gamma 1.0 working color space for photographers in 2010? If it ain't broke don't fix it?
     
  64. I have to disagree. Lots of software performs important operations incorrectly in gamma space. Is the latest Photoshop any better?​
    Again you are missing the point: the data has to end up gamma corrected for output.
    And what do you mean (and what proof do you have) when you state "Lots of software performs important operations incorrectly in gamma space"? Incorrectly how?
     
  65. I use whatever primary colors I was using to begin with.​
    @Joe C: Ah, of course. And where did you find (or how did you 'make') these color spaces?
     
  66. My question, which you are not addressing, is why move to a gamma corrected color space when already in a linear color space with said advantages (until proven otherwise).​
    My question, which you are not addressing, is why NOT move to a gamma-corrected color space when it's going to have to be in such a color space anyway?
    If you want to work in a linear-gamma space until you print, use Lightroom and print from Lightroom. You are not editing pixels anyway; you're making metadata instructions for rendering FROM linear to gamma-corrected data.
    Why isn't there a gamma 1.0 working color space for photographers in 2010? If it ain't broke don't fix it?​
    Because there's no reason to. Or: you can do so with LR, or you can attempt to get linear data with a profile out of some RC (not ACR or LR).
    You are suggesting a solution in search of a problem; the problem doesn't exist.
     
  67. If an image is resized smaller, and this is done in gamma 2.2 space, the resized image will be darker than the original. Resizing up will be wrong too; I just cannot remember whether it ends up too light or too dark. Whether this is objectionable is a matter of debate, but it is provably incorrect.
    Sharpening has similar issues, as do other operations; resizing and sharpening are just two easy ones in which to see the difference, because they are not meant to change the overall brightness.
    Some examples can be found at http://www.4p8.com/eric.brasseur/gamma.html.
    I have verified this myself in some software, but do not have Photoshop CS3, CS4 or CS5 to see if Adobe ever addressed it. The issue has been around as long as digital image editing has, so clearly they have not been in any hurry.
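    A minimal sketch of the error (my own numbers, not taken from the linked page): average a black pixel and a white pixel, as a 2:1 downsample does, once on the encoded values and once on linear light.

```python
# Averaging gamma 2.2 encoded values vs. averaging linear light.
black, white = 0.0, 1.0                   # gamma-encoded pixel values

naive = (black + white) / 2               # wrong: average the encoded values
lin = (black ** 2.2 + white ** 2.2) / 2   # right: average the light...
correct = lin ** (1 / 2.2)                # ...then re-encode

print(f"naive:   {naive:.3f}")            # 0.500 -> too dark
print(f"correct: {correct:.3f}")          # 0.730 -> keeps perceived brightness
# A fine black/white checkerboard downsampled 2:1 should keep the
# brightness it has when viewed from across the room; averaging encoded
# values darkens it, which is what the linked test image demonstrates.
```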
     
  68. why NOT move to a gamma corrected color space when its going to have to be in such a color space anyway.​
    Andrew, between capture, fixing the picture to your liking, post processing, and saving the image for posterity there are millions (billions?) of operations on your data which are best performed on LINEAR data (just as an example, color operations). Then, and only then, if you need to, you can apply a gamma curve for output.
     
  69. Andrew, between capture, fixing the picture to your liking, post processing, and saving the image for posterity there are millions (billions?) of operations on your data which are best performed on LINEAR data (just as an example, color operations). Then, and only then, if you need to, you can apply a gamma curve for output.​
    So instead of proving this, we are to take it that all image-processing applications are just doing these functions wrong? Is Timo back?
     
  70. Jack, based on what evidence are manipulations best performed on linear data? Are you referring to raw digital captures, scanned film or both?
    It sounds like you like the Lightroom/ACR way of working.
     
  71. And where did you find (or how did you 'make') these color spaces?​
    For linearized sRGB and AdobeRGB I just downloaded two I found somewhere since it was easier than making them. If I ever needed to publish anything in these color spaces I would find a way to make my own so as to avoid any copyright issues with the icc profile.
    I am afraid I do not know exactly how I would go about it. Worst case I could write them with a copy of the ICC spec and a hex editor.
     
  72. Pretty easy to make them (or any variant). Select the original working space in Photoshop's Color Settings and toggle to Edit. You'll see the WP, TRC gamma and primaries; just edit the TRC and save a new profile out of PS.
     
  73. So instead of proving this, we are to take that all image processing applications are just doing these functions wrong?​
    Now now, don't change the subject. Joe C is correct: some programs get lost in the complexities of gamma, but nobody is saying that all image-processing applications are wrong. However, what seems to be possible in 2010, and was not possible only a few years ago, is working in a linear color space without the limitations of the past (not enough bit depth, memory or processing power). So now that we can, why settle for second best?
    Thanks for the gamma 1.0 profile-making tip, Andrew.
     
  74. So instead of proving this, we are to take it that all image processing applications are just doing these functions wrong?​
    Please see above. I and others have proven that many of them were wrong. If one values correct functions, it might be worth quickly testing whatever software one uses.
    This image from the end of the link I posted is great, try scaling it down 2:1 and see what happens.
    http://www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png
     
  75. Joe C is correct, some programs get lost in the complexities of gamma, but nobody is saying that all image processing applications are wrong.​
    Wrong to what result? Wrong specifically where with which product? Did you file this as a software suggestion at least in terms of Adobe’s methods of doing so (web based)?
     
  76. Jack, based on what evidence are manipulations best performed on linear data? Are you referring to raw digital captures, scanned film or both?​
    @Roger: All of them, because they all start as an R*G*B* cube proportional to the luminance of the subject. It's all downhill from there. As for the evidence, pick up any semi-recent signal processing book and it will tell you that for many operations you really need to get back to linear from wherever you were. Joe C's post and linked site are just a couple of the simplest examples of what goes wrong when you don't.
     
  77. Wrong to what result? Wrong specifically where with which product? Did you file this as a software suggestion at least in terms of Adobe’s methods of doing so (web based)?​
    Wrong in that a pixelated zoom of one image did not match the resized image in brightness. By comparison, if the resizing was performed in gamma 1.0 space, the zoom did match the resized image in brightness.
    I personally only tested GIMP, ImageMagick and an archaic version of Photoshop. None of them resized correctly without extra gamma conversion steps done by hand.
    I have not actually filed it as a software suggestion, my reasoning having been that at some point over the last 20+ years Adobe ought to have been made aware of it. I might even have seen evidence that an Adobe engineer was aware of it, I do not remember.
     
  78. I have not actually filed it as a software suggestion, my reasoning having been that at some point over the last 20+ years Adobe ought to have been made aware of it. I might even have seen evidence that an Adobe engineer was aware of it, I do not remember.​
    Well, as a beta tester for the Photoshop team since 2.5 (and an alpha since CS), dealing with engineers and trying to get features and fixes, I can guarantee that you’ll never see a change until you provide very hard empirical facts they can quickly read and digest, and don't give up. You’ve apparently given up, or (assuming your ideas can hold up to their examination) haven’t provided them data in a form they will examine. Telling them they are wrong is the first mistake in getting them to take you seriously.
    My suggestion is you talk to Chris Cox (he’s accessible on the Adobe forums). You should be aware Chris ripped Timo more than one butt-crack on this subject.
     
  79. or assuming your ideas are able to hold up to their examination​
    Did you look at that image I linked to? Copyright and internet etiquette prevent me from uploading or direct linking it. If so, what is your reaction?
    http://www.4p8.com/eric.brasseur/gamma-1.0-or-2.2.png
    The correct result of downsampling it 2:1 is clearly an image reading, “Your scaling software RULES”. What is the result in Photoshop?
     
  80. Did you look at that image I linked to? Copyright and internet etiquette prevent me from uploading or direct linking it. If so, what is your reaction?​
    Well, it's an untagged doc. So what should I Assign for the profile before I do anything?
    Then once I assign the profile, what steps am I supposed to take in terms of scaling (with what algorithm)? I don’t know what you mean by The correct result of downsampling it 2:1 is clearly an image reading, “Your scaling software RULES”.
    I’d be happy to play with the document, but you need to define exact steps to take and what your take on the results means to you (and why). IOW, you have a theory that you wish to prove using this one doc. I’m cool with that but you need to define the steps to take, and what the results are supposed to prove. IOW, what makes “Your scaling software RULES” “correct”?
     
  81. My apologies, I had not checked to see if the image was tagged sRGB. Please assign an sRGB profile before scaling. I would recommend sinc or Lanczos, but use any scaling algorithm of your choice.
    If the image is treated as sRGB data, a visual inspection (e.g., squinting at it) shows the word “RULES” in lighter letters on a darker background, and the alternate text as much fainter, if not quite indistinguishable from the background.
    It is therefore reasonable, both mathematically and intuitively, to expect a downsampled version of the same image to have roughly the same appearance, with the word “RULES” in lighter letters on a dark background and any alternate text fainter if not quite indistinguishable from the background. Many software packages produce a very different downsampled image, with the alternate text very clearly visible.
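    For anyone who wants to build such a pattern themselves, here is a rough numpy sketch (the stripe values of 0 and 175 on a 128 background are taken from the averaging discussion further down this thread):
        import numpy as np
        h, w = 64, 64
        img = np.full((h, w), 128, dtype=np.uint8)   # background: sRGB 128
        img[16:48:2, 16:48] = 0                      # alternating one-pixel lines of 0...
        img[17:48:2, 16:48] = 175                    # ...and 175, which average to 128 in linear light
    Downsampled to 50% in linear light, the striped block should melt into the background; downsampled in gamma space it averages to about 88 and shows up as a clearly darker block.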
     
  82. Meanwhile, thanks to Joe C and Andrew’s suggestion plus Photoshop, I was able to create my own Gamma 1.0 color space. I then proceeded to open one of my virgin raw captures, chosen for its noisiness and dark gradations, with Capture NX2 - twice: first with my new and improved Gamma 1.0 version of sRGB color space as default (G1 in the pictures), second with the same old standard Nikon sRGB (approximately gamma 1/2.2) color space as default. Without applying any adjustments whatsoever to either, I then proceeded to save them both as properly tagged 16 bit TIFF files. You can see the result opened in Photoshop in Picture 1.
    Now bear in mind that these pictures should be virtually the same: this is the breakeven scenario, where in the first one gamma is not applied until just before being passed to the monitor (which will by its nature de-gamma it for the benefit of our eyes), while in the second one gamma is applied immediately, as the image is opened in the raw converter. Since I have not made any adjustments to the images between opening and saving, they will effectively have gamma applied once to the same data and so they should look the same. In practice a slight increase in brightness (contrast?) and slight shift towards yellow in the 2.2 gamma version is noticeable. The difference in brightness is due to the fact that the gamma curve of my profiled monitor is not exactly the same as sRGB’s standard gamma curve; what about the color shift? Perhaps the same?
    Just to show you that I am not cheating, the same two TIFFs are opened in a non color-managed editor in Picture 2. As you can see, Paint displays the linear gamma version as is, showing you the effect that your monitor has on your images if you do not pre-compensate for it: system gamma=2.2. The second one is pre-compensated by the standard sRGB color space so that the end result looks good (approximately the relative luminance at the scene): system gamma more or less =1.
    While playing with my new linear color spaces (I also made a ProPhotoRGB G1 version – which I think performed worse, possibly because too much processing is left to as yet unsophisticated color management software) I noticed how poor my beloved CNX2 is at color management on the fly: it looks like it forgets that it has 12+bits to play with, chops everything at 8 bits and applies gamma to that hacked up data for display – or perhaps it lets the video card deal with it, anybody know? The result is major posterization, as you can see around the neck of the girl on the left of Picture 3. Windows Photo Viewer is only a little less brutish, while Photoshop’s ACE comes through with aplomb. Don’t forget, the data is there in the file and it is the same in the three cases, but obviously not all programs take color management seriously yet. Let's hear it for Photoshop.
    If I worked mainly in Lightroom followed by Photoshop and printed my own pictures I think I would try to find a suitable Gamma 1.0 working color space to pass data from one to the other like Joe C does. However I do most of my post processing in CNX2 sometimes followed by Photoshop, and I am not going to switch to G1 until I figure out where is the color management bottleneck of my RC/video card/monitor combo. If anybody has any ideas I’d be glad to hear them.
    Jack
    00XiRR-304033584.jpg
     
  83. It turns out I don't know how to use the photo upload feature. Here is Picture 2.
    00XiRU-304035584.jpg
     
  84. And Picture 3
    00XiRV-304035684.jpg
     
  85. Please assign an sRGB profile before scaling.​
    OK, I have more questions about this file and the process than answers. Forgive me, but after just spending 2 days trying to track down a V4 profile scum dot bug issue in Snow Leopard and printing nearly 50 sheets of Luster, I don’t want to go down another rabbit hole.
    The document is in 8-bit. You say it's sRGB. How was it built, is there a high bit version? Do we know that by testing an 8-bit version, whatever is supposed to result isn’t affected by the lack of more bits? Was it created in sRGB, or assigned? Presumably the idea of this test is to say something about the TRC, which is unique in sRGB unlike other RGB working spaces. I’m supposed to scale it using different products to what size (any or just in half?). Presumably after scaling, I see the message change and based on the message, the sampling algorithm is either “right” or “wrong” and how does the message imply the “correct” sampling despite the text provided? How does this “correct” or “incorrect” appearance affect other documents (what would I see either on screen or on print that would carry over from this one doc?). Am I supposed to apply a 1.0 TRC to this document and resize after doing so with the sRGB assigned doc? Use sRGB primaries with 1.0 TRC?
    It is therefore reasonable, both mathematically and intuitively, to expect a downsampled version of the same image to have roughly the same appearance, with the word “RULES” in lighter letters on a dark background and any alternate text fainter if not quite indistinguishable from the background.​
    I’m not certain it's reasonable or intuitive to suggest that’s the case, but even if we agree, what’s this got to do with gamma or TRC?
    I know that I must use Nearest Neighbor in PS all the time to sample data for tasks that I’d never do for images. For example, I’ll often take targets used to build ICC profiles, which are high rez with a single color per patch, and sample down using Nearest Neighbor so that I end up with one pixel per patch to feed to ColorThink to build what is called a color list. If you use anything but Nearest Neighbor, the color patch values are of course highly polluted. I have to use this sampling algorithm. But I’d never use that on an image for obvious reasons (it would visually look awful). So any sampling algorithm but Nearest Neighbor is totally “correct” in one scenario and totally “incorrect” in the other. What makes seeing “RULES” here correct for images or for non images?
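    That target chore is easy to sketch in numpy with made-up sizes: a 4x4 grid of flat patches at 10 pixels per patch, reduced by picking one pixel per patch, which is effectively what a nearest-neighbour resize to one pixel per patch does:
        import numpy as np
        patches = np.arange(16, dtype=float).reshape(4, 4) * 17   # a 4x4 grid of flat patch values
        target = np.kron(patches, np.ones((10, 10)))              # blown up to 10 pixels per patch
        picked = target[5::10, 5::10]                             # one center pixel per patch
        assert np.array_equal(picked, patches)                    # patch values survive untouched
    Any averaging kernel would instead mix neighbouring patches at their borders and pollute the values.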
     
  86. When you built the 1.0 sRGB variant did you use the Nikon sRGB V4 I see tagged in the images you provided, or the sRGB in the RGB working space area within Photoshop (where other working spaces are grouped)?
    The difference in brightness is due to the fact that the gamma curve of my profiled monitor is not exactly the same as sRGB’s standard gamma curve; what about the color shift? Perhaps the same?​
    Most likely yes, the only true sRGB device is a theoretical CRT with p22 phosphors as defined way back in the middle 1990’s. Put a gun to my head, I’d probably pick the sRGB with native TRC, it doesn’t look as flat as the 1.0 version but the differences are tiny.
    Of the two, the differences are so subtle it would be a coin toss to pick which is “better”.
     
  87. When you built the 1.0 sRGB variant did you use the Nikon sRGB V4 I see tagged in the images​
    Yes, I used the Nikon V4 as, I am told, it has a 12 bit LUT for gamma instead of the standard 8. Agreed on the two images being virtually the same. Any idea on the source of the ugly posterization in CNX2?
     
  88. Never used CNX2 so I don’t know.
     
  89. If an image is resized to be smaller, and this is done in gamma 2.2 space, the resized image will be darker than the original image.​
    I cannot replicate this. I even used that “your scaling sucks” image to test this. I took the eyedropper sampler tool in Photoshop set for a 1x1 pixel. Carefully selected the lighter gray text (the letter Y). Placed a 2nd sample point in the darker gray. 192/192/192 for the text, 128/128/128 for the outer gray in sRGB. Duplicate the image. Resize that 50% using Bicubic Smoother (or Bicubic, or any other variant algorithm). Toggle between the original and the newer image you just sized. The values are identical. Yes, with a complex image, with adjacent pixels of differing values, sampling down using Bicubic may make the original spot lighter or darker; that’s supposed to happen (and would not with a Nearest Neighbor algorithm, as I use to do this with my color patches; it's not supposed to either). The gamma encoding has no role here that I can see, and the algorithms are doing things correctly as far as I can see.
    Next, the Dalai Lama image on the web page you reference. If I convert from sRGB to sRGB but using a V4 profile and the perceptual rendering intent, then scale 50%, I do not get the same gray effect I get with the V2 profile. The image looks like an image. So the issue here can’t be the gamma encoding; the V4 profile doesn’t have a 1.0 TRC, it's sRGB. Now convert the sRGB Dalai Lama image to a 1.0 TRC image (based on ProPhoto): indeed the rescale isn’t “wrong” (it's not all gray) but the color looks quite poor (it looks poor on the web page too with the “correct” software), while the sRGB V4 profile with its TRC looks much better than either.
    Next I played with that Gamma Colors image. Converted to a 1.0 TRC working space I built and did the resize just like the above tests, I do not get the results seen on the web page (I get the “wrong” results). So the 1.0 TRC in the space I built didn’t “fix” this “problem”. I converted it to the V4 sRGB that produced the “correct” results for the Dalai Lama image. That too produced the “wrong” results, much like the 1.0 TRC working space test. Note that I tried the scaling-down-changes-the-darkness test on this image too; before and after values were identical.
    Based on these tests, I can’t conclude that the 1.0 TRC working space is a fix for anything or that the original sizing is necessarily “wrong”. I can produce the “correct” results the author shows using a space other than a 1.0 TRC (why the perceptual conversion, and not others, with this V4 profile produces what is considered “correct” is unknown, but I can’t attribute it to the gamma at this point).
     
  90. Jack, I just assigned the 1.0 gamma sRGB profile I constructed in CS3's Color Settings' "Custom RGB..." dialog box to the right-hand, normalized 2.2 gamma version in your "Picture 2", and there's quite a bit of clean detail showing folds of a collar and neck. See below.
    That's pretty good considering it's a downsized and compressed jpeg. The noise and banding along the edge of the jaw of the girl on the left in the normalized 2.2 gamma encoded image may have more to do with how CNX2's default base tone curve "not so gracefully" rolls off the dynamic range of the shadows. Newer Nikons have more dynamic range than my old Pentax K100D. The clean look of the shadows even in the downsized web version backs this up.
    To add, if you've ever tried to add definition while pulling shadow detail using a curve directly on linearized data, you'll see it becomes a tight spot to tweak with very few allowable points. The response to these edits in the preview becomes so jumpy to where using keyboard arrows on a selected curve point becomes necessary. This is why ACR included the "Fill" slider.
    I've been doing some experimenting on my own, working on a high contrast test image and tweaking the curve in the shadow regions. What I did to get around having to edit in this tight spot is to set ACR's settings to zero, curves set to linear, adjust the black slider for no clipping, and reverse the base exposure ACR applies to my Pentax PEFs to (-0.50).
    Then I open in Photoshop with the tagged ProPhotoRGB 1.8 gamma space set in ACR, convert to 1.0 gamma ProPhotoRGB and re-assign regular ProPhotoRGB. This creates a very bright and flat preview where the histogram's endpoints fall in the 3/4 zone for absolute black and the 1/4 zone for absolute white. This now allows me to apply a very steep and symmetrical S-curve with a shape similar to the toe and shoulder roll-off of negative film. Now the shadow area on the curve is broader and I can apply more adjustment points for tweaking.
    The curve shape I constructed on the same image in ACR with the original zeroed-out linearized settings was drastically different and much harder to tweak for definition in the shadows. I could only get about 3 points, compared to the method in Photoshop. Because sensor data devotes fewer tonal levels to shadow regions there are fewer levels to make up detail, so you can't apply extreme bends to the curve in these areas to bring out definition without beefing up noise.
    Default tone curves in raw converters lack the precision and delicacy in these shadow regions to make every image, whatever its dynamic range, come out perfect. High contrast scenes tend to amplify the coarseness of the roll-off provided by this curve.
    00XiYc-304127584.jpg
     
  91. What makes seeing “RULES” here correct for images or for non images?​
    Most of your questions in that post are not relevant. I can answer them in a private email if you like?
    I cannot replicate this.​
    You neglected to mention what the striped letters looked like after the resizing. The solid letters were not expected to change.
    I am posting a small section of that image, plus what it looks like scaled down to half size in gamma 2.2 space on the left, and scaled down to half size in gamma 1.0 space on the right. The two scaled images are clearly different, and are not both correct.
    If you squint at the image, if it is displayed at anywhere near sRGB gamma, it should look a lot like the scaled version on the right, and almost nothing like the scaled version on the left.
    00XiZF-304135584.jpg
     
  92. Jack, here's a demonstration below of what I was talking about in my previous post: what the default tone curve can do to the shadows, and how working on an image lightened enough to apply an S-curve that allows finer tweaks coming out of the shadows can give a smoother rendering.
    Please note that I went from converting the original linear data in ACR to 1.8 gamma ProPhotoRGB, assigned 1.0 gamma ProPhotoRGB in Photoshop, and converted again back to 1.8 gamma ProPhotoRGB, where I then applied the custom S-curve. There was no increase in noise or banding at all. The same amount of noise in ACR is still there in the lightened version shown below.
    As you can see, this allowed a particular S-curve shape to be applied that gave me more spread-out points to work with for lightening and adding definition to the shadows while hiding the noise.
    I got the same results editing in ACR using a wider range of tools but got slightly better color especially in the greens.
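    Numerically, that assign-then-convert move amounts to running the encoding curve twice. A tiny sketch, assuming a pure 1.8 power curve (ProPhoto's actual TRC is close to that):
        L = 0.05                  # a deep shadow, in linear light
        s = L ** (1 / 1.8)        # stored value in 1.8 gamma ProPhoto: ~0.19
        s2 = s ** (1 / 1.8)       # after assigning 1.0 and converting back to 1.8: ~0.40
        print(s, s2)
    The deep shadow moves from about 19% to about 40% of the encoding range, which is why there is suddenly room for several curve points where before there was barely one.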
    00XiaQ-304155684.jpg
     
  93. Most of your questions in that post are not relevant. I can answer them in a private email if you like?​
    Well that's fine, but I don’t see why you can’t (will not) answer them here, just in case other lurkers have similar questions and may not agree at this point that the questions are not relevant.
    When I size the image down using a 1.0 TRC working space (which I presume one must convert to from sRGB), the values you say should not change behave the same; no change. Your initial point I quoted said that sizing an image down in a 2.2 space makes the values darker, if I’m understanding you. I can’t replicate this in either working space (2.2 or 1.0). The before and after values are identical.
    I converted two more images from the web site you supplied and again, I was able to replicate what I think the author is suggesting is “correct”, but I did this with a 2.2 TRC space. I was also unable to replicate, on two images, what the author says should be “correct” using a 1.0 space. So I don’t understand how the gamma has anything to do with what the author or you say is “correct” and “incorrect”. That is, with a 1.0 space, the size conversion doesn’t fix what he says should be fixed, and with a 2.2 gamma space, it does. Can you explain this?
     
  94. I will attempt to answer each and every one of your questions here, then. Answering that many may take some time to type.
    In return, could you please comment on my latest image and your own results for the word “RULES” after scaling to half size in Photoshop?
     
  95. In return, could you please comment on my latest image and your own results for the word “RULES” after scaling to half size in Photoshop?​
    I would IF I understood what I was supposed to be doing and seeing. This idea of squinting hasn’t sunk in yet. Squint how much (to produce what visual result)? If I squint enough, even with the above example, I have no idea what I’m looking at (the bottom, smaller text doesn’t appear as anything that looks like letters to me).
     
  96. I’ve taken your “your scaling software” doc and duplicated it three times with a variant in sRGB, sRGB using the V4 profile that did wonders with the Dalai Lama, and a 1.0 TRC, sized 50% using Bicubic Sharper, and they look identical. They don’t look like the above “Scaling in 2.2 gamma vs. 1.0 gamma” image and I have to wonder what document you used and the final size. What I got from the site is tiny, 512x256 px; when sized down to 50%, it's naturally even smaller, viewing at 100% doesn’t look anything like the above image in terms of how sharp it is, obviously zooming in to 200% looks worse (but is about the same size on my screen as your image above is). So something ain’t kosher here!
     
  97. Squint enough to mostly blur the sections with alternating light and dark lines. If practical, standing far enough away from the screen that the alternating lines blur together might be better still. The idea is that the alternating lines, if not resolved properly, will look similar to the brightness of some solid color. With this particular image, some parts should have a similar brightness to the background and other parts should have a similar brightness to the lighter solid lettering.
    Here are the answers to some of your questions, with more to follow:
    The document is in 8-bit. You say it's sRGB. How was it built, is there a high bit version?​
    I did not build it, so I do not know how it was built. I do know that one of the alternating patterns was designed to have the same average brightness as the background in gamma 1.0 space, and another alternating pattern was designed to have the same average brightness as the background in gamma 2.2 and/or sRGB space (they should be similar enough in this case). Simply converting this image to 16-bit or higher and treating that as a high bit master will still suffice to demonstrate my point.
    Do we know that by testing an 8-bit version, whatever is supposed to result isn’t affected by the lack of more bits?​
    Affected? Yes, it will probably be affected unless the particular values that were chosen just happen to be accurate in higher bit depth. It will not be significantly affected, though. I could probably generate an actual higher bit depth version if you can convince me that you are not just stalling.
    Was it created in sRGB, or assigned? Presumably the idea of this test is to say something about the TRC, which is unique in sRGB unlike other RGB working spaces.​
    It was created in either sRGB or gamma 2.2 (I do not know and might have difficulty determining which one even with careful analysis), which for purposes of this demonstration are similar enough. The exact TRC is not important, only the fact that the TRC is significantly different from gamma 1.0.
    I’m supposed to scale it using different products to what size (any or just in half?).​
    I am most curious about how Photoshop scales it to exactly 50% size with cubic, sinc or Lanczos interpolation. The image seems to have been designed to produce a particularly dramatic result when scaled to exactly 50% size.
     
  98. Squint enough to mostly blur the sections with alternating light and dark lines.​
    Before or after sizing? See above. I see no lines after sizing with this tiny doc.
    I did not build it, so I do not know how it was built.​
    But you know I should assign sRGB? And you know the effect that is supposed to point to a gamma analysis doesn’t have any bearing on the bit depth? And to test this with the supposedly superior 1.0 gamma, I’m supposed to convert this as proof to myself that 1.0 processing is the fix for whatever the author says is the issue with the 2.2 space? If I convert, do I use dither? The primaries of the 1.0 space don’t matter, or do they have to be sRGB?
    It was created in either sRGB or gamma 2.2​
    They are NOT the same! The sRGB color space does not follow the gamma formula and is not a 2.2 gamma.
    I am most curious about how Photoshop scales it to exactly 50% size with cubic, sinc or Lanczos interpolation.​
    If I use Bicubic (any variant) with the Dalai Lama I see the “gray” square-looking image on the author's web page. If I use the sRGB V4 profile with a perceptual intent, it sizes much like the “correct” appearance the author shows on his site (actually the colors are better, not as muddy or muted). That is using a 2.2 TRC (which you say is “close enough” to a 2.2 gamma). So right away, the theory that we must use a 1.0 TRC doesn’t wash in this example. With the “your image sucks” image I’ve told you what I see above (they all look the same).
    For one, the Dalai Lama image and the “your image sucks” image don’t behave the same.
    The Dalai Lama image samples down “correctly” with at least one variant of a 2.2 TRC working space.
    I’ve yet to be told how to handle the images from you in terms of testing a 1.0 gamma space with the same sizing experiments. What I did was build my own 1.0 gamma working space as I described above. IF the premise is that a 1.0 TRC space produces a correct sampling but a 2.2 does not, either I’m doing something wrong or the premise is wrong, because the 1.0 space did not “fix” the “Your image sucks” file but it DID work on the Dalai Lama image (that is, when sizing down, I did not get a gray square. That would initially lead someone to suggest that the right thing to do is use a 1.0 gamma working space. That theory however seems a bit half baked at this juncture, because a 2.2 space did this as well, actually better, and the 1.0 space didn’t fix the “your image sucks” image at all). Can you explain this?
     
  99. viewing at 100% doesn’t look anything like the above image in terms of how sharp it is​
    Could you post a crop of the letter R/S in the 50% scaled version? I would have expected that to be the size of one of the two lower corners in the image I posted. I do not know which of the two it will look like.
    Presumably after scaling, I see the message change and based on the message, the sampling algorithm is either “right” or “wrong” and how does the message imply the “correct” sampling despite the text provided?​
    Yes, after scaling, the possible outcomes are:
    • One message
    • A different message
    • Some mix of both messages
    The most likely outcome if the scaling takes place in gamma 1.0 space is the word “RULES”. The most likely outcome if the scaling takes place in gamma 2.2 or sRGB space is a different word that is offensive to some people. Most reasonable people would judge the “correct” outcome to be whichever of the two most resembled the full resolution image, which is where the squinting, standing a great distance away, etc. comes in, so that one can see what the large image should correctly look like if it were smaller.
    How does this “correct” or “incorrect” appearance affect other documents (what would I see either on screen or on print that would carry over from this one doc?).​
    Once the image has been resized down to half size, everything downstream will look like the resized image, whatever that may be depending on the resizing algorithm, since the image no longer contains alternating light and dark lines intended to cause unusual results. The appearance of the full resolution image might vary depending on what was done to it and what the output device was.
    Am I supposed to apply a 1.0 TRC to this document and resize after doing so with the sRGB assigned doc? Use sRGB primaries with 1.0 TRC?​
    With the software I have available, I can only get the correct result if I apply a gamma 1.0 TRC, resize, and then convert back to sRGB or similar. You might possibly be able to get the correct result through other means, in which case you might not need to do this. The primaries are irrelevant for purposes of this demonstration. I think that R=G=B for all pixels, though I am not certain. The effect should still be present even in grayscale.
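    That round trip is straightforward to express in code. A rough sketch with numpy and Pillow, where the file name is a placeholder, even pixel dimensions are assumed, and a simple box filter stands in for sinc or Lanczos:
        import numpy as np
        from PIL import Image
        srgb = np.asarray(Image.open("photo.png").convert("RGB"), dtype=np.float64) / 255
        def decode(c):   # sRGB -> linear light
            return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
        def encode(c):   # linear light -> sRGB
            return np.where(c <= 0.0031308, 12.92 * c, 1.055 * c ** (1 / 2.4) - 0.055)
        lin = decode(srgb)
        half = (lin[::2, ::2] + lin[1::2, ::2] + lin[::2, 1::2] + lin[1::2, 1::2]) / 4
        Image.fromarray((encode(half) * 255).round().astype(np.uint8)).save("photo_half.png")
    The point is only that the resampling arithmetic happens between decode and encode; which resampling kernel is used is a separate question.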
     
  99. I’m not certain it's reasonable or intuitive to suggest that’s the case, but even if we agree, what’s this got to do with gamma or TRC?​
    It is relevant because at different gamma or TRCs there are different halfway points between the alternating light and dark lines. Since resizing to 50% should mostly replace the alternating lines with this single color halfway between the two, it should give different results depending on the gamma or TRC.
    I know that I must use Nearest Neighbor in PS all the time to sample data for tasks that I’d never do for images.​
    That is correct, but the issue I am describing applies to images, and requires that a resizing algorithm reasonable for images be used. My preferences would be sinc, Lanczos, or bicubic, but other options may be valid.
    I will address the Dalai Lama and Gamma Colors images after I get caught up on all your other questions.
    But you know I should assign sRGB? And you know the effect that is supposed to point to a gamma analysis doesn’t have any bearing on the bit depth?​
    In both cases I know because I tried it at 8 bits per channel in sRGB space and it illustrated the point that it was meant to.
    And to test this with the supposedly superior 1.0 gamma, I’m supposed to convert this as proof to myself that 1.0 processing is the fix for whatever the author says is the issue with the 2.2 space?​
    I had not even gotten that far. I was focusing on determining whether the default behavior without any explicit conversions to or from gamma 1.0 was correct or incorrect in Photoshop. If it was incorrect, converting to gamma 1.0 first would be one option to get the correct results.
    If I convert, do I use dither? The primaries of the 1.0 space don’t matter, or do they have to be sRGB?​
    I recommend not using dither for simplicity, but I expect that the overall effect would be the same with or without dither. The primaries do not matter.
    They are NOT the same! The sRGB color space does not follow the gamma formula and is not a 2.2 gamma.​
    I understand that, however in this case they are similar enough to both show the same overall effect.
    I’ve yet to be told how to handle the images from you in terms of testing a 1.0 gamma space with the same sizing experiments. What I did was build my own 1.0 gamma working space as I described above. IF the premise is that a 1.0 TRC space produces a correct sampling but a 2.2 does not, either I’m doing something wrong or the premise is wrong, because the 1.0 space did not “fix” the “Your image sucks” file but it DID work on the Dalai Lama image (that is, when sizing down, I did not get a gray square. That would initially lead someone to suggest that the right thing to do is use a 1.0 gamma working space. That theory however seems a bit half baked at this juncture, because a 2.2 space did this as well, actually better, and the 1.0 space didn’t fix the “your image sucks” image at all). Can you explain this?​
    The 1.0 gamma working space you are using sounds reasonable to me. If the Dalai Lama image did resize differently depending on the gamma, do you at least agree that the two different outcomes cannot both be correct?
    I cannot yet explain your results when downsizing the “your image” image, but if you could post a crop of the downsized R or S, then I think I could.
     
  101. Could you post a crop of the letter R/S in the 50% scaled version?​
    http://digitaldog.net/files/Scaling.tif
    This is a screen capture of the three variations (sRGB at the bottom). This is a 100% view of the scaled images in Photoshop.
    The most likely outcome if the scaling takes place in gamma 1.0 space is the word “RULES”.​
    Likely? You are starting to worry me about a rabbit hole I didn’t want to dive into....
    Look, I think the best thing to do is dismiss this file. The Dalai Lama image I downloaded at least produces some of the behavior reported on the site you reference. That is, if I use the sRGB 2.2 TRC working space and size down, I see the same image the author reports. And if I use a 1.0 TRC working space I built, I see a similar result. But now, if I use a 2.2 TRC from a different (newer-structure) sRGB profile, I also see the so-called “correct” behavior after sizing. In my mind at this stage, it puts more than a few holes in this “1.0 TRC works and others don’t” theory. With the “your image sucks” file, nothing works as described on this site, at least not like the Dalai Lama image.
    Most reasonable people would judge the “correct” outcome to be whichever of the two most resembled the full resolution image, which is where the squinting, standing a great distance away, etc. comes in, so that one can see what the large image should correctly look like if it were smaller.​
    When you start saying things like “most reasonable people would do this or that” or Judge, or I should be squinting, or the image “should look correct” I get worried here. That’s not at all empirical. More rabbit hole digging. I’m happy to attempt to continue to figure this theory out, but you and this author have to prove the theory (it's not up to me to dismiss it).
    The premise seems to be that something is “wrong” with sampling in anything but a 1.0 gamma space, right? The premise shown on the site you reference shows the expected results for the Dalai Lama image; the image you are talking about doesn’t. So unless you are sure we should continue with the “your image sucks” testing, I would suggest you stick to the Dalai Lama image. As I said, I CAN produce what the author says is “correct” without squinting and judging subjectively (clearly the image that samples down to nearly all gray isn’t correct, even if, as I suspect, it was specifically designed to point out this oddity, which will likely never be seen elsewhere). But I agree with the basic premise that we would like to size down ALL images, even this odd appearing image, without it looking like a solid blob of gray. The author, and presumably you, appear to be saying the issue is how the sizing algorithm deals with non-1.0 gamma working spaces. OK, I can make the image look as the author says is correct after sizing with a 2.2 TRC working space. Where do we go from here?
    With the software I have available, I can only get the correct result if I apply a gamma 1.0 TRC, resize, and then convert back to sRGB or similar.​
    Therefore the issue is the TRC, period. I can prove that’s not necessarily the case. I can provide you a profile with a 2.2 TRC that produces what we expect after sampling. Or you can just search and download the sRGB V4 profile (it should be on color.org), convert to sRGB with the perceptual intent, and sample.
    This is why I asked a lot of questions about the documents. One test with a 1.0 gamma did not produce the results shown with a 2.2 gamma. Jumping to the conclusion that the sole issue here is the gamma of the space at this point simply doesn’t wash in my mind. My mind is still open but based on the test I did with a 2.2 TRC working space, I’m more inclined to think its either another issue or a combo of issues. I’m not about to build a web page that emphatically says there is an issue with all spaces other than 1.0 gamma spaces as I suspect is the case here. At the very least, we need to know why the Dalai Lama image doesn’t suffer the same fate with a 2.2 space that it does with another 2.2 space!
     
  102. Before or after sizing? See above. I see no lines after sizing with this tiny doc.​
    Squint or stand far away from the image before sizing. If that appears similar to the half size image, then the resizing algorithm could reasonably be considered to be correct. If that appears wildly different from the half size image, then the resizing algorithm could not reasonably be considered to be correct.
    This is a screen capture of the three variations (sRGB at the bottom). This is a 100% view of the scaled images in Photoshop.​
    None of those three look anything like the full resolution image. I do not recommend using any of those three methods to resize an image.
    When you start saying things like “most reasonable people would do this or that” or Judge, or I should be squinting, or the image “should look correct” I get worried here. That’s not at all empirical. More rabbit hole digging. I’m happy to attempt to continue to figure this theory out, but you and this author have to prove the theory (it's not up to me to dismiss it).​
    Proving that resizing down in gamma space is incorrect is trivial, but I had assumed that, since we were still having this discussion, you would dismiss such a proof. Since that may have been a bad assumption, here we go, using the values from the “your scaling software” image:
    If an image has alternating single pixel lines of sRGB 0,0,0 and 175,175,175 then if the image is resampled down to half size, the resulting image has insufficient resolution to display the separate lines and must replace them with a single solid color. This color should have the average linear intensity of light of the two colors. That color will not necessarily be “halfway between 0 and 175 of whatever arbitrary tone response curve one happens to be using at the time.”
    Decoding sRGB 0 and sRGB 175 gives 0% and approximately 42.896%. The average of the two is approximately 21.43%, which converted back to sRGB rounds to 128. If instead the two were averaged using the completely arbitrary sRGB tone curve, they would average to 87.5. Only one of those two values can be correct, not both.
    The value of 21.43% would be mostly unchanged across all color spaces (black points above 0% could affect it, but they would also modify the original image), whereas the 87.5 could be completely different depending on the totally arbitrary gamma or TRC. If the value 87.5 depends on anything other than the intensity of the original image, it cannot possibly be correct. The 21.43%, which was reached by averaging the linearized values, is therefore the correct one.
    Not quite a formal proof, but clear all the same.
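    For anyone who wants to check the arithmetic, here it is with the standard sRGB formulas (IEC 61966-2-1) in a few lines of Python:
        def srgb_decode(v):        # 8-bit sRGB value -> linear light, 0.0..1.0
            c = v / 255.0
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        def srgb_encode(lin):      # linear light -> 8-bit sRGB value
            c = 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
            return round(c * 255.0)
        avg = (srgb_decode(0) + srgb_decode(175)) / 2
        print(srgb_decode(175))    # ~0.429: about 42.9% linear
        print(avg)                 # ~0.214: about 21.4% linear
        print(srgb_encode(avg))    # 128, the correct blended pixel
        print((0 + 175) / 2)       # 87.5, the naive gamma-space average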
     
  103. When I scale down that image of the Dalai Lama in sRGB, I get a gray blur that is not quite uniform, mostly sRGB values of 127 or 126, with more variation at the very edges.
    When I scale the same image down in gamma 1.0 space, I get a grayish image of the Dalai Lama that looks much like the original.
    At the moment I can only conclude, based on that and your results, that Photoshop’s scaling algorithm does not appear to be correct and may differ somehow from most other software (which tends not to be correct either).
    Finally, the animated gif of the dragonfly was an excellent example of the difference when scaling actual images instead of contrived test cases. The difference is subtle, but real, and once again only one of the two can be correct.
    http://www.4p8.com/eric.brasseur/gamma_austrolestes_annulosus.anim.gif
     
  104. When I scale down that image of the Dalai Lama in sRGB, I get a gray blur that is not quite uniform, mostly sRGB values of 127 or 126, with more variation at the very edges.
    When I scale the same image down in gamma 1.0 space, I get a grayish image of the Dalai Lama that looks much like the original.​
    Agreed. But when I scale down that image with a 2.2 TRC from a V4 sRGB, it actually looks better than the 1.0 TRC version and certainly not gray (no “gray blur that is not quite uniform, mostly sRGB values of 127 or 126”). It's a recognizable downsized image. So how can you say the downsizing is incorrect?
    We might conclude this is a working space issue. But I don’t think we can conclude it's a TRC issue, since one 2.2 TRC produces this gray appearance and another with the same TRC does not. IOW, the argument at this point that only a 1.0 TRC will produce the correct resizing results isn’t a correct assumption.
     
  105. I cannot determine exactly what algorithm Photoshop is using, but based on its performance with the “Your scaling software” image, it does not appear to be correct. An algorithm that is actually correct will be correct for all images, not just some images.
    I do not have a recent version of Photoshop, so I cannot run experiments to figure out what it is doing wrong.
    Just in case there were something wrong with the profile or conversion, what is the result if you downsample this image which is already in a gamma 1.0 space?
    00Xiea-304213584.jpg
     
  106. I just downloaded the trial of Photoshop CS5 and tried resizing the “Your scaling software” and Dalai Lama images to 50% size. If I first converted to a linear gamma space, I got correct output for both. If I used the default sRGB working space, I got incorrect output for “Your scaling software” and a mostly gray image for the Dalai Lama.
    It seems that something about your process is different, however Photoshop CS5 itself acts more or less the same as other versions and most other software, where some operations are performed incorrectly in gamma space.
    It could be argued that this is desired behavior for the software, but it is not reasonable to argue that incorrect final output is correct.
     
  107. It turns out that Photoshop’s 32-bit mode is intrinsically linear.
    This may or may not be useful since there is probably a memory and performance hit compared to 16-bit in a linear color space. Having additional options is a good thing, though.
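    As a back-of-the-envelope illustration of why a plain gamma 1.0 encoding is riskier in integer modes than in float, count how many 8-bit codes land below 25% linear light under each encoding (a pure 2.2 power curve assumed):
        codes = range(256)
        linear_low = sum(1 for v in codes if v / 255 < 0.25)             # linear TRC
        gamma_low = sum(1 for v in codes if (v / 255) ** 2.2 < 0.25)     # gamma 2.2 TRC
        print(linear_low, gamma_low)    # 64 vs 136 codes below 25% linear light
    A gamma encoding packs roughly twice as many codes into those darkest stops; floating point sidesteps the issue because its precision is relative, which is what makes a linear TRC much safer in 32-bit float than in 8 or 16 bit integer.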
     
  108. The scaling effects are correlated with aliasing.
    Every such test image is built by adding fine, unnatural geometric detail.
    I'm not interested in graphics, I'm interested in photos.
     
  109. if you've ever tried to add definition while pulling shadow detail using a curve directly on linearized data, you'll see it becomes a tight spot to tweak with very few allowable points.​
    Yes, thank you for the examples Tim, I agree. The sliders and controls in current software seem to have been designed with the assumption that the data is gamma encoded, therefore often they do not give you a perceptually meaningful level of sensitivity/control on linear data.
    Because sensor data devotes fewer tonal levels to shadow regions there are fewer levels to make up detail​
    My take here is slightly different. The levels are there and they are the same, but the software is not designed to give you smooth access to them in the linear case. The underlying information is always the same 12/14 linear bits that we captured, no more, no less, whether we work in a linear or a gamma encoded color space. It is at its least noisy in its original state; whatever you do to it afterwards can only degrade it, from this perspective (even if imperceptibly). However, as you say in your post, and perhaps for historical reasons, most software today is not designed to work on linear data. Because its sliders, displays and controls are designed to work on gamma encoded data instead, it will not give you the desired level of perceptual control over linear data: expecting gamma encoded data, the sliders act linearly on it, while in the case of linear data they should act logarithmically on it to give us a similar level of control.
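    A quick numeric illustration of that last point (again assuming a pure 2.2 power curve): apply the same fixed-size step, one "slider notch" of 10/255, to encoded and to linear values and compare what it does to the underlying light:
        step = 10 / 255                                    # one fixed-size "slider notch"
        for v in (0.02, 0.10, 0.50):                       # linear light: shadow, low-mid, midtone
            enc_step = (v ** (1 / 2.2) + step) ** 2.2      # notch applied to the encoded value
            lin_step = v + step                            # notch applied to the linear value
            print(v, round(enc_step, 3), round(lin_step, 3))
    On linear data the same notch is a jump of roughly 1.6 stops in the shadows but barely a tenth of a stop at midtone; applied to gamma-encoded data the spread is far smaller. That is exactly the jumpy shadow response described above.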
     
  110. An algorithm that is actually correct will be correct for all images, not just some images.​
    I thought your premise, and certainly the premise of the author of the URL you provided with all the image examples and math, is that it's not correct if it produces the results he shows as “incorrect” without a 1.0 TRC methodology, and “correct” with the 1.0 TRC. Is that not the premise?
    The Dalai Lama image would be wrong in CS5 using the dreaded 2.2 TRC of sRGB. With the 1.0 TRC working space used instead, it's “correct”. This would appear to “prove” the author's point (and yours) about the necessity of a 1.0 TRC in applications that don’t do the correct scaling in a non-1.0 space, yes?
    Yet in the same CS5 app, using the same scaling algorithm that produced the “wrong” results with one 2.2 TRC working space, another nearly identical working space with a 2.2 TRC produces the “correct” results (IMHO, a better result than the 1.0 working space). I don’t know why this works, but it puts a lot of holes in the 1.0 TRC theory proposed here and on that site, wouldn’t you say? Something else is happening in the processing that produces results the author, and presumably you, are suggesting is the “correct” scaling behavior, other than a 1.0 TRC. Would that be a fair assessment this far?
     
  111. Would that be a fair assessment this far?​
    No, a fair assessment would be that you have done something incorrectly. The theory has no holes in it, CS5 behaved the same as GIMP when I tried it, CS5 behaves correctly in 32-bit mode, and I am starting to suspect that you are just trolling.
     
  112. No, a fair assessment would be that you have done something incorrectly.​
    Which part specifically? The two tests using a 1.0 and a 2.2 TRC gamma in sRGB produce exactly the results the site says I should be seeing. So I’m doing something wrong in using another 2.2 TRC sRGB working space that produces the “correct” result? How is this incorrect?
    CS5 behaved the same as GIMP when I tried it, CS5 behaves correctly in 32-bit mode.​
    What do you mean “32 bit mode”? What do you mean correctly? The sRGB 2.2 TRC that ships with CS5 (and all versions of Photoshop) produces the gray effect when sizing down Dalai Lama which is the “incorrect” result no? The 1.0 TRC I build sizes the Dalai Lama correctly as shown on the site so that’s expected no? The 2.2 TRC of the V4 profile however produces the “correct” downsizing. So what exactly is wrong here?
    I am starting to suspect that you are just trolling.​
    Well thank you very much for that! I’ve yet to hear you clearly explain what is wrong here but you sure are quick to resort to calling me a troll. Unless you can clearly explain why the 2.2 TRC gamma in one working space produces the “incorrect” results and why another produces the “correct” results and how this implies that a 1.0 TRC is the only way to produce the correct results otherwise the app is “incorrect” I can only assume you are unhappy with the results that don’t square with your belief system about 1.0 TRC being necessary (the crux of these 5 pages of threads).
     
  113. And FWIW, CS4 produces IDENTICAL results as CS5 in terms of the Dalai Lama testing reported above. One 2.2 TRC working space produces a gray blob when sized 50%, the other does not, the results are just fine.
     
  114. Which part specifically? The two tests using a 1.0 and a 2.2 TRC gamma in sRGB produce exactly the results the site says I should be seeing. So I’m doing something wrong in using another 2.2 TRC sRGB working space that produces the “correct” result? How is this incorrect?​
    Perhaps I have misunderstood what you are doing, what is the difference between “a 2.2 TRC gamma in sRGB” and “another 2.2 TRC sRGB working space”? As I have said before, any correct method will produce a correct result every time, not just some of the time.
    What do you mean “32 bit mode”?​
    Image > Mode > 32 Bits/Channel. I mean that the Mode was set to 32-bit.
    What do you mean correctly? The sRGB 2.2 TRC that ships with CS5 (and all versions of Photoshop) produces the gray effect when sizing down Dalai Lama which is the “incorrect” result no?​
    By correctly I mean that all test images look the same both before and after they are resampled. The gray is an incorrect result.
    The 1.0 TRC I build sizes the Dalai Lama correctly as shown on the site so that’s expected no?​
    Yes, that is expected.
    The 2.2 TRC of the V4 profile however produces the “correct” downsizing. So what exactly is wrong here?​
    So the two different 2.2 TRC profiles mentioned above were V2 and V4? If the two behave inconsistently that would seem to be wrong, though I do not know what the underlying cause is.
    Unless you can clearly explain why the 2.2 TRC gamma in one working space produces the “incorrect” results and why another produces the “correct” results and how this implies that a 1.0 TRC is the only way to produce the correct results otherwise the app is “incorrect” I can only assume you are unhappy with the results that don’t square with your belief system about 1.0 TRC being necessary (the crux of these 5 pages of threads).​
    That does not disprove anything I have said, all it indicates is that the behavior is inconsistent. That might possibly be due to a bug, user error, or perhaps some other cause I have not considered. If Photoshop and/or this particular profile do not behave correctly, that does not undermine my assertion that if not used carefully Photoshop will not behave correctly.
     
  115. I cannot determine exactly what algorithm Photoshop is using, but based on its performance with the “Your scaling software” image, it does not appear to be correct.

    I just downloaded the trial of Photoshop CS5 and tried resizing the “Your scaling software” and Dalai Lama images to 50% size. If I first converted to a linear gamma space, I got correct output for both. If I used the default sRGB working space, I got incorrect output for “Your scaling software” and a mostly gray image for the Dalai Lama.

    CS5 behaved the same as GIMP when I tried it, CS5 behaves correctly in 32-bit mode.​
    Anyone else confused by the various posts from this person? And I’m the troll?
    The behavior in Photoshop is Correct or Incorrect?
    The behavior in Photoshop is Correct because of a 1.0 TRC?
    The behavior in Photoshop is Incorrect because of a lack of a 1.0 TRC?
     
  116. The first sentence was posted before I downloaded CS5 based on the image you posted. The rest was posted after I had actually tried it myself.
    I also just downloaded a V4 sRGB profile from http://www.color.org/srgbprofiles.xalter, and it behaved the same way as Photoshop’s default sRGB profile, in that resizing did not work properly unless Photoshop was using 32 Bits/Channel.
    I suggest experimenting with 32 Bits/Channel. I suspect that all of these test cases will then resample correctly.
     
  117. Perhaps I have misunderstood what you are doing, what is the difference between “a 2.2 TRC gamma in sRGB” and “another 2.2 TRC sRGB working space”?​
    I told you, one is a V4 ICC profile and the original is a V2 version. BOTH have a 2.2 TRC!
    Image > Mode > 32 Bits/Channel. I mean that the Mode was set to 32-bit.​
    ALL my tests have been done in the bit depth of the data from the web site. I have done NOTHING other than Assign the sRGB profile (remember, it was untagged). In the case of the V4 profile, a conversion was made from the V2 sRGB profile to the V4 sRGB profile with a perceptual rendering intent (there is no such table in the V2 profile).
    By correctly I mean that all test images look the same both before and after they are resampled. The gray is an incorrect result.​
    As I have written far too many times now, the V4 profile, a profile with a 2.2 TRC, produces the correct results as you have just defined them! Please explain this in light of the discussions of 1.0 and all other TRC working spaces when sizing this test image.
    So the two different 2.2 TRC profiles mentioned above were V2 and V4? If the two behave inconsistently that would seem to be wrong, though I do not know what the underlying cause is.​
    It's only “wrong” because apparently you don’t like the results! It kind of puts into question the idea that the sizing algorithm is “wrong” unless the data is in a 1.0 TRC. Why they are different I don’t know; it's got as much or more to do with the color conversions than the TRC, as both profiles have the same TRC. The sRGB TRC is what it is. The way colors map using RelCol vs. Perceptual is different; that’s exactly the point of the ICC V4 spec. Instead of calling me names, why don’t you download the V4 sRGB profile, run the same tests and ask your friend with the web page what’s going on here?
    That does not disprove anything I have said, all it indicates is that the behavior is inconsistent.​
    My role or job isn’t to disprove anything; your role and job is to prove your theory and the theory your web site author is making. Saying the behavior is inconsistent is as silly as your point about squinting, or “The most likely outcome”, or “Most reasonable people would judge the ‘correct’ outcome”, and so on. I’m happy to continue this discussion with you if you’ll try using some scientific thought process and leave the unproven belief systems at the door. I have no desire to have a religious debate here. I’ve asked you some questions which you need to answer, or let's just not continue here. Once again:
    The behavior in Photoshop is Correct or Incorrect?
    The behavior in Photoshop is Correct because of a 1.0 TRC?
    The behavior in Photoshop is Incorrect because of a lack of a 1.0 TRC?
    Again, you seem to feel I have to disprove what you are saying, you have to prove what you are saying. I have no dog in this fight. But I have a condition that produces the “correct” behavior using a 2.2 TRC and it appears you and the author of this site you brought up suggest that only a 1.0 TRC is the reason for the correct outcome.
     
  118. I also just downloaded a V4 sRGB profile from http://www.color.org/srgbprofiles.xalter, and it behaved the same way as Photoshop’s default sRGB profile, in that resizing did not work properly unless Photoshop was using 32 Bits/Channel.​
    You converted the Dalai Lama from sRGB (after assigning it, assuming, like me, you got untagged data) to sRGB with the V4 profile using the PERCEPTUAL intent? You must use Perceptual. The RelCol intent will produce the gray downsampled results. This in itself is very interesting and needs further investigation.
    The 32-bit part of this is all new all of a sudden; I see no reason to go there just yet. Just assign sRGB to the Dalai Lama from the site, and convert to sRGB V4 Perceptual.
    I have to go into town now, but you have time to test this and provide your findings.
     
  119. The behavior in Photoshop is Correct or Incorrect?​
    There is more than one behavior in Photoshop. 8 or 16 Bits/Channel with a 1.0 gamma TRC is correct behavior. 8 or 16 bits/Channel with a V2 2.2 gamma TRC is incorrect behavior. 32 Bits/Channel with any TRC is correct behavior.

    The behavior in Photoshop is Correct because of a 1.0 TRC?​
    When the behavior in Photoshop is correct, which it is not always, it is correct because the resizing calculations are taking place in a linear space. It is possible for there to be more than one cause for that, of which a 1.0 TRC is one and 32 Bits/Channel (because it is probably floating point instead of integer) is another.
    The behavior in Photoshop is Incorrect because of a lack of a 1.0 TRC?​
    It is incorrect because of a lack of a linear space when the actual calculations are performed. In Photoshop it seems that a nonlinear TRC is no guarantee of a nonlinear space for calculations.
    Again, you seem to feel I have to disprove what you are saying; you have to prove what you are saying. I have no dog in this fight. But I have a condition that produces the “correct” behavior using a 2.2 TRC, and it appears you and the author of this site you brought up suggest that only a 1.0 TRC is the reason for the correct outcome.​
    Internally, for reasons I do not understand, in that case the calculations seem to be done in a linear space.
     
  120. Using the perceptual rendering intent to convert to the V4 profile has strange effects on the Dalai Lama image but not on the “Your scaling software” image. The appearance visibly changes, which does not seem to be a desired result when converting from one sRGB color space to another sRGB color space.
    In any event, with the image data altered so radically, the particular values that were needed in order to downsample to a gray blob are no longer there, so the resizing behavior is what I would expect, even if the perceptual conversion was not.
     
  121. 8 or 16 Bits/Channel with a V2 2.2 gamma TRC is incorrect behavior.​
    Except when using the V4 profile, which has a 2.2 TRC. The behavior is correct.
    When the behavior in Photoshop is correct, which it is not always, it is correct because the resizing calculations are taking place in a linear space.​
    Except when using the V4 profile, which has a 2.2 TRC. The behavior is correct. And how do you know it’s using a linear calculation? Why would it sometimes use linear and sometimes nonlinear? This is based on the gamma of the correct working space? Why would the behavior of the algorithm change based on two nearly identical (2.2 TRC) color spaces?
    because it is probably floating point instead of integer)​
    Because, or absolutely? You really don’t know, do you?
    It is incorrect because of a lack of a linear space when the actual calculations are performed. In Photoshop it seems that a nonlinear TRC is no guarantee of a nonlinear space for calculations.​
    It seems, or it is? Do you know for sure? When I use a 2.2 TRC from the V4 profile, Photoshop changes its mind and uses this linear calculation, but when I use the same profile with a different rendering intent it doesn’t? Maybe?
    Internally, for reasons I do not understand, in that case the calculations seem to be done in a linear space.​
    You seem awfully sure of this. Can you provide any proof of this underlying processing-TRC calculation?
    Using the perceptual rendering intent to convert to the V4 profile has strange effects on the Dalai Lama image but not on the “Your scaling software” image.​
    I already said that. I don’t know what ‘strange’ is supposed to mean; you continue to be unclear. Do you see the V4 Dalai Lama image convert the same with the V4 profile as with the V2 profile using the perceptual intent? Do you see that when using the Perceptual intent, the results are not “wrong,” at least as you have described what wrong is (“Most reasonable people would judge the ‘correct’ outcome”)? It’s not a gray blob; it looks like an image of the Dalai Lama, right?
    The appearance visibly changes, which does not seem to be a desired result when converting from one sRGB color space to another sRGB color space.​
    Desired? Says who? I like it better. But that’s a judgment call. It’s not a gray blob, right? By the criteria (totally undefined criteria on the site you reference), it’s still incorrect? Really? Or maybe we (I) found a flaw in this theory about correct and incorrect scaling and the role of the gamma of the working space or processing.
    In any event, with the image data altered so radically, the particular values that were needed in order to downsample to a gray blob are no longer there, so the resizing behavior is what I would expect
    OK, I think we are getting somewhere. So you agree that this V4 TRC 2.2 working space isn’t producing the “wrong” results, like the gray blob with the original sRGB V2 working space? So that being the case, let’s go back to the questions I posted above: you feel that something inside Photoshop’s sizing calculations changes on the fly, for correct and incorrect resampling, based on what, and proven how?
    My take (other than that I don’t know why the perceptual table works while others don’t) is that this theory, that a 1.0 TRC working space produces the correct result while a 2.2 does not, is wrong. Because a 2.2 TRC working space does produce sampling that doesn’t create an ugly gray blob.
     
  122. BTW, the reason I’m asking (demanding) you answer the questions about the processing, something you probably can’t without having access to the Photoshop code or an engineer, is because you (and to a much greater degree the web site you reference) have proposed a theory about correct and incorrect processing and the role of gamma encoding. As I said above, I do not have to disprove this, and it’s not in my interest either way. But you (more importantly the author of the site) DO have to prove this theory. I think I found at least one example where the theory’s proponents need to scientifically and factually answer the questions about the process.
    Hundreds of years ago, people would watch a ship sail off into the sunset and fall off the edge of the earth, because everyone “knew” the earth was flat. They had a theory; I guess at the time no one tried to disprove it, but the proponents had little empirical proof that what they believed was true. There certainly was no scientific evidence that these ships fell off the earth, because we now know they did not. This gamma theory is possibly a similar experience. You see the supplied image of the Dalai Lama produce one result with a 1.0 TRC working space and a different result with those that are not 1.0. Now I think I found an exercise where a 2.2 TRC working space does produce similar results to the 1.0 example, and it’s up to the proponents of this 1.0 linear processing belief to explain why this is happening and whether their theory about linear gamma is still sound. Saying this or that may happen sometimes doesn’t wash unless you have proof; otherwise we can all dismiss the theory. The burden of proof is on the proponents of the theory.
    Now you or the web site’s author may be able to get an explanation from an Adobe engineer (I seriously doubt, after the way the piece was written and apparently peer reviewed, that any engineer would even talk to the author). But short of that, guessing and assuming what’s happening under the hood in a product as complex as Photoshop in order to continue promoting this theory is not something I or many intelligent readers are going to accept. And for good reason, no?
     
  123. BTW, please feel free to explain, in light of the above, these comments:
    If an image is resized to be smaller, and this is done in gamma 2.2 space, the resized image will be darker than the original image.​
    We never came to a conclusion as to why, when I sample down the images, the RGB values don’t change a lick.
    I have to disagree. Lots of software performs important operations incorrectly in gamma space. Is the latest Photoshop any better? Up through at least CS2 it did not.​
    This is the crux of the argument, and apparently of the web site you referenced, and why we’ve gone down this rabbit hole, largely of your making.
    I do all (well, most, occasionally I just don’t care) resizing and sharpening in either Lightroom or 16-bit gamma 1.0 space. Doing the same in a gamma 2.2 space would give incorrect results.​
    Again, this has not been proven.
    a fair assessment would be that you have done something incorrectly.​
    Show me the exact steps and I’ll see if you are correct or not.
     
  124. And how do you know it’s using a linear calculation? Why would it sometimes use linear and sometimes nonlinear? This is based on the gamma of the correct working space?​
    I know it is using a linear calculation because the scaling is done correctly. I do not have the faintest idea why it would sometimes use linear and sometimes nonlinear. This is based on the correct brightness of the original image. I do not know if the author of these images intended them to be sRGB or gamma 2.2, but the two are similar enough that for purposes of these tests it does not matter which.
    Why would the behavior of the algorithm change based on two nearly identical (2.2 TRC) color spaces?​
    This is a leading question based on the incorrect assumption that the algorithm changes. The algorithm does not change; it is the image data that is different between the two color spaces. Since the color spaces are not actually very different, I think it is the perceptual table in this profile that is altering the brightness of the Dalai Lama image but not the “Your scaling software” image. If you simply look at it before and after converting to the V4 sRGB with perceptual intent, the brightness of the Dalai Lama image changes.
    Because, or absolutely? You really don’t know, do you?​
    I do not know for certain that it is floating point, no. I suspect that it may be, because floating point is not typically combined with gamma curves, since with floating point the additional precision a gamma curve provides is unnecessary.
    It seems, or it is? Do you know for sure? When I use a 2.2 TRC from the V4 profile, Photoshop changes its mind and uses this linear calculation, but when I use the same profile with a different rendering intent it doesn’t? Maybe?​
    I will word it more strongly: it is. I have observed linear calculations in some cases, e.g., 32 Bits/Channel, and nonlinear calculations in others, e.g., V2 sRGB at 16 Bits/Channel. I have no way of knowing what the internal calculations are, but the input and output values are consistent with linear calculations and gamma space calculations for the respective cases. Again, the calculation is the same either way, but in one case the V4 perceptual table has mangled the image and in the other case the image is unchanged.
    I already said that. I don’t know what ‘strange’ is supposed to mean; you continue to be unclear. Do you see the V4 Dalai Lama image convert the same with the V4 profile as with the V2 profile using the perceptual intent? Do you see that when using the Perceptual intent, the results are not “wrong,” at least as you have described what wrong is (“Most reasonable people would judge the ‘correct’ outcome”)? It’s not a gray blob; it looks like an image of the Dalai Lama, right?​
    The image gets significantly lighter during a perceptual conversion from sRGB to sRGB. I consider that strange. Is “significantly lighter” still unclear, or do I need to post before and after photos? The image data is different between the V2 and V4 perceptual rendering conversion, so of course the output is going to be different. The fact that it is not a gray blob does not mean that the output is not wrong. A gray blob is only one of nearly unlimited ways to get output that is wrong.
    OK, I think we are getting somewhere. So you agree that this V4 TRC 2.2 working space isn’t producing the “wrong” results, like the gray blob with the original sRGB V2 working space?​
    I do think we are getting somewhere, however I think that the V4 TRC 2.2 working space produces results that are also wrong and just less gray.
    So that being the case, let’s go back to the questions I posted above: you feel that something inside Photoshop’s sizing calculations changes on the fly, for correct and incorrect resampling, based on what, and proven how?​
    Other than the linear calculations when the Mode is set to 32 Bits/Channel, I no longer believe that Photoshop’s calculations change, because I now have the additional information that the perceptual conversion is changing the raw image data.
    My take (other than that I don’t know why the perceptual table works while others don’t) is that this theory, that a 1.0 TRC working space produces the correct result while a 2.2 does not, is wrong. Because a 2.2 TRC working space does produce sampling that doesn’t create an ugly gray blob.​
    Again, the perceptual table does not make the calculations correct; they now have input values other than the exact values needed to make the result gray.
     
  125. We never came to a conclusion as to why, when I sample down the images, the RGB values don’t change a lick.​
    Only values where two or more different input pixels were downsampled into a single output pixel will change. You measured the value of a solid color, which would not be expected to change.
    Again, this has not been proven.​
    If you had understood what I had written instead of asking enough irrelevant questions to demonstrate that you did not, you could have determined for yourself that it was probably correct. I am attempting to prove it, but that may not be possible without an Adobe engineer.
    Show me the exact steps and I’ll see if you are correct or not.​
    I have determined since I posted what you were replying to that the perceptual conversion to the V4 profile is the cause of the otherwise unexpected behavior. The behavior of the resizing is now explained, though the behavior of the perceptual conversion is not.
    Not one observation is inconsistent with the claim that many image processing operations are only correctly performed in linear space.
    The dramatically different behavior of Photoshop’s rescaling between 8 or 16 Bits/Channel and 32 Bits/Channel should be a dead giveaway that one of the two is incorrect.
    Finally, I disagree that it is not in your best interests to come to a correct conclusion rather than championing an incorrect belief. Hypothetically, if you were to vehemently argue in favor of ideas that were completely incorrect, it might impact your reputation if it later came to light that the ideas were completely incorrect and there was plenty of information available at the time to determine that they were incorrect.
     
  126. One of the simplest possible test images can be found at the top of http://filmicgames.com/archives/354.
    In this case the correct interpolation between black and white lines is gray at 50% linear intensity. In sRGB, 50% linear intensity encodes to approximately 187.516, which is close enough to the 187 swatch in this image. Incorrectly interpolating in gamma space would result in a value of 127.5, which is close enough to the 128 swatch in this image.
    It is clear in the full resolution image that the average brightness of the alternating lines is approximately the same as the 187 swatch and is much brighter than the 128 swatch. In the version of the image downsampled to 50% by Photoshop CS2, the sides now match the 128 swatch instead, having gotten much darker than in the original image. This is incorrect behavior.
    That is proof of at least a portion of my claims.
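    As a sanity check, the arithmetic is easy to reproduce. Here is a minimal Python sketch of the two averaging rules (my own illustration, not Photoshop’s internals; the helper functions follow the standard piecewise sRGB transfer curve):

        def srgb_decode(code):
            # 8-bit sRGB code value -> linear light in 0..1
            c = code / 255.0
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

        def srgb_encode(light):
            # linear light in 0..1 -> 8-bit sRGB code value
            c = 12.92 * light if light <= 0.0031308 else 1.055 * light ** (1 / 2.4) - 0.055
            return 255.0 * c

        black, white = 0, 255
        # Incorrect: average the encoded code values as if they were light.
        print((black + white) / 2)  # 127.5
        # Correct: average the light, then re-encode for display.
        print(srgb_encode((srgb_decode(black) + srgb_decode(white)) / 2))  # ~187.5

    A downsampler that lands near 187.5 averaged light; one that lands near 127.5 averaged code values.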
     
  127. I know it is using a linear calculation because the scaling is done correctly.​
    So all scaling done without a linear calculation is always incorrect; that’s your point? You can prove that how? Sure sounds like an “I know the ships are sailing off the earth because I saw it with my own two eyes” mentality. Outside of this web site, is there some literature that backs up what you’ve just said? And so your take is that, depending on the rendering intent or the profile, the scaling may be linear or nonlinear? Photoshop does both?
    This is a leading question based on the incorrect assumption that the algorithm changes.​
    Well, you just stated that this has to be the difference, right? The same profile used with a different rendering intent produces either the right or the wrong scaling, so what else is it?
    The algorithm does not change; it is the image data that is different between the two color spaces. Since the color spaces are not actually very different, I think it is the perceptual table in this profile that is altering the brightness of the Dalai Lama image but not the “Your scaling software” image.​
    Then this “issue,” or problem with an incorrect result, is not solely based on the TRC gamma of the working space, right? Or do the incorrect results require some kind of manufactured image? It sounds like the image has to be manufactured to produce the incorrect results the author wishes to use to illustrate his theory, and that’s why the “my image sucks” doesn’t show this behavior depending on the gamma encoding tested. The bottom line after all this is that your idea that “Lots of software performs important operations incorrectly in gamma space” should be retyped to say “if an image is designed to produce an effect that we can say is incorrect after sampling, it will sample incorrectly.” Kind of makes sense, since millions, maybe hundreds of millions, of images have been resampled in Photoshop, and only a few on the fringe seem to find that images like “your image sucks,” but not always the Dalai Lama, exhibit this “problem,” depending on all kinds of mostly insignificant combinations!
    Yet your point continues to be “Lots of software performs important operations incorrectly in gamma space.”
    I have no way of knowing what the internal calculations are, but the input and output values are consistent with linear calculations and gamma space calculations for the respective cases. Again, the calculation is the same either way, but in one case the V4 perceptual table has mangled the image and in the other case the image is unchanged.​
    Mangled? I see. OK. That’s your scientific term for it? Now I’m getting the idea that this is just a theological image-processing rant, not a scientific search for what’s going on here. I’m not interested in discussing theology (must be the atheist in me). You apparently have some kind of belief system in play here that’s not going to take scrutiny in any form: the earth was created in 7 days, forget the carbon dating, or the effect of two very similar profiles on the data and the result of the sampling. Time to dig out of this rabbit hole in the opposite direction!
    The image gets significantly lighter during a perceptual conversion from sRGB to sRGB​
    So what? Do you understand what a Perceptual rendering intent brings to the party that a RelCol doesn’t? Do you understand that if we had the software to build a V4 sRGB profile, each software product would do so differently? Do you CARE that a V4 2.2 TRC profile, not the 1.0 TRC profile, affects the data in a way that puts into question the theory that a linear conversion has to be taking place? Are you the least bit curious about this, or would you rather just stick to a theological idea of image processing and to the idea that “Lots of software performs important operations incorrectly in gamma space”?
    Again, the perceptual table does not make the calculations correct; they now have input values other than the exact values needed to make the result gray.​
    Ah, so yes, we have to build images a fixed way to introduce a result that proves “Lots of software performs important operations incorrectly in gamma space”? Is that the idea? And for images that are not built this way, the results of these operations mean what?
     
  128. It is equally valid to convert to the V4 sRGB color space with absolute or relative colorimetric rendering intent. In both of those cases, the V4 color space behaves like the V2 sRGB color space with respect to scaling of any images, showing that the cause of the discrepancy is the conversion, not the color space.
    The same profile used with a different rendering intent produces either the right or the wrong scaling, so what else is it?​
    • The V2 sRGB profile results in incorrect scaling.
    • A perceptual rendering intent conversion to the V4 sRGB profile significantly alters some images, which are then also incorrectly scaled.
    • Absolute or relative colorimetric rendering intent conversions to the V4 sRGB profile do not significantly alter these particular test images, which are then incorrectly scaled.
    • If Photoshop is set to a Mode of 32 Bits/Channel, any color space should be correctly scaled, though it is possible that the perceptual conversion will still cause the same changes it did before; now, however, they would not interfere with correct scaling.
    I am finished answering irrelevant questions. I would be happy to answer questions from anybody else, or to answer your questions if I believe you actually want to know the answers to them.
    I have one final question for you:
    • If an image with alternating black and white lines is resized to be small enough that the lines are no longer resolved, what should the resulting solid color be?
    If you understand the answer to this question, you should understand the difference between correct and incorrect scaling.
     
  129. Only values where two or more different input pixels were downsampled into a single output pixel will change. You measured the value of a solid color, which would not be expected to change.​
    I’m perfectly aware that on complex images, where four differing pixels are sampled into one, it’s quite likely the single pixel may have a different value. That’s kind of obvious. Your statement is clear: “If an image is resized to be smaller, and this is done in gamma 2.2 space, the resized image will be darker than the original image.” Again, SO WHAT? You are treating this as if it’s wrong and, worse, suggesting that this is a problem or points out something to do with linear vs. nonlinear treatment.
    Finally, I disagree that it is not in your best interests to come to a correct conclusion rather than championing an incorrect belief.​
    Of course you do. You apparently have a stake for some reason in this idea. I don’t. You and your friend can be perfectly correct, and if you can prove this to the group (something scientists do; it’s called peer review), great. So far you haven’t. You’ve expressed no solid reproducible steps for your theory. You’ve spouted image-processing facts (facts in your mind) but haven’t proven them and, worse, haven’t done any work to uncover why my V4 profile with a 2.2 TRC doesn’t behave as the theory says it should. In the end, it doesn’t matter. Adobe isn’t going to change the product because no one has provided very good evidence they should. Most of us are going to dismiss what you are saying because you can’t prove your points. You are the one who has to ask yourself why, based on the current evidence and your knowledge of Photoshop code and processing, you are so damn sure your belief system is sound. Maybe you care, maybe you just want to believe in something; again, I’ve got no interest in continuing a theological discussion of image processing. Your call. Get the facts (not the maybe, probably, I think), or produce a series of steps anyone here can follow to prove your point. Or let’s just agree to disagree. Makes no difference to me.
    If you had understood what I had written instead of asking enough irrelevant questions to demonstrate that you did not, you could have determined for yourself that it was probably correct.​
    Probably? Not good enough. It is or it isn’t, and once again, it’s your job to prove the point! At the time, the ships probably did sail off the edge of the earth. Probably...
    Not one observation is inconsistent with the claim that many image processing operations are only correctly performed in linear space.​
    Just as not one observation is consistent with the claim that many image processing operations are only correctly performed in linear space! Again, the burden of proof is on you! One operation raises serious questions about the theory at this point, and until you figure out why, it will remain a question and a burden on your theory.
    The dramatically different behavior of Photoshop’s rescaling between 8 or 16 Bits/Channel and 32 Bits/Channel should be a dead giveaway that one of the two is incorrect.​
    No more than ships with three large sails fell off the earth faster than those with two. You may indeed be right, but you’ve provided no reason for this to be true. You have to do that or the behavior is moot.
     
  130. I am finished answering irrelevant questions. I would be happy to answer questions from anybody else, or to answer your questions if I believe you actually want to know the answers to them.​
    Just be sure you ask questions that produce the desired results the authors expect from their “theory.” Questions that question the validity of the “theory,” or questions whose answers have to be backed up with actual facts, will be ignored.
     
  131. Again in case you missed it:
    If an image with alternating black and white lines is resized to be small enough that the lines are no longer resolved, what should the resulting solid color be?
    Now the burden of proof is upon you to prove that you are competent.
     
  132. No Joe, it doesn’t. The questions about the validity of YOUR theory are what’s in play here. It’s a nice trick to steer all this in my direction. You and I are done here, because you have provided more questions without answers, and more holes in your theory of the superiority of linear image processing and the incorrect work of Adobe, than you’ve answered, and now you are going to ask more questions. It’s answers we seek. If you ever get them, let us know. In the meantime, we have images to screw up in gamma-corrected space (at least according to you).
    So just what company do you write imaging code for as your day job? Just who is Joe C? (Anonymous posters, those with no info about them, do always raise suspicions, especially when they are so quick to call others trolls.)
     
  133. Why are you unwilling to answer the one simple question? It is absolutely directly relevant to my claim, as you might see for yourself if you bothered to answer it.
     
    Why are you unwilling to answer the one simple question?​
    Why are you unwilling to answer so many complicated questions?
     
  135. I answered as many as I could to the best of my ability until I determined that it was not productive. If you really want me to answer the last set I would be happy to, but is there really any point?
     
  136. I answered as many as I could to the best of my ability...​
    Then you need some outside help in actually answering them to any degree of satisfaction on this end. Most of the answers were vague and dismissive. Until you figure out what’s really happening here and why, why the V4 profile does what it does, why one test image produces vastly different results than another when the site YOU reference says they should produce the same results, yes, there is little point in continuing.
     
  137. I have as good an understanding of those things as is possible without reading Photoshop code. As you correctly point out, I have been unsuccessful in communicating those answers.
    Can you please answer the question regarding the alternating black and white lines? It will probably be more productive than the last 10 pages of this thread.
     
  138. I would answer the question myself, however if I answer it, then it is likely that you will just dismiss my answer, accomplishing nothing.
     
  139. Not that it matters, but I just made myself a file with horizontal black and white stripes. Cut it in half, resolution-wise, a bunch of times. (All working in CS5.) The 32-bit file (well, converted to 32-bit) in super-small size (the file is now 30 pixels) looks pretty much even gray at 100%. The file in 16-bit looks like a couple stripes, though uneven. When you enlarge (just display) the 30-pixel files, the 32-bit one still looks mostly gray, though with darker bars at the top and bottom. The 16-bit one still looks like stripes, though they aren't uniformly black and white.
    As near as I can tell, this doesn't really prove anything, I'm afraid. The original file was stripes, so it could easily be argued that having stripes at all makes the 16-bit version more "right". The even gray display may be logically correct, but the original file did have stripes. (I'm passing on the squint test, since I wasn't allowed to do that for my driving test.)
     
  140. Crapcrapcrap. Shouldn't write messages before I'm fully awake.
    Thanks to mislabeling the files, I described them exactly backwards. It is the 16-bit file that appears gray (mostly) and the 32-bit file that appears striped, though unevenly. I still believe either could be argued to "appear" correct, since logically the fully-reduced file would be gray, and since the original file did have stripes, so having some stripes is arguably a closer representation.
    It does make me glad that I don't actually reduce my photos to the point of illegibility, though I'm sure some folks would suggest that it might be an improvement to do so on any given day.
     
  141. I haven't learned anything that is useable with regards to real world digital photography in this entire thread.
    I do feel like I've been trolled.
     
  142. I am interested in the initial question, but the thread seems to have headed off in another direction that I could not follow. I would like to learn more about gamma. The following link, provided on page 1 of this thread, looks like another fine example from Mr Koren's web pages:
    http://www.normankoren.com/makingfineprints4.html#BW_testchart
    I was wondering if anyone could provide further links that may help me get to grips with this issue, as I have to be honest and say it does seem strange that we apply gamma more than once, but I am not in a position to debate or discuss until I know more about it.
    Regards Andrew
     
  143. What I think I have learned is that, with few notable exceptions like Lightroom, most software we photographers use today was written in the days when gamma encoding was necessary - and it does not lend itself well to a linear workflow. Today, though, I believe (happy to be proven wrong) that a linear workflow is both possible and beneficial for us in terms of being able to a) preserve the most information that was in the original capture, and b) work on it with minimal undesired data distortion.
    The point I was hoping we would get to is that, again for us self-contained photographers, we really do not need to gamma correct the underlying linear data at all, but simply apply gamma instead to the version of the 16-bit data that we pass off for output: to the video card, to the printer drivers, to the Jpeg encoder, etc. - thereby keeping the underlying linear data whole. It looks like a linear workflow is definitely not ready for primetime in Capture NX2, my raw converter of choice.
    As far as sources are concerned, I have found the following very interesting (the first three especially):
    Gamma correction - Stanford Applet
    Tonal quality and dynamic range in digital cameras - Norman Koren
    Gamma FAQ - Frequently Asked Questions about Gamma - Charles Poynton
    Gamma correction - Wikipedia, the free encyclopedia
    American Cinematographer: Color-Space Conundrum Part 2
    colorspace formulas
    as well as the Burger and Russ textbooks that I mentioned in a previous post.
     
  144. If the goal is to simulate the way image colors blur together in the eye, then the calculation should be in the linear XYZ space. Other linear spaces derived from XYZ space, such as one of the standard ICC working spaces converted to linear (with the gamma changed to 1 as described elsewhere in this thread), will work as well.
    Adding or averaging values in a gamma-corrected space gives distorted results because, in such a space, 2^gamma + 2^gamma does not equal 4^gamma (the two are equal only when gamma = 1).
    Another link about gamma (sorry if it was mentioned before): http://www.all-in-one.ee/~dersch/gamma/gamma.html
    Also note that the Exposure Tool in Photoshop works in linear gamma.
    Linear can also give you better tones - setting black point in a linear space or with the Exposure Tool is IMO superior to using curves or levels in a gamma space. In linear, you get an effect that more-realistically subtracts any diffuse, additive flare light from the shadows without damaging the mid-tones.
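    To put numbers on that, a quick Python check (a sketch on my part, using a pure 2.2 power law rather than the piecewise sRGB curve):

        g = 2.2
        print(2**g + 2**g)  # 2^g + 2^g = 2^(g+1), about 9.19
        print(4**g)         # 4^g = 2^(2g), about 21.11 - equal only when g = 1

        # The same nonlinearity distorts averages of encoded pixel values:
        a, b = 0.2, 0.8                    # two linear light levels
        enc = lambda L: L ** (1 / g)       # gamma encode
        dec = lambda v: v ** g             # gamma decode
        print(dec((enc(a) + enc(b)) / 2))  # ~0.445: averaged in gamma space
        print((a + b) / 2)                 # 0.5: averaged in linear space

    The gamma-space average comes out darker than the true mixture of the two light levels, which is the distortion described above.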
     
  145. Sorry if this issue has already been dealt with, I've only had time to skim through most of the posts.
    What nobody appears to have raised is the issue of levels per density step, or levels per stop. In linear space we rapidly lose tonal resolution as we move down from full white to the darker tones. That's the main use of a gamma curve: to even out the distribution of bits over the brightness range. The 65,536 levels per channel of a 16-bit space are all very well, but if we throw half of them away in the brightest stop, then that kind of defeats the object. Doesn't it?
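    To put rough numbers on that, here is a quick Python sketch (my own arithmetic, assuming straight linear quantisation of a 0-65535 range and ignoring sensor noise):

        top = 65535.0
        for stop in range(8):
            hi = top / (2 ** stop)  # top of this stop
            lo = hi / 2             # bottom of this stop
            print('stop %d below clip: ~%d code values' % (stop + 1, int(hi - lo)))

    The first stop below clipping gets about 32,767 code values - half of them - while the eighth stop down gets only about 256. That imbalance is exactly what a gamma curve evens out.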
    Personally I would like camera manufacturers to introduce a (preferably controllable) non-linear amplification function before the A/D conversion stage. Applying a curve after digitisation is a bit (pun) daft really, but better than no curve at all. Applying an analogue log amplification stage prior to digitisation would improve signal to noise by separating digital jitter and A/D converter noise from the true sensor signal. It would also automatically increase the number of digital levels allocated to the darker tones.
    Oh, and BTW Nikon et al, this constitutes timed and dated "prior publication" before you all go running off to your patent lawyers.
     
  146. @Rodeo - re incorporating a hardware log amp stage, you're too late. ;-) I suggested this in a private email to Tim L a couple of days ago, and, as I recall, mused about this a couple of years ago on photo.net, for exactly the same reason that you gave. FWIW, one can make a decent approximation to a log curve with just a couple more transistors per pixel in VLSI, so it's not as far-fetched as you might think. Unfortunately, my brother has been in intensive care for the past two weeks, so I haven't had time to participate to any extent in this thread.
    Cheers,
    Tom M
     
  147. re incorporating a hardware log amp stage, you're too late. ;-)​
    Way too late ;-)
    Applying a curve after digitisation is a bit (pun) daft really, but better than no curve at all.​
    Is it? My understanding is that by applying a gamma correction to 16-bit linear data, all you are doing is shifting the existing bits around. This does not 'create' more steps, add information or reduce banding in itself. The number of steps is the same in both cases (linearly spaced in one case and exponentially spread out in the other), but it should make virtually no difference to the number of real or perceived levels per stop in the end* (don't forget, we are staying in 16 bits, not compressing down to 8, as in Jpeg). Better to keep the data linear in the working space - with sliders that give you perceptually meaningful control (i.e. logarithmic/interpolated in some cases).
    *For 16-bit images I believe this holds theoretically true up to a contrast ratio of 655:1 (why? - hint: in Photoshop it would be half as much), and more in practice, with post-processing software optimised for a linear workflow, which we know is rather rare today.
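    As a quick check of the "shifting the existing bits around" point, a one-off Python sketch (my own, assuming a pure 2.2 power law and 16-bit integer storage):

        levels = range(65536)
        encoded = {round(((v / 65535.0) ** (1 / 2.2)) * 65535) for v in levels}
        print(len(encoded))  # at most 65536: the encode creates no new steps

    Being at best a one-to-one remapping, the gamma encode cannot manufacture tonal steps; it only redistributes the ones we already have (in the highlights several linear codes even collapse into a single encoded value).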
    @Cliff: thanks for the interesting details.
    @Tom: sorry to hear about your brother. All the best.
     
  148. It's about the preview and nothing else.
    You can't edit what you can't see.
    If the tools designed for you to see this data don't let you manipulate it in a graceful way, then it doesn't matter what state of encoding the data is in. Linear data is a bitch to edit. So what algorithm controls the preview so you can see what you're editing while placing adjustment points on the curve? If you place a curve point directly on linear data, someone has to map it so it can be seen on the preview as it relates to the placement on the curve. Try doing that directly on linear data.
    Once you slide that slider or adjust a point on a curve and see a change in the preview, whether desired or undesired, all bets are off on what it's doing to the data. No one cares. And no one can prove or connect the dots in figuring out what the algorithms are doing to the data under the hood in rendering the preview we see on screen.
    All we know is that we have a reasonable facsimile of what we saw when we tripped the shutter.
    Again! How do you explain all the clean, noiseless, bandless shadow detail assigning a 1.0 gamma sRGB profile to Jack Hogan's normalized 2.2 gamma encoded screenshot compressed to jpeg viewed at 100% in a web browser? This was the premise behind Jack Hogan's proof of linear over gamma encoded processing.
    I noticed everyone skipped over my demonstration of that point.
     
  149. Linear can also give you better tones - setting black point in a linear space or with the Exposure Tool is IMO superior to using curves or levels in a gamma space. In linear, you get an effect that more-realistically subtracts any diffuse, additive flare light from the shadows without damaging the mid-tones.​
    Cliff, can you prove this with a screenshot?
    Make sure you assign your custom display profile to the screenshot and convert to sRGB for us to see what you see on your hardware calibrated display.
     
  150. If the tools designed for you to see this data don't let you manipulate it in such a graceful way then it doesn't matter what state of encoding the data is in. Linear data is a bitch to edit.
    I noticed everyone skipped over my demonstration of that point.​
    Both points well taken!
     
  151. If the tools designed for you to see this data don't let you manipulate it in such a graceful way then it doesn't matter what state of encoding the data is in.
    Tim, I am afraid that, as far as my beloved Capture NX2 is concerned, I am coming to that sad conclusion. However, that does not mean that if the software were properly designed it wouldn’t be better for it - and linear data would be just as easy to edit as gamma-corrected data.
    I noticed everyone skipped over my demonstration of that point.
    Yeah, I think it went over most of our heads: encoded with gamma 2.2, applied (not converted to) gamma 1.0, and displayed by the color-managed browser of your choice. I’ve been thinking about gamma for a couple of weeks and I think I understand it quite well, but I haven’t gotten my head quite wrapped around that case yet. I wasn’t trying to prove one was better than the other. In fact that image was just to prove what a non-color-managed application would show. However, to compare apples to apples you should apply your test to the picture showing Photoshop displaying both the Gamma 1.0 and Gamma 2.2 pictures. I would expect both to be more or less the same (the Gamma 2.2 one perhaps slightly better because it is slightly brighter to start with) - this is the breakeven scenario after all.
     
  152. Most digital cameras capture fewer than 65,535 electrons per subpixel, making 16-bit linear a good format indeed for digital raw images, theoretically allowing for no quantization error to be added by the encoding beyond what was already present in the electrons. So far, the limited dynamic range of the read circuitry has made 14-bit linear adequate for most cameras, where the 15th and 16th bits would have been mostly noise.
    Also XYZ space is not the same as LMS (Long Medium Short) space, which should be better for matching since it is based more directly on the human eye than XYZ is. I believe that the ICC is aware of the limitations of XYZ space and is discussing eventually moving to a better connection space.
    Finally, editing linear data appears to be as straightforward as using 32 Bits/Channel in Photoshop. I admit that this change in workflow is not free, but it seems to be practical in many situations.
     
  153. However, that does not mean that if the software were properly designed it wouldn’t be better for it - and linear data would be just as easy to edit as gamma-corrected data.​
    True. But someone has to prove to the engineers and bean counters that the effort is worthwhile. Code is expensive to write. Someone has to document the new functionality, and tech support has to be able to answer new questions that could arise. Someone has to justify the expense in lieu of another feature that may be far more useful to the user base. If a tiny group of end users “feel” a linear encoding workflow is better (and they have to demonstrate this), it’s still an uphill battle to convince a product manager it’s worth doing. I think I suggested this pages back, in terms of talking to the Adobe Photoshop team. But before that even happens, well-spoken and respected users have to first prove there is even a need, and they have to do this in a pretty specific way; at least that’s been my experience with Adobe and a few other companies. Grassy-knoll theories and saying publicly that a product is “broken” isn’t the right approach. That doesn’t even get the team to consider the issue; such considerations take place long before anyone with the power to implement a feature and weigh the cost vs. benefit comes into play.
    Solutions looking for problems don’t fly.
     
  154. I have determined experimentally that attempting to rebut similar statements tends to be unproductive, so I will not do so.
    I caution others that this point of view appears similar to the ones that caused the Dark Ages. Most software vendors are constantly striving to improve their products, and I suspect that includes Adobe.
    Finally, since it appears nobody else on Photo.net can or will back me against such views, I am afraid I do not fit in here and intend to head elsewhere.
    Farewell to all.
     
  155. Most software vendors are constantly striving to improve their products, and I suspect that includes Adobe.​
    As an Alpha and Beta tester I can say with near certainty that Adobe strives to improve its products! Some think that they don’t go far enough because a pet-project feature isn’t implemented, but they haven’t thought very far as to why. Or they simply do a very poor job of communicating their needs to the company, while thinking their needs have a higher value than other users’ needs. Or they think their wants are based on a real need!
    Over on Luminous Landscape is an interesting discussion about soft proofing and the role of the Gamut Warning. One user suggested that it would be useful if the Info palette, which today includes a feature that provides the RGB or CMYK values of the conversion to the color space from the Customize Proof Setup, were optionally to show him this in Lab. His reasons for doing so seem quite sound, and the implementation is hardly big engineering. It’s an interesting idea and he’s done a good job expressing why such functionality would be useful (and more useful than a legacy feature, Gamut Warning, that’s really just around because there’s more work in removing it). Anyway, it’s a good conversation and one that might actually have some possibility of getting the ear of Adobe engineers!
    Finally, since it appears nobody else on Photo.net can or will back me against such views, I am afraid I do not fit in here and intend to head elsewhere.​
    Ah, so it’s about getting backing, otherwise you’ll leave the sandbox? Well, OK then.
     
    Cliff, can you prove this with a screenshot?
    Make sure you assign your custom display profile to the screenshot and convert to sRGB for us to see what you see on your hardware calibrated display.​
    I don't see the point of a screen shot.
    Here is a simple image that demonstrates the concept - you can easily duplicate it for yourself.
    The top third is the original ramp.
    The second third has the black point lowered in sRGB.
    The third third has the black point lowered in sRGB with a linear gamma.
    Clearly, lowering the black point in gamma has an effect that extends far up along the tonal range.
    I find that setting the black point in linear looks better on real images and contributes to a 3D look.
    00XjjP-305149584.jpg
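    For anyone who wants to put numbers on the ramp rather than eyeball it, here is a small Python sketch (my illustration only - a pure 2.2 power law standing in for sRGB, with an arbitrary example flare offset):

        g = 2.2
        enc = lambda L: L ** (1 / g)  # gamma encode
        offset = 0.01                 # flare light to subtract, linear units

        for L in (0.02, 0.1, 0.2, 0.5):  # shadow-to-midtone samples
            in_linear = enc(max(L - offset, 0.0))      # black point set in linear
            in_gamma = max(enc(L) - enc(offset), 0.0)  # black point set in gamma
            print('L=%.2f  linear: %.3f  gamma: %.3f' % (L, in_linear, in_gamma))

    The gamma-space subtraction pulls encoded values down by a constant amount all the way up the scale (at L=0.5 the output drops from about 0.73 to 0.61), while the linear-space subtraction barely touches the midtones - which is why the effect is visible so far up the middle ramp.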
     
  157. Joe C, I for one would like you to stay. I find your contributions interesting and as competent and well informed as any poster who is not a color scientist or does not do this as part of their full time job can be. I do agree that Photo.net can come across as holier than thou and pompous at times, but in the end they are good folks.
    Finally, editing linear data appears to be as straightforward as using 32 Bits/Channel in Photoshop.​
    I am not sure I understand. If you mean that with the increased resolution of 32 bits it doesn't really matter whether the underlying data is linear or gamma corrected, I agree entirely.
     
  158. Joe C wrote:
    Also XYZ space is not the same as LMS (Long Medium Short) space, which should be better for matching since it is based more directly on the human eye than XYZ is. I believe that the ICC is aware of the limitations of XYZ space and is discussing eventually moving to a better connection space.​
    LMS is just another linear transformation of XYZ (http://en.wikipedia.org/wiki/LMS_color_space) and will behave the same as linear ICC working spaces with regard to color mixing. Yes, LMS or "cone space" is important for chromatic adaptation or white balancing, but that's not what's being discussed in this thread.
     
  159. I like to take photographs. I am pleased with the incredible technology we have today. I shoot with a Canon 5D Mark II and print on an Epson 7900. The prints rival darkroom prints from 4x5 negatives (we have done tons of testing).
    This is interesting reading and valid points are being made.
    I am blessed to sell my prints regularly even in this economy. Rarely do I sell prints to other photographers. Most of my clients are educated enough about photography to know a good print from a bad print even if they do not technically know why.
    I believe that the advances in digital photography have progressed to the point where I can just take pictures and have fun.
    My advice is just take photographs, print, and enjoy.
    I have never lost a sale on a print due to any of the technical stuff you guys are discussing. It does sound like, however, that you know a lot about it.
     
  160. Cliff, thanks for making the effort to post those gradients.
    I tried out your suggestion of setting the black point in linear space using Photoshop's Exposure tool on the test image I posted earlier in this thread. It does gracefully and gradually open up shadows without inducing the abrupt and grainy transitions to black that it does in a gamma-encoded space. But as I noticed in your gradients using this method, it has the opposite effect on highlight tones transitioning to absolute white.
    The X-rite CC chart white patch kept clipping as I adjusted the Exposure and Gamma sliders after first setting Offset for the black point, just to get the correct-to-scene look I could get working in ACR. The image did have a 3D feel to it, but the grass in that test shot ended up being too dark for a scene that was lit by 10 AM direct sunlight.
    However, I was amazed to find the gray patches and color Lab readings were close to dead on using this method, which I couldn't get in ACR without making the entire image look slightly dark and dim.
     
  161. Cliff, I assume you are talking about setting the black point in Photoshop, not in a raw converter, correct? Assuming so, after you have set your black point in Gamma=1, do you switch back to a gamma encoded space for further processing?
    @Scott: I like your eye, man. As for myself, I am afraid that the old saying applies: 'Those who can, do. Those who can't... talk about it.'
     
  162. Tim, I'm glad that you're able to see the effect! I'm not sure what you mean about the highlights, and I only see a small, manipulated Macbeth shot of yours on page 10 of this thread, so I can't try to duplicate what you're seeing with the grass.
    Jack, I'm talking about Photoshop, but (without knowing what the raw converter Black controls really do under the hood) assume it also applies to raw converters that work on the camera data linearly.
    If I'm starting with a linear file, I usually just keep it in linear until there is a reason to convert to gamma. I use a default working space with ProphotoRGB primaries and gamma 1. Once you make yourself such a working space profile, working in linear is easy. If I have some random 8-bit sRGB file, I switch it to 16-bit mode and convert to the linear working space. But as mentioned before, some things actually work better in gamma space, such as Curves - also some noise-reduction and sharpening methods. It all depends. If you work in 16 bits I don't think it really hurts even to switch back and forth.
     
  163. Tim, I'm glad that you're able to see the effect! I'm not sure what you mean about the highlights, and I only see a small, manipulated Macbeth shot of yours on page 10 of this thread, so I can't try to duplicate what you're seeing with the grass.​
    See the screenshots below. I'll follow with a 100% crop of shadow detail made by the two edit methods.
    I think the cause of the abrupt clipping of the CC chart white patch using the Exposure slider in linear space is probably due in part to the fact that I had to assign a 1.0 gamma profile to a 1.8 gamma encoded image out of ACR. I adjusted the Exposure tool so that the white patch only maxed out at 250 RGB when converting to sRGB for the web. In linear space it read 241 RGB in ProPhoto 1.0 gamma, or 98 L in Lab space, read from Photoshop's info palette.
    00XkPY-305823584.jpg
     
  164. Here's the shadow detail crop for the two different processes.
    00XkPa-305825584.jpg
     
  165. Tim, I'm confused by your process. I'm not convinced you are working with a linear file. You describe assigning the linear profile rather than converting to it, then applying additional gamma in the Exposure tool.
    You should just convert the output from ACR to the ProphotoRGB Linear profile - that's all you need to do to have a linear file. Then you can adjust the black point with exposure, levels, curves, whatever you want - all tools will be working in linear.
    Or am I missing something due to turkey-induced grogginess...
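    If it helps, the assign-versus-convert difference is easy to see numerically. A Python sketch (my own illustration; with identical ProPhoto primaries on both sides, the conversion reduces to a TRC change, idealized here as pure power laws):

        v = 0.5           # a pixel value in the gamma 1.8 file
        light = v ** 1.8  # the light level it represents, ~0.287

        # CONVERT to gamma 1.0: the stored number changes to the linear
        # light value itself, so the appearance is preserved.
        print(light)      # ~0.287 now stored in the file

        # ASSIGN gamma 1.0: the stored number stays 0.5 but is now read
        # as linear light, so the image displays brighter than intended.
        print(v)          # 0.5, reinterpreted - the appearance shifts

    That is why converting to the linear profile should not change the screen appearance at all, while assigning one does.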
     
  166. You should just convert the output from ACR to the ProphotoRGB Linear profile - that's all you need to do to have a linear file.​
    Even if the data coming out of ACR has been encoded into 1.8 gamma ProPhotoRGB color space?
    I tried what you suggested by converting directly from 1.8 gamma ProPhotoRGB to 1.0 gamma ProPhotoRGB and got pretty much the same dim, low-contrast rendering; only this time I prevented the white from clipping by reversing the Exposure settings to...
    Exposure=+1.0
    Offset=-.0012 (black point)
    Gamma=1.10
    No other slider combinations would give me the contrast and vibrance I got in ACR. And I'll admit I'm not too clear on how to use the Exposure tool. I haven't read the instructions, if there are any. I guess I should google it. Maybe someone came up with a surefire method. Hope their definition of linear is the same as what I get out of ACR.
    I really suspect that imaging software's definitions of linear, as they relate to its editing tools, are not all engineered the same or guaranteed to work the same across different imaging software brands, or even within one brand.
    I have no idea what defines linear data, what it's supposed to look like on a 2.2 gamma display, or how to know if it actually is linear as defined by what came off the sensor.
    I have a feeling those who write imaging software that claims linear output are reverse engineering their programming methods to match up to whatever their demosaicing software gives them. Whether that's in a linear space is anyone's guess.
     
  167. Even if the data coming out of ACR has been encoded into 1.8 gamma ProPhotoRGB color space?​
    Definitely.
    I tried what you suggested by converting directly from 1.8 gamma ProPhotoRGB to 1.0 gamma ProPhotoRGB and got pretty much the same dim, low-contrast rendering; only this time I prevented the white from clipping by reversing the Exposure settings to...​
    When you convert to the linear profile, the screen appearance should not change at all. Photoshop will display it properly. I don't know why you have a "dim, low-contrast rendering."
    Exposure=+1.0
    Offset=-.0012 (black point)
    Gamma=1.10​
    All three of those settings are affecting the black point. The Gamma setting is making it non-linear. Suggest you set exposure, then offset. Don't change gamma for now.
    No other slider combinations would give me the contrast and vibrance I got in ACR.​
    You're asking too much from the Exposure tool - it doesn't replace all of the ACR tools. Concentrate on the shadows - try matching an ACR rendering that has only the exposure and black point adjusted.
    The idea is simply that some flare light can be subtracted from images in a realistic way by setting the black point in linear space. Setting the black point in certain raw converters accomplishes this. It can also be done with the Exposure Tool, which apparently operates internally in linear no matter what the gamma of the color profile. Or the image can be converted to a linear space to use Levels, Curves, etc., for setting the black point.
     
    I think the cause of the abrupt clipping of the CC chart white patch using the Exposure slider in linear space is probably due in part to the fact that I had to assign a 1.0 gamma profile to a 1.8 gamma encoded image out of ACR.​
    I’d think you’d want to encode out of the converter using a 1.0 TRC working space (modified in Photoshop), not assign this to a 1.8 TRC image. You can’t do that with ACR - it’s hardwired for four working spaces - but you could with Lightroom.
    You should just convert the output from ACR to the ProphotoRGB Linear profile.​
    Right, except it will not allow this. Lightroom can. Just build the modified profile and select it in the Export dialog. Any RGB color space can be selected from the popup after selecting “Other” and placing a check next to the profile you want to use.
     
    Cliff, the test image doesn't change when I convert to G1 ProPhotoRGB. Sorry for not being clearer. I shouldn't try to say everything in one sentence; it leads to word-reference mix-ups.
    It looks dim AFTER I try to make the image look the best it can using the Exposure tool while keeping the white patch from clipping. See below the dim look I'm referring to, from setting Exposure to 1.12 and Offset to -.0014. The Lab reading for the CC chart white patch maxes at 98 L, or 252 RGB in sRGB space.
    It still looks dim, and shadow definition (shadow "foggies"/flare elimination) isn't better than what I got in ACR. This seems to be way more trouble than it's worth.
    This experiment did get me to try zeroing out my ACR settings to get a non-gamma-corrected (darkish) linear preview so I could come up with different curves without relying only on sliders and the default contrast curve.
    This method DOES open up a lot of shadow detail while keeping noise to a minimum, just by grabbing a point on the curve at 5/5 and pushing up, creating a sort of gamma-shaped curve. I can then apply further tweaks along this curve in shaping contrast and definition. Come back with slight tweaks to Recovery/Fill and Contrast and Clarity and I'm done. It seems to be a quicker, more methodical way of working than my usual back-and-forth tinkering. I've been relying too much on Recovery/Fill to correct these types of high-contrast shots and getting halos along light/dark edges. This linear approach in ACR minimizes the use of these tools.
    Unfortunately these settings only work on this test image and others I took at the same time of the same scene, just at different angles. I was hoping to find a one-size-fits-all setting with this linear approach so I don't have to edit every one of my landscape and other outdoor shots from scratch. Applying saved presets creates more work than just starting from scratch. Bummer!
    Thanks for the patience and feedback on this subject.
    Back to the digital PP grindstone. A cold front passed through, making for one of those super-crisp, clear, not-a-cloud-in-a-gorgeous-blue-sky days, and I just shot 90 Raws today. All of them high contrast, and the presets from this test image make them flat and washed out. But I did use a different lens, so I guess I need to create a DNG profile for that.
    00Xkou-306111584.jpg
     
    This seems to be way more trouble than it's worth.​
    Tim, I think you're misunderstanding me.
    I am not suggesting that you use the Exposure tool instead of the tools in ACR. When you are setting the blacks in ACR, you are already doing it in linear! So for your particular workflow using ACR, you're not going to see any advantage. You can thank the wise Adobe engineers for that.
    Once outside ACR, if you want to perform a linear operation, you can just use Exposure, without having to change to a linear color space! Once outside of ACR, if you have to tweak the black point, you can get a result similar to what ACR gives by your choice of tool - how is that any "trouble"?
    But the point again in the context of the thread is that the results of setting black point in linear is visibly different vs. setting it in gamma.
     
  171. Cliff, I finally figured out what you were trying to communicate. The light bulb just went off for me and now I see exactly what you're talking about. Sorry for the misunderstanding.
    Really! I truly am! I could kick myself. I've been going about this the wrong way all this time.
    I couldn't believe the results I got using the Exposure slider to open up NORMALIZED (close to finished) previews out of ACR.
    I just now remembered you mentioning this, it seems years ago, over at the Adobe forums, discussing linear rendering out of ACR for DNG camera profile building. The meaning of linear data and its associated preview confused me from my scanner days and from other Raw converters that have this setting. You're saying CONVERT to a linear space. The preview doesn't have to reflect a linear rendering for setting the black point.
    I had always assumed you were talking about using the Exposure slider on "linearized" (zeroed-out ACR settings) previews that give the familiar darkish look, in trying to open up the shadows. You're talking about using the Exposure slider to open up shadow clarity instead of relying heavily on ACR's Fill slider or pinch points in the Curve tool.
    You're right! I just used the Exposure tool on a finished image (see crop sample below) that had given me trouble a while back, where I just gave up and let some cluttered rock detail along a river bank cast in dark shadow remain dark and murky. The Highlight/Shadow tool is too confusing and gives abrupt transitions if you go too far. The Exposure tool seems to have given me an extra 1/2 to a full stop of usable dynamic range that would posterize using curves in ACR.
    ACR's Contrast and Fill sliders reach too far into the rest of the tonal scale of the image, where they end up canceling each other out. The Exposure tool behaves quite differently.
    Thanks for the tip and sorry for the misunderstanding.
    Happy Holidays!
    00Xl7M-306387584.jpg
     
