Advantage to using a color space larger than capture/print?

Discussion in 'Digital Darkroom' started by justin_stott, Oct 7, 2010.

  1. I've been searching and reading a lot about color spaces, especially ProPhoto. My question is: Is there a real noticeable difference in the final print switching into and out of ProPhotoRGB during editing if my camera captures, and my printer prints, AdobeRGB?
    Workflow: capture RAW on a Canon 5D Mk II set to the Adobe RGB color space, import into Bridge, do most corrections in ACR, open in Photoshop at 300 dpi in Adobe RGB to tweak, crop, and resize for print, then send a 16-bit file to a Canon Pixma 9000 Mk II; calibrated monitor, Canon papers and inks, Photoshop manages color using the paper profile.
    I am quite pleased with the results I get staying in Adobe RGB, but is there something I'm missing out on?
    Thanks much!
     
  2. There are some paper and ink combinations that can now print a wider color space than AdobeRGB. Monitors are also getting better at being able to display a wider space than in the past. That said, I personally use a color space smaller than AdobeRGB and different from sRGB and don't find it an issue. I have experimented over the years with different color spaces, even recently, and still prefer the one I use to those larger ones that are available (even in post, I find I have much more control getting the look I want). I think it is just a matter of preference, but if you like the results you are getting, I am not sure I would change anything.
     
  3. In-camera Adobe RGB applies only to its JPEGs.
    Does your printer specifically say that it's limited to Adobe RGB? If so, it may not look right if you send ProPhoto, but if it doesn't, why not send the maximum gamut?
     
  4. In-camera Adobe RGB applies only to its JPEGs.
    Well this is a huge jump in understanding for me!
    I'll run some tests using the same image in a couple of color spaces. I'm fairly new to the printer side of things, but that one seems to handle whatever you throw at it pretty well!
     
  5. Only the high-end Epson printers and the high-end Canon printers, as far as I know, show a visible difference between sending ProPhoto RGB and Adobe RGB; 8-bit vs. 16-bit doesn't do much visually at that stage.
    Just remember that ProPhoto will only look good on your monitor in Photoshop and on your (I assume) Epson printer; on the web, or if you print with an external lab, the result will be worse than anything else.
    So no, you are not missing anything, and I suggest you keep working in Adobe RGB and use the correct paper ICC profile to get good results.
     
  6. Reinforcing what Brad wrote -- if you're shooting raw, the color space you set in camera is irrelevant. That setting applies only to the in-camera JPEG conversion, just like the camera's settings for things like contrast, saturation, and sharpening. In your case, Bridge is the first time a color space is applied to your image.
     
  7. A color space is nothing but a container and working space for editing digital data captured by a digital device. We don't know exactly what that device is capable of capturing, so we use a container that gives the data enough room in case future improvements in editing software can extract more of it later on. No guarantees, just insurance.
    Remember, it's only 1s and 0s by the time it reaches our computers. It's all interpreted by software, so you might as well archive it in a big enough space.
     
  8. digitaldog (Andrew Rodney)
    I am quite pleased with the results I get staying in Adobe RGB, but is there something I'm missing out on?​
    Depends on the gamut of the scene you capture and the gamut of the printer you eventually use (today or in the future). There are captures whose gamut exceeds Adobe RGB (1998), and there are printers whose gamut exceeds Adobe RGB (1998). Why reduce the gamut of something you've captured and can output?
     
  9. I understand color space settings for cameras are not relevant if one is shooting in RAW. But what about film scanners?
    I have a Nikon Coolscan V. I scan directly into Photoshop using NikonScan as a TWAIN driver, then save the files in PSD format. I have a choice of either sRGB or Adobe RGB in NikonScan. I chose Adobe RGB in NikonScan and use Adobe RGB as my working space in Photoshop. Just what format does NikonScan use to pass the data to Photoshop? It cannot be JPEG since I often select a 14-bit color depth and JPEG is restricted to 8-bits. TIFF? Something else? In this case, I am working under the assumption that matching color spaces is correct. Am I correct in this assumption?
     
    Brooks, when you scan and the file opens in PS, it is probably transferring in a TIFF format, as that is the general default, but you will never see it and it really doesn't matter. What matters is that you are transferring into Photoshop uncompressed and in 16 bit for maximum quality.
    Tim, your point makes some sense; however, I personally prefer the color space I use, which is not even as large as AdobeRGB. I am not sure it matters what the camera captures or what some output device can create as much as what I want an image to look like. Having an infinite gamut is great and many would use it if it were available, but having an image look the way you want to present it is what the medium is all about. As I said, I have used other profiles, like ProPhoto for some shots and AdobeRGB for others, when I recently tested, but I don't like them as well as the space I have been using for the last 12 years. One thing I find more prevalent in those larger color spaces is banding in the skies, as well as the image not responding the way I like when applying curves and such. It is just a preference and, again, the purpose of it all is to get images to look the way you want, not to have the largest gamut.
     
  11. The widest gamut we have is our eyes. If we want an image to match as much as possible what we see, isn't it best to use the widest possible gamut?
     
  12. Brad, is that the goal or is the goal to present our vision?
     
  13. It depends on what we want; sometimes our vision is for black and white, sometimes we (I) want the closest thing to reality.
     
  14. digitaldog (Andrew Rodney)
    It depends on what we want; sometimes our vision is for black and white, sometimes we (I) want the closest thing to reality.​
    True indeed. The issue is, at some point early in the workflow (converting raw data to rendered pixels), we have to select the encoding color space. If you pick something smaller than the data you captured, or can at some point print, it's like a surgical sex change operation!
     
  15. It depends on what we want....​
    That is the same thing I said, if I am not mistaken -- "the goal is to present our vision". We choose a space that serves our needs, not anyone else's needs or wants.
     
  16. I guess the container aspect of a color space I mentioned was missed. Oh well.
    Banding is not caused by the color space used, except maybe if you edit in 8 bit. I've never gotten banding shooting Raw and editing in 16 bit in ACR, whether I chose sRGB, AdobeRGB or ProPhotoRGB as the output encoding color space.
    I guess I don't see the value or need for switching output color spaces among hundreds of digital captures in order to improve the look of an image. It seems like an inefficient way to work IMO.
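    For the curious, a minimal Python sketch (with made-up numbers, not tied to any particular image) of why bit depth, rather than the color space itself, is what drives banding; it just counts how many distinct steps survive an 8-bit versus a 16-bit encoding across a narrow, sky-like tonal range:

        import numpy as np

        # A smooth gradient spanning a narrow tonal range, e.g. a blue sky
        # occupying only 10% of the full 0.0-1.0 signal range.
        gradient = np.linspace(0.60, 0.70, 100_000)

        for bits in (8, 16):
            levels = 2 ** bits - 1
            quantized = np.round(gradient * levels) / levels  # encode, then decode
            distinct = np.unique(quantized).size
            print(f"{bits}-bit: {distinct} distinct steps across the gradient")

        # Expected result: 8 bit leaves only ~26 steps to describe the sky, which
        # the eye can read as bands; 16 bit leaves thousands, which it cannot.

    It also hints at why a very wide space in 8 bit is riskier: the same visible colors get spread across fewer of the available code values.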
     
  17. I guess the container aspect of a color space I mentioned was missed.​
    I understand what you're saying. I'm amazed at how much content of my photos from my 40D is beyond aRGB. The ProPhoto container is larger than what my camera captures I'm sure, but I like to preserve what's in there beyond aRGB. And I guess I have to ask after reading these posts, why not?
     
  18. Tim, your container example is a little confusing, although I think I understand what you mean. The only way to truly preserve your files is to keep the raw file; otherwise any color space might be inferior to something in the future, and raw processors might be improved to get more out of a file. The raw file is the only assurance you have that everything captured is still intact.
    As to banding, it depends on how you work your file. I always use 16 bit, and with ProPhoto and AdobeRGB I found that they are more prone to banding in blue skies than my normal space. Generally this banding is eliminated when you flatten the image; however, when it isn't completely eliminated, it is an issue. On the same images where I got banding with the two above while still in layers, I did not get banding with my normal color space, and the images went where I wanted them to go more efficiently.
    And I guess I have to ask after reading these posts, why not?​
    I think that is a good question and one worth following unless you find something that works more to your liking. I would certainly recommend that people use ProPhoto or AdobeRGB if they don't know what to use, but I am just saying that there are other spaces that can work better for some and fit in with their vision better. The world is not vanilla but it seems too many try to make it so.
     
  19. I fully understand the point of working in ProPhoto and staying as long as possible in that color space, but the reality is that most of us (at least me) are working to please a client and make him happy with the end result, which is a print.
    Years ago I fought to work in ProPhoto and then convert to Adobe RGB when delivering the job, but when the printer / graphic designer used the file and needed to convert it to CMYK, a loss of color occurred, and the printer / graphic designer didn't do anything to fix it or to enhance the shot. It was the sad truth of RGB vs. CMYK, he told me. The client didn't care about the conversion; all he remembered were the amazing colors on screen and on the Epson print, and now all he saw were dull colors, or at least less vibrant ones than he had previously seen.
    Then a friend of mine, an excellent commercial and fashion photographer who also does (or used to do) his own retouching, told me that he had found a way of making everyone happy: sRGB.
    I was surprised, because for years I had read that Adobe RGB was the way to go, but the fact was there: when the conversion came, I was not that happy.
    Then I started working in Adobe RGB exclusively and converting to sRGB before giving the file to the client, and things got better, but small shifts occurred during this conversion, and sometimes when a client asked me to match a clothing color precisely, the conversion was problematic. Then I asked myself: why don't I simply develop the raw as well as I can, using a gray card and all the tools available in raw development, and export all the images as sRGB? Since all the color / contrast / curve work was done on the raw, what I export must be perfectly good and workable, no?
    So five years ago I started exporting all my files as sRGB, and once in Photoshop I rarely have to work on color / contrast / curves since everything was fixed before. I can then duplicate the background, apply my sharpening, my retouching, my effects, adjust the contrast / brightness if needed, add some saturation (save this as a PSD) and export a flattened copy at the final size, resharpened for that size and purpose -- all of it in sRGB from start to finish.
    The result? Perfect, all the time. What I see on screen is what my client sees on their calibrated monitor (I always make sure they have one by going there myself and calibrating it myself if they don't have IT support). When they receive their match print, only small color adjustments are needed to make the print look like the real clothes (+2 or +3 cyan, magenta or whatever to bring the color closer to the original), and voila: I'm happy, the client is very happy, and when everything is printed commercially in magazines, billboards, or elsewhere, it all matches without conversion issues.
    Touch wood (not Tiger): to this day, what I see is what gets printed across the globe (I'm lucky to have clients spread all over, thanks to the discovery of FTP and email). I don't say ProPhoto is not good. If you print on an Epson printer, please continue to do so, of course; if you only have yourself to please, continue to do so. But if you want to sleep at night knowing that your client will be happy and that the final image will be printed on a CMYK device, don't fight it: send them sRGB, work in sRGB all along, and be problem free.
    At least it works for me, and I am happy to do so. I'm not here to convert anyone to sRGB, just to explain how and why I use it, and that sometimes you are better off working with a smaller color space that works for your needs than working with the largest one and wondering why your colors or results are disappointing. ; )
     
  20. Thanks Patrick - fascinating stuff, and potentially very useful.
     
  21. Patrick, I've read online quite a few prepress techs suggest sRGB output for later conversion to the color-crushing gamut of CMYK, as a sort of color-clipping pre-visualization and prep routine, especially when viewing on wide-gamut displays larger than sRGB. Editing in sRGB keeps the eye from going hog wild with the saturation levels even if you soft proof with a CMYK profile.
    The banding from flattening layers in Photoshop as John mentioned was discussed numerous times quite a while back in Adobe forums as a preview bug. Not sure if it was ever fixed in the latest version of Photoshop. I just don't see it influenced by a particular color space, but then I don't edit images using stacks of layers later to be flattened, maybe one or two at the most.
    My point about staying in ProPhotoRGB was primarily directed toward maintaining an efficient Raw workflow strategy so as not to keep having to switch back and forth with numerous conversion routines all the way into Photoshop and beyond. I can't imagine having to do that on the hundreds of Raw images I've accumulated in the past couple of years.
    For my situation I just like the ease of setting and forgetting ACR's output to ProPhotoRGB, without the concern of clipped endpoints in the histogram during editing, rather than opening the file, converting to a smaller color space, and doing further tweaks in Photoshop. Less work for me.
    But if Patrick is actually finding it easier to preserve image quality, and to pre-visualize the clipping from the CMYK conversion, by working in sRGB, who am I to argue. I don't do prepress anymore. With all the complexity evidenced in this discussion and elsewhere on the web on a wide range of other digital workflows, do you blame me for getting out?
    I think I became confused about whether we're talking about a Raw workflow or about saving the final edited file as a TIFF or JPEG and the final output space. Archival aspects of preserving known or unknown data aren't a concern with Raw, since there's really no definable source color space except what's determined by the conversion software to generate the default preview using the display profile. With regard to ACR, the source is a linear version of ProPhotoRGB. Don't know what other conversion software uses.
     
  22. Yes I have to say that I don't do photography professionally (it's just for my walls and a few for presents), and Patrick's points are well taken for client work.
     
  23. Brad wrote:
    The widest gamut we have is our eyes. If we want an image to match as much as possible what we see, isn't it best to use the widest possible gamut?
    I'm not too sure about that: apparently the widest gamut available to us, ProPhoto, contains "theoretical colors", i.e., colors that don't really exist. Anyway, it's the gamut of an output device -- be it a computer monitor or a print -- not of the editing space, that's the ultimate limiting factor. The best monitors can roughly match Adobe RGB; the best printers may slightly exceed Adobe RGB in certain colors. Nothing can display ProPhoto.
    Besides, there's more to matching what we saw at the time of capture (as we remember it, mind you) with what we see on a monitor or print than a color space. E.g., setting white balance during post processing as well as the color temperature of the ambient light under which you look at the final output.
    I'd recommend quite an informative yet not too technical article aptly entitled ProPhoto or ConPhoto by Jeremy Daalder of Image Science.
     
  24. True, in a small way it's a little like flying blind, but the onscreen colors looked very close even before I had an aRGB monitor (except the Epson print preview, which shows garish versions of the out-of-aRGB colors). It all prints out beautifully though, close to the screen, but some of the darker colors are a little richer (not blocked up). They're amazing prints so far, and they retain depth where it existed in real life.
    The majority here seem to be ProPhoto naysayers though, so don't listen to me.
     
  25. digitaldog (Andrew Rodney)
    ProPhoto, contains "theoretical colors", i.e., colors that don't really exist​
    Right. It's simply someone (Kodak) specifying three chromaticity values that produce a simple triangular shape (when viewed in 2D); to make the space as wide as desired, in this case two of the primaries fall outside the spectral locus which defines human vision. That said, "colors" (or in this case chromaticity values) that are not visible to a human aren't really colors. It is possible to specify, numerically, color values in this space that are not visible. But that's the price we pay for a theoretical working space this size. The size is designed for a reason; scaling the three primaries inward produces a much smaller working space to encode the data we can capture, see and output.
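    To put rough numbers on that, a small Python sketch using the published xy chromaticities of the three common working spaces (quoted from memory, so treat them as illustrative). The 2D triangle area is only a crude stand-in for gamut volume, but it shows the scale, and ProPhoto's green and blue corners sit outside the spectral locus, i.e. they are not visible colors:

        # xy chromaticities of the primaries (R, G, B) for three working spaces.
        PRIMARIES = {
            "sRGB":         [(0.6400, 0.3300), (0.3000, 0.6000), (0.1500, 0.0600)],
            "Adobe RGB":    [(0.6400, 0.3300), (0.2100, 0.7100), (0.1500, 0.0600)],
            "ProPhoto RGB": [(0.7347, 0.2653), (0.1596, 0.8404), (0.0366, 0.0001)],
        }

        def triangle_area(p):
            (x1, y1), (x2, y2), (x3, y3) = p
            return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

        for name, prims in PRIMARIES.items():
            print(f"{name:13s} xy-triangle area = {triangle_area(prims):.4f}")

        # ProPhoto's triangle comes out roughly 2.5x the area of sRGB's; its green
        # and blue corners lie outside the horseshoe of visible chromaticities.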
     
  26. But that's the price we pay for a theoretical working space this size.
    The only difference I see converting from ProPhotoRGB to a much smaller color space such as sRGB (which, BTW, CS3 Bridge previews my ProPhotoRGB Raw files through) is that bright, intense and dark, rich greens, yellows, cyans and oranges viewed in ACR will slightly shift in hue and saturation toward a noticeably duller version, even when converting to sRGB in Photoshop or ACR. Converting to AdobeRGB shows a much more subtle shift.
    Now I thought this was display-gamut-induced behavior, since my old iMac is closer to sRGB, but a while back Patrick proved it happens even on his wider-gamut display. Is this a preview bug? Or is this gamut clipping within a matrix-to-matrix working space profile conversion? Or is the math behind the theory just not that perfect at this time?
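    For the matrix-to-matrix clipping part of my question, here is a deliberately simplified Python sketch. The two matrices are the commonly published ProPhoto-to-XYZ and XYZ-to-sRGB values quoted from memory, chromatic adaptation and tone curves are ignored, and the color is a made-up saturated yellow-orange, so it only shows the mechanism: a color that is legal in ProPhoto lands outside [0, 1] in sRGB, and clipping the out-of-range channels changes the ratio between channels, which reads as a hue shift, not just a saturation loss.

        import numpy as np

        # Linear ProPhoto RGB -> XYZ (D50) and XYZ -> linear sRGB (D65); standard
        # published matrices, with chromatic adaptation between D50 and D65 ignored.
        PROPHOTO_TO_XYZ = np.array([[0.7977, 0.1352, 0.0313],
                                    [0.2880, 0.7119, 0.0001],
                                    [0.0000, 0.0000, 0.8249]])
        XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                                [-0.9689,  1.8758,  0.0415],
                                [ 0.0557, -0.2040,  1.0570]])

        # A hypothetical saturated yellow-orange, comfortably inside ProPhoto.
        prophoto = np.array([0.95, 0.75, 0.02])

        srgb = XYZ_TO_SRGB @ (PROPHOTO_TO_XYZ @ prophoto)
        clipped = np.clip(srgb, 0.0, 1.0)

        print("sRGB before clipping:", srgb.round(3))
        print("sRGB after clipping: ", clipped.round(3))
        # Here red overflows past 1 and blue goes negative; both get clipped, so
        # the R:G:B ratio (and with it the hue) shifts, not just the saturation.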
     
    I thought this was display-gamut-induced behavior, since my old iMac is closer to sRGB, but a while back Patrick proved it happens even on his wider-gamut display.
    I was under the impression that even the new LED-backlit 27" iMac monitors aren't much larger than sRGB, but anyway, since everything has to be matched to the display's gamut, I'd sooner expect any color shifts occurring due to conversion to be visually noticeable on high-end monitors (as they actually can display at least some of those colors), but that's only my logic as opposed to factual knowledge.
    What rendering intent are you using? -- sounds like you may be using Perceptual (it "squashes" everything even if there are no out-of-gamut colors).
     
  28. digitaldog (Andrew Rodney)
    I was under the impression that even the new LED-backlit 27" iMac monitors aren't much larger than sRGB.​
    I believe that is correct.
    What rendering intent are you using? -- sounds like you may be using Perceptual (it "squashes" everything even if there are no out-of-gamut colors).
    RGB working space to working space conversions use Relative Colorimetric as these are simple matrix profiles and don’t have the Perceptual or Saturation tables.
     
  29. Patrick does have the 27" iMac but I was referring to his older NEC he had at the time of the discussion on color shifts converting from ProPhotoRGB to sRGB.
    This color shift isn't a deal breaker in the commercial world. You only see it after converting it. It's subtle enough that if you walked away and came back to the display after your eyes adjusted you wouldn't notice the difference.
    When I enhance Raw shots I've taken of sunlit intensely colored flowers and similarly colored subjects in 16 bit ProPhotoRGB in ACR, I tend to punch it up a bit while keeping the ProPhotoRGB output histogram from clipping with no posterization (saturation blooming) showing up in the flower. I then do a test by switching to sRGB within ACR and notice the shift. Cadmium yellow, a noticeably intense yellow with a bit of red (unlike lemon yellow which has a bit of cyan), will slightly take on a kind of rust brown tint converting to sRGB.
    If I convert to my own custom iMac profile this won't happen which may point to a preview bug.
    Otherwise it's still a kind of let down when I see this happen, but I just walk away from the computer and come back to the Bridge preview and I don't notice a thing.
     
  30. FYI, I still have the NEC 2690WUXi connected to the iMac 27"... I just changed the computer from a Mac Pro to a more powerful iMac. :)
     
  31. What rendering intent are you using? -- sounds like you may be using Perceptual (it "squashes" everything even if there are no out-of-gamut colors).
    RGB working space to working space conversions use Relative Colorimetric as these are simple matrix profiles and don’t have the Perceptual or Saturation tables.​
    What do you mean, Andrew? -- under 'Convert to Profile' in PS one is given four options to choose from in the 'Intent' drop-down menu, and Perceptual is one of them. Not that I've run any comparative tests, but they all seem to be active when I convert from ProPhoto to sRGB.
    [attached image: 00XSXB-289275584.jpg]
     
  32. Tomek,
    ProPhoto and sRGB are matrix profiles.
    A Perceptual intent table doesn't exist in them.
    To execute the transform, the CMM switches Perceptual to Relative Colorimetric.
    Jacopo
     
  33. digitaldog (Andrew Rodney)
    What do you mean, Andrew? -- under 'Convert to Profile' in PS one is given four options to choose from in the 'Intent' drop-down menu, and Perceptual is one of them.​
    Indeed, the option is there, but with the current V2 ICC working space profiles, there is no perceptual table. Try converting using both options, subtract the two and you’ll see, they are identical.
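    If you'd rather do that subtraction outside Photoshop, a quick Python check works too (the two filenames below are hypothetical: the same image converted once with each intent):

        import numpy as np
        from PIL import Image

        # Hypothetical exports: the same source image converted ProPhoto -> sRGB,
        # once with Relative Colorimetric and once with Perceptual selected.
        a = np.asarray(Image.open("converted_relcol.tif"), dtype=np.int32)
        b = np.asarray(Image.open("converted_perceptual.tif"), dtype=np.int32)

        print("maximum per-channel difference:", np.abs(a - b).max())
        # With current v2 matrix working-space profiles the expected result is 0:
        # the CMM quietly used Relative Colorimetric for both conversions.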
     
  34. For those of you on the Mac who want to compare spaces, open the ColorSync Utility (in the Utilities folder inside the Applications folder). You can select any profile and view it in a 3D space. If you right-click on the selected profile, you can "hold it for comparison". Then, when you select any other profile, you will see how the two compare, even your custom monitor profile. The base profile turns white and the one you are comparing to stays full color; you can see where they mismatch or are contained by each other this way.
     
  35. Tomek, below are two sites that deal with the new sRGB v4 color space profile that does have a Perceptual rendering intent table as well as a Black Point Compensation option where both change the preview of the image and output histogram with regard to clipping. The second link is an Adobe forums discussion I started discussing some of its advantages. Just a warning, lots of color geek talk.
    http://www.color.org/srgbprofiles.xalter
    http://forums.adobe.com/thread/311580
     
  36. digitaldog (Andrew Rodney)
    Even with V4 profiles, it's not going to do much for you unless it's the destination profile.
     
  37. " . . . on the web and if you print with a external lab the result will be worst that anything else."​
    This is not entirely true Patrick. Labs like West Coast Imaging (http://www.westcoastimaging.com) can handle ProRGB and the difference is obvious in the final print.
     
  38. Well, I was talking about 98% of them, let's say. Some high-end labs (not your typical local one) can handle Adobe RGB or more because they know what they are doing, but sadly most labs out there still require or use sRGB as their color space, and their old printers can't get anything better out of a file anyway.
    As for "obvious in the final print": an sRGB file done well and saturated to the max of the printer's capability could yield a result similar to a lab that uses ProPhoto. I'm not saying it's the same thing, just that if you saturate your sRGB enough before printing, it could look like one made from ProPhoto. I have made many tests to confirm that. ; )
     
  39. Since I print mainly to a photo inkjet printer, I have been using Bruce Lindbloom's BetaRGB color space: big enough to cover virtually all output papers (including some not covered by AdobeRGB), but not as large as humongous and virtual ProPhoto.
    As I understand it, when one color space is converted into another, the bigger the difference between the two, the greater the possibility of introducing significant shifts. That's the rationale for using the smallest working space that will render all of the colors of your desired output medium, and nothing bigger. Any arguments against BetaRGB?
     
  40. Tim wrote:
    [...] the new sRGB v4 color space profile that does have a Perceptual rendering intent table as well as a Black Point Compensation option where both change the preview of the image and output histogram with regard to clipping [..] lots of color geek talk.
    Andrew wrote:
    Even with V4 profiles, its not going to do much for you unless its the destination profile.
    Cheers guys! I have had a read and indeed, it did my head in. I wouldn't even know where to save this profile :S
    What makes me think it may not be a good idea for me to use it is the advice not to mix v4 with v2; the profiles that I did manage to find on my drive don't specify which version they are (at least it's not stated in the file names themselves).
    Anyway, I read numerous articles about rendering intents and BPC when I was trying to understand what all these options mean and how I should use them, yet, surprisingly, no-one mentioned that none of these applies when converting to the most common sRGB, so I'm really glad this insight surfaced here!
     
  41. Tomek, you don't have to use the v4 sRGB profile off that site as specified. I downloaded it, and assigned and/or converted to and from v2 sRGB, on my six-year-old computer, which doesn't have a single v4 profile installed except this one.
    There is a noticeable shift to the preview mainly in the shadows, but then I don't want that to happen anyway so I don't see the use of this special sRGB. I only offered it as a learning tool and tinker toy.
     
  42. Howdy Tim, I see on your profile picture that you're on an iMac as well. I'm on the aluminum one, but suspect that won't matter for the question I want to ask you: where should I save this (or any other color profiles, for that matter) on my computer? I suspect that would be the folder: Macintosh HD >> Library >> Application Support >> Adobe >> Color >> Profiles >> Recommended -- is that the correct access path?
    Thanks!
     
  43. HD>Library>Colorsync>Profiles folder. Putting it there allows the profile to be seen by all apps on my system which is OS 10.4.11 Tiger.
    Don't know what OS X version you're using. It's very easy to find this out online, but I doubt the directory for profile placement has changed in Leopard and Snow Leopard.
     
  44. Makes sense (that was the other option that crossed my mind).
    I'm on 10.5.8, which has the same folder structure; sRGB v4 is already sitting there...
    Ta!
     
  45. Justin wrote:
    I've been searching and reading a lot about color spaces, especially ProPhoto. My question is: Is there a real noticeable difference in the final print switching into and out of ProPhotoRGB during editing if my camera captures, and my printer prints, AdobeRGB?
    Up till now I had my workflow set-and-forget to ProPhoto and have never even considered using any profiles other than those that come bundled with PS, but I might be warming up to deviating from this simple-yet-blunt approach. The late Bruce Fraser's Finessing Photoshop Color article on CreativePro explains some trade-offs between input- and output-centric philosophies in his typically easy-to-understand way, which is nicely supplemented by Bruce Lindbloom's RGB Working Space Information considerations of some more technical aspects. And finally, a quote from Joseph Holmes's (of Natural Light Photography) article All About RGB Working Spaces:

    The most important consideration by far when selecting and using a space is to avoid clipping when the image is converted into it. [...] The way that out-of-gamut colors are treated when they are mapped into working spaces is worse and more damaging, on average, than the way that out-of-gamut colors are treated when they are mapped into a printer profile. [...]
    To restate the above, when a space is designed, the gamut of the colors being mapped into it is more important to take into account than the gamut of the space into which colors will later be mapped for output. Also, it is important that even colors which will ultimately prove to be outside of your printer's gamut are not clipped first (by having entered a too-small working space), because those clipped colors will print worse than they would if merely mapped inward by the printer profile. They will tend to be more lacking in detail and shifted in hue.
    It's also the case that the working space has to give you room to work — to let you edit your images without clipping them avoidably, although giving yourself too much room to work can also backfire because printer profiles have a hard time moving colors a long ways to bring them into gamut. Care should always be taken to avoid both clipping and pushing colors way too far out of your printers' gamuts.
     
  46. I would add to the list of references the excellent http://graphics.stanford.edu/courses/cs178-10/applets/locus.html which, in addition to the theory, gives you an in-depth hands-on approach.
    My understanding is that, ideally, your camera and printer's color spaces should coincide, so that you would just stay in that as your working color space. Since this is never the case, second best would be a working color space that could contain all of the colors that your camera AND printer (assuming that's your final output medium) can produce, plus a bit of headroom to deal with rounding errors. ProPhoto is, according to some of the references mentioned in the post above, too big; it sometimes creates virtual colors during processing that are so far off the real colors the output medium can produce that the conversion of some ProPhoto colors to the printer's color space is the mathematical equivalent of guesswork.
    Short of paying for Joseph Holmes' custom made profiles (which I am nevertheless considering), I have bought into Bruce Lindbloom's BetaRGB as a workable compromise. I'd be interested to hear what other forum members think about this approach, or if there are newer, better compromises available today.
     
  47. I've got one, hopefully quick, general question regarding color profiles I'd like to ask: what's in the .icm/.icc files? -- do they contain only a handful of numbers describing White Point, Gamma, and xy "coordinates" for the three primaries, as suggested by Ian Lyons's post on Adobe Forum re defining BruceRGB?
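    For what it's worth, here is a minimal Python sketch of how such a handful of numbers (three primary chromaticities plus a white point; the gamma/TRC curve and the file's headers and tags are not modeled) would determine the whole RGB-to-XYZ matrix of a matrix profile, shown with sRGB's published values:

        import numpy as np

        def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
            """Build the 3x3 RGB->XYZ matrix from primary and white chromaticities."""
            def xyz(xy):
                x, y = xy
                return np.array([x / y, 1.0, (1.0 - x - y) / y])  # XYZ with Y = 1
            M = np.column_stack([xyz(xy_r), xyz(xy_g), xyz(xy_b)])
            # Scale each primary so that R = G = B = 1 reproduces the white point.
            scale = np.linalg.solve(M, xyz(xy_white))
            return M * scale

        # sRGB's published primaries and D65 white point.
        M_srgb = rgb_to_xyz_matrix((0.64, 0.33), (0.30, 0.60), (0.15, 0.06),
                                   (0.3127, 0.3290))
        print(np.round(M_srgb, 4))
        # Should come out close to the familiar 0.4124 / 0.3576 / 0.1805 ... matrix.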
     
  48. *Edit my earlier post: the beginning of the second paragraph should read as follows*
    My understanding is that -ideally- your camera and printer's gamuts should coincide so that you would just use the smallest color space that would contain that gamut as your working color space.
     
  49. digitaldog (Andrew Rodney)
    Digital cameras technically don't have a color gamut, certainly not one that is easily defined like a printer's gamut.
     
  50. Jack, interesting color science link. I would need a ton of coffee in order to wrap my head around what and how to apply it to photography. I have no thoughts on using BetaRGB and I did try it out a while back. I don't use it because I didn't see any advantage in the limited amount of printing I do over using any other color space.
    As Andrew indicated about digital cameras not having a color gamut, I wonder how the color scientists who created that site measured the gamut of the Nikon D200 and Canon 30D in producing the RGB 3D gamut diagram included in the applet. From examining the applet, they seem to show a digital camera having a bigger gamut than human vision. I may be reading it incorrectly, so I'm not sure.
    The only way of possibly measuring a digital camera's capability of capturing scene gamut is to measure the response of the RGB (Bayer) filters in front of the sensor, but then you'd have to figure out what gets passed from the sensor electronics through the camera's A/D converter to know for sure, and all that's proprietary information involving voltage measurement. Also, once you have to rely on software to reconstruct what's captured off the sensor, it becomes anyone's guess. I wonder what scene was used to establish gamut capture capability, if that's how those color scientists measured it on that applet site.
     
  51. By Gamut I mean the range of color and tonal values that a particular device (eye, scanner, camera, monitor, printer, painter) is capable of detecting or producing. For obvious reasons we are mainly interested in the visible portion of these.
    Every camera can only capture a finite number of colors and tonal values: that set of color and tonal values is by definition the camera's unique Gamut.
    Every printer can only reproduce a finite number of colors and tonal values: that set is the printer's unique Gamut.
    In an ideal world, the two Gamuts would coincide (and also coincide with the Gamut of your Eyes and your Monitor) so that a color space could be designed to contain all detected and reproduced colors and tonal values snugly. But in fact they are VERY different.
    One approach is to use a small color space, like sRGB, that contains only the subset of colors/tones that are easy to detect/reproduce and therefore appear in BOTH the camera AND printer Gamuts. This is the easiest and the safest way to proceed, but at the price of mistreating or not having available (see link below) about two thirds of visible color/tone combinations.
    At the opposite end, an alternative approach is to use a huge color space, like ProPhotoRGB, that is so big that it is able to contain virtually anything and certainly most all of the colors/tones present in both the camera and printer Gamuts with a lot of room to spare. During processing the color/tones have a lot of room to move around in. However, when a ProPhoto document is printed, all the colors/tones need to squeeze through the printer's Gamut, potentially resulting in a lot of guesswork in translation (see here for a visual explanation of rendering http://graphics.stanford.edu/courses/cs178-10/applets/gamutmapping.html ). The further you are off the printer's gamut, the harder it is for it not to mistreat color/tones in your picture. And there is a lot of room in ProPhoto to be off.
    A third approach is to split the difference, and choose a color space that is bigger than sRGB but smaller than ProPhotoRGB: just big enough to contain most of the camera AND the printer Gamuts, with reduced room for error. BetaRGB is one such color space. There are others. What do people use and why?
     
  52. Don't know about other programs, but in Lightroom, if you send a TIFF file to Photoshop, you can specify a ProPhoto color space. Then in PS you do see it in that space, provided you do not have PS set to automatically change the color space to Adobe RGB or sRGB. I've found that ProPhoto is beneficial especially in landscapes, where there are more subtle variations of color, like in autumn or sunset and sunrise images. With printers that can use it, especially printing at 1200-2400 dots per inch, there is dramatic color.
     
  53. digitaldog (Andrew Rodney)
    Digital cameras don't have a gamut, but rather a color mixing function. Basically, a color mixing function is a mathematical representation of a measured color as a function of the three standard monochromatic RGB primaries needed to duplicate a monochromatic observed color at its measured wavelength. Therefore, the measured pixel values don't even *get* a gamut until they're mapped into a particular RGB space. Before then, *all* colors are (by definition) possible.
    As Andrew indicated about digital camera's not having a color gamut, I wonder how the color scientists that created that site measured the gamut of the Nikon D200 and Canon 30D in producing their RGB 3D gamut diagram included in the applet.​
    Probably by taking output referred (demosaiced and rendered) data, feeding it to a product that builds an ICC profile and plotting its gamut.
     
  54. @Andrew: Interesting, I never thought about it at this level. Do human eyes have an average gamut? If not, how can we say, for instance, that VISIBLE colors fit into the LAB color space? I guess I implicitly assumed that they do, even though the brain derives its color information from three non-co-located cones (sensels) similar to the way a camera sensor/system works. Therefore would it not have a gamut too?
    I am not a color scientist, so I have no idea how they actually go about defining the camera's gamut in practice. However, I can take a wild *simplified* guess, having spent some time absorbing the Stanford link from a couple of posts ago: take a representative small number of R*, G* and B* sensels in a circle at the center of the camera's sensor (heck, in fact why not take the whole sensor?), and illuminate it with successive beams of uniform light of wavelength from 400 nm to 700 nm, each time measuring the level (value) of the average R*, G* and B* sensels, thus obtaining R*, G* and B* sensitivity functions. Plot the corresponding locus in 3D for fun. Transform the locus to XYZ primaries through the appropriate wavelength-dependent matching functions, project it onto the X+Y+Z=1 plane, et voilà: a camera gamut without first having to map to a particular RGB space. No?
     
  55. digitaldog (Andrew Rodney)
    Do human eyes have an average gamut?​
    That horseshoe plot you see often (the CIE chromaticity diagram) is based on the “standard observer”, a theoretical human’s vision. Anything outside that plot is out of gamut, not visible. This is all based on science done in the 1930’s with a group of male volunteers, viewing samples of projected colors. Lab is a variant of CIE XYZ 1931 to account for (well attempt to account for) perceptual uniformity.
    This may help:
    In addition to device-dependent color spaces, there are also device-independent color spaces. These color spaces encompass all of human vision. The most common is called CIELAB (or L*a*b; often written as LAB, although technically the * should be used). Back in 1931, the CIE (Commission Internationale de l'Éclairage, also known as the International Commission on Illumination), a group of color scientists, conducted a series of experiments and tests on humans to determine how they perceive color. The tests involved showing groups of volunteers a sample color under very controlled conditions whereby each subject adjusted the intensity of red, green, and blue lights until the mix of the three matched the sample color. This allowed the CIE to specify precisely the stimulus response of the human eye.
    The CIE came up with the term standard observer to describe a hypothetical average human viewer and his or her response to color. Furthermore, the results of these tests produced a mathematical model of a color space formulated not on any real-world device, but rather on how we humans (the standard observer) actually perceive color. This core color model is called CIE XYZ (1931). This is the color model from which all other device-independent color models are created. Like the RGB color model with three additive primaries, CIE XYZ uses three spectrally defined imaginary primaries: X, Y, and Z. These X, Y, and Z primaries may be combined to describe all colors visible to the standard observer. Also in 1931, a synthetic space called CIE xyY was created, which itself is derived from CIE XYZ. In 1976, CIELAB and CIELUV were added to the mix of these device-independent color spaces. The CIELAB color space is a synthetic, theoretical color space derived from XYZ. Unlike the original, CIELAB has the advantage of being perceptually uniform (sort of . . .). That simply means that a move of equal value in any direction at any point within the color space produces a similar perceived change to the standard observer.
    The XYZ color space is based on three quantities or stimuli. The geek term for describing this is tristimulus values (three stimuli). Technically the term tristimulus values refers to the XYZ values of the original CIE XYZ color model, although you will often hear people describe tristimulus values when defining a color in RGB or CMY (or using any three values). This is incorrect. Since our aim is to keep the color-geek-speak to a minimum, it's not important to know the differences among the various CIE-constructed color models, but rather to recognize that a color space such as CIELAB is based on how we see color. What you should keep in mind here is that, using a set of three values, any color can be specified exactly and mapped in three-dimensional space to show its location in reference to all other colors. This can be useful! There are no capture or output devices that directly reproduce CIELAB; however, this color space allows us to translate any color from one device to another.
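    As a small illustration of the "any color can be specified exactly using a set of three values" point, here is the standard CIE XYZ to L*a*b* formula in Python (D50 reference white assumed, as used by the ICC profile connection space):

        import numpy as np

        # D50 reference white (the ICC profile connection space white point).
        WHITE_D50 = np.array([0.9642, 1.0000, 0.8249])

        def xyz_to_lab(xyz, white=WHITE_D50):
            """CIE 1976 L*a*b* from XYZ, relative to the given reference white."""
            def f(t):
                delta = 6 / 29
                return np.where(t > delta ** 3, np.cbrt(t), t / (3 * delta ** 2) + 4 / 29)
            fx, fy, fz = f(np.asarray(xyz, dtype=float) / white)
            return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

        # The reference white itself should come out as L* = 100, a* = b* = 0,
        # and a neutral 20% luminance grey lands around L* = 52.
        print(xyz_to_lab(WHITE_D50))
        print(xyz_to_lab(0.20 * WHITE_D50))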
     
  56. digitaldog (Andrew Rodney)
    Transform the locus to XYZ primaries through the appropriate wavelength-dependent matching functions, project it onto the X+Y+Z=1 plane, et voilà: a camera gamut without first having to map to a particular RGB space. No?
    See:
    http://www.openphotographyforums.com/forums/showthread.php?t=12600
    Also, a useful response to the question “does raw have a color space”, a conversation among a number of color geeks produced this reply from Jack Holm, former head color scientist for digital cameras at HP and a fellow member of the ICC digital camera group:
    - Unless the camera spectral sensitivities are colorimetric, they do not define the intrinsic colorimetric characteristics of an image.

    - Also, primaries are for synthesis and can cause some wrong thinking if discussed in relation to analysis.

    The second paragraph of Thomas’ response is important.

    The short answer to the question is:

    Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single “correct” relationship to colorimetry.

    The same thing could be said about film negative densities.

    Someone has to make a choice of how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the “scene” you are photographing has only three independent colorants, like with film scanning).

    A purist might argue that a color space not based on colorimetry is not really a color space because it is not an assignment of numerical values to colors, defining colors as a human sensation. In the standards committees we decided it is useful to be able to talk about non-colorimetric color spaces so we allow them and use “colorimetric color spaces” when appropriate.

    Jack
     
  57. Jack, that sounds like an effective way to plot the camera's gamut, but from my point of view (and I too am not a color scientist) a gamut does not a picture make. The allocation of density, hue and saturation from one pixel to the next as recorded by the sensor determines the quality of depth and realism in a captured image.
    What scene or color target and lighting arrangement presents enough color variation and gamut boundary parameters in determining how many colors can be captured by a digital camera? Andrew answered part of my question with the gamut plotting of the ICC profile derived from the camera's response after demosaicing, but he didn't include what color target was used to test this. Subjecting the camera's sensor to just white light and separate RGB targets wouldn't tell you how many variations of HSL the camera is sensitive to and THAT is what makes an image look like an image.
    Since our eyes adapt to colors next to each other within a scene, similar to the way complementary colors affect our eyes in seeing richness and depth, it's important that the camera be as sensitive to this optical phenomenon. How would that be measured, and is that optical effect part of color gamut plotting? If we see a rich teal transitioning to a light turquoise in a scene but the camera records only one clump of green with a tinge of cyan, how would that be factored into assessing its gamut?
    This is why all this color science stuff isn't very useful because it doesn't address, measure and calculate this VERY REAL aspect of human perception. So how do these 3D gamut plots help us in producing an image with depth, richness and clarity? I haven't seen any hard evidence that all this math going on under the hood is having any effect in allowing us to control this. There's too much hidden to know for sure.
     
  58. @Tim (edit: this refers to your post in the previous page - didn't see your newer post until after I posted this one): The camera's gamut is bigger than the gamut of human vision because its sensor is able to detect some wavelengths that we don't (e.g. infrared).
    I think that Andrew meant that you can't say that a camera has an RGB gamut off the bat because the sensitivity functions (R*, G* and B*) derived from the three types of differently-filtered sensels unique to each camera sensor do not correspond to those derived from a standard set of primaries (correct?). However, IMHO, it can have a gamut in XYZ space, and that's really what we are after.
    In my uneducated humble opinion, a camera has a unique locus in non-standard R*G*B* space which can be transformed into a locus in standard, positive-only XYZ space which, in various forms, represents the range of color/tonal values that the camera is able to capture: its gamut. This XYZ locus can for instance be plotted on chromaticity diagrams in the shape of a two-dimensional gamut outline and superimposed onto human vision, printer and other gamuts to help choose an appropriate working color space. So, who has done this for a D90 .-)?
    As far as measurement goes, to determine the camera's gamut I believe that all you would need to do in the thought experiment above is to read the RAW data of each sensel every time a beam of different wavelength is shone on the sensor (while of course knowing the layout of R*,G* and B* filters on the chip) so that the relative R*, G* and B* sensitivity functions could be derived.
    @Andrew: Thanks for the info.
     
  59. Clear enough explanation, Jack, but I still don't see how this tells us what color boundaries any given digital sensor is going to capture. When I look at a scene that has saturated colors, I still don't know if and how far the camera is going to screw them up, and whether I'll be able to fix it in post and/or set exposure low enough NOT to induce saturation blooming/clipping when capturing in Raw.
    Shooting intensely orange Pomegranate flowers lit directly by late afternoon sun on a cloudless day can be a challenge especially with establishing correct exposure.
    Here are others' takes on capturing the same flower. See below what I had to deal with shooting JPEG.
    http://www.google.com/images?hl=en&newwindow=1&safe=off&nfpr=1&q=pomegranate+flower&um=1&ie=UTF-8&source=univ&ei=IFS7TLytEoWBlAeK4andDQ&sa=X&oi=image_result_group&ct=title&resnum=1&ved=0CCUQsAQwAA
    Notice in that link all the orange variances caused by lighting, exposure and probably in-camera processing. What's the color gamut of that flower scene under the lighting conditions I described above, and what did I have to do to recover it shooting Raw, as demonstrated below?
    [attached image: 00XV5K-291315684.jpg]
     
  60. In my uneducated humble opinion, a camera has a unique locus in non-standard R*G*B* space which can be transformed into a locus in standard, positive-only XYZ space which, in various forms, represents the range of color/tonal values that the camera is able to capture: its gamut.​
    That only applies at a specific white point... the spectral response of a camera will be different at different light-source color temps... that's why it's folly to try to claim a camera has a fixed gamut. It doesn't.
    Pro Photo RGB is the only color space that can possibly contain ALL of the colors a camera can capture in raw and ALL the colors a modern printer can print. That's what makes Pro Photo RGB in 16 bit a really useful color space. All the other discussion is interesting, particularly the attempts at trying to pick a working space that maximizes the actual usable colors. But considering both Lightroom and Camera Raw use Pro Photo RGB color coordinates (and a linear gamma), any raw capture will be first processed in the pipeline to be Pro Photo RGB. Any other transform will be a secondary transform which is not optimal. Even if you want to use Beta RGB, you'll first need to process into Pro Photo RGB working space and then transform into Beta RGB.
    Processing into sRGB and even Adobe RGB ensures some colors that your camera can capture will be clipped. Some of those colors that are clipped might be able to be printed...the Epson 78/9800 and beyond Epson printers can already print colors outside of Adobe RGB–let alone the newer 79/9900 printers with orange and green to extend the printer's gamut.
    Also, to be clear, while it's useful to try to maintain an optimized RGB working space that isn't "too big", it's been my experience that trying to maintain an "efficient" working space simply isn't worth the hassle. Pro Photo RGB in 16 bit has been my working space for about 8 years and I've never found any problems or issues based on Pro Photo RGB as my working space.
    So, if you are looking for a working space that provides the ability to contain all of the colors a camera can capture in raw and output to recent inkjet printers (let alone future printers) there is only one choice-Pro Photo RGB in 16 bit.
     
  61. Jeff Schewe wrote:
    All the other discussion is interesting, particularly the attempts at trying to pick a working space that maximizes the actual usable colors. But considering both Lightroom and Camera Raw use Pro Photo RGB color coordinates (and a linear gamma), any raw capture will be first processed in the pipeline to be Pro Photo RGB. Any other transform will be a secondary transform which is not optimal. Even if you want to use Beta RGB, you'll first need to process into Pro Photo RGB working space and then transform into Beta RGB.
    Not that I've ever tried anything other than ACR, but I believe some RAW converters do allow a user to pick any color space -- would that address the problem of "double-handling"?
    How about the Adobe RGB and sRGB options inside ACR? -- if a given picture has a small gamut that would fit inside sRGB, would ACR develop the RAW *directly* into this color space, or is ProPhoto always the first step? (I have read somewhere that ProPhoto is ACR's "native" color space, but it wasn't explained what that means in terms of ACR's internal workings.)
    Also, to be clear, while it's useful to try to maintain an optimized RGB working space that isn't "too big", it's been my experience that trying to maintain an "efficient" working space simply isn't worth the hassle. Pro Photo RGB in 16 bit has been my working space for about 8 years and I've never found any problems or issues based on Pro Photo RGB as my working space.
    This sounds very reassuring -- I was starting to seriously doubt the soundness of my bigger-is-better set-and-forget approach -- so cheers Jeff! However, the whole debate was very educational for me, so I will start making exceptions to the rule whenever I feel it justified/beneficial.
     
  62. digitaldog (Andrew Rodney)
    Not that I've ever tried anything other than ACR, but I believe some RAW converters do allow a user to pick any color space -- would that address the problem of "double-handling"?
    So does ACR. Jeff's referring to the underlying RGB color space for processing (ProPhoto with a 1.0 TRC). Every raw processor has some such space; few tell us what they are using. I believe that in Aperture, it's Adobe RGB primaries. The idea being, if ACR and LR use ProPhoto RGB 1.0 for processing, stick with ProPhoto for encoding the data after rendering. From there, you can select other color spaces (or export from LR or ACR to other spaces if you wish). But you're dealing with ProPhoto whether you like it or not <g>.
     
  63. Gotcha Andrew! -- so even if I select sRGB within ACR itself, it will still first develop my RAW image into ProPhoto, and only then convert it to sRGB, meaning there is no way around this double handling (other than sticking with ProPhoto). Please excuse my lay and most likely not entirely appropriate use of terminology.
     
  64. digitaldog (Andrew Rodney)
    You got it Tomek.
     
  65. ...and, furthermore, the same is true of any other RAW converter, some of which may not even disclose what this "original" color space is. Such a "secret" doesn't really inspire my confidence in a given app. Do RD and/or SilkyPix fall into this category, or would you happen to know which color space is their "native" one?
     
  66. @Tim, Andrew and Jeff: This discussion has been most helpful. If I understand correctly: nice in theory, but the incremental benefit is not worth the hassle, so just stick with ProPhoto. However, since we put all this effort into understanding things so far, I'd be curious to learn a bit more from an academic standpoint, if the forum will humor me.
    1) Jeff: if there were no 'double handling' penalty, would your answer still be the same?
    2) Andrew: are you talking about color space or color coordinates based on color primaries? My understanding is that there is no double conversion. If there is I stand corrected, but if Aperture is internally restricted to the rather smallish Adobe RGB color space, it seems like a product-limiting choice. On the other hand, if we are talking about choosing a set of axes based on specific primaries for raw calculations, this needs to be done and certainly does not create a 'double handling' penalty per se. If Aperture used color coordinates based on AdobeRGB primaries for intermediate calculations while allowing for negative intermediate results there would NOT be any penalty for 'double handling' once a new color space is chosen by the user, because they simply would be three axes to work with, with no boundaries: every operation at this level would not be constrained to a color space and it would be virtually completely reversible. Nothing would be 'converted' into another color space and no 'rendering intent' would mistreat your data.
    3) Jeff: my understanding is that for us to draw the locus of spectral colors of a camera from raw data in XYZ coordinates (and hence the camera's gamut) we do not need to select a white point. The only variable needed is the wavelength of the light used in subsequent steps to collect the data - and a number of set operations that need to allow for negative intermediate results. A white point is instead needed the moment that a color space has been chosen, with its set of standard primaries, in order to define the relative RGB working cube. In other words, when the camera locus is projected onto the X+Y+Z=1 plane (or xy chromaticity diagrams) it shows the shape of the gamut without specifying where the white point falls - but it does show the camera's gamut. Do I understand correctly?
    4) Lastly, I'd be interested to know if integer or floating point math is used at this level.
    Cheers,
    Jack
     
  67. digitaldog (Andrew Rodney)
    2) Andrew: are you talking about color space or color coordinates based on color primaries?​
    Both.
     
  68. Andrew, what do you mean by both? A color space defines a finite space. A set of 3D coordinates (based on standard primaries or not) defines an unlimited space. In one case you are limiting yourself and mistreating data that falls outside of it. In the other you are not (except perhaps in rounding). Which do you mean?
     
    ...and, furthermore, the same is true of any other RAW converter, some of which may not even disclose what this "original" color space is.​
    Tomek, the more I think about it, the less I believe that the vast majority of raw converters out there CONVERT the raw data to a secret internal calibrated color space before converting it again to the default one that can be chosen by the user. It is possible that Andrew is referring to an intrinsic camera color space, effectively a 3D map of its gamut at various intensities, but that would not require any lossy conversion. It would be the equivalent of wrapping a boundary around the raw data in the shape of the 3D gamut of the camera, to ensure that in early raw processing (we are beyond raw data now) no data could escape the gamut that the camera can physically produce. Does anyone know?
     
  70. 3) Jeff: my understanding is that for us to draw the locus of spectral colors of a camera from raw data in XYZ coordinates (and hence the camera's gamut) we do not need to select a white point... Do I understand correctly?​
    I think I have found the answer to question 3) indirectly here: contrary to the reflective situation of, say, printers and scanners, which require a reference illuminant (with a reference color temperature/white point), the camera gamut that can be calculated by performing the procedure described a few posts ago is solely a function of wavelength, because it does not need a reference illuminant in order to be computed. The camera's gamut will have a native 'neutral' point, just like monitors have a native color temperature before manufacturers adjust it to a standard reference.
    Therefore you can calculate a camera's gamut without having to know or choose a reference white point.
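    For the record, here is a rough Python/numpy sketch of that procedure. The spectral sensitivity curves and the camera matrix are made up purely for illustration; the point is only that each monochromatic sample lands on an xy chromaticity with no illuminant or white point entering the calculation (intensity cancels out in the division).

        import numpy as np

        wavelengths = np.arange(400, 701, 10)            # nm, visible range

        def gaussian(x, mu, sigma):
            return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        # Made-up camera spectral sensitivities (not a real sensor)
        r = gaussian(wavelengths, 600, 40)
        g = gaussian(wavelengths, 540, 45)
        b = gaussian(wavelengths, 460, 35)

        # Placeholder camera-RGB -> XYZ matrix (illustrative only)
        CAM_TO_XYZ = np.array([[0.6, 0.3, 0.1],
                               [0.2, 0.7, 0.1],
                               [0.0, 0.1, 0.9]])

        for wl, rgb in zip(wavelengths, np.stack([r, g, b], axis=1)):
            X, Y, Z = CAM_TO_XYZ @ rgb
            s = X + Y + Z
            x, y = X / s, Y / s                          # intensity cancels out here
            print(f"{wl} nm -> x={x:.3f}, y={y:.3f}")    # one point of the spectral locus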
     
  71. The camera's gamut will have a native 'neutral' point, just like monitors have a native color temperature before manufacturers adjust it to a standard reference.​
    I'm not so sure about that, if I understand it correctly (which I doubt I do), but my logic says a camera is somewhat similar to a scanner as far as sensors go. The scanner needs a source input profile of the medium it's capturing, and so I'd think a camera would need the same when capturing a scene.
    Scanners pass on the raw data differently than cameras do. There's even a blog somewhere online that attempts to use a scanner just like a digital camera, though not as elegantly.
    Profiling a scanner uses a color target such as an IT8, either reflective or transmissive, that takes into account different influences on the light, such as the scanner's light source and the film/photo paper medium's substrate and pigments, to arrive at an ICC profile whose color gamut can be measured. However, this is more about measuring the gamut of the medium than the scanner's true color-capturing potential.


    A digital camera would need a color target whose gamut includes a wider range of spectral data than an IT8 target, and a light source having full spectral characteristics, but at what color temperature? A transmissive target would probably be best, but is daylight the only light source with full spectral capability that coincides with D50 or D65, so as to be compatible with the ICC transform math going on?
    You have to ask: without creating this ICC-based camera profile, what is being referenced in forming the previews we DO see in raw converters? The only app I've come across that allows an ICC-based interactive demonstration of this is the Mac-only Raw Developer.
    RD allows a crude, ICC-based manual manipulation of the XYZ numbers behind the raw image preview after demosaicing, allowing the user to see how each number can drastically change HSL in the image. It is similar to what can be done in Photoshop's Color Settings CustomRGB on untagged images. The main limitation is that it relies on our eyes instead of a spectro to tell us if we're close to mapping and characterizing the color-capturing capabilities of the camera.
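    For anyone curious what those CustomRGB-style numbers encode, here is a minimal Python/numpy sketch of the textbook construction of an RGB-to-XYZ matrix from the xy chromaticities of three primaries plus a white point. This is only the standard colorimetric math, not Raw Developer's actual code; the example values are the nominal ProPhoto RGB primaries and a D50 white.

        import numpy as np

        def xyY_to_XYZ(x, y, Y=1.0):
            # Convert an xy chromaticity (at luminance Y) to tristimulus XYZ
            return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

        def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
            # Columns are the XYZ of each primary, before scaling
            M = np.column_stack([xyY_to_XYZ(*xy_r),
                                 xyY_to_XYZ(*xy_g),
                                 xyY_to_XYZ(*xy_b)])
            white = xyY_to_XYZ(*xy_white)
            # Scale each primary so that R=G=B=1 reproduces the chosen white
            S = np.linalg.solve(M, white)
            return M * S

        # Example: nominal ProPhoto RGB primaries with a D50 white point
        M = rgb_to_xyz_matrix((0.7347, 0.2653), (0.1596, 0.8404),
                              (0.0366, 0.0001), (0.3457, 0.3585))
        print(M)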
    Below is a demonstration of how changing these numbers influences the preview, though not so drastically in relation to the illuminant. It's corrected for gamma, so the preview doesn't reflect a linear characteristic. Just for show, folks.
    [attached screenshot: 00XVte-291983584.jpg]
     
  72. Tim, that's brilliant software. I don't have a Mac. What camera are those the coordinates for? Is there an equivalent for PC?
    Profiling a scanner uses a color target such as an IT8 either reflective or transmissive that takes into account different light emitting influences such as the scanner light source and film/photo paper medium substrate and pigments to arrive at an ICC profile whose color gamut can be measured.​
    Yes, a scanner needs to know the characteristics of its light (with a color temperature and reference white) in order to define its gamut, because the illuminant does not contain all wavelengths in the visible spectrum, only the subset emitted by the scanner's light that makes it through the various transmissive/reflective layers to the sensor. Therefore a simplified answer to the question 'what is the gamut of my scanner/printer?' is 'it depends on the color of the light that the scanner uses, its intensity, and the properties of the media'. Going to an extreme, if the light in a weird scanner were a single-frequency red laser, the gamut of the scanner would be represented by a dot in the red region of the xy chromaticity diagram, no matter what fantastic color pictures were scanned. Another scanner with a green laser as a light would show a small green dot as its gamut.
    Cameras are different because they do not have a built in light source of constant and known properties. The light source can be of any color and intensity and it changes from picture to picture, spanning the whole visible spectrum. In the second and third of your examples the gamut of the camera (defined by the triangle with the same R, G and B coordinates) has not changed, but the white point has, under presumed different lighting conditions. Under the unknown lighting conditions that the shot was taken in, that triangle represents the range of colors that the camera captured, the gamut. To define the triangle we did not need to know the properties of the source light or its white point. [We would need to know it, instead, if we wanted to reproduce as accurately as possible the color we captured: if the light's white point were D65, then the real color was approximately this; if instead it were D50, then it was that. But in this discussion we are not trying to get accurate colors to reproduce a particular scene. We are trying to determine the range of colors the sensor/camera can capture in any condition].
    A color target with a wider range than shown would have to be used to better measure the camera's color capabilities.​
    Exactly. I would add: and use a light source that includes all wavelengths in the visible spectrum. Do that, feed it to your brilliant piece of software, and you would have the camera's gamut. What color temperature was that light source? We don't care, we have our gamut. What if the light source were a single red laser? It would show that the camera is able to capture red. But would that be the camera's gamut? No, the camera is able to capture and reproduce many more colors, as described above.
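    A quick numerical check of the point about the triangle versus the white point, using an invented camera matrix in Python/numpy: changing the white-balance gains (the presumed illuminant) moves where R=G=B lands, but leaves the chromaticities of the three camera primaries, and hence the triangle, where they were.

        import numpy as np

        CAM_TO_XYZ = np.array([[0.6, 0.3, 0.1],          # invented camera matrix
                               [0.2, 0.7, 0.1],
                               [0.0, 0.1, 0.9]])

        def xy(v):
            X, Y, Z = v
            return (round(X / (X + Y + Z), 3), round(Y / (X + Y + Z), 3))

        for gains in ([1.0, 1.0, 1.0], [2.2, 1.0, 1.5]):  # two presumed illuminants
            M = CAM_TO_XYZ @ np.diag(gains)
            primaries = [xy(M[:, i]) for i in range(3)]   # triangle vertices: unchanged
            white = xy(M @ np.ones(3))                    # where R=G=B lands: it moves
            print(gains, primaries, white)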
     
    The scanner needs a source input profile of the medium it's capturing, and so I'd think a camera would need the same when capturing a scene.​
    The camera does not need a source input profile to define its gamut, because the medium is the light itself. On the other hand, the scanner is a closed system made up of a light source and various reflective/refractive/transmissive surfaces that change/limit the light as it travels from the light source until it finally reaches the sensor. Think of the camera as the sensor inside the scanner, without the constraint of a fixed light source and all the other light-changing factors.
    A digital camera would need a color target whose gamut includes a wider range of spectral data than an IT8 target, and a light source having full spectral characteristics, but at what color temperature?​
    Instead of bouncing your source light off a target, why not shine it directly into the camera, being careful not to saturate the sensor? If I correctly understand how your software works, you do not need a specific target to define the gamut, just enough light from it to cover the entire visible spectrum. What better than the light from the source itself, without it being selectively absorbed by your target? Remember, intensity is not a variable in xy chromaticity diagrams, and the color temperature of the light is irrelevant (as long as it contains wavelengths from the whole visible spectrum). Alternatively you can use the procedure I outlined in a previous post, shining light directly on the sensor, one wavelength at a time.
    You have to ask: without creating this ICC-based camera profile, what is being referenced in forming the previews we DO see in raw converters?​
    In a raw converter the default color space and automatically generated reference white point determine what you see in the previews. You can of course change these at will, since the raw data is independent of both.
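    As a loose sketch of what that might look like, not any particular converter's code: the white-balance gains, camera matrix, and simple display gamma below are all invented for illustration.

        import numpy as np

        demosaiced = np.array([0.32, 0.41, 0.28])        # linear camera RGB for one pixel
        wb_gains = np.array([2.1, 1.0, 1.6])             # "as shot" multipliers chosen by the converter
        CAM_TO_WORKING = np.array([[ 1.8, -0.6, -0.2],   # placeholder camera -> working-space matrix
                                   [-0.3,  1.5, -0.2],
                                   [ 0.0, -0.4,  1.4]])

        linear = CAM_TO_WORKING @ (demosaiced * wb_gains)
        linear = np.clip(linear, 0.0, 1.0)               # preview is bounded by the chosen space
        preview = linear ** (1 / 2.2)                    # simple gamma in place of a real tone curve
        print(preview)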
     
  74. Jack, the Raw Developer custom ICC input profile is a very crude tool/toy. It relies on the user's visual judgement, getting a more pleasant and/or accurate-to-scene color as a way of finding a color glove that fits the unknown color descriptors of that particular scene captured by the camera's sensor response. It doesn't guarantee that all colors are covered, or that they will look correct in every scene captured using this profile, even for scenes shot under the same light source. The tool also requires color management to be on for the preview to change as it does. I'm not so sure this is a very accurate way to go about this, but it seems to work.
    The app allows turning off color management, giving a dark, linear-response preview much like you get when profiling a scanner, but then the ICC tool is disabled. The linear preview changes the saturation and hue noticeably compared to fiddling with the XY locus graph of a normalized, gamma-corrected preview.
    It's been a while since I played around with RD and this tool, but fiddling with it today I found that the previews are generated by assigning the newly created input profile's XY coordinates to the display profile, instead of to what I thought was the output profile, which I have selected as ProPhotoRGB. I tested this by selecting Linear RIMM RGB v4 (ProPhotoRGB XY colorant/white point numbers with 1.0 gamma), which shows up as a selection in the tool's drop-down menu, and the preview's saturation goes off the scale, as if assigning ProPhotoRGB to sRGB; in this case it's assigning to my iMac profile, which is close to sRGB. (SEE BELOW) This limits the use of this tool to the display's gamut, so it may not be very accurate for all possible colors captured.
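    A small numeric illustration of the underlying effect: the same RGB triplet interpreted under sRGB primaries versus ProPhoto-like primaries lands on quite different chromaticities (the wide-primary reading sits much further from its white point), which is why swapping the profile the numbers are interpreted under swings the preview's saturation so wildly. The matrices are approximate published sRGB (D65) and ProPhoto/ROMM (D50) linear RGB-to-XYZ matrices; everything else is illustrative.

        import numpy as np

        # Approximate published linear RGB -> XYZ matrices
        SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                                [0.2126, 0.7152, 0.0722],
                                [0.0193, 0.1192, 0.9505]])     # sRGB, D65
        PROPHOTO_TO_XYZ = np.array([[0.7977, 0.1352, 0.0313],
                                    [0.2880, 0.7119, 0.0001],
                                    [0.0000, 0.0000, 0.8249]]) # ProPhoto/ROMM, D50

        def xy(XYZ):
            return XYZ[:2] / XYZ.sum()

        rgb = np.array([0.8, 0.3, 0.3])                        # one and the same triplet
        for name, M, white in (("sRGB", SRGB_TO_XYZ, (0.3127, 0.3290)),
                               ("ProPhoto", PROPHOTO_TO_XYZ, (0.3457, 0.3585))):
            chroma = xy(M @ rgb)
            dist = np.hypot(*(chroma - np.array(white)))       # distance from that space's white
            print(f"{name:9s} x,y = {chroma.round(3)}  distance from white = {dist:.3f}")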
    I think it would be difficult to use this tool on just a photograph of a full-spectrum light source as a target. Not much of a preview to visually go on. You'd need to photograph a color target (transmissive preferably... refracted light from a prism?) containing a wide range of colors transitioning into other hues with wide gradations, instead of colors butting up against each other. Any daylight source, such as from a window or direct sunlight, would suffice. All the math calculation matrices expect a neutral-looking light source, in other words no captures at sunset.
    It's a neat learning tool though.
    [attached screenshot: 00XWHq-292307584.jpg]
     
  75. This limits the use of this tool to the display's gamut, so it may not be very accurate for all possible colors captured.​
    Ah, of course. Never mind the software experiments, then, and change references to the gamut in the previous posts from that 'of the camera' to that 'of the camera that the display is able to show'.
     
