Advantage to using a color space larger than capture/print?



By Gamut I mean the range of color and tonal values that a particular device (eye, scanner, camera, monitor, printer, painter) is capable of detecting or producing. For obvious reasons we are mainly interested in the visible portion of these.

Every camera can only capture a finite number of colors and tonal values: that set of color and tonal values is by definition the camera's unique Gamut.
Every printer can only reproduce a finite number of colors and tonal values: that set is the printer's unique Gamut.

In an ideal world, the two Gamuts would coincide (and also coincide with the Gamut of your Eyes and your Monitor) so that a color space could be designed to contain all detected and reproduced colors and tonal values snugly. But in fact they are VERY different.

One approach is to use a small color space, like sRGB, that contains only the subset of colors/tones that are easy to detect/reproduce and therefore appear in BOTH the camera AND printer Gamuts. This is the easiest and safest way to proceed, but at the price of mistreating or not having available (see link below) about two thirds of visible color/tone combinations.

At the opposite end, an alternative approach is to use a huge color space, like ProPhotoRGB, that is so big it can contain virtually anything, and certainly almost all of the colors/tones present in both the camera and printer Gamuts, with a lot of room to spare. During processing the colors/tones have a lot of room to move around in. However, when a ProPhoto document is printed, all the colors/tones need to squeeze through the printer's Gamut, potentially resulting in a lot of guesswork in translation (see here for a visual explanation of rendering: http://graphics.stanford.edu/courses/cs178-10/applets/gamutmapping.html ). The further you are outside the printer's gamut, the harder it is for it not to mistreat the colors/tones in your picture. And there is a lot of room in ProPhoto to be off.

A third approach is to split the difference, and choose a color space that is bigger than sRGB but smaller than ProPhotoRGB: just big enough to contain most of the camera AND the printer Gamuts, with reduced room for error. BetaRGB is one such color space. There are others. What do people use and why?
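To make the mismatch concrete, here is a minimal numpy sketch. It uses Bruce Lindbloom's published ProPhoto-to-XYZ and XYZ-to-sRGB matrices and deliberately skips the D50-to-D65 chromatic adaptation step to keep things short, so treat it as an illustration only: a fully saturated ProPhoto green lands well outside sRGB.

```python
import numpy as np

# Linear ProPhoto RGB -> XYZ, D50 reference white (Lindbloom's matrix)
PROPHOTO_TO_XYZ = np.array([
    [0.7977, 0.1352, 0.0313],
    [0.2880, 0.7119, 0.0001],
    [0.0000, 0.0000, 0.8249],
])

# XYZ (D65) -> linear sRGB (Lindbloom's matrix); the D50->D65 adaptation
# step is skipped here purely for brevity
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

saturated_green = np.array([0.0, 1.0, 0.0])   # fully saturated ProPhoto green
xyz = PROPHOTO_TO_XYZ @ saturated_green
srgb = XYZ_TO_SRGB @ xyz

print(xyz)    # a perfectly valid XYZ color
print(srgb)   # negative and >1 components: not representable in sRGB
```

Components outside 0..1 are exactly the "squeeze through the gamut" situation: some rendering intent has to decide where those values end up.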



Don't know about other programs, but in Lightroom, if you send a TIFF file to Photoshop, you can specify a ProPhoto color space. Then, in PS you do see it in that space, provided you do not have PS set to automatically convert the color space to Adobe RGB or sRGB. I've found that, especially in landscapes, it is beneficial to use ProPhoto, with the more subtle variations of color where they exist, as in autumn or sunset and sunrise images. With printers that can use it, especially when printing at 1200-2400 dots per inch, there is a dramatic difference in color.


Digital cameras don't have a gamut, but rather a color mixing function. Basically, a color mixing function is a mathematical representation of a measured color as a function of the three standard monochromatic RGB primaries needed to duplicate a monochromatic observed color at its measured wavelength. Therefore, the measured pixel values don't even *get* a gamut until they're mapped into a particular RGB space. Before then, *all* colors are (by definition) possible.

 

 

Quote:
As Andrew indicated about digital cameras not having a color gamut, I wonder how the color scientists who created that site measured the gamut of the Nikon D200 and Canon 30D in producing the RGB 3D gamut diagram included in the applet.

Probably by taking output referred (demosaiced and rendered) data, feeding it to a product that builds an ICC profile and plotting its gamut.


Author “Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


@Andrew: Interesting, I never thought about it at this level. Do human eyes have an average gamut? If not, how can we say, for instance, that VISIBLE colors fit into the LAB color space? I guess I implicitly assumed that they do, even though the brain derives its color information from three non-co-located cones (sensels) similar to the way a camera sensor/system works. Therefore would it not have a gamut too?

I am not a color scientist, so I have no idea how they actually go about defining the camera's gamut in practice. However, I can take a wild *simplified* guess, having spent some time absorbing the Stanford link from a couple of posts ago: take a representative small number of R*, G* and B* sensels in a circle at the center of the camera's sensor (heck, in fact why not take the whole sensor?), and illuminate it with successive beams of uniform light of wavelength from 400 nm to 700 nm, each time measuring the level (value) of the average R*, G* and B* sensels, thus obtaining R*, G* and B* sensitivity functions. Plot the corresponding locus in 3D for fun. Transform the locus to XYZ primaries through the appropriate wavelength-dependent matching functions, project it onto the X+Y+Z=1 plane, et voilà: a camera gamut without first having to map to a particular RGB space. No?
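For what it's worth, the last step of that thought experiment (projecting onto the X+Y+Z=1 plane) is easy to sketch in code. The per-wavelength XYZ estimates below are made-up placeholders, purely to show the arithmetic:

```python
import numpy as np

# hypothetical per-wavelength XYZ estimates for a camera, obtained by
# transforming measured R*, G*, B* responses (invented numbers, illustration only)
xyz_locus = np.array([
    [0.15, 0.02, 0.70],   # ~450 nm
    [0.05, 0.65, 0.25],   # ~530 nm
    [0.90, 0.45, 0.00],   # ~610 nm
])

# project onto the X+Y+Z=1 plane: x = X/(X+Y+Z), y = Y/(X+Y+Z)
sums = xyz_locus.sum(axis=1, keepdims=True)
xy = xyz_locus[:, :2] / sums
print(xy)   # points to plot on the xy chromaticity diagram
```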


Quote:
Do human eyes have an average gamut?

That horseshoe plot you see often (the CIE chromaticity diagram) is based on the "standard observer", a theoretical human's vision. Anything outside that plot is out of gamut, not visible. This is all based on science done in the 1930s with a group of male volunteers viewing samples of projected colors. Lab is a variant of CIE XYZ 1931 that accounts for (well, attempts to account for) perceptual uniformity.

This may help:

Quote:
In addition to device-dependent color spaces, there are also device-independent color spaces. These color spaces encompass all of human vision. The most common is called CIELAB (or L*a*b*; often written as LAB, although technically the * should be used). Back in 1931, the CIE (Commission Internationale de l'Éclairage, also known as the International Commission on Illumination), a group of color scientists, conducted a series of experiments and tests on humans to determine how they perceive color. The tests involved showing groups of volunteers a sample color under very controlled conditions whereby each subject adjusted the intensity of red, green, and blue lights until the mix of the three matched the sample color. This allowed the CIE to specify precisely the stimulus response of the human eye.

The CIE came up with the term standard observer to describe a hypothetical average human viewer and his or her response to color. Furthermore, the results of these tests produced a mathematical model of a color space formulated not on any real-world device, but rather on how we humans (the standard observer) actually perceive color. This core color model is called CIE XYZ (1931). This is the color model from which all other device-independent color models are created. Like the RGB color model with three additive primaries, CIE XYZ uses three spectrally defined imaginary primaries: X, Y, and Z. These X, Y, and Z primaries may be combined to describe all colors visible to the standard observer. Also in 1931, a synthetic space called CIE xyY was created, which itself is derived from CIE XYZ. In 1976, CIELAB and CIELUV were added to the mix of these device-independent color spaces. The CIELAB color space is a synthetic, theoretical color space derived from XYZ. Unlike the original, CIELAB has the advantage of being perceptually uniform (sort of . . .). That simply means that a move of equal value in any direction at any point within the color space produces a similar perceived change to the standard observer.

The XYZ color space is based on three quantities or stimuli. The geek term for describing this is tristimulus values (three stimuli). Technically the term tristimulus values refers to the XYZ values of the original CIE XYZ color model, although you will often hear people describe tristimulus values when defining a color in RGB or CMY (or using any three values). This is incorrect. Since our aim is to keep the color-geek-speak to a minimum, it's not important to know the differences in the various CIE constructed color models, but rather to recognize that a color space such as CIELAB is based on how we see color. What you should keep in mind here is that using a set of three values, any color can be specified exactly and mapped in three-dimensional space to show its location in reference to all other colors. This can be useful! There are no capture or output devices that directly reproduce CIELAB; however, this color space allows us to translate any color from one device to another.
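Since the excerpt says CIELAB is derived from XYZ, here is a rough sketch of that derivation in code, using the standard CIE formula and a D50 reference white (as ICC profiles do). Treat it as an illustration, not a reference implementation:

```python
import numpy as np

def xyz_to_lab(xyz, white=(0.9642, 1.0000, 0.8251)):
    """Standard CIE XYZ -> L*a*b* conversion, relative to a D50 white point."""
    xyz = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)

    eps, kappa = 216 / 24389, 24389 / 27          # CIE constants
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)

    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b

# equal numeric steps in L*, a*, b* were meant to be roughly equal perceptual steps
print(xyz_to_lab([0.2, 0.3, 0.4]))
```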

Author “Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


Quote:
Transform the locus to XYZ primaries through the appropriate wavelength-dependent matching functions, project it onto the X+Y+Z=1 plane, et voilà: a camera gamut without first having to map to a particular RGB space. No?

See:
http://www.openphotographyforums.com/forums/showthread.php?t=12600

Also, a useful response to the question "does raw have a color space": a conversation among a number of color geeks produced this reply from Jack Holm, former head color scientist for digital cameras at HP and a fellow member of the ICC digital camera group:

 

Quote:
- Unless the camera spectral sensitivities are colorimetric, they do not define the intrinsic colorimetric characteristics of an image.

- Also, primaries are for synthesis and can cause some wrong thinking if discussed in relation to analysis.

The second paragraph of Thomas' response is important.

The short answer to the question is:

Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single "correct" relationship to colorimetry.

The same thing could be said about film negative densities.

Someone has to make a choice of how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the "scene" you are photographing has only three independent colorants, like with film scanning).

A purist might argue that a color space not based on colorimetry is not really a color space because it is not an assignment of numerical values to colors, defining colors as a human sensation. In the standards committees we decided it is useful to be able to talk about non-colorimetric color spaces, so we allow them and use "colorimetric color spaces" when appropriate.

Jack
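To make the "no single correct conversion" point concrete, here is a toy example; both camera-to-XYZ matrices are invented, not taken from any real profile. The same raw triplet receives different colorimetric interpretations depending on which conversion someone chose to build:

```python
import numpy as np

raw = np.array([0.42, 0.31, 0.12])     # one demosaiced, white-balanced raw triplet

# two invented camera->XYZ matrices, standing in for two different profiling choices
matrix_a = np.array([[0.65, 0.28, 0.05],
                     [0.27, 0.69, 0.04],
                     [0.02, 0.10, 0.88]])
matrix_b = np.array([[0.72, 0.20, 0.06],
                     [0.30, 0.62, 0.08],
                     [0.01, 0.15, 0.84]])

print(matrix_a @ raw)   # one plausible colorimetric interpretation
print(matrix_b @ raw)   # another; neither is "the" correct one
```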

Author “Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


Jack, that sounds like an effective way to plot the camera's gamut, but from my point of view (and I too am not a color scientist) a gamut does not a picture make. The allocation of density, hue and saturation from one pixel to the next as recorded by the sensor determines the quality of depth and realism in a captured image.

What scene or color target and lighting arrangement presents enough color variation and gamut-boundary parameters to determine how many colors can be captured by a digital camera? Andrew answered part of my question with the gamut plotting of the ICC profile derived from the camera's response after demosaicing, but he didn't say what color target was used to test this. Subjecting the camera's sensor to just white light and separate RGB targets wouldn't tell you how many variations of HSL the camera is sensitive to, and THAT is what makes an image look like an image.

Since our eyes adapt to colors next to each other within a scene, similar to the way complementary colors affect our eyes in seeing richness and depth, it's important that the camera be just as sensitive to this optical phenomenon. How would that be measured, and is that optical effect part of color gamut plotting? If we see a rich teal transitioning to a light turquoise in a scene but the camera records only one clump of green with a tinge of cyan, how would that be factored into assessing its gamut?

This is why all this color science stuff isn't very useful, because it doesn't address, measure and calculate this VERY REAL aspect of human perception. So how do these 3D gamut plots help us in producing an image with depth, richness and clarity? I haven't seen any hard evidence that all this math going on under the hood is having any effect in allowing us to control this. There's too much hidden to know for sure.

 


@Tim (edit: this refers to your post on the previous page - didn't see your newer post until after I posted this one): The camera's gamut is bigger than the gamut of human vision because its sensor is able to detect some wavelengths that we can't (e.g. infrared).

I think that Andrew meant that you can't say that a camera has an RGB gamut off the bat because the sensitivity functions (R*, G* and B*) derived from the three types of differently-filtered sensels unique to each camera sensor do not correspond to those derived from a standard set of primaries (correct?). However, IMHO, it can have a gamut in XYZ space, and that's really what we are after.

In my uneducated humble opinion, a camera has a unique locus in non-standard R*G*B* space which can be transformed into a locus in standard, positive-only XYZ space which, in various forms, represents the range of color/tonal values that the camera is able to capture: its gamut. This XYZ locus can for instance be plotted on chromaticity diagrams in the shape of a two-dimensional gamut outline and superimposed onto human vision, printer and other gamuts to help choose an appropriate working color space. So, who has done this for a D90? :-)

As far as measurement goes, to determine the camera's gamut I believe that all you would need to do in the thought experiment above is to read the RAW data of each sensel every time a beam of different wavelength is shone on the sensor (while of course knowing the layout of R*, G* and B* filters on the chip) so that the relative R*, G* and B* sensitivity functions could be derived.
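As an illustration of that read-out step, a short sketch follows; the Bayer layout and the raw numbers are invented, and real values would come from the raw file for each monochromatic exposure:

```python
import numpy as np

# invented 4x4 patch of raw sensel values for one monochromatic exposure
raw_patch = np.array([
    [812, 133, 798, 140],
    [129,  25, 141,  22],
    [820, 128, 805, 131],
    [135,  27, 130,  24],
], dtype=float)

# matching RGGB Bayer layout: which color filter sits over each sensel
cfa = np.array([
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
])

# average response of each channel at this wavelength gives one sample point
# on the R*, G*, B* sensitivity curves
for channel in "RGB":
    print(channel, raw_patch[cfa == channel].mean())
```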

@Andrew: Thanks for the info.


Clear enough explanation, Jack, but I still don't see how this tells us what color boundaries any given digital sensor is going to capture. When I look at a scene that has saturated colors I still don't know if and how far the camera is going to screw them up, and whether I'll be able to fix it in post and/or set exposure low enough NOT to induce saturation blooming/clipping when capturing in Raw.

Shooting intensely orange Pomegranate flowers lit directly by late afternoon sun on a cloudless day can be a challenge, especially with establishing correct exposure.

Here are others' takes on capturing the same flower. See below what I had to deal with shooting JPEG.

http://www.google.com/images?hl=en&newwindow=1&safe=off&nfpr=1&q=pomegranate+flower&um=1&ie=UTF-8&source=univ&ei=IFS7TLytEoWBlAeK4andDQ&sa=X&oi=image_result_group&ct=title&resnum=1&ved=0CCUQsAQwAA

Notice at that link all the orange variances caused by lighting, exposure and probably in-camera processing. What's the color gamut of that flower scene under the lighting conditions I described above, and what did I have to do to recover it shooting Raw, as demonstrated below?

[attached image: 00XV5K-291315684.jpg]


Quote:
In my uneducated humble opinion, a camera has a unique locus in non-standard R*G*B* space which can be transformed into a locus in standard, positive-only XYZ space which, in various forms, represents the range of color/tonal values that the camera is able to capture: its gamut.

That only applies at a specific white point... the spectral response of a camera will be different at different light source color temps... that's why it's folly to try to claim a camera has a fixed gamut. It doesn't.
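One simplified way to see that illuminant dependence in the raw pipeline is the white balance step; the multipliers below are invented for illustration, but they show how the same raw triplet ends up in different places, and clips differently, under different light sources:

```python
import numpy as np

raw = np.array([0.55, 0.40, 0.18])            # one raw triplet, 0..1 scale

wb_daylight = np.array([2.0, 1.0, 1.5])       # invented daylight multipliers
wb_tungsten = np.array([1.3, 1.0, 2.6])       # invented tungsten multipliers

for name, wb in [("daylight", wb_daylight), ("tungsten", wb_tungsten)]:
    balanced = raw * wb
    print(name, balanced, "clipped!" if balanced.max() > 1.0 else "")
```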

Pro Photo RGB is the only color space that can possibly contain ALL of the colors a camera can capture in raw and ALL the colors a modern printer can print. That's what makes Pro Photo RGB in 16 bit a really useful color space. All the other discussion is interesting, particularly the attempts at trying to pick a working space that maximizes the actual usable colors. But considering both Lightroom and Camera Raw use Pro Photo RGB color coordinates (and a linear gamma), any raw capture will first be processed in the pipeline to be Pro Photo RGB. Any other transform will be a secondary transform, which is not optimal. Even if you want to use Beta RGB, you'll first need to process into the Pro Photo RGB working space and then transform into Beta RGB.

Processing into sRGB and even Adobe RGB ensures that some colors your camera can capture will be clipped. Some of those clipped colors might be printable... the Epson 7800/9800 and later Epson printers can already print colors outside of Adobe RGB, let alone the newer 7900/9900 printers with orange and green inks to extend the printer's gamut.

Also, to be clear, while it's useful to try to maintain an optimized RGB working space that isn't "too big", it's been my experience that trying to maintain an "efficient" working space simply isn't worth the hassle. Pro Photo RGB in 16 bit has been my working space for about 8 years and I've never found any problems or issues based on Pro Photo RGB as my working space.

So, if you are looking for a working space that provides the ability to contain all of the colors a camera can capture in raw and output to recent inkjet printers (let alone future printers), there is only one choice: Pro Photo RGB in 16 bit.
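A rough way to check that kind of containment numerically is sketched below, using Bruce Lindbloom's published XYZ-to-RGB matrices, ignoring chromatic adaptation between the spaces' white points for brevity, and using a hypothetical saturated cyan as the test color. Treat it as an illustration rather than a gamut calculator:

```python
import numpy as np

# XYZ -> linear RGB matrices (Bruce Lindbloom); white point adaptation omitted
XYZ_TO = {
    "sRGB":      np.array([[ 3.2406, -1.5372, -0.4986],
                           [-0.9689,  1.8758,  0.0415],
                           [ 0.0557, -0.2040,  1.0570]]),
    "Adobe RGB": np.array([[ 2.0414, -0.5649, -0.3447],
                           [-0.9693,  1.8760,  0.0416],
                           [ 0.0134, -0.1184,  1.0154]]),
    "ProPhoto":  np.array([[ 1.3459, -0.2556, -0.0511],
                           [-0.5446,  1.5082,  0.0205],
                           [ 0.0000,  0.0000,  1.2118]]),
}

cyan = np.array([0.18, 0.45, 0.50])   # a saturated cyan, hypothetical XYZ values

for name, matrix in XYZ_TO.items():
    rgb = matrix @ cyan
    inside = np.all((rgb >= 0.0) & (rgb <= 1.0))   # outside 0..1 means it must be clipped/remapped
    print(f"{name:9s} {rgb.round(3)}  inside gamut: {inside}")
```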


Jeff Schewe wrote:
All the other discussion is interesting, particularly the attempts at trying to pick a working space that maximizes the actual usable colors. But considering both Lightroom and Camera Raw use Pro Photo RGB color coordinates (and a linear gamma), any raw capture will first be processed in the pipeline to be Pro Photo RGB. Any other transform will be a secondary transform, which is not optimal. Even if you want to use Beta RGB, you'll first need to process into the Pro Photo RGB working space and then transform into Beta RGB.

Not that I've ever tried anything other than ACR, but I believe some RAW converters do allow a user to pick any color space -- would that address the problem of "double-handling"?

How about the Adobe RGB and sRGB options inside ACR? If a given picture has a small gamut that would fit inside sRGB, would ACR develop the RAW *directly* into this color space, or is ProPhoto always the first step? (I have read somewhere that ProPhoto is ACR's "native" color space, but it wasn't explained what that means in terms of ACR's internal workings.)

Quote:
Also, to be clear, while it's useful to try to maintain an optimized RGB working space that isn't "too big", it's been my experience that trying to maintain an "efficient" working space simply isn't worth the hassle. Pro Photo RGB in 16 bit has been my working space for about 8 years and I've never found any problems or issues based on Pro Photo RGB as my working space.

This sounds very reassuring -- I was starting to seriously doubt the soundness of my bigger-is-better set-and-forget approach -- so cheers Jeff! However, the whole debate was very educational for me, so I will start making exceptions to the rule whenever I feel it justified/beneficial.


Quote:
Not that I've ever tried anything other than ACR, but I believe some RAW converters do allow a user to pick any color space -- would that address the problem of "double-handling"?

So does ACR. Jeff's referring to the underlying RGB color space for processing (ProPhoto with a 1.0 TRC). Every raw processor has some such space. Few tell us what they are using. I believe that in Aperture, it's Adobe RGB primaries. The idea being, if ACR and LR use ProPhoto RGB 1.0 for processing, stick with ProPhoto for encoding the data after rendering. From there, you can select other color spaces (or export from LR or ACR to other spaces if you wish). But you're dealing with ProPhoto whether you like it or not <g>.
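For anyone wondering what the "1.0 TRC" part means in practice: the only difference from normally encoded ProPhoto is the tone curve. A sketch of the ROMM RGB encoding as I understand the published curve (verify against the spec before relying on it):

```python
import numpy as np

def linear_to_prophoto(x):
    """Encode linear (gamma 1.0) ProPhoto values with the ROMM RGB curve."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return np.where(x < 1 / 512, 16 * x, x ** (1 / 1.8))

def prophoto_to_linear(v):
    """Decode ROMM-encoded values back to linear light."""
    v = np.asarray(v, dtype=float)
    return np.where(v < 16 / 512, v / 16, v ** 1.8)

linear = np.array([0.001, 0.18, 0.5])
encoded = linear_to_prophoto(linear)
print(encoded)                       # what ends up in a normal ProPhoto file
print(prophoto_to_linear(encoded))   # round-trips back to the linear values
```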

Author “Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


Gotcha Andrew! -- so even if I select sRGB within ACR itself, it will still first develop my RAW image into ProPhoto, and only then convert it to sRGB, meaning there is no way around this double handling (other than sticking with ProPhoto). Please excuse my lay and most likely not entirely appropriate use of terminology.

...and, furthermore, the same is true about any other RAW converter, some of which may not even disclose what this "original" color space is. Such a "secret" doesn't really inspire my confidence in a given app. Do RD and/or SilkyPix fall into this category, or would you happen to know which color space is their "native" one?

@Tim, Andrew and Jeff: This discussion has been most helpful. If I understand correctly: nice in theory, but the incremental benefit is not worth the hassle, so just stick with ProPhoto. However, since we put all this effort into understanding things so far, I'd be curious to learn a bit more from an academic standpoint, if the forum will humor me.

1) Jeff: if there were no 'double handling' penalty, would your answer still be the same?

2) Andrew: are you talking about a color space or color coordinates based on color primaries? My understanding is that there is no double conversion. If there is, I stand corrected, but if Aperture is internally restricted to the rather smallish Adobe RGB color space, it seems like a product-limiting choice. On the other hand, if we are talking about choosing a set of axes based on specific primaries for raw calculations, this needs to be done and certainly does not create a 'double handling' penalty per se. If Aperture used color coordinates based on AdobeRGB primaries for intermediate calculations while allowing for negative intermediate results, there would NOT be any penalty for 'double handling' once a new color space is chosen by the user, because they would simply be three axes to work with, with no boundaries: every operation at this level would not be constrained to a color space and would be virtually completely reversible. Nothing would be 'converted' into another color space and no 'rendering intent' would mistreat your data.

3) Jeff: my understanding is that for us to draw the locus of spectral colors of a camera from raw data in XYZ coordinates (and hence the camera's gamut) we do not need to select a white point. The only variable needed is the wavelength of the light used in subsequent steps to collect the data - and a number of set operations that need to allow for negative intermediate results. A white point is instead needed the moment that a color space has been chosen, with its set of standard primaries, in order to define the relative RGB working cube. In other words, when the camera locus is projected onto the X+Y+Z=1 plane (or xy chromaticity diagrams) it shows the shape of the gamut without specifying where the white point falls - but it does show the camera's gamut. Do I understand correctly?

4) Lastly, I'd be interested to know if integer or floating point math is used at this level.

Cheers,
Jack
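Regarding question 4, a toy illustration of why it matters (the numbers are arbitrary): floating-point intermediates can carry negative or greater-than-1.0 values through a chain of transforms, while an integer encoding has to clip them on the way in.

```python
import numpy as np

# intermediate result of some color transform: components outside 0..1
intermediate = np.array([-0.12, 0.75, 1.30])

# floating point: keep the values; a later inverse transform loses nothing
as_float = intermediate.copy()

# 16-bit unsigned integer encoding: must clip into 0..1 before scaling
as_uint16 = np.round(np.clip(intermediate, 0.0, 1.0) * 65535).astype(np.uint16)

print(as_float)            # [-0.12  0.75  1.3 ]
print(as_uint16 / 65535)   # roughly [0, 0.75, 1]: the out-of-range data is gone
```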


Andrew, what do you mean by both? A color space defines a finite space. A set of 3D coordinates (based on standard primaries or not) defines an unlimited space. In one case you are limiting yourself and mistreating data that falls outside of it. In the other you are not (except perhaps in rounding). Which do you mean?

Quote:
...and, furthermore, the same is true about any other RAW converter, some of which may not even disclose what this "original" color space is.

Tomek, the more I think about it the more I don't believe that the vast majority of raw converters out there CONVERT the raw data to a secret internal calibrated color space before converting it again to the default one that can be chosen by the user. It is possible that Andrew is referring to an intrinsic camera color space, effectively a 3D map of its gamut at various intensities, but that would not require any lossy conversion. It would be the equivalent of wrapping a boundary around raw data in the shape of the 3D gamut of the camera to ensure that in early raw processing (we are beyond raw data now) no data could escape the gamut that the camera can physically produce. Anyone know?

 


Quote:
3) Jeff: my understanding is that for us to draw the locus of spectral colors of a camera from raw data in XYZ coordinates (and hence the camera's gamut) we do not need to select a white point... Do I understand correctly?

I think I have found the answer to question 3) indirectly here: http://www.brucelindbloom.com/Eqn_Spect_to_XYZ.html. Contrary to the reflective situation of, say, printers and scanners, which require a reference illuminant (with a reference color temperature/white point), the camera gamut that can be calculated by performing the procedure described a few posts ago is solely a function of wavelength: it does not need a reference illuminant in order to be computed. The camera's gamut will have a native 'neutral' point, just like monitors have a native color temperature before manufacturers adjust it to a standard reference.

Therefore you can calculate a camera's gamut without having to know or choose a reference white point.
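For reference, the emissive case on Lindbloom's page boils down to integrating the measured spectrum against the 1931 color matching functions, with no illuminant term anywhere. A minimal sketch (the spectral samples and CMF values are rough stand-ins; real code would use the official tables):

```python
import numpy as np

# wavelengths of the measurement (nm) and the light's spectral power at each
# -- invented numbers standing in for real measurements
wavelengths = np.array([450, 500, 550, 600, 650])
power       = np.array([0.20, 0.55, 0.90, 0.70, 0.30])

# CIE 1931 color matching functions sampled at the same wavelengths
# -- approximate stand-in values; use the official tabulated observer data in practice
x_bar = np.array([0.336, 0.005, 0.433, 1.062, 0.284])
y_bar = np.array([0.038, 0.323, 0.995, 0.631, 0.107])
z_bar = np.array([1.772, 0.272, 0.009, 0.001, 0.000])

# emissive case: integrate power against the matching functions directly --
# no reference illuminant or white point is involved
X = np.sum(power * x_bar)
Y = np.sum(power * y_bar)
Z = np.sum(power * z_bar)
print(X, Y, Z)
```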


Quote:
The camera's gamut will have a native 'neutral' point, just like monitors have a native color temperature before manufacturers adjust it to a standard reference.

I'm not so sure about that, if I understand it correctly (which I doubt I do), but my logic says a camera is somewhat similar to a scanner as far as sensors go. The scanner needs a source input profile of the medium it's capturing, and so I'd think a camera would need the same when capturing a scene.

Scanners pass on the Raw data differently than cameras do. There's even a blog somewhere online that attempts to use a scanner just like a digital camera, though not as elegantly.

Profiling a scanner uses a color target such as an IT8, either reflective or transmissive, that takes into account different light-emitting influences such as the scanner light source and the film/photo paper medium's substrate and pigments, to arrive at an ICC profile whose color gamut can be measured. However, this is more about measuring the gamut of the medium than the scanner's true color-capturing potential.

A digital camera would need a color target whose gamut would include a wider range of spectral data than an IT8 target, and a light source having full spectral characteristics, but at what color temperature? A transmissive target would probably be best, but is daylight the only light source with full spectral capability that coincides with D50 or D65 to be compatible with the ICC transform math going on?

You have to ask: without creating this ICC-based camera profile, what is being referenced in forming the previews we DO see in Raw converters? The only app I've come across that allows an ICC-based interactive demonstration of this is the Mac-only Raw Developer.

RD allows a crude ICC-based manual manipulation of XYZ for the Raw image preview after demosaicing, allowing the user to see how each number can drastically change HSL in the image. It is similar to what can be done in Photoshop's Color Settings Custom RGB on untagged images. The main limitation is that it relies on our eyes instead of a spectro to tell us if we're close to mapping and characterizing the color-capturing capabilities of the camera.

Below is a demonstration of how changing these numbers influences the preview, but not so drastically in relation to the illuminant. It's corrected for gamma, and so the preview doesn't reflect a linear characteristic. Just for show, folks.

[attached image: 00XVte-291983584.jpg]


Tim, that's brilliant software. I don't have a Mac. What camera are those the coordinates for? Is there an equivalent for PC?

 

Quote:
Profiling a scanner uses a color target such as an IT8 either reflective or transmissive that takes into account different light emitting influences such as the scanner light source and film/photo paper medium substrate and pigments to arrive at an ICC profile whose color gamut can be measured.

Yes, a scanner needs to know the characteristics of its light (with a color temperature and reference white) in order to define its gamut, because the illuminant does not contain all wavelengths in the visible spectrum, but only the subset emitted by the scanner's light which makes it through the various transmissive/reflective layers to the sensor. Therefore a simplified answer to the question 'what is the gamut of my scanner/printer?' is 'it depends on the color of the light that the scanner uses, its intensity, and the properties of the media'. Going to an extreme, if the light in a weird scanner were a single-frequency red laser, the gamut of the scanner would be represented by a dot in the red region of the xy chromaticity diagram, no matter what fantastic color pictures were scanned. Another scanner with a green laser as a light would show a small green dot as its gamut.

Cameras are different because they do not have a built-in light source of constant and known properties. The light source can be of any color and intensity and it changes from picture to picture, spanning the whole visible spectrum. In the second and third of your examples the gamut of the camera (defined by the triangle with the same R, G and B coordinates) has not changed, but the white point has, under presumed different lighting conditions. Under the unknown lighting conditions that the shot was taken in, that triangle represents the range of colors that the camera captured, the gamut. To define the triangle we did not need to know the properties of the source light or its white point. [We would need to know it, instead, if we wanted to reproduce as accurately as possible the color we captured: if the light's white point were D65, then the real color was approximately this; if instead it were D50, then it was that. But in this discussion we are not trying to get accurate colors to reproduce a particular scene. We are trying to determine the range of colors the sensor/camera can capture in any condition.]

Quote:
A color target with a wider range than shown would have to be used to better measure the camera's color capabilities.

Exactly. I would add: and use a light source that includes all wavelengths in the visible spectrum. Do that, feed it to your brilliant piece of software, and you would have the camera's gamut. What color temperature was that light source? We don't care, we have our gamut. What if the light source were a single red laser? It would show that the camera is able to capture red. But would that be the camera's gamut? No, the camera is able to capture and reproduce many more colors, as described above.

 


Quote:
The scanner needs a source input profile of the medium it's capturing and so I'd think a camera would need the same when capturing a scene.

The camera does not need a source input profile to define its gamut because the medium is the light itself. On the other hand, the scanner is a closed system made up of a light source and various reflective/refractive/transmissive surfaces that change/limit the light as it travels from the light source to reach - finally - the sensor. Think of the camera as the sensor inside the scanner, without the constraint of a fixed light source and all the other light-changing factors.

Quote:
A digital camera would need a color target whose gamut would include a wider range of spectral data than an IT8 target and a light source having full spectral characteristics but at what color temperature.

Instead of bouncing your source light off a target, why not shine it directly into the camera, being careful not to saturate the sensor? If I understand how your software works correctly, you do not need a specific target to define the gamut, just enough light from it to cover the entire visible spectrum. What better than the light from the source itself, without it being selectively absorbed by your target? Remember, intensity is not a variable in xy chromaticity diagrams, and the color temperature of the light is irrelevant (as long as it contains wavelengths from the whole visible spectrum). Alternatively you can use the procedure I outlined in a previous post, shining light directly on the sensor, one wavelength at a time.

 

Quote:
You have to ask without creating this ICC based camera profile what is being referenced in forming the previews we DO see in Raw converters?

In a raw converter, the default color space and the automatically generated reference white point determine what you see in the previews. You can of course change these at will, since the raw data is independent of both.


Jack, the Raw Developer custom ICC input profile is a very crude tool/toy. It relies on the user's visual judgement in getting a more pleasant and/or accurate-to-scene color, as a way of finding a color glove that fits the unknown color descriptors of that particular scene as captured by the camera's sensor response. It doesn't guarantee that all colors are covered, or that every scene captured using this profile will look correct, even for scenes shot under the same light source. The tool also requires color management to be on for the preview to change as it does. I'm not so sure this is a very accurate way to go about this, but it seems to work.

The app allows turning off color management, giving a dark linear-response preview much like you get when profiling a scanner, but then the ICC tool is disabled. The linear preview changes the saturation and hue noticeably compared to fiddling with the XY locus graph of a normalized, gamma-corrected preview.

It's been a while since I played around with RD and this tool, but I found out fiddling with it today that the previews are being generated by assigning the newly created input profile's XY coordinates to the display profile, instead of what I thought was the output profile, which I have selected as ProPhotoRGB. I tested this by selecting Linear RIMM RGB v4 (ProPhotoRGB XY colorant/white point numbers with 1.0 gamma), which shows up as a selection space in the tool's drop-down menu, and the preview's saturation goes off the scale as if assigning ProPhotoRGB to sRGB, but in this case it's assigning to my iMac profile, which is close to sRGB. (SEE BELOW) This limits the use of this tool to the display's gamut, so it may not be very accurate for all possible colors captured.

I think it would be difficult to use this tool on just a photograph of a full-spectrum light source as a target. Not much of a preview to visually go on. You'd need to photograph a color target (transmissive preferably... refracted light from a prism?) containing a wide range of colors transitioning into other hues with wide gradations, instead of colors butting up against each other. Any daylight source, such as from a window or direct sunlight, would suffice. All the math calculation matrices expect a neutral-looking light source, IOW no captures at sunset.

It's a neat learning tool though.

[attached image: 00XWHq-292307584.jpg]


Quote:
This limits the use of this tool to the display's gamut, so it may not be very accurate for all possible colors captured.

Ah, of course. Never mind the software experiments, then, and change references to the gamut in the previous posts from that 'of the camera' to that 'of the camera that the display is able to show'.

