
What does it mean to calibrate a monitor?



hello,

 

this might seem like a strange question, but i went through the process of calibrating my monitors using an i2 and coloreyes software.

the monitors almost match now, which is a good thing. my question though: what have i calibrated the monitors to? i assume to sRGB, but i am not quite certain, or perhaps i simply misunderstand what the *target* or *standard* is that a calibration tries to achieve. my assumption was that when calibrating a monitor, i should see a photo with an embedded sRGB profile on the monitor the same way as i would by proofing it in Photoshop. is that correct?

 

thank you for your time...


thank you henry for your post. in my case the problem is rather to get my slides to look on screen the way they look on film in terms of color rendition. so from what you say i take the following to be correct: if i print in Adobe RGB, the goal of monitor calibration is to get my display to show images on screen in Adobe RGB as they will be printed?


The last thing in the world you want to do is calibrate a monitor to sRGB.

 

The purpose of calibrating a monitor is to create a description of the largest possible color space the monitor can reproduce. This profile allows an application like Photoshop to know what the monitor can show you, and to adjust the way a given image is displayed so you see as many of the image's colors as accurately as the hardware allows. sRGB is a document color space, and a small one (by today's standards) mostly suitable for web images. Your monitor profile is a device profile, which instead describes the capabilities of a particular piece of hardware.

Proofing is an entirely different issue. When you proof, you're faking the monitor (or printer) into showing you what an image might look like on a different piece of hardware, such as how it might look on newsprint.


christina: thank you for this explanation! now it starts making a lot more sense to me. i didn't understand the difference

between document and device profile. it makes total sense to me. i assume that a scanner or printer once calibrated, will

also have such a device profile. what does "working space" mean in this context? when i open a file with an embedded

profile (like Adobe RGB) in photoshop i get the option to either open it in Adobe RGB or the Monitor Profile or Not to color

Manage. Does that mean that i instruct PS to fake the display to show it in Adobe RGB (like proofing) and wouldn't the last

2 options be the same?


What you mean by calibration is actually two steps: calibration -- bringing the display or monitor to a known state -- and then profiling -- characterizing the individual combination of graphics card and the display itself.

 

Here is how the system works. There are two parts of the calibration/profiling system: the software and the hardware, the latter being either a colorimeter or a spectrophotometer.

 

Initially the software resets the display & graphics processor to the factory defaults. You then enter the parameters you want to calibrate your graphics card and display to: gamma (contrast curve), color temperature (6500 K, etc.), contrast (if your system lets you control that), and luminance (brightness).

 

Once the calibration process is finished, the profiling process begins. The software sends a series of signals of known value through the video processor and display. The photo cells in the hardware (colorimeter) read those color patches, and the software compares the values it reads to the signals it sent, essentially reprogramming the graphics processor so that the values the display shows match -- or at least match as closely as the display is capable of -- the values of the initial signals. You are creating a self-correcting feedback loop.
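That feedback loop can be sketched in a few lines of Python. This is a toy model, not any vendor's software: the 15% brightness error and the damped correction step are invented purely for illustration.

```python
# Toy model of the calibrate-measure-correct loop: the software sends a
# known patch value, "measures" what the display shows, and nudges the
# signal it sends until the measured value matches the target.

def display_response(value, gain=1.15):
    # Stand-in for an uncalibrated display that renders 15% too bright.
    return min(1.0, value * gain)

def build_correction(targets, tolerance=0.001):
    """For each target patch, find the corrected signal whose measured
    output matches the target as closely as the display allows."""
    corrections = {}
    for target in targets:
        corrected = target
        for _ in range(100):
            error = display_response(corrected) - target
            if abs(error) < tolerance:
                break
            corrected -= error * 0.5   # damped correction step
        corrections[target] = corrected
    return corrections

# The resulting table plays the role of the per-channel curves a real
# calibration loads into the graphics processor.
lut = build_correction([0.2, 0.5, 0.8])
```

After the loop, feeding `lut[0.5]` to the display measures back as very nearly 0.5 -- the self-correcting feedback loop in miniature.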

 

The purpose of the process is to introduce consistency throughout your work, to give you a standardized base to ground your decisions on.

<i>"If I print in Adobe RGB the goal of monitor calibration is to get my display to display images on screen in Adobe RGB as they will be printed?"</i>

 

You don't actually print in Adobe RGB (1998), sRGB, Pro Photo or any of the other device-independent color spaces (AKA "working spaces"). These are mathematical constructs that are essentially neutral holding places for the data that makes up the colors in your photograph.

 

When talking about an inkjet or other type of printer, you have to consider the specific combination of printer, ink and paper as a unique device, one that needs to be profiled so your program knows that output device's (printer+ink+paper) capabilities. Some printers these days with good papers can print a gamut slightly larger (in some colors) than Adobe RGB (1998) can contain, while some printer+ink+paper combinations have a gamut that is smaller than sRGB.

 

The goal of color management is exactly what it says: to get the colors you see on your display to print the way you see them on your display.

 

Most displays available today, except for those at the very high end, have a gamut that is roughly equivalent to the small sRGB gamut. But don't be confused: if you have a good graphics card, a decent LCD display, and use a computer operating system (OS) that has a really good CMS (Color Management System), all of the pieces work together so that what is rendered on the screen will actually be a fairly good interpretation of what the prints will look like, IF the profiles for the display and the output device (remember, that is the unique combination of your printer + inks + paper) are good.

 

Ellis Vener


ellis: thank you also for this very concise and detailed comment. i think i will have to reread it a few times to fully

understand, but i think i understand better what is happening. at this point i am still struggling in even getting my scans

to show properly on screen. i haven't even gone down the route of printing yet. i started by calibrating my LCDs and now

ordered IT8 targets for the scanner in the hope of getting better results. with this i will have those device dependent

profiles to get started with. if i understand you correctly sRGB and Adobe RGB or ProPhoto really do not mean much as

they are just containers which describe a color space within which the devices can co-exist and be simulated. i realize

that this is a rather complex matter and a source of much greater frustration than i expected :-) but in a good way i

suppose...

 

thank you again very much!


"if i understand you correctly sRGB and Adobe RGB or ProPhoto really do not mean much as they are just containers which describe a color space within which the devices can co-exist and be simulated. "

 

Actually they mean a great deal. To use the container metaphor, sRGB is like a 1 quart jar, Adobe RGB (1998) is like a half gallon container, and Pro Photo is like a gallon jug. The larger the working space, the more colors -- especially the more saturated colors -- can be distinguished and defined from one another. With a working space smaller than the one the original exists in, those differences are crushed (AKA "clipped") to the limits of the smaller space.

 

A very large working space like Pro Photo or Joseph Holmes' Chrome 100 and D-Cam spaces (http://www.josephholmes.com/profiles.html) is so large it needs to be defined using a 16 bit per channel (65,536 gradations per channel) measuring scale, compared to the 8 bit (256 gradations per channel) scale needed for sRGB. Adobe RGB (1998) is thought of as a reasonably large working space: it is small enough to use the 8 bit per channel scale but also large enough to benefit from a 16 bit per channel scale.
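The gradation counts above follow directly from the bit depth; a few lines of Python confirm them:

```python
# Gradations (distinct levels) per channel at each bit depth.
levels_8bit = 2 ** 8    # 256
levels_16bit = 2 ** 16  # 65,536

# The step between adjacent levels: in a very wide space those 256
# 8-bit steps are stretched across a much larger range of colors, so
# each step is a bigger visual jump -- hence 16 bit for large spaces.
step_8bit = 1 / (levels_8bit - 1)
step_16bit = 1 / (levels_16bit - 1)
```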

 

The problem with using Adobe RGB (1998) for scanning transparencies and slides is that both Kodachrome and E-6 films are quite capable of having gamuts that extend beyond Adobe RGB (1998)'s edges. Pro Photo is thought to be unduly large, however, and that is where Holmes' Chrome 100 and D-Cam spaces come in very handy, as they are more tailored to the gamut of transparency films.


I found "Understanding Color Management" by Abhay Sharma to be very helpful. Of course, there are many other books on color management.

 

Ellis: Once calibration is over, I don't think profiling makes any more changes in the hardware settings. It just creates an ICC profile that is later used to map RGB signals to CIE Lab values, and Lab values to RGB signals, for the monitor that has been profiled. Do you think this is a correct statement?


BTW: if you break it down to simple elements it makes sense. The problem is that when you put all of those pieces together it gets hard to grasp. It is also a bit of a kludge. I really hope that one day someone will come up with a less complicated system.

 

Read the stuff Joe Holmes has on the link I embedded. He is really smart and a very clear headed writer.


<b>"if i understand you correctly sRGB and Adobe RGB or ProPhoto really do not mean much as they are just containers which describe a color space within which the devices can co-exist and be simulated."</b>

 

Well, not exactly. If you have an image in ProPhoto and you convert it to sRGB, you are asking the CM system to down-sample (throw out) certain colors to make the image fit into a smaller space. Various kinds of logic come into play to make the image look the way you want it to, but a smaller space means less color, period. Once you have down-sampled a file, you can't get those colors (and by extension, dynamic range and latitude) back.
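A toy illustration of that clipping, treating each channel as a simple 0-1 range (a real conversion applies a rendering intent, not a bare clamp):

```python
def clip_to_gamut(value, lo=0.0, hi=1.0):
    """Hard clip: anything outside the destination gamut is pinned to
    the nearest value the smaller space can represent."""
    return max(lo, min(hi, value))

# Values above 1.0 stand in for saturated colors that exist in the
# wide space but not the small one.
wide = [0.4, 1.1, 1.3]
small = [clip_to_gamut(v) for v in wide]
# 1.1 and 1.3 were distinct colors in the wide space; after clipping
# both become 1.0 -- the distinction is gone, and converting back to
# the wide space cannot recover it.
```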

 

Device profiles, on the other hand, do not change the image color space*; they only interpret it, and convert it based on what the device is capable of.

 

(*unless you tell them to!)


<b>benny, christina, ellis:</b> thank you so much for the relevant and helpful information. i very much appreciate it and

will read those links and try to get a copy of that book. so i will try to summarize what i understand so far within the

context of my workflow. btw. is a document space the same as a work space, as opposed to a device space?

 

1.) scanning

 

After calibrating my scanner i will have a device profile which defines the gamut of what the device is capable of

recording. i scan my slide and the resulting file will have the device profile embedded into it, unless i tell the scan

software to convert it to a document space large enough to capture all the colors (Pro Photo or Chrome 100 et al.)

 

2.) Photoshop

 

As i open the file i get told that the file has a certain profile embedded in it and that it doesn't match my work space or

document space. if i told my scan software to use Pro Photo i open it as such. I can now edit the file in photoshop and

potentially print from there. photoshop will take care of translating/converting the Pro Photo to the printer device profile.

 

i guess what i don't fully understand here, probably because i do not fully understand the whole picture, is: why do i even need this container? couldn't i just simply work off the device profile from the scanner to the LCD to the printer? what is the advantage if photoshop does all the converting?

 

thanks again for your patience and time!


I'll field this one.

 

The reason you need these containers is simple - there is no reason to be limited to the color space of any old device

your image happens to run through.

 

Think about your color space as a 5 gallon bucket full of paint. That's a lot of color in your image if it takes 5 whole

gallons of paint to draw it. Super! Now let's say you want to open your image up and look at it. Well, your monitor can

only hold, say, 4 gallons of paint. You have to dump some of that paint out, just to see your image! So you dump out

some paint to fit it into the monitor bucket, and now it's open on screen and looks (nearly) the same as it did before.

 

Okay, so now you want to print your image. Well, sadly, your desktop printer can only handle 3 gallons of paint. Again, you have to

throw out a gallon to fit your image into the smaller color space of your printer. So now you have 3 gallons of paint to

create an image that used to use 5. It's starting to look kinda thin in places, but it's still recognizable.

 

Okay, so let's say Modern Painter magazine wants to use your image for their November cover. Sweet! It's going to be

a high-quality 6-color print run on fancy stock. This cover will handle 4.5 gallons of color. But instead of that, you only

have 3 gallons, because you threw out all that color to squeeze the image into your desktop printer. When the image goes to press, the

higher quality of the cover stock reveals all those compromises you had to make with your image earlier. Modern Painter

doesn't think the IQ is good enough for their cover, they pull your image, and you become depressed and decide to sell

all your equipment and become a wandering mendicant who lives in a box and makes balloon animals for change.

 

Nobody wants this. That's why we have Color Management.

 

CM lets you keep every last drop of paint in that original bucket. When a device comes along that requires a different color space, CM is the system that lets your image talk to those devices so they know which colors to keep and which ones they can throw out. But you never have to lose anything, because you're just converting the original file for its intended use.

 

A color space is just a palette. The bigger the palette, the more colors you have to work with. If you have a lot of colors in your image,

but the printer or monitor or whatever can only show a few, there has to be some kind of mechanism to make those color-converting decisions. A profile is just a description of that palette, and CM uses that info to make those decisions.

 

whew!

 

-c


<b>christina</b>: "Modern Painter doesn't think the IQ is good enough for their cover, they pull your image, and you

become depressed and decide to sell all your equipment and become wandering mendicant who lives in a box and makes

balloon animals for change." :-))) thank you for warning me as to what will happen to me! no, seriously, thank you for making this so

clear, as to what happens in the different stages of CM. i think it ties very closely into what i asked in #2 above, as to what

happens at what stages in photoshop.

 

so i assume when you say throwing away colors of the original file you mean the "conversion" action in photoshop. that is

in essence telling PS to throw away data in order to convert it to a target color space of a specific device. so in keeping

your wonderful analogy of the paint buckets. after i scan a slide and it has the device profile embedded, it basically means

it has those 5 million colors. so far so good. now, when i open it in photoshop the problem is that my LCD can only display

4 million colors. So at this point i have to choose a container. The container and this is the important part i guess is the

"document or workspace color model" which will not throw away the 1 million colors that the monitor cannot show but rather

define which ones will be left out. The difference of the containers is essentially the decision as to which 1 million colors will

get clipped for display on the monitor, or by which algorithm this choice is being made. It is that same workspace or

document space that will also define what colors to print if i can only use an old gutenberg print-press or if i splurge on a really bad color laser printer from the 80s. regardless, the CM will say throw away those colors and not those. and every

container will have a different definition, ultimately with the same goal to reproduce the images as accurately as possible,

however a choice needs to be made.

 

so if i understand this right it means that the scanned slide in this example, will always have the original device profile

attached to it, unless i tell the software to convert it to another container, in which case i would lose color from the

bucket, unless the space is big enough to keep it all. so ideally wouldn't i always want to keep the device profile and simply

use a document profile?

 

i am truly sorry if i sound really dense here :P and i promise i will get the book and i will check out the link above later

tonight :-)

 

thank you all of you so much for your time and patience. it is very helpful. hopefully to other readers too.


<i>"The container and this is the important part i guess is the "document or workspace color model" which will not throw away the 1 million colors that the monitor cannot show but rather define which ones will be left out."</i>

 

Gold star! You're a quick learner, Patrick.

 

<i>"so if i understand this right it means that the scanned slide in this example will always have the original device profile attached to it, unless i tell the software to convert it to another container, in which case i would lose color from the bucket, unless the space is big enough to keep it all. so ideally wouldn't i always want to keep the device profile and simply use a document profile?"</i>

 

 

Not necessarily. When you shoot an image with most digital cameras, you get an 8 or at most a 14 bit image. But ProPhoto 16 bit is

bigger than that, and this allows you to play with more colors as you do your curves and colors and contrast and stuff. You have more

room on that palette than you had with the original capture, so use it while you can! The less you down-sample, the more latitude you save

for yourself later on.


<b>christina</b>: thank you for your kindness. i think it starts dawning and all making sense to me now.

 


 

basically, each device (scanner, printer, LCD) will represent a specific color in its own RGB values. i think i now

understand what the CIE spaces are: the CIE spaces literally define or label a specific color, name them so to speak in

absolute terms. each device however will use their own RGB values to generate this specific color. So "bubble wrap

blue" on my scanner will have a different RGB value than on my LCD or printer. by calibrating a device and hence

creating a device profile i essentially as a first step make sure that a color in CIE space has a defined and understood

RGB value. At this point the document space or workspace comes in. Ideally i would want to use one that holds at the

minimum the RGB values range of all my devices, plus as extra bonus some wiggle room upwards to add colors for post

production. What this document space also does is translate or map the RGB values across the different devices.

Basically make sure that said "bubble wrap blue" as scanned by my scanner is actually shown as "bubble wrap blue" on my

LCD and eventually on my printer.
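For matrix-based profiles, that mapping from device RGB to an absolute CIE value is literally a 3x3 matrix. The sketch below uses the published linear-sRGB-to-XYZ matrix as the "device"; a real device profile would carry its own measured matrix and tone curve:

```python
# Linear RGB -> CIE XYZ for the sRGB primaries (D65 white point).
# A matrix-based device profile is essentially this matrix plus a tone
# curve: it pins each device RGB triple to one absolute CIE color.
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(rgb, matrix=SRGB_TO_XYZ):
    return [sum(m * c for m, c in zip(row, rgb)) for row in matrix]

# Full white on an sRGB device lands on the D65 white point in XYZ.
white = rgb_to_xyz([1.0, 1.0, 1.0])   # ~[0.9505, 1.0000, 1.0890]
```

A scanner profile would have a different matrix, so the same CIE "bubble wrap blue" corresponds to different RGB numbers on each device -- which is exactly why the CMS needs both profiles to map between them.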

 


 

so far so good. i think i understand that part now and it makes sense. what i am still struggling with is this. when are

those device profiles actually assigned to a file and what would it actually mean if i scan in a non color managed mode.

if i dare to think out loud it would mean that i basically get a file with a bunch of RGB values in it that might or might not

look like my original slide. so a specific blue on the slide might actually not be the right blue. If i tell photoshop to display

this non color managed file using the scanners device profile i would think i get the right colors. however, my LCD still

would not necessarily display it right unless it uses a calibrated profile which determines again the right color. so now

comes the part for the document space. i would then convert in photoshop the file with its device profile to a larger

document space such as Pro Photo or Holmes or whatever. This then will allow me to process the file further outside of

the limited realm of what the scanner might be able to produce. the same goes for printers, but probably the other way around, as they might not print as many colors as a large document space might allow.

 


 

so do i hear a crack pipe or am i on target?

 


 

thanks again so much!


Just some clarity about these working spaces that wasn't mentioned. They are intended, as the word suggests, as "working" spaces -- that is, spaces for editing, not just containers.

 

Everything about digital imaging is about the amount and quality of capture, determined first and foremost by the quality of the hardware and software current technology allows. The main issue is that the data we've captured gets transferred in the form of 1's and 0's, so the only way we KNOW what we've captured with current systems is through the 8 bit video preview of our display and video card system. Not the best, but good enough for our current needs.

 

We really don't know 100% of what we've captured, so these container "working spaces" were created to make sure future higher-quality capture, viewing and editing technologies will have something to work with down the road. You may have captured a scene which has its own gamut shape and size, combined with a capture device that can grab all of it, and that may or may not fit in sRGB or Adobe RGB -- so you want to make sure you've stored it in a container that doesn't throw anything away.


<b>tim</b>: thank you for your clarification. just as a further question on this. after i scan a file and it has a device profile

embedded and i convert it into a work space, what happens to the device profile at that point? does a file have both profiles

embedded or does the device profile get permanently translated into a work space? it seems that if one would want to

preserve the original quality of a file or scan, one wouldn't ever want to lose this original information.

 

thanks again.


Patrick,

 

The data that's captured, if converted to a space large enough to contain all of it, won't lose this original information as long as the file is kept in 16 bit from the point of capture through the converting and editing stages.

 

You'll only see this level of data integrity at the pixel level while editing. Rich hues transitioning from lights to darks will show more variety of color, similar to what is seen in iridescent objects, adding depth to an image; an 8 bit capture converted and edited in sRGB might show less variety of pixel hues and slight color inaccuracies per pixel compared to what was actually captured, though it would be subtle viewed at 100%. And even if you capture in 16 bit, we're still viewing that data through an 8 bit video system, so there may be data contained but not seen.

 

You never want to edit in the scanner source space because it's a table-based profile full of all the kinks required to correct for the limitations inherent in the scanner hardware and software, like a lack of proper neutrality. Converting to a mathematically clean, neutral, matrix-based working space to apply additional edits is a must.
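That difference can be sketched with two toy tone curves. The numbers in the measured table are invented, but the shape of the distinction is the point: a matrix-based working space is a clean mathematical function, while a table-based device profile interpolates between measured points, kinks and all.

```python
import bisect

# Matrix-based working space: the tone curve is a smooth power law.
def working_space_tone(v, gamma=2.2):
    return v ** gamma

# Table-based device profile: measured input/output pairs (invented
# here), full of the device's kinks, interpolated between entries.
MEASURED = [(0.0, 0.0), (0.25, 0.19), (0.5, 0.46), (0.75, 0.70), (1.0, 1.0)]

def device_tone(v):
    xs = [x for x, _ in MEASURED]
    i = bisect.bisect_right(xs, v) - 1
    if i >= len(MEASURED) - 1:
        return MEASURED[-1][1]
    (x0, y0), (x1, y1) = MEASURED[i], MEASURED[i + 1]
    return y0 + (y1 - y0) * (v - x0) / (x1 - x0)
```

Editing on top of the table-based curve bakes its kinks into every adjustment; converting to the clean working-space curve first keeps the math neutral.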

 

Just make sure the working space is large enough to contain the film gamut. Joseph Holmes' Ektaspace is an ideal working space for scanner capture because it's not too big and not too small for the majority of film gamuts in 16 bit. But ProPhoto can work as well. Different scenes will vary in gamut size on top of this, maxing out at what the film can capture.


<b>@tim:</b> thanks for the details. so that basically means that my 48bit scanner will produce a file which has a specific

device dependent profile embedded. as i open the file in PS CS3 for example i will convert it there to a larger working

space, such as Ektaspace ($100, ouch! is it worth it over using ProPhoto?) for additional editing. is there value in having the scanner software do the conversion to ektaspace, or is this better left to PS?


You're probably referring to the Ektaspace DCam chroma variant sets that Joe charges for over at his site, which get better saturation than using PS's Hue/Sat dialog box, which is not the best tool for this on some images. I was referring to the free version, the original that came out several years ago. It's easy to find and download.

 

If you want to get the most data out of your scanner it's best to scan in raw and assign, convert and edit in Photoshop. The

benefits will be determined by the quality of your scanner and its ability to actually provide true raw and not some sRGB

rendition like some Epson flatbeds.

 

Anytime raw data goes through software editing, be it the scanner's or Photoshop's, data gets dropped at some point. At least in Photoshop you'll be able to see all of it first; the scanner's software hides certain portions of this data by applying adjustments to make the image look pleasing to the eye, and the way it renders the preview on screen may or may not show artifacts and detail as well as working in Photoshop does.

 

But some scanner software is quite good and fast at capturing the data that counts and weeding out stuff you don't want in the image. Some film scanners are good at this. You'll have to determine this on your own and pick the workflow that provides the best image possible.


<b>tim:</b> thanks again for clarifying the ektaspace free version, i will give it a try. there is also a $99 set of the same chroma variants for the slide version, not DCam. i shall see, maybe i'll buy it if i can find out whether it is worth the money. :-)

 

when you say RAW from the scanner, what exactly do you mean by that? basically turning everything off, like color

management, gamma, brightness, contrast, etc. basically telling the software to do *nothing* with it, including not to

embed a profile. i do have an epson flatbed to scan MF slides, and i am about to calibrate the scanner with IT8 targets i

bought. would i convert the RAW file then in photoshop, first assigning the scanner's profile and then converting it into the larger ektaspace to work with?

 

thanks again for your time in helping me :-)


I haven't used my Epson scanner for quite some time since purchasing a DSLR and researching raw processing, so I'm a bit

rusty on how to set the Epson scanner for raw output. There's plenty of instruction on the web for building a profile for your

Epson scanner using an IT8 target.

 

Just assign the newly made scanner profile in Photoshop and convert to the working space of your choice and start editing if

needed. With the Epson flatbed scanners there were questions on whether it could produce a true raw output since the

preview and the data are basically in sRGB.

 

This page illustrates how to test whether a scanner can output a true raw file, though I never tried it on my Epson. That page also indicates you have to use VueScan, a third-party scanning application, to actually pull it off on Epson scanners.

 

http://www.c-f-systems.com/Scanners.html


<b>tim:</b> thanks again for the link and tip. i will try and see what i can get out of epson scan, though i seem to remember that when i turn off color management, all of the curve, brightness etc. settings grey out, which suggests that nothing is being done to the file... :-)

