A question for you computer guys about bit depth per color channel



It is known that the average healthy human visual system (eye & brain) sees about 500 gradations of red, green & blue color -- or approximately 9 bits (512 gradations) per R, G & B channel. So why is the most common digital imaging format -- standard JPEG -- and virtually all printers limited to 8-bit depth (256 gradations) per color channel instead of 9-bit?

 

Is the 8 bit per channel standard a legacy from days when computer resources were expensive, or was it an arbitrary ("that's good enough for us") decision?


My guess: information is stored in computers in bytes, and 1 byte = 8 bits. Suppose we represent each channel with 9 bits; then each channel takes 2 bytes, with 7 bits left empty. That also means 6 bytes per pixel, which doubles the raw file size.

Since most people are happy with the 8-bit-per-channel result, maybe that is the way they decided to go at the beginning.

 

I am still looking for my first scanner and see there are 14-bit models available.
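A quick back-of-the-envelope check of that byte arithmetic (a hypothetical C sketch, not taken from any real file format): any channel depth that doesn't fit in one byte gets promoted to two whole bytes, so a 9-bit (or 14-bit) channel costs the same 6 bytes per pixel that a full 16-bit channel does.

#include <stdio.h>

int main(void) {
    /* Channel depths mentioned in this thread */
    int depths[] = {8, 9, 14, 16};
    int channels = 3; /* R, G, B */

    for (int i = 0; i < 4; i++) {
        int bits = depths[i];
        int bytes_per_channel = (bits + 7) / 8; /* round up to whole bytes */
        printf("%2d bits/channel -> %d byte(s)/channel, %d bytes/pixel\n",
               bits, bytes_per_channel, channels * bytes_per_channel);
    }
    return 0;
}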


My understanding of 8-bit and 16-bit colour depth is that it is a purely mathematical progression. You have the option of 2 colours, 4 colours ('CGA' in the old PC days), 16 colours ('EGA'), 256 colours ('VGA'), 65,536 colours, 16.7 million colours, and so on; the number of colours doubles with each added bit (2^n for n bits). It probably has something to do with the way the computer addresses data & memory. I suspect you are right in saying that 8 bits per channel is a legacy of computer design rather than any conscious decision on the part of digital imaging programmers!
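To make the doubling explicit, here is a tiny illustrative C sketch (the bit counts are just the ones listed above): each extra bit doubles the number of representable colours, 2^n in total.

#include <stdio.h>

int main(void) {
    int bits[] = {1, 2, 4, 8, 16, 24};
    for (int i = 0; i < 6; i++) {
        unsigned long colours = 1UL << bits[i]; /* 2^n colours for n bits */
        printf("%2d bits -> %lu colours\n", bits[i], colours);
    }
    return 0;
}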

A lot of earlier computer processors -- the simple ones -- were 4-bit; these are/were TV remote controls, microwave oven "brains", etc. 4 bits is a nibble, and 2 nibbles are a byte. Data in most computers has been handled in 4-, 8-, or 16-bit chunks for many decades, and 8 bits is just the closest normally used chunk size. If a device outputs more than 4 bits but no more than 8, the data is stored as an 8-bit chunk, with null bits filling the rest. Here, one of our old 11x17 RGB 400dpi scanners is a 6-bit device, yet its files appear in Photoshop as 8 bits; our old 2700 film scanner is a 12-bit device whose output is 16 bits. Photoshop works with 1-, 8-, and 16-bit images.

In magnetic tape recording, 9 bits are sometimes used, with the extra bit usually a check (parity) bit that is unusable by the user. ECC memory modules usually have nine chips instead of eight on the stick, and servers typically use ECC memory. The 8-bit convention in Photoshop and digital imaging comes from the computer side, where it is the standard size for a lot of data chunks.
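As a hedged sketch of that "null bits" padding (how a real scanner justifies its data is device-specific; these values are made up): a 6-bit sample sits in an 8-bit byte and a 12-bit sample in a 16-bit word, with the unused bits simply left at zero.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t sample12 = 0x0FFF;               /* brightest 12-bit value            */
    uint16_t right_justified = sample12;      /* top 4 bits of the word stay 0     */
    uint16_t left_justified  = sample12 << 4; /* bottom 4 bits of the word stay 0  */

    uint8_t sample6 = 0x3F;                   /* brightest 6-bit value             */
    uint8_t in_byte = sample6;                /* top 2 bits of the byte stay 0     */

    printf("12-bit %d in a 16-bit word: %d (right-justified) or %d (left-justified)\n",
           sample12, right_justified, left_justified);
    printf(" 6-bit %d in an 8-bit byte: %d\n", sample6, in_byte);
    return 0;
}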

If you built a 9-bit camera/sensor, the output file would of course be 16-bit-sized as far as Photoshop is concerned. The file size doubles, with some real gain in quality.

The marketability of items outputting 9 bits would be poor; one would get half the shots per card! Thus scanners and cameras with 9 or 10 bits are viewed as a bit wasteful of memory and storage, while 14 and 16 bits are viewed as making better use of those resources. 9 bits would be like printing images just larger than a standard paper size: a lot of trimming and scrap gets created.

"Is the 8 bit per channel standard a legacy from days when computer resources were expensive, or was it an arbitrary ("that's good enough for us") decision?"


Neither. It has nothing at all to do with visual perception. It only has to do with digital electronics. It turns out that it's easy to package data, move it around, and operate on it in units of 8 bits, which map neatly onto hexadecimal notation (two hex digits per byte) -- extremely easy for computers to use (humans too, with some practice ;-). These choices were made long before anyone ever thought of digitizing images.


The 8-bit byte became the de facto standard "smallest" useful data size on computers. Often quantities smaller than 8 bits, such as a simple 0 or 1 (on or off, etc.), have an entire byte dedicated to them.


Once you have 8 bit bytes, the natural progression is to put two or more together in 2, 4, 8 or more byte "words." The more you can transfer together, the faster your data transfer is.


When it came time to digitize images, this is the system that was available to do the work. It was natural to digitize into three channels (R, G, and B) of one byte each. Taken together, this gives us 24 bit color. This is often seen as "good enough" quality. The next step up is to use 2 bytes for each channel, giving 48 bit color, which is pretty "high fidelity" to "real life."


While 48 bit color might be overkill for what the eyes can see, it's not overkill for what computers can do. If you use a photo editor like Photoshop to do much manipulation of the image, such as color corrections for images shot at dusk, the "extra" information in 48 bit color provides headroom for making those manipulations. This results in prints without posterization, which is a good thing.
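A small hedged illustration of that headroom point (a toy round trip in C, not Photoshop's actual math): darken an 8-bit ramp by roughly two stops and brighten it back, and the rounding in the 8-bit intermediate collapses most of the original levels, which is what shows up as posterization.

#include <stdio.h>

int main(void) {
    int seen[256] = {0};
    int distinct = 0;

    for (int v = 0; v < 256; v++) {
        int darkened = v / 4;        /* roughly -2 stops, stored back into 8 bits */
        int restored = darkened * 4; /* +2 stops again                            */
        if (restored > 255) restored = 255;
        if (!seen[restored]) { seen[restored] = 1; distinct++; }
    }
    printf("Distinct levels after the 8-bit round trip: %d of 256\n", distinct);
    printf("A 16-bit intermediate would have kept all 256.\n");
    return 0;
}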


As to JPEG, this standard came into being early on, when memory was scarce and disk storage expensive. It is, IMHO, no longer needed, but it IS a standard, and is being kept alive solely by inertia. As long as it is "good enough", as you put it, it will probably continue on.


Remember that the JPEG convention was put together many years ago. The latest version of JPEG (JPEG 2000) supports 16-bit. At the time work on the original standard began (1986), there were barely any computers that could display the full quality the JPEG standard provides. Back then, the Amiga was the graphical workhorse of the industry.

 

In essence, it's like asking why hard drive sizes were limited to ~500MB in the late 80s. It wasn't a matter of "that's good enough for us". It was more a limitation of the hardware at the time.


Computers are most efficient when dealing with values that fit in 8, 16 or 32 bits. It's a space vs. speed tradeoff.

 

As for the number of gradations, it's much more complex than that. In the blue range, you're not likely to distinguish 500 luminance levels of the exact same blue; however, you will be able to separate shades of blue that differ slightly in hue even at the same brightness. This alone requires much more precision than a plain "500 levels of red, green and blue" approximation would suggest.


Just to add more confusion:

 


Although we can see about 500 gradations, they are 500 gradations in a roughly <I>logarithmic</I> space: the eye responds to relative (ratio) changes, so we notice quite small steps in the dark areas, while much larger jumps are needed before a change becomes visible in the bright areas.

 


A plain linear 8-bit encoding therefore spends its steps in the wrong places: it wastes codes in the bright regions and does not have enough in the dark regions.

 


Kodak developed the Cineon image format for use in scanning and manipulating motion picture film. The Cineon format is 10-bit log, and the encoding is based on Kodak's studies of the human visual perception system. Since image processing algorithms do not work efficiently in log space, the data is converted to 14-bit linear for processing.
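For illustration, here is a generic log-encoding sketch in C (NOT the actual Cineon transfer function; the minimum value and code range here are assumptions): a log encoding hands out roughly the same number of code values to every stop, so the shadows get far finer linear steps than a plain linear encoding would give them.

#include <math.h>
#include <stdio.h>

/* Map a linear value in [min_lin, 1.0] to an integer code in [0, max_code],
 * spacing the codes evenly in log (i.e. per-stop) terms. */
static int log_encode(double lin, double min_lin, int max_code) {
    if (lin < min_lin) lin = min_lin;
    double t = log10(lin / min_lin) / log10(1.0 / min_lin); /* 0..1 */
    return (int)(t * max_code + 0.5);
}

int main(void) {
    double min_lin = 0.001; /* assumed darkest encodable linear value */
    int max_code = 1023;    /* a 10-bit code range                    */

    /* One stop in the shadows and one stop near white get about the
     * same number of code values. */
    printf("shadow stop:    0.002 -> %d, 0.004 -> %d\n",
           log_encode(0.002, min_lin, max_code),
           log_encode(0.004, min_lin, max_code));
    printf("highlight stop: 0.5   -> %d, 1.0   -> %d\n",
           log_encode(0.5, min_lin, max_code),
           log_encode(1.0, min_lin, max_code));
    return 0;
}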

 


As to the legacy of 8-bit channels: ten years ago, working with 16-bit channels just wasn't practical. Since computers use multiples of 8 bits internally, using 9 bits has the same efficiency (or inefficiency) as using 16 bits. In the past, there were computers that used 9-bit (Honeywell) and 10-bit (DEC PDP-10) "bytes", but the 8-bit unit has become standard.


It is extremely unlikely that any display device you own can show all 500 gradations *simultaneously*. The human eye can quickly adapt to changes in brightness.


Jpeg is designed to emphasize the most important parts of any given scene. Jpeg is also gamma encoded (semi-logarithmic). See my article: Jpeg Compression (http://www.photo.net/learn/jpeg/).


There's also the issue of word boundaries. If the data doesn't fit in a single 32-bit word, you have to spread it across two words, and it takes twice as long to access. This is why a 24-bit color is often padded into a 32-bit memory location; similarly, a 48-bit color will go in 64 bits of memory.
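A minimal sketch of that padding, assuming a 0x00RRGGBB layout (the actual byte order varies between APIs): the 24 bits of colour ride in a 32-bit word so every pixel starts on a word boundary, and the spare byte is either wasted or used for alpha.

#include <stdint.h>
#include <stdio.h>

static uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b) {
    /* 0x00RRGGBB: 24 bits of colour, top byte unused */
    return ((uint32_t)r << 16) | ((uint32_t)g << 8) | (uint32_t)b;
}

int main(void) {
    uint32_t pixel = pack_rgb(0x12, 0x34, 0x56);
    printf("packed pixel 0x%08X occupies %zu bytes\n", (unsigned)pixel, sizeof pixel);
    return 0;
}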

Basically, yes, both: it was good enough, and more would have been wasteful when bits were more expensive. An 8-bit value was most likely chosen because that is what is usually referred to as a byte, which makes things easier to think about, and it was the unit of data the computers of the time used. I doubt the earlier graphics machines could have made use of more than that anyway, so 8 bits probably seemed extravagant at the time. And it made the math for storage work out more easily than an odd number or a value that's not a power of two would have. Earlier 'standards' had fewer bits, like 4 (EGA).

 

8 bits to a byte is purely arbitrary; it could have been 4 or 16 as well. Some even wanted an odd number, like 7, because that is what ASCII is based on. But a power of two works out better for us humans, and 8 bits works out great for octal and hexadecimal notation, which is purely for human readability. The custom right now is to use multiples of a byte to make things easier for us to relate to, preferably a power of two: 8, 16, 32, 64, 128. This custom has been used to design computer hardware for several decades now. When 8-bit channels were decided on, the computers of the time were referred to as 8-bit or 16-bit based on how many bits were transferred around the circuitry in parallel, so it fit well with the hardware. Machines now are either 32-bit or 64-bit, some even 128-bit within the CPU, and sometimes wider in the connecting hardware. The new JPEG standard is just taking advantage of the hardware that's available.

 

Remember, digital photography is still evolving. 16 bits per RGB channel will seem too low some day as well. And people can throw around a lot of numbers to justify the way they currently do things. Personally I don't think 16 bits per channel in an RGB scheme is enough, but it's a lot better than what we have now. Hopefully we will one day have a 'standard' that is based on how we really see light and not just on what the hardware can support.

 

I've read that most CRTs and LCDs can't even reproduce the full 8 bits per RGB channel right now. So we have a VERY long way to go before digital reaches its potential in both hardware and software. And it's pretty good right now!


Important reading is Bob Atkins' article on RAW, JPEG and TIFF (http://www.photo.net/learn/raw/), which covers issues very relevant to this thread (16-bit, dynamic range, etc.).


I can't overemphasize the point that most 8-bit images are gamma encoded (2.2 for PC, 1.8 for Mac) -- this extends the usable dynamic range considerably, almost to the point where it equals that of 12-bit linear, though of course not with the same quality across the range.
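A hedged sketch of what the gamma encoding buys (a plain 2.2 power law here; real sRGB adds a linear toe segment that is ignored): the first few 8-bit gamma codes land on far smaller linear light levels than the first few 12-bit linear codes do, which is why gamma-encoded 8-bit holds up so well in the shadows.

#include <math.h>
#include <stdio.h>

int main(void) {
    double gamma = 2.2;
    for (int code = 1; code <= 4; code++) {
        double linear_from_gamma8  = pow(code / 255.0, gamma); /* 8-bit, gamma 2.2 */
        double linear_from_linear12 = code / 4095.0;            /* 12-bit, linear   */
        printf("code %d: 8-bit gamma -> %.6f   12-bit linear -> %.6f\n",
               code, linear_from_gamma8, linear_from_linear12);
    }
    return 0;
}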


This 'simple' question would justify a full-length article to give a complete answer, but basically 8-bit (gamma encoded) is 'good enough' for almost all purposes.


Several others have mentioned that the 8-bit quantization is mostly an effect of that being a convenient number given the way most computers are designed. However, there have been computer systems with other word lengths -- for example, Digital Equipment (the remains of which is now part of HP) had a series of mainframes and minis with 36-bit words. If those had caught on, we might have had 9-bit channels in our images today. :)

Two related factors are spatial density and chroma sensitivity:

 

1. It may be possible to distinguish more than 256 gradations in brightness, but this applies to extended areas, not immediately adjacent pixels (the human eye is not that sensitive at normal viewing distances). Adjusting the RGB values slightly for adjacent pixels (plus or minus 1) could give roughly 3 times the sensitivity for mostly monochrome images (see below).

 

2. The human eye is more sensitive to sharp differences in brightness (edges) than to abrupt colour transitions (which seldom occur in the real world). Jpeg compression takes advantage of this (amongst other things), and a full range of 16.7 million colours (24-bit) for every pixel is overkill (IMO).

 

The GIF standard, which uses a palette of only 256 colours, can produce moderate-quality images by dithering adjacent pixels, without exactly reproducing every detail. (For monochrome images GIF reproduces the full range of detail, though with generally larger file sizes than Jpeg.)

 

When Jpeg was developed, the standards documents indicate that there was pressure from specialists (such as X-ray radiographers) to include a 12-bit version, and although this is available it is seldom used (AFAIK).

 

For most real-world images the old high-colour display mode (16-bit, or about 65,000 colours) was surprisingly good (with proper dithering in viewers such as IE4). You can test this for yourself by decreasing the colour depth of an image (with error diffusion).
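If you want to see what that error diffusion is doing, here is a minimal one-dimensional C sketch (a toy version; real viewers use 2-D schemes such as Floyd-Steinberg): each pixel is rounded to a coarse level and the rounding error is pushed onto the next pixel, so the average tone of the ramp survives even though each pixel takes only one of four values.

#include <stdio.h>

int main(void) {
    int width = 16;
    int levels = 4;     /* quantise the 0..255 range down to 4 output levels */
    double error = 0.0; /* rounding error carried forward to the next pixel  */

    for (int x = 0; x < width; x++) {
        double original = x * 255.0 / (width - 1); /* a smooth grey ramp */
        double adjusted = original + error;
        int q = (int)(adjusted * (levels - 1) / 255.0 + 0.5);
        if (q < 0) q = 0;
        if (q > levels - 1) q = levels - 1;
        int output = q * 255 / (levels - 1);
        error = adjusted - output;                 /* pass the error along */
        printf("%6.1f -> %3d\n", original, output);
    }
    return 0;
}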


I'm sure you realize our eyes are more sensitive to some hue and tone changes than others. For instance, we're better at identifying minuscule changes in skin hue than, say, relative saturation levels of primaries (e.g. RGB 0,0,254 vs RGB 0,0,255). The color spaces are linear, though, and can't have higher resolution around, say, skin and foliage hues than out in the highly saturated border areas. The bigger the color space, the coarser the resolution. sRGB is probably about as big as you'd like to go without adding bit depth to maintain resolution around tans, green-blues, dark browns, and so forth. Where we're most sensitive is also ethnically influenced. (We get better at noticing changes to what we deal with every day of our lives.)

 

On a more technical level, what we can discern is not only a colorimetric issue but spectrophotometric as well.


"I'm sure you realize our eyes are more sensitive to some hue and tone changes than others. For instance, we're better at identifying miniscule changes in skin hue than say relative saturation levels of primaries"

 

I thought the eye was most sensitive to green. Supposedly because evolutionary natural selection favoured sophisticated analysis of foliage, and what might be hiding in it!


"It is known that the average healthy human visual system (eye & brain) sees about 500 gradations of red, green & blue color"

 

Ellis, I'm always sceptical about these basic visual assumptions. Incidentally, I've read supposedly authoritative statements that the eye can discriminate between 200 and 300 levels of grey. Which one is right, 200, 300, or 500? Maybe none!

 

Wherever possible I'd prefer some sort of empirical test that I can evaluate with my own eyes rather than the endless pontificating that feeds Photo.net. In the case of colour measurement, Photoshop and an inkjet printer provide the perfect laboratory.

 

If I set up two swatches and fill one with Red 0, Green 200, Blue 0, I cannot distinguish, either on the monitor or on a print, between that and Red 1, Green 200, Blue 0. Nor, for that matter, from Red 0, Green 201, Blue 0.

 

My takeaway from that simple test is that 8-bit is fine for a finished print, at least for my own critical evaluation!

 

As another example, I'm sure you've read the endless references to the eye only being able to resolve about 6 line pairs per millimeter. Well, the rub must reside in how you define "resolve", because I have a high-powered loupe with a flat glass plate on the bottom engraved with a millimeter measuring scale. The finest graduations are tenths of a millimeter -- the equivalent of 5 line pairs per millimeter.

 

If I flip the loupe upside down and put it on a lightbox my tired, middle-aged eyes can, completely unaided, easily see that 5 lppm is a series of individual lines. I might not be able to accurately count them but there's absolutely no doubt that they are individual lines rather than a continuous grey tone. And if that holds true at 5 lppm I'd be pretty sure it still holds true at 6 lppm and beyond.


Indeed, multiples of 8 bits are easier to move around on computers with 8-bit bytes (using rough terminology here). In the days before RISC concepts became the norm, some CPUs saved space by encoding instructions on bit boundaries rather than on multiples of 8 bits; this creates complexities in hardware and is generally "not nice". The IBM S/360 introduced the 8-bit byte in the mid '60s. One result is that high-level programming languages typically have scalar types in multiples of 8 bits: a 24-bit RGB image can easily be represented as an array of three 8-bit integers (chars in C) per pixel; doing this for 9 bits would be inconvenient and would easily waste bits (encoding three 9-bit channels into 32 bits, for example).
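A short C sketch of that packing (the channel order and layout are my own assumptions, not any standard): three 9-bit channels do fit in one 32-bit word, but five bits go unused and every access needs shifts and masks instead of a plain byte load, which is exactly the inconvenience described above.

#include <stdint.h>
#include <stdio.h>

/* Pack three 9-bit channels into one 32-bit word, r in the highest bits,
 * b in the lowest; the top 5 bits of the word are unused. */
static uint32_t pack9(unsigned r, unsigned g, unsigned b) {
    return ((uint32_t)(r & 0x1FF) << 18) |
           ((uint32_t)(g & 0x1FF) << 9)  |
            (uint32_t)(b & 0x1FF);
}

static unsigned unpack9(uint32_t px, int channel) { /* 0 = r, 1 = g, 2 = b */
    return (px >> (18 - 9 * channel)) & 0x1FF;
}

int main(void) {
    uint32_t px = pack9(511, 256, 3);
    printf("r=%u g=%u b=%u (word uses 27 of its 32 bits)\n",
           unpack9(px, 0), unpack9(px, 1), unpack9(px, 2));
    return 0;
}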

 

And in the old days (even the 1980's) memory cost a fortune!


Re "It is known that the average healthy human visual system (eye & brain) sees about 500 gradations of red, green & blue color":

When color TV evolved, a lot of study was done on colors, the eye's resolution versus color, etc. The RF bandwidth of the standard (old analog) color TV channel is split differently for each color axis, so the valuable RF was used more wisely: the orange/cyan axis gets more RF bandwidth than green/purple. RCA and CBS had different systems; RCA's was compatible with existing B&W TVs.

A lot of early color TVs had no degaussing coils. Kids would place a magnet by the screen and all the colors would get messed up, until the repair guy came with a degaussing hoop.
