
Mega Pixel Vs Spectrum of Light



How do I explain what a pixel is to non-photographers when they ask? It would be like trying to explain the spectrum of light bouncing off a subject on a sunny day, captured with a 50mm lens at f/5.6 and 1/500 sec, and how that light lands on the sensor and gets transformed into a picture on the memory card. And when you ask them what lens they are using, they have no clue.

Edited by miss.annette_leigh_haynes

Most people are now familiar with computer screens (or at least televisions). Each "picture element" (pixel) defines a (usually) rectangular area and is (for this part of the conversation) a single colour. Just as a film or video is not continuous motion but made up of lots of frames shown one after the other, the picture on a computer screen isn't a smooth, continuous representation of what you're looking at, but is made up of a grid of pixels. When you look at them from a distance, they make up an image. This isn't just computer screens - the same is true of halftoning in newspapers (and most other print). If you don't require the grid to be regular, the same is true of pointillist painters (e.g. Seurat), the microscopic grain of slide film, and the cells in the human eye. Computer images record one colour per pixel when they're stored (although there's usually some clever maths to reduce the space that takes).

 

The pixels (or "sensor sites", or "sensor elements" - sensels) in a digital camera each capture the light hitting one part of the frame, and the intensity is recorded in the image file - which gives you something you can display on screen or print. The more pixels you capture, the more detail you can record.

 

That's all there is to a monochrome camera (like the Leica Monochrom). However, most cameras capture a colour image, and that's more complicated (next post).
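
 

If a concrete example helps, here's a minimal sketch in Python (assuming the numpy library is available) of what a monochrome image boils down to - just a grid of recorded intensities:

```python
# A monochrome digital image is simply a grid of numbers, one per pixel.
import numpy as np

image = np.zeros((4, 6), dtype=np.uint8)  # 4 rows x 6 columns, all black
image[2, 3] = 255                         # one pixel at full brightness
image[2, 4] = 128                         # its neighbour at half brightness
print(image)                              # the "picture" is just this grid
```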


If you look more closely at the pixels on a computer monitor or TV (TVs are usually bigger...) you'll see they're made up of different colours - usually red, green and blue, often in vertical stripes. The pixels don't display an arbitrary colour - each coloured element can only control its intensity, so to make up actual colours you need to combine different intensities of primary colours.

 

The human eye isn't independently sensitive to every wavelength either - there are different cells which are sensitive to different ranges of the spectrum. Ignoring rods (which are highly sensitive, for night vision, but effectively monochrome and don't do much in daylight), there are usually three types of cones (occasionally four or five - there are variations encoded on the X chromosome, so a few women end up with a mix). One type of cone is mostly sensitive to blue light, one to reddish-yellow and one to greenish-yellow. Since every colour most of us can see is identified as a mix of the responses of the three types of cone cells, you can fool the eye into thinking it's looking at any colour by giving it a mix of three wavelengths that stimulate these cells appropriately.
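
 

Here's a toy illustration of that idea in Python - the numbers are invented for the sake of the example, not real cone sensitivities:

```python
# The eye reduces any incoming spectrum to just three cone responses, so
# two very different spectra that produce the same three numbers look
# identical - which is why a display only needs three primaries.
def cone_response(spectrum, sensitivity):
    # total stimulation = sum over wavelength bands of power x sensitivity
    return sum(p * s for p, s in zip(spectrum, sensitivity))

flat_spectrum = [1.0, 1.0, 1.0, 1.0, 1.0]   # crude 5-band "white" light
s_cone = [0.9, 0.4, 0.1, 0.0, 0.0]          # short wavelengths ("blue")
m_cone = [0.0, 0.3, 0.9, 0.5, 0.1]          # medium ("greenish-yellow")
l_cone = [0.0, 0.1, 0.6, 0.9, 0.5]          # long ("reddish-yellow")

print([cone_response(flat_spectrum, c) for c in (s_cone, m_cone, l_cone)])
# Any other spectrum producing these same three numbers looks the same.
```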

 

That's why artists can create a range of colours by mixing paints together, why printing uses a few different ink colours, and why the coloured lights you may see are made up of a small number of colours. This wouldn't work so well if the eye were different - mantis shrimps have around 16 different photoreceptor types (it's more complicated - their eyes are weird) and they're probably not impressed by TV.

 

This trick of breaking colours down into primaries only works when you're considering what the eye can see. If you pass a coloured light through filters or reflect it off pigments (such as the printed page), it's the actual mix of wavelengths that matters. This is why things can look different under cheap fluorescent lights, why bluebells look a very different colour in sunlight and shade, etc. The effect is, if I'm not abusing the terminology, known as "metamerism". One of the most spectacular demonstrations is the gem alexandrite (worth looking up). Mostly, though, the bigger problem is the overall tint of the light - and you can fix that with "white balance" by changing the relative amounts of red, green and blue.
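
 

As a sketch of that last step (Python with numpy; this is a simple grey-patch correction for illustration, not any camera's actual white-balance algorithm):

```python
# White balance: scale the red and blue channels so a patch that should
# be neutral grey ends up with equal red, green and blue.
import numpy as np

img = np.array([[[180.0, 200.0, 240.0]]])   # one bluish "grey" pixel
r, g, b = img[..., 0].mean(), img[..., 1].mean(), img[..., 2].mean()
balanced = img * [g / r, 1.0, g / b]        # channel gains, relative to green
print(balanced)                             # now roughly equal channels
```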

 

Because of the mechanics of how the light is produced, TVs and monitors generally don't produce a mix of their three primary colours at exactly the same place - each pixel is effectively several little coloured lights next to each other, and you see a continuous colour because they're close enough together to look like a single area of colour. It helps that the eye is more sensitive to small changes in brightness than in colour (which is part of how the visual system deals with the eye having colour-specific cells).
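
 

One concrete expression of that sensitivity difference is the standard-definition video luma formula (ITU-R BT.601), which weights the channels by their contribution to perceived brightness:

```python
# Perceived brightness is dominated by green, with blue contributing least.
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(255, 255, 255))  # white -> 255.0
print(luma(0, 255, 0))      # green alone carries ~59% of full brightness
```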

 

OLED and plasma TVs (and older CRTs) actually emit light at each sub-pixel colour location. LCD TVs and monitors generally have a white backlight with little coloured filters over it, and control the light coming through each filtered area by partly blocking it. This isn't perfect, by the way, which is why LCDs generally can't quite manage a perfect black - some light leaks through. On the other hand, it's easier to make them brighter than other screens. But I digress (and there are other, more obscure TV technologies too).

 

Most cameras are only sensitive to the intensity of light at a single sensor site. To get colour, manufacturers put colour filters over the pixels so only (usually) red, green or blue light gets through each one. The exact behaviour of these filters is carefully tuned, but still doesn't always quite match the eye.

 

By having red, green and blue filters close together, the camera can do some processing to work out the final colour each pixel should be. Because real-world objects tend to have fairly constant colours, you can get detail at nearly the full sensor resolution by applying some heuristics, even though you only have one colour recorded per site.

 

The most common pattern, Bayer, is a repeating 2×2 grid of one red, two green and one blue filter - the eye is more sensitive to green, so that's the colour which gets doubled up. If you take a photo of something with repeating fine lines (like a distant railing) the camera can get confused and record false colours instead of detail, because the two look the same on the sensor. Fuji use a more complex "X-Trans" pattern to reduce this problem.
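
 

Here's a rough sketch of the idea in Python (assuming numpy). Real demosaicing uses much smarter heuristics than this naive cell-filling, but it shows how one recorded colour per site becomes three channels per pixel:

```python
import numpy as np

h, w = 4, 4
scene = np.random.rand(h, w, 3)             # the "real" scene, full colour

# Bayer (RGGB) sampling: one value per site - R at (even, even), G at
# (even, odd) and (odd, even), B at (odd, odd).
mosaic = np.zeros((h, w))
mosaic[0::2, 0::2] = scene[0::2, 0::2, 0]   # red sites
mosaic[0::2, 1::2] = scene[0::2, 1::2, 1]   # green sites
mosaic[1::2, 0::2] = scene[1::2, 0::2, 1]   # green sites
mosaic[1::2, 1::2] = scene[1::2, 1::2, 2]   # blue sites

# Naive "demosaic": fill each 2x2 cell from its own samples.
up = lambda a: np.repeat(np.repeat(a, 2, 0), 2, 1)
out = np.zeros_like(scene)
out[:, :, 0] = up(mosaic[0::2, 0::2])
out[:, :, 1] = (up(mosaic[0::2, 1::2]) + up(mosaic[1::2, 0::2])) / 2
out[:, :, 2] = up(mosaic[1::2, 1::2])
print(np.abs(out - scene).mean())           # average reconstruction error
```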

 

Sigma have a "Foveon" sensor technology which really does capture multiple colours at each pixel, by having light of different frequencies penetrate to different depths in the sensor (much as colour film records different colours in different layers). Foveon has some compromises, so it's not widely adopted. Some cameras with Bayer sensors instead have the ability to shift the sensor slightly and capture multiple exposures, building up a full set of red, green and blue at each location - which, of course, assumes nothing in the scene is moving.

 

So... Break the colours down into primaries, record them at or near each pixel, and you have an image.

Edited by Andrew Garrard

That's the non-technical answer.

 

The technical answer probably involves saying that pixels represent sampling points on a continuous intensity plane, and that viewing them involves a reconstruction of the original data, naturally limited by the sampling frequency; the eye does this for you, because that's how its photoreceptors work. The visual system does some further processing to generate a colour-difference representation, derived from the interaction between the incoming wavelengths and the response curves of the cone cells. Approximating this response at the camera sensor (or film) and recreating it using primary pixel colours produces the same response in the eye as looking at a continuous-detail, full-spectrum scene - mostly.
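
 

A quick illustration of that sampling limit in plain Python - a pattern repeating faster than every other sample comes out indistinguishable from a slower one, which is the same effect as the confused railing earlier in the thread:

```python
import math

def sample(cycles_per_pixel, n=8):
    # n samples of a cosine pattern at the given spatial frequency
    # (the "+ 0.0" just normalises any -0.0 for tidy printing)
    return [round(math.cos(2 * math.pi * cycles_per_pixel * i), 2) + 0.0
            for i in range(n)]

print(sample(0.25))  # below the limit: the wiggle is recorded faithfully
print(sample(0.75))  # above the limit: aliases to the 0.25 pattern exactly
```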

 

Fortunately the visual system is good at adapting to colour and brightness shifts, because otherwise it would be impossible to recognise things in sunlight or shade (etc.). We can tweak a lot with calibration of equipment, but mostly it doesn't matter: within limits, the eye itself will make everything look right.

 

Does that help?


The word 'Pixel' is a contraction of Picture Cell. It's the smallest element that makes up a digital image.

 

You can think of it like the 'grains' of a B&W film image, or the little dye clouds that make up a colour slide or photographic colour print.

 

The difference is that B&W film grains are pure black, opaque bits of silver, while a single pixel can show 256 shades of grey.

 

Likewise with colour: film dye clouds are little blobs of cyan, yellow and magenta dye (each about 2 to 3 microns in diameter). A single colour pixel can be any one of over 16 million different hues or shades.
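
 

(Where does "over 16 million" come from? It's simply three 8-bit channels; a quick check in Python:)

```python
print(2 ** 8)    # 256 intensity levels each for red, green and blue
print(256 ** 3)  # 16777216 combinations - the "over 16 million" figure
```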

 

With camera pixels around 5 microns in size these days, the tonal or colour refinement of a digital camera can be orders of magnitude better than that of similarly sized film.

 

The other difference, of course, is that pixels are virtual. They only exist in the context of a display device - be that a computer monitor, phone LCD screen, paper print, television screen or whatever. When not being displayed, they live inside memory devices as digital data.

Edited by rodeo_joe|1

"The word 'Pixel' is a contraction of Picture Cell. It's the smallest element that makes up a digital image."

 

I'm standing by "picture element" being the official etymology I've heard in my 20+ years of working in computer graphics, but "picture cell" works too if you prefer. :-)

 

"The difference is that B&W film grains are pure black, opaque bits of silver, while a single pixel can show 256 shades of grey."

 

Indeed - and those grains (as I, as a digital person, understand it) grow according to the level of light exposure. So film can do better than on-or-off per grain, just as digital pixels usually represent multiple shades.

 

"A single colour pixel can be any one of over 16 million different hues or shades."

 

Just to square that with my previous claim about red, green and blue being next to each other and not usually colocated: it's normal for the computer to store three values with each pixel in memory (or on disk), one each for red, green and blue. Sometimes the representation is more complicated, usually to save storage space, but "pixels" usually get treated as having a single location. Technologies like ClearType (for fonts) bend this rule, and show better detail than the display would normally be capable of by treating the red, green and blue sub-pixels of the display as being in different places for the purposes of smoothing out text - if you look closely, you may see that some text has coloured edges for this reason.

 

Displays still typically don't show all the colour components of a pixel in the same place. A common exception is that many projectors do show all the colours in one place, interleaving red, green and blue over time (so you can often see coloured stripes if you move your head while looking at a digital projector). Colours add together, so showing red, green and blue next to each other on the screen, or one after another in time, gives the same result as blending them.

 

"The other difference, of course, is that pixels are virtual."

 

These days, aren't we all? :-)


Why do you need to explain this to anyone? Let them take out their phones and "google" the word pixel if they're so interested in an explanation. As for lenses, apertures and shutter speeds, I don't see why non-photographers (or people who use their phones for all their photographic requirements) need to know or care about any of that. I think you'll find that serious photographers, whatever type of equipment or recording medium they use, are going to be quite knowledgeable on those subjects.

"Indeed - and those grains (as I, as a digital person, understand it) grow according to the level of light exposure."

 

What changes is not the size of the grains, per se (for a given set of conditions), but rather the number that form in a particular location, depending on the amount of light hitting there.

 

There is SOME variability in size within a given image, but by and large the size is fairly consistent. With B&W films, there are two different grain types in common use: "traditional" grains and "tabular" grains. The latter are called T-grain by Kodak and marketed as TMAX films, while Ilford markets them under the "Delta" brand name (they can have a triangular shape). Tabular grains are more consistent in size and more sensitive for a given size. Many folks - myself included - don't like some of the other quirks that come with them and instead prefer traditional films. Kodak Tri-X is my "drug of choice", but I also like Ilford FP4+. Ilford HP5+ is an incredibly popular choice in this category too. BTW, developer type and dilution also play a significant role in the appearance of grain, and increasing the developing time DOES result in overall larger grains.

 

All modern color films are T-grain, and the resultant dye clouds are incredibly consistent in size.


"Indeed - and those grains (as I, as a digital person, understand it) grow according to the level of light exposure. So film can do better than on-or-off per grain, just as digital pixels usually represent multiple shades."

 

- Afraid not. The silver 'grains' start life as individual crystals of silver halide (mainly bromide, I believe). Exposure to enough light renders (some of) those crystals developable, such that the whole crystal is reduced to metallic silver by the developer. There's no half measure. It's all or nothing - in effect making film more 'digital' than a pixel.

 

What's weirder is that, apparently, the silver is ejected, or extrudes, from the halide crystal as a filament that may or may not curl up into a tight ball. So the actual image is formed randomly adjacent to the crystal that received the light. On a microscopic scale, the image is spatially displaced from where it should be.

 

And purists claim that film is more 'authentic' than digital imaging! They should educate themselves.


Glad to help, Annette. And I'd say Wikipedia is your friend for this kind of thing, although there are other good online explanations (and the occasional confused one). Just, if you Google "pixel", don't Google "Google Pixel", because that's the name of a type of phone. :-)

 

RJ: thank you; I'll do more research on silver halide. I may have been thinking of crystals clumping together or something? I'll try to educate myself before I spread more misinformation. Fortunately, I'm hoping that how film works is the bit that Annette was already confident in!


"Indeed - and those grains (as I, as a digital person, understand it) grow according to the level of light exposure. So film can do better than on-or-off per grain, just as digital pixels usually represent multiple shades."

 

- Afraid not. The silver 'grains' start life as individual crystals of silver halide (mainly bromide I believe). Exposure to enough light renders (some of) those crystals developable; such that the whole crystal is reduced to metallic silver by the developer. There's no half measure. It's all or nothing - in effect making film more 'digital' than a pixel.

 

What's weirder, is that apparently the silver is ejected, or extrudes from the halide crystal as a filament that may or may not curl up into a tight ball. So the actual image is formed randomly adjacent to the crystal that received the light. On a microscopic scale the image is spatially displaced from where it should be.

 

And purists claim that film is more 'authentic' than digital imaging! They should educate themselves.

 

That's the reason I hate to call film cameras 'analog'.


For completeness, I've done a little research on the film grain side of things. This article is quite interesting and informative.

 

To clear up a bit of my confusion: film grain is not a fixed size - grain is a combination of physical clumps and an optical effect (much like moiré). Silver halide particles are smaller, and are indeed binary (black or not, when developed). Assuming the particles are all roughly the same size, the "grain" is the result of the random distribution of particles, caused by bundles of silver halide particles rather than individual ones. So the grain is not, of itself, a constant size and either on or off - it's made up of smaller particles which are. (Question: is there a non-linearity such that the exposure or development of one silver halide particle incites or inhibits other nearby particles? I'm wondering whether there is, in fact, a grain-level effect, or whether it's all down to particles.)

 

Digital sensors have a regular grid which avoids uneven clumping of colour samples, so the basic cause of grain is missing.

 

Colour film appears to use the silver halide particles to activate dye, and the irregular spacing of dye clouds causes grain in colour film. The dye clouds have indistinct edges, and aren't digital.

 

So... silver halide particles aren't "analogue" (they're either activated by a photon or not). Film grains, being made up of multiple particles, are analogue, in as much as they're not restricted to being fully black or fully transparent.

 

Have I got that right?


I think a better analogue (oops!) of film grain would be to compare it to charcoal or pencil drawing. The marks made on the white paper have a pretty consistent density, but their distribution and the paper texture give an illusion of continuous tone.

 

Since metallic silver is opaque, it's obviously not capable of being other than 'black' on a transparent base. As I said, the best research I could find says that the silver is produced in the form of threads or filaments. So any appearance of 'grains' is caused by those threads curling up or intertwining.

 

Surprisingly, nobody appears to have bothered to do any electron-microscopy on film emulsions, and optical microscopy doesn't reveal much more than a blobby mess looking like the surface of sandpaper.

 

Incidentally, the 'fuzzy edges' of dye clouds are a slight illusion. In reality they're 3-dimensional spheroids. The edges are thinner than the centre and so appear less dense. The spread of dye is deliberately contained within tiny oily globules; otherwise there'd be nothing to stop dyes migrating between colour layers and contaminating each other.


"Surprisingly, nobody appears to have bothered to do any electron-microscopy on film emulsions, and optical microscopy doesn't reveal much more than a blobby mess looking like the surface of sandpaper."

 

If I had access to a TEM (or rather, if it didn't cost a small fortune) I'd do it.

 

Perhaps I can sweet-talk someone into doing an SEM for me, or do an AFM myself. I can't do the SEM, as I never got "phase 2" training on it. I can do AFM, but am at the mercy of scavenging a good used tip...


Having just read the article, I don't find it very scholarly - you can't equate undeveloped crystal size with silver 'particle' size, for example.

 

However, the electron micrograph of halide crystals does show the filamentary nature of silver produced on development or at development sites. 'Fluff balls' of silver would probably be as good a description as any.

 

My own measurement of dye clouds reveals them to be fairly consistent at 2 to 3 microns diameter, but with a tendency to cluster into groups 5 to 10 microns across.


"Surprisingly, nobody appears to have bothered to do any electron-microscopy on film emulsions, and optical microscopy doesn't reveal much more than a blobby mess looking like the surface of sandpaper."

 

There are some electron micrographs of film in the PDF I linked to (for anyone who hadn't read it).

 

"Incidentally, the 'fuzzy edges' of dye clouds are a slight illusion. In reality they're 3-dimensional spheroids."

 

Oh - fair enough. I'd assumed we wanted the dye cloud to diffuse slightly through the surrounding medium, but I believe you. Unless the dyes react chemically, I'm not sure that blending between layers after processing would necessarily be a problem, but I'm much happier with the nice simple digital process of image production than with the chemical approach!

 

"Having just read the article, I don't find it very scholarly - you can't equate undeveloped crystal size with silver 'particle' size, for example."

 

I can certainly believe that. I did struggle to get anything coherent out of much of the other stuff I could find online, though, so I offered it as "better than nothing"!

 

"My own measurement of dye clouds reveals them to be fairly consistent at 2 to 3 microns diameter, but with a tendency to cluster into groups 5 to 10 microns across."

 

That seems to make sense. For the purposes of understanding grain, I'm curious whether the clustering is purely a random and/or optical effect (white noise vs blue noise, in computer graphics terms) or whether there's a chemical reason for the clustering (or activation of adjacent silver halide crystals).

 

(Annette: Apologies for hijacking the thread; I hope the original answers covered your question from the digital perspective, and I'm now attempting to fill in the holes in my own knowledge about the film side of things.)


"That seems to make sense. For the purposes of understanding grain, I'm curious whether the clustering is purely a random and/or optical effect..."

 

Some years back a former Kodak guy, Dick Dickerson, had an interesting article in Photo Technique [sp] magazine, where he simulated "grain" using an Excel spreadsheet. (Try making two columns with a random function, =RAND(), then show results on an x-y graph.) The result was shown on a 2D graph, and had a striking resemblance to "grain" patterns. (Hit the recalc button to see different patterns.) So it seems likely, or at least plausible, that what we see as film grain can be largely explained statistically as a somewhat random distribution. Real film is not limited to a single plane, so grain patterns can be much more complicated.
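
 

That experiment is easy to reproduce; here's a rough equivalent in plain Python (pure random scatter, no film physics at all):

```python
import random

w, h, grains = 60, 16, 300
page = [[' '] * w for _ in range(h)]
for _ in range(grains):                  # drop "grains" at random positions
    page[random.randrange(h)][random.randrange(w)] = '#'
print('\n'.join(''.join(row) for row in page))
# Even a perfectly uniform random distribution shows the clumps and voids
# we read as "grain".
```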

 

A second comment relates to what we see as "grain" through a microscope. Years back, circa 1980(?), I had set up some QC procedures to verify "adequate" preset focus on a proprietary camera system. Essentially, resolution targets were photographed, and the processed film was rated via a microscope - somewhere around 50 to 100× as I recall. I still remember initially seeing the "grain" (actually it was color neg film) and thinking that it would limit the resolved detail - then being surprised to see that the resolved detail was much finer than what seemed possible.

 

I won't try to explain except to say that anyone looking at color film "grain" under a microscope without an underlying "image" may be easily fooled as to the limiting effect on recorded detail. After this, I looked at grain as not setting a limit to fine detail, but being more akin to a coarser overlay that could obscure some of the underlying fine detail.

 

PS: if it matters, the film used was the Kodak professional color neg of the day, either VPS II or III.


"Unless the dyes react chemically, I'm not sure that blending between layers after processing would necessarily be a problem.."

 

- This is going from deep to esoteric now!

 

The 'dyes' used in colour film are really dye-couplers - that's half a molecule of dye, in simplistic terms. The other half of the (cyan, yellow or magenta) dye is provided by a developer oxidation product, which is common to all 3 dyes. Therefore any mobility of dye-couplers across the 3 colour-sensitive layers before development could cause colour pollution.

 

To prevent this, the couplers are mixed with an oily substance that reduces their mobility, and also restrains the size of individual dye clouds.

 

A homogeneous mix of dye-coupler, AgX crystals and gelatine matrix just wouldn't work very well.

 

I don't pretend to know the exact details of the chemicals or processes used to anchor the dye-couplers in place. This is just information I've gleaned from various text books and articles over time. AFAIK, it's reasonably accurate.


Interesting, Joe. Okay, thank you. I learn another thing! (Having started with digital, one of these days I'll get all the way back to self-developing film and mastering effective aperture adjustments with focal length on a large format camera. For everyone who finds digital scary, I assure you that going the other way is much more complicated.)

"That's the reason I hate to call film cameras 'analog'."

 

I agree. Analog refers to electronic technology. Audio tape is analog: the signal recorded on the tape varies as an analog of the pitch and amplitude of the original sound. A CD is digital audio: the original sound is sampled at a very high rate and reduced to numbers, which are etched as binary data onto the substrate of the CD.

 

Photographic film is a CHEMICAL process.

 

So the opposite of digital photography is not analog photography, but chemical photography.

Link to comment
Share on other sites

Paul: I'm not sure I'd buy that distinction. An analogue computer is still an analogue computer if it works with hydraulics (there are a few examples). Babbage's Analytical Engine is a digital, mechanical computer. EDSAC and Baby weren't less digital just because they used mercury delay lines and CRT phosphors.

 

The opposite of (or at least, alternative to) chemical photography is electronic photography. Older TV cameras (the output of which was recorded on film or tape) were electronic, but definitely analogue. Once everything is quantised to a set of distinct numbers, it's digital. It's the difference between integers and real numbers.

 

The distinction in this thread was that the activation of silver halide crystals is (as I understand it) digital, specifically binary: they're activated by a photon or they're not. Effectively, the silver halide forms a very fine halftone screen. The complication is that the distribution of silver halide crystals is non-uniform so you don't get a neat grid as with a digital camera. In graphics terms, it's a non-ordered dither. We don't tend to look at the individual crystals, so the resulting image appears to have continuous tone (affected by grain), but at the level of the recording medium, others are right to say it's "digital".
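
 

A tiny statistical sketch of that halftone idea (Python; a toy, not a model of real emulsion):

```python
import random

tone = 0.3                                   # target grey level, 30%
# Each "crystal" is strictly on or off, decided at random against the tone.
crystals = [random.random() < tone for _ in range(100_000)]
print(sum(crystals) / len(crystals))         # ~0.3, from purely binary grains
```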

 

A photocell, in contrast, actually captures a continuous value (quantised only because of the number of electrons in it); the result is explicitly quantised by an analogue-to-digital converter during sensor read-out, after which the image is "digital" (but not usually bi-level). It then gets processed numerically, of course.
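
 

As a toy version of that read-out step (the 12-bit depth and 0-to-1 scaling here are assumptions for illustration, not any particular sensor's behaviour):

```python
def adc(signal, bits=12):
    # clamp the continuous signal to [0, 1] and map it to 2^bits - 1 codes
    levels = 2 ** bits - 1
    return round(max(0.0, min(1.0, signal)) * levels)

print(adc(0.5))     # 2048: the continuous value is now just a number
```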

 

Of course, it's all quantum if you look closely enough.

