Mega Pixel Vs Spectrum of Light

Discussion in 'Beginner Questions' started by miss.annette_leigh_haynes, Jan 28, 2018.

1. miss.annette_leigh_haynes

Mega Pixel Vs Spectrum of Light
How do I explain to non-photographers, when asked, what a pixel is? It would be like trying to explain the spectrum of light bouncing off a subject on a sunny day using a 50mm lens at f/5.6 and 1/500 sec - how the light goes onto the memory card and is transformed into a picture. And when you ask them what lens they are using, they have no clue.

Last edited: Jan 28, 2018
2. Andrew Garrard

Most people are now familiar with computer screens (or at least televisions). Each "picture element" (pixel) defines a (usually) rectangular area and is (for this part of the conversation) a single colour. Just as a film or video is not constant motion but made up of lots of frames shown one after the other, the picture on a computer screen isn't a smooth continuous representation of what you're looking at, but made up of a grid of pixels. When you look at them from a distance, they make up an image. This isn't just computer screens - the same is true for half toning on newspapers (and most other print). If you don't require the grid to be regular, the same is true of pointillist painters (e.g. Seurat), the microscopic grain of slide film, and the cells in the human eye. Computer images record one colour per pixel when they're stored (although there's usually some clever maths to reduce the space it takes).

The pixels (or "sensor sites", or "sensor elements" - sensels) in a digital camera capture light hitting parts of the image and record the intensity in the image file - which gives you something you can display on screen or print. The more pixels you capture, the more detail you can record.
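To make the grid idea concrete, here's a toy sketch (my own illustration, nothing to do with any particular camera or file format): a monochrome image is just a grid of intensity values, and even a tiny grid rendered as ASCII characters reads as a picture when you squint.

```python
# A minimal sketch: a monochrome image as a grid of intensity values,
# rendered as ASCII so the "grid of pixels" idea is visible.

def render(image, ramp=" .:*#"):
    """Map each intensity (0.0-1.0) to a character and join row by row."""
    lines = []
    for row in image:
        lines.append("".join(ramp[int(v * (len(ramp) - 1))] for v in row))
    return "\n".join(lines)

# A 5x5 "image" of a bright diagonal line on a dark background.
tiny = [[1.0 if x == y else 0.1 for x in range(5)] for y in range(5)]
print(render(tiny))
```

More rows and columns in `tiny` would mean more recordable detail - which is all "more megapixels" means.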

That's all there is to it for a monochrome camera (like the Leica Monochrom). However, most cameras capture a colour image, and that's more complicated (next post).

3. Andrew Garrard

If you look more closely at the pixels on a computer monitor or TV (TVs are usually bigger...) you'll see they're made up of different colours - usually red, green and blue, often in vertical stripes. The pixels don't display an arbitrary colour - they can only control intensity, so to make up actual colours you need to record different intensities of primary colours.

The human eye isn't sensitive to every wavelength independently either - there are different cells which are sensitive to different ranges of wavelengths. Ignoring rods (which are highly sensitive for monochrome night vision but don't do much in daylight), there are usually three types of cones (occasionally four or five - the variations are encoded on the X chromosome, so a few women end up with a mix). One type of cone is mostly sensitive to blue light, one to reddish-yellow and one to greenish-yellow. Since all the colours most of us can see are perceived as a mix of the responses of the three types of cone cells, you can fool the eye into thinking it's looking at any colour by giving it a mix of three wavelengths that stimulate those cells appropriately.
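For the curious, the three-cone trick can be sketched in a few lines of code. The Gaussian sensitivity curves and peak wavelengths below are rough stand-ins I made up for illustration, not real colorimetric data - the point is just that a mixture of two primaries can be tuned to produce the same cone responses as a single pure wavelength.

```python
import math

# Hypothetical Gaussian cone sensitivities. Peak wavelengths (nm) are rough
# textbook-style values; the widths are invented for illustration.
PEAKS = {"S": 445.0, "M": 540.0, "L": 565.0}
WIDTH = 40.0

def sensitivity(cone, wavelength_nm):
    return math.exp(-((wavelength_nm - PEAKS[cone]) / WIDTH) ** 2)

def cone_response(spectrum):
    """spectrum: list of (wavelength_nm, intensity) pairs."""
    return {c: sum(i * sensitivity(c, w) for w, i in spectrum) for c in PEAKS}

def match_two_primaries(target_nm, p1_nm, p2_nm):
    """Solve (2x2 linear system) for intensities of two primaries that
    reproduce the M and L cone responses of a pure target wavelength."""
    a, b = sensitivity("M", p1_nm), sensitivity("M", p2_nm)
    c, d = sensitivity("L", p1_nm), sensitivity("L", p2_nm)
    tm, tl = sensitivity("M", target_nm), sensitivity("L", target_nm)
    det = a * d - b * c
    return (tm * d - b * tl) / det, (a * tl - tm * c) / det

# A pure yellow-ish wavelength vs a tuned red + green mixture:
pure = cone_response([(575.0, 1.0)])
i_red, i_green = match_two_primaries(575.0, 630.0, 545.0)
mixed = cone_response([(630.0, i_red), (545.0, i_green)])
```

With these (made-up) curves, the mixture stimulates the M and L cones identically to the pure wavelength - which is why a screen with only three primaries can fool the eye.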

That's why artists can create a range of colours by mixing paints together, why printing ink has a few different ink colours, and why coloured lights you may see are made up of a small number of colours. This wouldn't work so well if the eye were different - mantis shrimps have around 16 different types of photoreceptor (it's more complicated - their eyes are weird) and they're probably not impressed by TV.

This trick of breaking down wavelengths only works when you're considering what the eye can see. If you mix a coloured light with filters (such as the printed page), it's the actual mix of wavelengths that matters. This is why things can look different under cheap fluorescent lights, why bluebells look a very different colour in sunlight and shade, etc. The effect is, if I'm not abusing the terminology, known as "metamerism". One of the most spectacular demonstrations is with the gem alexandrite (worth looking up). Mostly, though, the bigger problem is the overall tint of the light - and you can fix that with "white balance" by changing the relative amounts of red, green and blue.
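As a concrete example of white balance, here's a sketch of the simple "gray-world" heuristic (a common textbook approach, not what any particular camera actually does): assume the scene averages to neutral grey, and scale each channel so the averages match.

```python
# A minimal gray-world white balance sketch (my illustration): scale the
# red, green and blue channels so their scene-wide averages are equal.

def gray_world(pixels):
    """pixels: list of (r, g, b) tuples, values 0.0-1.0."""
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]
    mean = sum(avg) / 3
    gains = [mean / a for a in avg]
    return [tuple(min(1.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

# A scene with a warm (reddish) cast...
warm = [(0.8, 0.5, 0.3), (0.6, 0.4, 0.2), (0.9, 0.6, 0.4)]
balanced = gray_world(warm)
```

After balancing, the red, green and blue averages agree - the overall tint is gone, which is the "fix the relative amounts of red, green and blue" idea above.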

Because of the mechanics of how the light is produced, TVs and monitors generally don't produce a mix of their three primary colours at exactly the same place - each pixel is effectively several little coloured lights next to each other, and you see a continuous colour because they're close enough together that they look like a single area of colour. It helps that the eye is more sensitive to small changes in brightness than in colour (which is part of how the visual system deals with the eye having colour-specific cells).

OLED and plasma TVs (and older CRTs) actually emit light at each sub-pixel colour location. LCD TVs and monitors generally have a white backlight with little coloured filters over it, and control the light coming through each filtered area by partly blocking it. This isn't perfect, by the way, which is why LCDs generally can't quite do a perfect black - some light leaks through. On the other hand it's easier to make them brighter than other screens. But I digress (and there are other, more obscure TV technologies too).

Most cameras are only sensitive to the intensity of light at a single sensor site. To get colour, manufacturers put colour filters over the pixels so only (usually) red, green or blue light gets through. The exact behaviour of these filters is carefully tuned, but still doesn't always quite match the eye's response.

By having red, green and blue filters close together, the camera can do some processing to create the final colour which the pixel should be. Because real-world objects tend to have fairly constant colours, you can get detail nearly at the full sensor resolution by applying some heuristics, even though you only have one colour recorded per site.

The most common pattern, Bayer, is a repeating 2×2 grid containing one red, two green and one blue filter - the eye is more sensitive to green, so that's the colour which gets doubled. If you take a photo of something with repeating detailed lines (like a distant railing) the camera can get confused and record false colours instead of detail, because the two look the same on the sensor. Fuji use a more complex "X-Trans" pattern to reduce this problem.
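A toy sketch of the demosaic idea (heavily simplified - real cameras use far smarter heuristics than plain neighbour averaging, and I've only handled an interior site):

```python
# Toy Bayer mosaic sketch: each site records one channel; missing channels
# at a site are estimated by averaging nearby sites of the right colour.

def bayer_channel(x, y):
    """RGGB pattern: which colour does site (x, y) record?"""
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

def demosaic_site(mosaic, x, y):
    """Estimate (R, G, B) at an interior site from its 3x3 neighbourhood."""
    out = {}
    for ch in "RGB":
        vals = [mosaic[j][i]
                for j in range(y - 1, y + 2)
                for i in range(x - 1, x + 2)
                if bayer_channel(i, j) == ch]
        out[ch] = sum(vals) / len(vals)
    return out["R"], out["G"], out["B"]

# A uniform grey scene: every site records 0.5 regardless of its filter,
# and the reconstruction correctly comes out neutral grey.
flat = [[0.5] * 4 for _ in range(4)]
```

The failure mode in the paragraph above happens when fine detail, rather than colour, changes between neighbouring sites - the averaging then invents colours that were never in the scene.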

Sigma have a "Foveon" sensor technology which actually does capture multiple colours at one pixel, by having light of different frequencies penetrate to different depths in the sensor (much as colour film records light in multiple layers, with built-in filters between them). Foveon has some compromises, so it's not widely adopted. Some cameras with Bayer sensors instead have the ability to shift the sensor slightly and capture multiple exposures to get a full set of red, green and blue at each location - of course, this assumes nothing is moving in the scene.

So... Break the colours down into primaries, record them at or near each pixel, and you have an image.

Last edited: Jan 28, 2018
4. Andrew Garrard

The technical answer probably involves saying that pixels represent sampling points on a continuous intensity plane, and that viewing them involves a reconstruction of the original data naturally limited by the sampling frequency; the eye does this for you, because that's how its photoreceptors work. The visual system does some further processing to generate a colour-difference representation, generated from the filtered combination of the interaction between incoming wavelengths and the response curve of the cone cells. Approximating this response at the camera sensor (or film) and recreating it using primary pixel colours produces the same response in the eye as looking at a continuous detail, infinite spectrum scene - mostly.
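If the sampling view sounds abstract, this tiny sketch (my illustration, assuming ideal point sampling) shows it directly: pixels are point samples of a continuous intensity function, and detail finer than the sampling frequency allows comes out wrong (aliased).

```python
import math

# Sampling a continuous 1-D "intensity" function at a pixel grid.

def sample(f, n):
    """Sample f over [0, 1) at n evenly spaced points."""
    return [f(i / n) for i in range(n)]

# A wave with 3 cycles across the frame:
wave = lambda x: 0.5 + 0.5 * math.sin(2 * math.pi * 3 * x)

coarse = sample(wave, 4)   # too few samples: the 3 cycles alias to 1
fine = sample(wave, 64)    # enough samples to represent the detail
```

With only 4 samples the 3-cycle wave is indistinguishable from a single slow cycle - the same effect that gives Bayer sensors false colour on distant railings.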

Fortunately the visual system is good at adapting to colour and brightness shifts, because otherwise it would be impossible to recognise things in sunlight or shade (etc.) We can tweak a lot with calibration of equipment, but mostly it doesn't matter: within limits, the eye itself will make everything look right.

Does that help?

5. rodeo_joe|1

The word 'Pixel' is a contraction of Picture Cell. It's the smallest element that makes up a digital image.

You can think of it like the 'grains' of a B&W film image, or the little dye clouds that make up a colour slide or photographic colour print.

The difference is that B&W film grains are pure black, opaque bits of silver, while a single pixel can show 256 shades of grey (in a typical 8-bit image).

Likewise with colour: film dye clouds are little Cyan, Yellow and Magenta blobs of dye (each about 2 to 3 microns in diameter). A single colour pixel can be any one of over 16 million different colour hues or shades.
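Those figures come straight from the bit depth - a quick sanity check:

```python
# 8 bits per channel gives 2^8 grey levels; three channels give
# 2^8 * 2^8 * 2^8 possible colours.
levels_8bit = 2 ** 8              # 256 levels per channel
colours_24bit = levels_8bit ** 3  # 16,777,216 - "over 16 million"
print(levels_8bit, colours_24bit)  # 256 16777216
```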

With camera pixels having a size of around 5 microns these days, it can be seen that the tone or colour refinement of a digital camera can be orders of magnitude better than similarly sized film.

The other difference, of course, is that pixels are virtual. They only exist in the context of a display device - be that a computer monitor, phone LCD screen, paper print, Television screen or whatever. When not being displayed they live inside memory devices as digital data.

Last edited: Jan 28, 2018
6. Andrew Garrard

I'm standing by "picture element" being the official etymology I've heard in my 20+ years of working in computer graphics, but "picture cell" works too if you prefer.

Indeed - and those grains (as I, as a digital person, understand it) grow according to the level of light exposure. So film can do better than on-or-off per grain, just as digital pixels usually represent multiple shades.

Just to square that with my previous claim about red, green and blue being next to each other and not usually colocated: it's normal for the computer to store three values with each pixel in memory (or on disk), one each for red, green and blue. Sometimes the representation is more complicated, usually to save storage space, but "pixels" usually get treated as having a single location. Technologies like ClearType (for fonts) bend this rule, and show better detail than the display would normally be capable of by treating the red, green and blue sub-pixels of the display as being in different places for the purposes of smoothing out text - if you look closely, you may see that some text has coloured edges for this reason.
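A rough sketch of the sub-pixel idea (my simplification, not Microsoft's actual ClearType filtering): treat each pixel's R, G and B stripes as three distinct horizontal sample positions, so an edge can be placed with three times the horizontal precision - and note the coloured fringe this produces.

```python
# Sub-pixel sampling sketch: a row of pixels whose R, G, B stripes sit at
# different horizontal positions, sampling coverage of a shape whose right
# edge lies at edge_pos (in pixel units, shape covering everything left
# of the edge).

def subpixel_coverage(edge_pos, n_pixels=4):
    """Per-pixel (r, g, b) coverage, sampled at each stripe's centre."""
    pixels = []
    for px in range(n_pixels):
        rgb = []
        for sub in range(3):  # R, G, B stripe centres within one pixel
            centre = px + (sub + 0.5) / 3
            rgb.append(1.0 if centre < edge_pos else 0.0)
        pixels.append(tuple(rgb))
    return pixels

# An edge halfway through the second pixel:
cov = subpixel_coverage(1.5)
```

The second pixel comes out as `(1.0, 0.0, 0.0)` - only its red stripe is covered, which is exactly the coloured edge you can see on sub-pixel-rendered text up close.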

Displays still typically don't display all the colour components of a pixel in the same place. A common exception is that many projectors do show all the colours in one place, interleaving red, green and blue over time (so you can often see coloured stripes if you move your head while looking at a digital projector). Colours add together, so showing red, blue and green next to each other either on the screen or over time gives the same result as blending them.

These days, aren't we all?

7. Ken Katz

Why do you need to explain this to anyone? Let them take out their phones and "google" the word pixel if they are so interested in an explanation. With respect to lenses, apertures and shutter speeds, I don't see why non-photographers (or people who use their phones for all their photographic requirements) need to know or care about any of that. I think you will find that serious photographers, whatever type of equipment or recording medium they use, are going to be quite knowledgeable on those subjects.

8. James G. DainisModerator

Get out your magnifying glass or loupe and take a look at the smiley here. You will see that he is made up of red, blue and green elements. Not a single yellow element there. I wouldn't want to try to explain that to anyone.

9. ben_hutcherson

What changes is not the size of the grains, per se (for a given set of conditions), but rather the number that form in a particular location, depending on the amount of light hitting there.

There is SOME variability in size within a given image, but by and large the size is fairly consistent. With B&W films, you have two different grain types in common use. There are "traditional" grains and "tabular" grains - the latter are called T-grain by Kodak and marketed as TMAX films, while Ilford markets them under the "Delta" brand name (they can have a triangular shape). Tabular grains are more consistent in size and more sensitive for a given size. Many folks - myself included - don't like some of the other quirks that come with them and instead prefer traditional films. Kodak Tri-X is my "drug of choice" but I also like Ilford FP-4+. Ilford HP5+ is an incredibly popular choice in this category also. BTW, developer type and dilution also play a significant role in the appearance of grain. Increasing the developing time DOES result in overall larger grains.

All modern color films are T-grain, and the resultant dye clouds are incredibly consistent in size.

10. Andrew Garrard

Thanks for the correction, Ben. That does sound familiar, now you mention it. I know (more or less) my digital optical theory, but chemistry isn't my strong point!

11. miss.annette_leigh_haynes

I will have to give you all an (A+) for your great answers. I think one of you said why explain anything - let them google "pixel"!!
Thanks again to all.

12. rodeo_joe|1

"Indeed - and those grains (as I, as a digital person, understand it) grow according to the level of light exposure. So film can do better than on-or-off per grain, just as digital pixels usually represent multiple shades."

- Afraid not. The silver 'grains' start life as individual crystals of silver halide (mainly bromide I believe). Exposure to enough light renders (some of) those crystals developable; such that the whole crystal is reduced to metallic silver by the developer. There's no half measure. It's all or nothing - in effect making film more 'digital' than a pixel.

What's weirder, is that apparently the silver is ejected, or extrudes from the halide crystal as a filament that may or may not curl up into a tight ball. So the actual image is formed randomly adjacent to the crystal that received the light. On a microscopic scale the image is spatially displaced from where it should be.

And purists claim that film is more 'authentic' than digital imaging! They should educate themselves.

13. Andrew Garrard

Glad to help, Annette. And I'd say Wikipedia is your friend for this kind of thing, although there are other good online explanations (and the occasional confused one). Just, if you Google "pixel", don't Google "Google Pixel", because that's the name of a type of phone.

RJ: thank you; I'll do more research on silver halide. I may have been thinking of crystals clumping together or something? I'll try to educate myself before I spread more misinformation. Fortunately, I'm hoping that how film works is the bit that Annette was already confident in!

14. BeBu Lamar

It's the reason I hate calling film cameras analog.

15. ben_hutcherson

There's no ifs, ands, or buts about it - film is not an analog medium.

16. Andrew Garrard

To clear up a bit of my confusion: film grain is not a fixed size - but grain is a combination of physical clumps and an optical effect (much like moire). Silver halide particles are smaller, and are indeed binary (black or not, when developed). Assuming the particles are all roughly the same size, the "grain" is the result of the random distribution of particles, and caused by bundles of silver halide particles rather than individual ones. So the grain is not, of itself, a constant size and either on or off - it's made up of smaller particles which are. (Question: is there a non-linearity such that the exposure or development of one silver halide particle incites or inhibits other nearby particles? Just wondering if there is, in fact, a grain-level effect, or whether it's all down to particles.)

Digital sensors have a regular grid which avoids uneven clumping of colour samples, so the basic cause of grain is missing.

Colour film appears to use the silver halide particles to activate dye, and the irregular spacing of dye clouds causes grain in colour film. The dye clouds have indistinct edges, and aren't digital.

So... silver halide particles aren't "analogue" (they're either activated by a photon or not). Film grains, being made up of multiple particles, are analogue, in as much as they're not restricted to being fully black or fully transparent.
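If it helps, that "binary particles, analogue grain" idea is easy to simulate - a toy Monte Carlo sketch, assuming (purely for illustration) that each particle's chance of being developed is simply proportional to exposure:

```python
import random

# Each "particle" is binary (developed or not), yet the average density
# over many particles varies continuously with exposure - binary parts,
# analogue whole.

def developed_fraction(exposure, n_particles=100_000, seed=42):
    """Fraction of binary particles developed at a given exposure (0-1)."""
    rng = random.Random(seed)
    return sum(rng.random() < exposure for _ in range(n_particles)) / n_particles
```

At exposure 0.5, about half the particles develop; sweep the exposure and the density tracks it smoothly, even though no individual particle is ever anything but fully black or fully clear.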

Have I got that right?

17. rodeo_joe|1

I think a better analogue (oops!) of film grain would be to compare it to charcoal or pencil drawing. The marks made on the white paper have a pretty consistent density, but their distribution and the paper texture give an illusion of continuous tone.

Since metallic silver is opaque, it's obviously not capable of being other than 'black' on a transparent base. As I said, the best research I could find says that the silver is produced in the form of threads or filaments. So any appearance of 'grains' is caused by those threads curling up or intertwining.

Surprisingly, nobody appears to have bothered to do any electron-microscopy on film emulsions, and optical microscopy doesn't reveal much more than a blobby mess looking like the surface of sandpaper.

Incidentally, the 'fuzzy edges' of dye clouds are a slight illusion. In reality they're 3 dimensional spheroids. The edges are thinner than the centre and so appear less dense. The spread of dye is deliberately contained within tiny oily globules. Otherwise there'd be nothing to stop dyes migrating between colour layers and contaminating each other.

18. ben_hutcherson

If I had access to TEM(or rather if it didn't cost a small fortune) I'd do it.

Perhaps I can sweet talk someone into doing an SEM for me or do an AFM myself. I can't do the SEM as I never got "phase 2" training in doing it. I can do AFM, but am at the mercy of scavenging a good used tip...

19. rodeo_joe|1

Having just read the article, I don't find it very scholarly - you can't equate undeveloped crystal size with silver 'particle' size for example.

However, the electron micrograph of halide crystals does show the filamentary nature of silver produced on development or at development sites. 'Fluff balls' of silver would probably be as good a description as any.

My own measurement of dye clouds reveals them to be fairly consistent at 2 to 3 microns diameter, but with a tendency to cluster into groups 5 to 10 microns across.

20. Andrew Garrard

There are some electron micrographs of film in the PDF I linked to (for anyone who hadn't read it).

Oh - fair enough. I'd assumed we wanted the dye cloud to diffuse slightly through the surrounding media, but I believe you. Unless the dyes react chemically, I'm not sure that blending between layers after processing would necessarily be a problem, but I'm much happier with the nice simple digital process of image production than the chemical approach!

I can certainly believe that. I did struggle to get anything coherent out of much of the other stuff I could find online, though, so I offered it as "better than nothing"!

That seems to make sense. For the purposes of understanding grain, I'm curious whether the clustering is purely a random and/or optical effect (white noise vs blue noise, in computer graphics terms) or whether there's a chemical reason for the clustering (or activation of adjacent silver halide crystals).

(Annette: Apologies for hijacking the thread; I hope the original answers covered your question from the digital perspective, and I'm now attempting to fill in the holes in my own knowledge about the film side of things.)