
Mega Pixel Vs Spectrum of Light



There's no ifs, ands, or buts about it: film is not an analog medium.

 

We live in a quantum world, where quantum mechanics applies, not the continuum mechanics that Newton believed in.

 

Note, though, that the characteristic curves for film curve down, such that the response decreases with increased spatial frequency, not the sharp cutoff that Nyquist indicates for digital.
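A quick numerical sketch of that contrast (purely illustrative: the exponential roll-off constant for film and the brick-wall cutoff for the sampled system are assumed shapes, not measured data):

```python
import numpy as np

# Illustrative MTF shapes only: film response is modelled here as a smooth
# exponential roll-off, while an idealised sampled system keeps full
# response up to Nyquist and records nothing above it.
freqs = np.linspace(0, 150, 301)        # spatial frequency, lp/mm
film_mtf = np.exp(-freqs / 60.0)        # assumed roll-off constant of 60 lp/mm
nyquist = 100.0                          # e.g. a sensor with 5 micron pitch
digital_mtf = np.where(freqs < nyquist, 1.0, 0.0)  # idealised hard cutoff

for f in (25, 50, 75, 99, 101):
    i = int(np.searchsorted(freqs, f))
    print(f"{f:3d} lp/mm  film {film_mtf[i]:.2f}  digital {digital_mtf[i]:.2f}")
```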

-- glen


"Note, though, that the characteristic curves for film curve down, such that the response decreases with increased spatial frequency, not the sharp cutoff that Nyquist indicates for digital."

 

- By 'characteristic curve' I take it that MTF response is meant, not the H&D curve, obviously.

 

Very few digital camera/lens combinations reach the theoretical Nyquist spatial frequency, with the lens being the main limiting factor in the case of sensors with a photosite spacing of 5 microns or less. Yet they still convey a far greater impression of sharpness than any film/lens combo.

 

Those sensors having a low-pass AA filter are naturally restricted to a spatial frequency below the Nyquist limit.
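For reference, the Nyquist limit implied by that photosite spacing is easy to work out (a back-of-the-envelope sketch; one line pair needs at least two samples):

```python
# Nyquist limit for a 5 micron photosite pitch: one line pair per two pixels.
pitch_mm = 0.005                       # 5 microns, expressed in mm
nyquist_lppmm = 1 / (2 * pitch_mm)     # line pairs per millimetre
print(f"Nyquist limit: {nyquist_lppmm:.0f} lp/mm")   # prints 100 lp/mm
```

Note how this lines up with the ~100 lp/mm practical limit for film mentioned below.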

 

The resolution limit for film is set by the average 'grain' size and IME very rarely exceeds 100 lp/mm, even with a very good lens, in the centre of the image circle, and with slow/medium speed films. Anything with a finer grain structure is almost useless for general pictorial use, being limited to tripod work with static (and usually boring) subject matter and unreliable processing techniques.

 

I still fail to see why the inaccurate use of the word 'analog(ue)' has controversially been introduced, and is being so vehemently defended. It's obviously simply a device to 'hipsterise' the use of film. Such easily swayed individuals rarely have a real commitment to whatever currently 'cool' entertainment catches their interest.

 

So what's wrong with the simple, uncontroversial and unambiguous phrase 'film photography'? Especially when the end output is often scanned, at which point it definitely enters the digital realm and totally loses any dubious claim to being called analogue.

Edited by rodeo_joe|1

It's obviously simply a device to 'hipsterise' the use of film.

Of course, especially in this day and age, and especially on the Internet, a lot of things people claim to be obvious should be regarded with suspicion. Attributing such motives to people who most likely use the term "analog" quite innocuously is probably more projection than observation.

We didn't need dialogue. We had faces!

Of course, especially in this day and age, and especially on the Internet, a lot of things people claim to be obvious should be regarded with suspicion. Attributing such motives to people who most likely use the term "analog" quite innocuously is probably more projection than observation.

 

- Well, someone, or some organisation, invented, promoted and propagated the stupid term 'analog(ue) photography'. It's not the sort of phrase that pops into use unbidden and without an agenda.

 

I doubt that, at first hearing, the average person-in-the-street even connects the phrase with film photography at all.

 

I stand by the hipsterisation theory.


Well, someone, or some organisation, invented, promoted and propagated the stupid term 'analog(ue) photography'. It's not the sort of phrase that pops into use unbidden and without an agenda.

Conspiracy theory much? Pretty natural-sounding term to me.

I stand by the hipsterisation theory.

No doubt.

We didn't need dialogue. We had faces!

"Pretty natural sounding term to me."

 

- Oh yes, I well remember everyone prior to a few years ago walking around with the phrase 'analogue photography' hanging off their lips... not!

 

This inaccurate description only came into use after digital cameras became widespread. Presumably it was invented to imply some superior smoothness of tone, or some other refinement not possible in the digital domain, an assertion that is just plain wrong.

 

Before this Luddite-promoted schism, nobody called photography anything other than photography. And there really isn't any need to distinguish the media used now. I don't see history recording a war of words between wet and dry-plate users, or between proponents of film and glass bases, ortho versus pan, nitrate versus acetate, etc.

 

It's just ridiculous to attempt to put into public use an inaccurate, cumbersome and misleading phrase when a simple and unambiguous alternative already exists: 'film photography'. That easily distinguishes the medium from, say, wet-plate photography or the use of a digital sensor. It's a phrase that's simple, easy to say and carries no agenda, though it's probably not terribly cool-sounding.

 

Conspiracy? A conspiracy of idiocy only.


Conspiracy? A conspiracy of idiocy only.

Well, yes, that's exactly what I was thinking about your conspiracy theory but I didn't want to go there. Thank you for saying it, though.

 

Of course it's new. Duh! Before a new technology replaces an existing one, there's not much need for adjectives to distinguish the two. Since there was no other kind of photography besides film at the time, we didn't refer to it as film photography either. We simply called it photography. Of course! It would be fine with me if we still did that, regardless of what one used to make their photos. But people do like, and sometimes need, to make distinctions.

 

"Analog" is a word that's being used contemporarily. Some people say "vinyl" when referring to my old records. Others say "analog recordings." I don't assume any of them have a particular agenda when using either. You're reading WAY too much into this, in order to distress yourself over some ill-conceived conspiracy that someone is trying to control our minds.

Edited by Norma Desmond
We didn't need dialogue. We had faces!

 

(I wrote)

"Note, though, that the characteristic curves for film curve down, such that the response decreases with increased spatial frequency, not the sharp cutoff that Nyquist indicates for digital."

 

- By 'characteristic curve' I take it that MTF response is meant, not the H&D curve, obviously.

 

(snip)

 

Yes, MTF. I was thinking of the characteristic (H&D) curve from some other post.

 

Note that analog watches move their hands in discrete steps, driven by either a balance wheel or a divided-down crystal oscillator.

The display is still analog, based on the theoretically continuous position of the hands.

 

While film has discrete grains that are either developed or not, the readout is optical analog, limited only by the quantum nature of light.

Grains contain a large number of atoms, so their size is close to continuous on a human scale. The positions of the grains are also pretty much continuous.

 

Compare this to digital, where the pixels are predefined sample positions, in the Nyquist-sampling sense, and the sample values themselves are quantized.

Neither predefined pixel positions nor predefined quantization levels exist for analog film photography.

-- glen


Note that analog watches move their hands in discrete steps, driven by either a balance wheel or a divided-down crystal oscillator.

The display is still analog, based on the theoretically continuous position of the hands.

 

The Seiko spring drive does continuously move the second hand. It essentially lets the mainspring continuously unwind in a controlled fashion via electronic braking (as opposed to moving in discrete steps, as happens with an escapement/balance wheel/hairspring system).

 

You also have synchronous motor AC clocks, which move continuously.


The Seiko spring drive does continuously move the second hand. It essentially lets the mainspring continuously unwind in a controlled fashion via electronic braking (as opposed to moving in discrete steps, as happens with an escapement/balance wheel/hairspring system).

 

You also have synchronous motor AC clocks, which move continuously.

 

I had thought about synchronous motors when I wrote that, but didn't know about Seiko.

 

But it now occurs to me that this is beside the point.

 

A system is analog if one physical quantity is represented by (is analogous to) another quantity.

 

Film photography is unusual in that the two quantities are closely related: the film's optical transmittance represents the light intensity that exposed it.

 

Consider analog television, where a voltage (in a cable), or the amplitude of a radio signal, is analogous to light intensity.

 

Or, to make it more interesting, the optical sound track on movies, where light intensity is analogous to the audio signal voltage, which is in turn analogous to the pressure component of a sound wave.

 

In the case of a clock or watch, the time is represented by, is analogous to, the angle of the hands with respect to vertical. That is true whether they move continuously or in discrete steps.

 

Also, note that it is not sampling that makes digital signals digital. One can sample in an analog system. As an example, there are audio storage systems, I believe used in recordable talking greeting cards, that store analog voltage samples on a series of capacitors. The signal is sampled, but quantized only by the fact that only whole electrons charge the capacitor.
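The point is easy to demonstrate numerically: sampling and quantization are independent operations, and it's only the latter that makes a signal digital. A toy sketch (the sample rate and 8-bit depth are arbitrary choices):

```python
import numpy as np

# Sampling alone, as in the capacitor-based audio store, keeps
# continuous amplitudes; quantization then snaps them to fixed levels.
rate = 44100                                   # arbitrary sample rate, Hz
t = np.arange(0, 0.001, 1 / rate)              # discrete sample instants
sampled = np.sin(2 * np.pi * 1000 * t)         # sampled, but analog-valued
levels = 256                                   # arbitrary 8-bit quantizer
quantized = np.round((sampled + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

print(sampled[:4])     # arbitrary real numbers
print(quantized[:4])   # each restricted to one of 256 levels
```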

 

In the case of film, the optical transmittance at any (diffraction-limited) point is pretty much a continuous function. No analog or digital system has infinite bandwidth.

 

And in this week of the beginning of baseball season, how many different velocities can a baseball pitcher pitch? Consider the stadium a quantum well, and consider the QM wave nature of a baseball.
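Taking the joke seriously for a moment: treat the stadium as a one-dimensional infinite well of width L, so the allowed momenta are p_n = nh/2L and adjacent "pitch speeds" differ by (a rough estimate, assuming L of about 120 m and a 0.145 kg ball):

```latex
\Delta v = \frac{h}{2mL}
         = \frac{6.63\times10^{-34}\ \mathrm{J\,s}}
                {2 \times 0.145\ \mathrm{kg} \times 120\ \mathrm{m}}
         \approx 1.9\times10^{-35}\ \mathrm{m/s}
```

So the pitcher's velocity really is quantized, but with levels spaced about 10^-35 m/s apart: continuous for every practical purpose, which is the same sense in which grain sizes and positions on film are continuous on a human scale.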

-- glen


And in this week of the beginning of baseball season, how many different velocities can a baseball pitcher pitch? Consider the stadium a quantum well, and consider the QM wave nature of a baseball.

 

I managed to scrape by when I took quantum mechanics, and that's a part of my graduate education I'm happy to leave well behind me! :) I do remember the baseball pitcher problem, though.


Not to fuzzy things up in this good discussion, but I've become a staunch believer in, and member of, a small but growing group that is critical of current RGB sensor technology being based on three colors.

 

Be it CCD, CMOS or other, there are hard limits to the optical filters used on sensors. Each color's filter has a particular bandwidth and isn't entirely discrete. A good deal of color interpolation is used to keep things level, and the end results tend to be increasingly non-linear as luminance and saturation levels increase. Note the universal complaints about dSLRs having trouble with dense reds and oranges: that's because the camera's red filter has to sort 625 nm from 660 nm light, and the filter doesn't distinguish them the way our eyes do. If you make the filters denser to improve accuracy, you sacrifice sensitivity. Increasing to 4 or 5 sensor colors at the acquisition level would vastly improve noise and color accuracy in digital sensors.
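A toy simulation of that 625 nm vs 660 nm problem (the Gaussian filter shapes, widths and center wavelengths here are assumptions chosen purely for illustration, not real camera data):

```python
import numpy as np

def response(center_nm, width_nm, wavelength_nm):
    """Assumed Gaussian transmission curve for a hypothetical color filter."""
    return np.exp(-((wavelength_nm - center_nm) / width_nm) ** 2)

# Hypothetical broad RGB filters; both deep reds fall on the same red curve.
for wl in (625, 660):
    r = response(600, 50, wl)
    g = response(530, 50, wl)
    b = response(450, 50, wl)
    print(f"{wl} nm -> R {r:.3f}  G {g:.3f}  B {b:.3f}")

# With G and B near zero, one channel can't separate wavelength from
# intensity: a dim 625 nm source reads the same as a bright 660 nm one.
print(0.305 * response(600, 50, 625))   # ~0.237
print(1.000 * response(600, 50, 660))   # ~0.237
```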


Not to fuzzy things up in this good discussion, but I've become a staunch believer in, and member of, a small but growing group that is critical of current RGB sensor technology being based on three colors.

 

Be it CCD, CMOS or other, there are hard limits to the optical filters used on sensors. Each color's filter has a particular bandwidth and isn't entirely discrete. A good deal of color interpolation is used to keep things level, and the end results tend to be increasingly non-linear as luminance and saturation levels increase. Note the universal complaints about dSLRs having trouble with dense reds and oranges: that's because the camera's red filter has to sort 625 nm from 660 nm light, and the filter doesn't distinguish them the way our eyes do. If you make the filters denser to improve accuracy, you sacrifice sensitivity. Increasing to 4 or 5 sensor colors at the acquisition level would vastly improve noise and color accuracy in digital sensors.

 

I completely agree there, Scott. The RGB filters in nearly every Bayer array are cut too narrowly. You can see this if you try to capture a continuous spectrum with a digital camera: all you get are three separate red, green and blue bands. Also, the excess of green sensors in a Bayer array is extremely light-inefficient.

 

I've previously suggested substituting cyan and yellow filters for the two greens. This would naturally fill in any spectral gaps. The green channel is then synthesised by subtracting the red level from yellow, and the blue level from cyan. This gives two green levels that can in turn be cross subtracted from the cyan and yellow signals to render additional red and blue information. Voila! You effectively get twice the RGB information from the same 4 photosites, with no spectral gaps and 2x the sensitivity.
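The arithmetic of that suggestion, sketched under the idealised assumption that the yellow filter passes exactly red+green and the cyan exactly green+blue (real dyes wouldn't cut so cleanly):

```python
def rcyb_to_rgb(r, c, y, b):
    """Recover RGB from a hypothetical red/cyan/yellow/blue quartet,
    assuming ideal filters where Y = R + G and C = G + B."""
    g_from_yellow = y - r         # yellow minus red leaves green
    g_from_cyan = c - b           # cyan minus blue leaves green
    r_extra = y - g_from_cyan     # cross-subtraction: a second red estimate
    b_extra = c - g_from_yellow   # and a second blue estimate
    return ((r + r_extra) / 2,
            (g_from_yellow + g_from_cyan) / 2,
            (b + b_extra) / 2)

# For a scene patch with R=0.8, G=0.2, B=0.5, we get Y=1.0 and C=0.7,
# and the round trip is exact under the ideal-filter assumption.
print(rcyb_to_rgb(r=0.8, c=0.7, y=1.0, b=0.5))   # (0.8, 0.2, 0.5)
```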

 

Alternatively, an array could be built using true triads of RGB. The geometry is a challenge, but at least it would get rid of Bayer's stupid surplus of green.

 

There's also a question mark over the validity of tri-colour theory altogether. The rather unscientific fudge applied to the CIE 'horseshoe' in 1931(!) should have rung some alarm bells, as should the fact that no set of real-world primaries can encompass said horseshoe.

Edited by rodeo_joe|1

There have been cameras (notably from Sony) that implemented RGBE ("emerald") filters. I've wondered about alternating patterns of slightly different RGB filters, giving you a low-frequency set of additional chroma information. Triangular sensor sites are a little awkward to fit a microlens over, I suspect, although BSI might help with that; you could align a square delta-nabla grid over triangular read-out lines (although I don't know whether running wires at an angle is electrically a good idea), so I'm sure something is solvable. I looked at Penrose tiling (which you can three-colour) as a way of avoiding moiré at one point, though it does make for fairly non-optimal triangles and messy read-out. A few people (including some recent phones) combine a monochrome sensor with an RGB sensor, and arguably you should be able to get more information (in addition to the extra light gathering) by cross-referencing the spectral response of the plain sensor with that of the filters.

 

Of course, if we move away from tri-colour theory, that's going to test the implementation of a lot of image formats, though there are a few designed to cope (in computer graphics, if you want to model dispersion properly for ray-tracing diamonds and the like, you need to look at more spectral components). Whether it's worth it is another matter: a relatively small portion of the population (almost all of them women) are tetrachromats, and fewer still are pentachromats, so artistic tweaking gets you most of the way there. Unless your target audience is a mantis shrimp.

 

Interesting subject, but I have to be careful what I say in a public forum in case I come up with something I'm legally obliged to have given to my employers...


I completely agree there, Scott. The RGB filters in nearly every Bayer array are cut too narrowly. You can see this if you try to capture a continuous spectrum with a digital camera: all you get are three separate red, green and blue bands. Also, the excess of green sensors in a Bayer array is extremely light-inefficient.

 

I've previously suggested substituting cyan and yellow filters for the two greens. This would naturally fill in any spectral gaps. The green channel is then synthesised by subtracting the red level from yellow, and the blue level from cyan. This gives two green levels that can in turn be cross subtracted from the cyan and yellow signals to render additional red and blue information. Voila! You effectively get twice the RGB information from the same 4 photosites, with no spectral gaps and 2x the sensitivity.

 

Alternatively, an array could be built using true triads of RGB. The geometry is a challenge, but at least it would get rid of Bayer's stupid surplus of green.

 

There's also a question mark over the validity of tri-colour theory altogether. The rather unscientific fudge applied to the CIE 'horseshoe' in 1931(!) should have rung some alarm bells, as should the fact that no set of real-world primaries can encompass said horseshoe.

 

Since our eyes are more sensitive to green, it makes some sense to have more green in the array.

 

The luminance signal in NTSC color TV is created as:

 

Luma Y′ = 0.299 R′ + 0.587 G′ + 0.114 B′

 

such that G' (green) is the biggest part.

 

(For added complication, R', G' and B' are the gamma corrected values)
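In code, using the gamma-corrected values as above (the BT.709 weights are added just for comparison):

```python
def luma(r, g, b, weights=(0.299, 0.587, 0.114)):
    """Weighted sum of gamma-corrected R'G'B'; NTSC weights by default."""
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

print(luma(1.0, 1.0, 1.0))   # white -> 1.0 (the weights sum to 1)
print(luma(0.0, 1.0, 0.0))   # pure green alone carries 0.587 of the luma
print(luma(0.0, 1.0, 0.0, weights=(0.2126, 0.7152, 0.0722)))  # BT.709: 0.7152
```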

 

It would be interesting to have an array with four colors, following four spots on the CIE diagram. Then all we'd need are display (and printing) devices that follow them.

-- glen


The eye does get the majority of its luminance information from the green area of the spectrum (not least because there's a big overlap in the response of the red and green cones). The specific weightings have a lot to do with the original phosphors used in colour TV: the original NTSC phosphors were very dim but very saturated, and were rapidly replaced by brighter but less saturated ones, which is why the "NTSC gamut" was quite a challenge even for modern displays. Because of the changes in phosphors (and filters) over time, different standards have modified those weightings, although it turns out that the BT.2020 UHDTV gamut was a bit optimistic and can (last I heard) only really be covered by laser projectors. A better match for the eye's spectral response is the "LMS" coordinate system used as part of the ICtCp encoding for HDR TV, although the axes of the CIE XYZ colour space are also supposed to be aligned to perceptual brightness and colour difference. There's more information in the later sections of version 1.2 of the Khronos Data Format Spec, which I happen to edit (and on Wikipedia, although sometimes less authoritatively).

 

The system is complicated by there being more than one kind of some of the cones, with slightly different spectral responses. Since the coding for which ones you get is on the X chromosome, some women can actually distinguish four colour channels (that is, differentiate colours which are metamers in trichromatic vision), and a few can distinguish five. After processing by the visual system the result is relatively subtle, so I believe it's generally not considered useful to retain this distinction for image transmission. (Just as the eye can technically just about distinguish polarisation, albeit not as well as mantis shrimps can, but we don't bother preserving it.) There are, of course, display devices that use additional colours to expand the gamut: there have been relatively recent TVs with a red/green/blue/yellow sub-pixel grid, I've seen projectors with six filters in the colour wheel, and of course photo printers have been using larger numbers of inks for a while, while photographic films have used a large number of colour dyes in different layers.

 

Arguably the better use for capturing multiple colour channels is to allow relighting with a different illumination spectrum. There was a paper at SIGGRAPH last year on capturing images for compositing with additional lighting channels, which allowed better insertion into a scene. It doesn't have to get all the way to final reproduction to be useful. If it were less of a faff, I'd gladly have a selection of colour filters to use over a monochrome sensor to build up a colour image. For cameras with sensor-shift multi-shot ability to capture RGB at each pixel, it would be nice to see additional filters available for extra colour channels for the same reason. Especially with bluebell season coming up!

