Mega Pixel Vs Spectrum of Light

Discussion in 'Beginner Questions' started by miss.annette_leigh_haynes, Jan 28, 2018.

  1. There have been cameras (notably from Sony) that implemented RGBE ("emerald") filters. I've wondered about alternating patterns of slightly different RGB filters, giving you a low-frequency set of additional chroma information. Triangular sensor sites are a little awkward to fit a microlens over, I suspect, although BSI might help with that; you could align a square delta-nabla grid over triangular read-out lines (although I don't know whether running wires at an angle is electrically a good idea) - I'm sure something is solvable. At one point I looked at Penrose tiling (which you can three-colour) as a way of avoiding moiré, though it makes for fairly non-optimal triangles and messy read-out. A few people (including some recent phones) combine a monochrome sensor with an RGB sensor, and arguably you should be able to get more information (in addition to the extra light gathering) by cross-referencing the spectral response of the plain sensor with that of the filters.
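
    To show the trade-off concretely, here's a minimal Python sketch (numpy assumed) comparing the standard Bayer tile with my understanding of Sony's RGBE variant, which swaps one of the two green sites per tile for emerald:

        import numpy as np

        # Standard Bayer tile versus the RGBE variant: the fourth colour
        # comes at the cost of halving the green sampling density.
        BAYER = [["G", "R"],
                 ["B", "G"]]
        RGBE = [["G", "R"],
                ["B", "E"]]  # E = emerald

        def mosaic(tile, rows, cols):
            """Tile a 2x2 colour-filter pattern across rows x cols sensor sites."""
            grid = np.tile(np.array(tile), (rows // 2 + 1, cols // 2 + 1))
            return grid[:rows, :cols]

        print(mosaic(BAYER, 4, 6))
        print(mosaic(RGBE, 4, 6))

    The general point being that, within a fixed tile, any extra filter colour has to displace one of the existing channels unless you grow the tile (and push the extra chroma information to a lower spatial frequency, as above).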

    Of course, if we move away from tri-colour theory, that's going to test the implementation of a lot of image formats. There are a few formats designed to cope (in computer graphics, if you want to model dispersion properly when ray tracing diamonds etc., you need to track more spectral components). Whether it's worth it is another matter - a relatively small portion of the population (mostly women, as it happens) are tetrachromats, and fewer still are pentachromats, so artistic tweaking gets you most of the way there. Unless your target audience is a mantis shrimp.

    Interesting subject, but I have to be careful what I say in a public forum in case I come up with something I'm legally obliged to have given to my employers...
     
  2. Since our eyes are more sensitive to green, it makes some sense to have more green in the array.

    The luminance signal in NTSC color TV is created as:

    Luma Y′ = 0.299 R′ + 0.587 G′ + 0.114 B′

    such that G' (green) is the biggest part.

    (For added complication, R', G' and B' are the gamma corrected values)
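
    To make the weighting concrete, a quick Python sketch (inputs assumed already gamma-encoded, in the [0, 1] range):

        # Rec. 601 (NTSC) luma: the coefficients sum to 1, and green dominates.
        def luma_601(r_prime, g_prime, b_prime):
            return 0.299 * r_prime + 0.587 * g_prime + 0.114 * b_prime

        print(luma_601(1.0, 1.0, 1.0))  # white      -> 1.0
        print(luma_601(0.0, 1.0, 0.0))  # pure green -> 0.587 (brightest primary)
        print(luma_601(0.0, 0.0, 1.0))  # pure blue  -> 0.114 (dimmest primary)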

    It would be interesting to have an array with four colors, following four points on the CIE diagram. Then all we'd need is for display (and printing) devices to follow them.
     
  3. The eye does get the majority of its luminance information from the green area of the spectrum (not least because there's a big overlap in the response of the red and green cones). The specific weightings have a lot to do with the (original) phosphors used in colour TV - the original NTSC phosphors were very saturated but very dim, and were rapidly replaced by brighter, less saturated ones, which is why the "NTSC gamut" remained quite a challenge even for modern displays. Because the phosphors (and filters) have changed over time, different standards have modified those weightings - although it turns out that the BT.2020 UHDTV gamut was a bit optimistic and can (last I heard) only really be covered by laser projectors. A better match for the eye's spectral response is the "LMS" coordinate system used as part of the ICtCp encoding for HDR TV, although the axes of the CIE XYZ colour space are also supposed to be aligned to perceptual brightness and colour difference. There's more information in the later sections of version 1.2 of the Khronos Data Format Spec, which I happen to edit (and on Wikipedia, although sometimes less authoritatively).
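
    For comparison, here are the published luma coefficients across the three broadcast generations, in a small Python table (the numbers are the ones from the standards themselves):

        # How the luma weightings shifted as the assumed display primaries changed.
        LUMA_WEIGHTS = {
            "BT.601 (SD)":   (0.299,  0.587,  0.114),
            "BT.709 (HD)":   (0.2126, 0.7152, 0.0722),
            "BT.2020 (UHD)": (0.2627, 0.6780, 0.0593),
        }

        def luma(r, g, b, standard):
            wr, wg, wb = LUMA_WEIGHTS[standard]
            return wr * r + wg * g + wb * b

        for std in LUMA_WEIGHTS:
            # How much pure green contributes to luma under each standard.
            print(std, luma(0.0, 1.0, 0.0, std))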

    The system is complicated by there being more than one variant of some of the cone types, with slightly different spectral responses. Since the genes coding for which variants you get sit on the X chromosome, some women can actually distinguish four colour channels (that is, differentiate colours which are metamers in trichromatic vision), and a few can distinguish five. After processing by the visual system the effect is relatively subtle, so I believe it's generally not considered useful to retain this distinction for image transmission. (Just as the eye can technically just about distinguish polarisation, albeit not as well as mantis shrimps can, but we don't bother preserving it.) There are, of course, display devices that use additional colours to expand the gamut - there have been relatively recent TVs with a red/green/blue/yellow sub-pixel grid, I've seen projectors with six filters in the colour wheel, and of course photo printers have been using larger numbers of inks for a while - and photographic films have used a large number of colour dyes in different layers.

    Arguably the better use for capturing multiple colour channels is to allow relighting with a different illumination spectrum. There was a paper at SIGGRAPH last year on capturing images for composition with additional lighting channels, which allowed better insertion into a scene - it doesn't have to get all the way to final reproduction to be useful. If it were less of a faff, I'd gladly have a selection of colour filters to use over a monochrome sensor to build up a colour image. For cameras whose sensor-shift multi-shot modes already capture RGB at each pixel, it would be nice to see additional filters offered for extra colour channels for the same reason. Especially with bluebell season coming up!
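
    If anyone wants to play with the numbers, here's a toy Python sketch of the reconstruction step for one pixel - the filter curves are made-up Gaussians, purely for illustration; a real system would need measured responses and proper regularisation:

        import numpy as np

        wavelengths = np.linspace(400, 700, 31)  # visible range, 10 nm bins

        def filter_curve(centre, width=40.0):
            # Hypothetical Gaussian transmission curve for one filter.
            return np.exp(-0.5 * ((wavelengths - centre) / width) ** 2)

        # Six captures of the same pixel through six different filters.
        filters = np.stack([filter_curve(c) for c in (430, 480, 530, 580, 630, 680)])
        true_spectrum = 0.5 + 0.4 * np.sin(wavelengths / 40.0)  # stand-in reflectance
        readings = filters @ true_spectrum

        # Least-squares (minimum-norm) estimate of the spectrum from six numbers;
        # heavily underdetermined, hence the need for regularisation in practice.
        estimate, *_ = np.linalg.lstsq(filters, readings, rcond=None)

        daylight = np.ones_like(wavelengths)  # crude flat illuminant
        tungsten = np.linspace(0.4, 1.2, 31)  # crude warm illuminant
        print("relit under daylight:", estimate @ daylight)
        print("relit under tungsten:", estimate @ tungsten)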
     
