Total pixel count vs image quality (noise)

Discussion in 'Casual Photo Conversations' started by baivab, Feb 24, 2008.

  1. Leaving aside software/driver manipulations, is it true that cramming a higher number of pixels into the same area of a CMOS sensor effectively introduces more noise? Thus, comparing an XTi and an XSi, the XSi will have the higher pixel count but also higher noise, requiring more aggressive noise suppression? (Setting aside such stuff as differences in the CMOS architecture - CMOS-II/CMOS-III - features of the body itself, etc.)
    Just a thought, since the darn area of the CMOS is still the same, right?
    Is the trend among manufacturers to cram in more megapixels and sacrifice quality, since that's what the avg. consumer seems to demand?
     
  2. Cramming more into the same space does likely lead to more noise ALL THINGS BEING EQUAL. But all things are not equal. The technology keeps improving so that new sensors routinely produce less noise than their lower pixel count predecessors.
     
  3. Yes, that's what I understand is the case. Generally, the more megapixels in a smaller area
    means each individual pixel well has a smaller size. This smaller size means a smaller
    number of photons that strike the individual pixel, which means the signal to noise ratio is
    skewed in the direction of less signal, more noise.

    I'm sure someone else can explain it a bit more in depth, but that's the gist of it, all else
    unconsidered (pixel lenses, whatever software manipulation there might be, etc).
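The shot-noise argument above can be sketched numerically (a minimal pure-Python illustration; the photon counts are made-up values, not measurements from any real sensor). Photon arrival is Poisson-distributed, so the noise equals the square root of the signal, and SNR = sqrt(N):

```python
import math

def shot_noise_snr(photons):
    # Photon arrivals follow Poisson statistics: signal = N,
    # shot noise = sqrt(N), so SNR = N / sqrt(N) = sqrt(N).
    return photons / math.sqrt(photons)

# Hypothetical large pixel well collecting 40,000 photons vs. a well
# with a quarter of the area collecting 10,000 under the same light.
big = shot_noise_snr(40_000)    # 200.0
small = shot_noise_snr(10_000)  # 100.0
print(big, small)
```

So halving the pixel pitch (a quarter of the area, a quarter of the photons) halves the per-pixel SNR, in this shot-noise-only view.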
     
  4. Here is some data for you to draw your own conclusions:

    http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/index.html

    Beware that a couple of the charts are theoretical rather than practical - i.e. based on extrapolations that aren't valid given other constraints of the camera.
     
  5. Check the theories against actual test results. At least when going from 8MP to 10MP on crop
    bodies the general consensus was that Canon (and others) did it without a noise increase.

    Also note that this "problem" would exist for FF sensors as well, and I haven't heard of too
    many new 1DsMKIII owners complaining about excessive noise...
     
  6. This is an ancient debate (in many ways going back to the beginnings of 35 mm film...) without a clear answer, because one has to acknowledge technological progress. If sensor and processing technology stood still, then yes, the notion that cramming more pixels onto a given area may increase the level of "noise" is plausible; but sensor and processing technology has been steadily progressing...
     
  7. Put simply, halving the spacing of the pixels cuts their area by a factor of four, meaning that they are four times more susceptible to noise, all things being equal.

    However, when producing an image for output, say on the web or as a print, a high-resolution image will need to be downsampled; this introduces some averaging, thus lowering the average noise in the output pixels.

    So, taking a simple view of the situation, a higher-resolution image with higher noise per pixel does not necessarily mean the output image at lower resolution is noisier than it would have been had a lower-resolution, lower-noise image been used to start with.

    This situation is complicated by non-linear signal processing in the demosaicing and sharpening algorithms used to produce the high-resolution image, and in the downsampling algorithms used to produce an image at the required output resolution.
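The downsampling point above can be demonstrated with a toy simulation (pure Python, made-up noise figures, uncorrelated Gaussian noise only - not a model of any real camera pipeline). Averaging each 2x2 block of noisy pixels into one output pixel cuts the noise standard deviation by roughly sqrt(4) = 2:

```python
import random, statistics

random.seed(0)

# Hypothetical flat gray frame: true level 100, per-pixel noise sigma = 8.
W = H = 128
hi_res = [[100 + random.gauss(0, 8) for _ in range(W)] for _ in range(H)]

# Naive 2x2 box downsample: each output pixel averages four inputs.
lo_res = [[(hi_res[2*y][2*x] + hi_res[2*y][2*x+1] +
            hi_res[2*y+1][2*x] + hi_res[2*y+1][2*x+1]) / 4
           for x in range(W // 2)] for y in range(H // 2)]

def noise(img):
    flat = [p for row in img for p in row]
    return statistics.pstdev(flat)

# Averaging four uncorrelated samples roughly halves the std deviation.
print(noise(hi_res), noise(lo_res))
```

Real resampling filters weight neighbours unevenly, so the reduction is less clean in practice, but the direction of the effect holds.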
     
  8. Light efficiency could easily be improved by binning the dumb Bayer pattern and adopting a more logical 3 pixel cluster approach.

    Why do sensor manufacturers insist on throwing away sensitivity by having twice as many green sensors as red or blue? This simply means that the gain of the red and blue channels has to be boosted twice as much as the green channel.

    The usual lame, and totally illogical, explanation is that the human eye is most sensitive to green light, but this is like saying that someone who's abnormally sensitive to pain ought to have more pain in their lives!

    Hello camera makers! You're using RGB sensors, the clue is in counting the letters. There are THREE of them in case you're in any doubt, not 4.

    Using a 3 group pixel cluster would also automatically improve the effective resolution of a sensor. And if you still can't figure it out, look at the arrangement of phosphor dots on a CRT screen.
     
  9. Put simply, halving the spacing of the pixels cuts their area by a factor of four, meaning that they are four times more susceptible to noise, all things being equal.
    Signal increases proportionally to pixel area, noise to root area, so SNR in the image would increase by a factor of 2 by quadrupling the pixel area. All other things assumed to be equal.
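That correction is quick to check numerically (a shot-noise-only sketch; the photon flux value is arbitrary): signal scales linearly with area, noise with the square root of the signal, so quadrupling the area doubles the SNR rather than quadrupling it:

```python
import math

def snr_for_area(area, photons_per_unit_area=1_000):
    # Signal grows linearly with pixel area; shot noise grows
    # with the square root of the signal.
    signal = photons_per_unit_area * area
    return signal / math.sqrt(signal)

ratio = snr_for_area(4.0) / snr_for_area(1.0)
print(ratio)  # approximately 2.0
```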
     
  10. There are twice as many green channels as red or blue because that approximates the relative sensitivity of the human eye.

    While being a good talking point, the "all else being equal" rule is the most broken rule in the book. I have a D2h and D2x. The latter has three times the number of pixels, yet has better color, less noise, and a lot better resolution. By all reports the D3 and D300 are better yet, while having the same nominal resolution as the D2x.
     
  11. >>> Check the theories against actual test results. At least when going from 8MP to 10MP
    on crop bodies the general consensus was that Canon (and others) did it without a noise
    increase.

    That's because most people go with the simple notion that larger pixels produce lower
    noise mantra. But that notion reflects only one of many noise sources that drive system
    performance. And ignores many other factors in the sweep. Also, many people who talk
    freely and make judgements about what drives sensor performance usually are not
    engineers and have no real-life experience in designing sensor-based systems. The end
    result is thus pearls of wisdom and mantras that are often wrong.

    Wonder if people are similarly opinionated on the latest techniques in the field of
    neurosurgery - there's a lot of info on the net...
     
  12. By having more green sensors, the sensitivity to green is boosted without adding gain, hence noise, to that channel. Green is the color in which the human eye is most sensitive to detail (and noise), so the Bayer pattern is doubly effective by placing half the detail in the green channel.
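The noise benefit of doubling up the green photosites can be shown with a toy simulation (pure Python, invented signal and noise levels, uncorrelated Gaussian read-out noise only). Averaging the two green samples in each RGGB cell reduces the green channel's noise by roughly sqrt(2) relative to the single-sample red or blue channel, with no extra gain applied:

```python
import random, statistics

random.seed(1)

# Hypothetical flat scene: each photosite reads true level 100 plus
# Gaussian noise with sigma = 10.
def read():
    return 100 + random.gauss(0, 10)

# In each RGGB cell the two green photosites can be averaged;
# red and blue get one sample each.
greens = [(read() + read()) / 2 for _ in range(50_000)]
reds   = [read() for _ in range(50_000)]

print(statistics.pstdev(reds), statistics.pstdev(greens))
# green-channel noise is lower by roughly a factor of sqrt(2)
```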
     
  13. You don't need anything bigger than 10 megapixels. What really counts is the size of the individual pixels. For instance, the Nikon D70s has an individual pixel size of 7.8 microns and the Nikon D700 has 8.45 microns; that isn't much of a difference, even though the D700 has over 12 megapixels. Ask the salesperson or the manufacturer for the size of the individual pixels so you know what you are getting. If they don't know, then don't buy.
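Pitch figures like those quoted above can be estimated from the commonly published sensor width and horizontal pixel count (a naive estimate that ignores non-active sensor area and gaps between photosites; the dimensions below are the usually quoted specs for these two bodies):

```python
def pitch_microns(sensor_width_mm, pixels_across):
    # Rough pixel pitch: sensor width divided by pixels across it.
    return sensor_width_mm * 1000 / pixels_across

print(round(pitch_microns(23.7, 3008), 2))  # Nikon D70s (APS-C, 6 MP): ~7.88
print(round(pitch_microns(36.0, 4256), 2))  # Nikon D700 (full frame, 12 MP): ~8.46
```

Both come out close to the figures quoted in the post, which shows the full-frame 12 MP body keeps pixels as large as a 6 MP crop body's.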
     
