
Total pixel count vs image quality (noise)


baivab


Leaving aside software/driver manipulation, is it true that cramming a higher number of pixels into the same area of the CMOS sensor effectively introduces more noise? So comparing an XTi with an XSi: the XSi has the higher pixel count, but does it also have higher noise, thus requiring more aggressive noise suppression? (Keeping aside differences in the CMOS architecture - CMOS-II/CMOS-III - features of the body itself, etc.)

Just a thought, since the darn area of the CMOS is still the same, right?

Is the trend among manufacturers to cram in more megapixels and sacrifice quality, since that's what the average consumer seems to demand?


Cramming more into the same space does likely lead to more noise ALL THINGS BEING EQUAL. But all things are not equal. The technology keeps improving so that new sensors routinely produce less noise than their lower pixel count predecessors.

Yes, that's what I understand is the case. Generally, more megapixels in a smaller area means each individual pixel well is smaller. A smaller well means fewer photons strike each individual pixel, which pushes the signal-to-noise ratio in the direction of less signal, more noise.

I'm sure someone else can explain it in more depth, but that's the gist of it, all else unconsidered (pixel microlenses, whatever software manipulation there might be, etc.).
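The basic relationship is easy to see in a toy model. Here is a minimal Python sketch, assuming photon shot noise is the only noise source and that the collected signal scales linearly with pixel area; the photon-density figure is made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr_for_pixel_area(area_um2, photons_per_um2=1000, trials=100_000):
    """Estimate SNR for one pixel size under pure photon shot noise.

    Assumes the mean photon count scales linearly with pixel area and that
    shot noise (Poisson) is the only noise source -- illustrative only.
    """
    mean_photons = photons_per_um2 * area_um2
    samples = rng.poisson(mean_photons, size=trials)
    return samples.mean() / samples.std()

# Halving the pixel pitch quarters the area: e.g. 6x6 um pixels vs 3x3 um pixels.
for area in (36.0, 9.0):
    print(f"pixel area {area:4.0f} um^2 -> SNR ~ {snr_for_pixel_area(area):.0f}")
```

With these made-up numbers the smaller pixel ends up with roughly half the SNR, which is the "less signal, more noise" effect described above.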


Check the theories against actual test results. At least when going from 8MP to 10MP on crop bodies, the general consensus was that Canon (and others) did it without a noise increase.

Also note that this "problem" would exist for FF sensors as well, and I haven't heard of too many new 1DsMKIII owners complaining about excessive noise...


This is an ancient debate (in many ways going back to the beginnings of 35 mm film...) without a clear answer, because one has to acknowledge technological progress. If sensor and processing technology stood still, then yes, the notion that cramming more pixels into a given area may increase the level of "noise" is plausible, but sensor and processing technology has been steadily progressing...

Put simply, halving the spacing of the pixels cuts their area by a factor of four, meaning that they are four times more susceptible to noise, all things being equal.

 

However, when producing an image for output, say on the web or as a print, a high-resolution image will need to be downsampled; this introduces some averaging, thus lowering the average noise in the output pixels.

 

So, taking a simple view of the situation, a higher-resolution image with higher noise per pixel does not necessarily mean the output image at lower resolution is noisier than it would have been if a lower-resolution, lower-noise image had been used to start with.

 

This situation is complicated by non-linear signal processing in the demosaicing and sharpening algorithms used to produce the high-resolution image, and in the downsampling algorithms used to produce an image at the required output resolution.
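A quick way to convince yourself of this is to simulate it. The sketch below assumes pure photon shot noise, a flat scene, and a simple 2x2 box binning as the downsample - none of the demosaicing or sharpening complications mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)

flux = 4000.0        # photons falling on one "large pixel" worth of sensor area
h, w = 512, 512      # size of the low-resolution output image

# Low-resolution sensor: each pixel collects the full flux for its area.
low_res = rng.poisson(flux, size=(h, w)).astype(float)

# High-resolution sensor: twice the linear resolution, so each pixel sees 1/4 the flux.
high_res = rng.poisson(flux / 4, size=(2 * h, 2 * w)).astype(float)

# Downsample the high-res image by summing 2x2 blocks (simple box binning).
binned = high_res.reshape(h, 2, w, 2).sum(axis=(1, 3))

def snr(img):
    return img.mean() / img.std()

print(f"low-res sensor, per-pixel SNR:   {snr(low_res):.1f}")
print(f"high-res sensor, per-pixel SNR:  {snr(high_res):.1f}")
print(f"high-res binned to low-res SNR:  {snr(binned):.1f}")
```

In this idealised case the binned high-resolution image lands back at essentially the same per-pixel SNR as the native low-resolution one; real pipelines with read noise, demosaicing and sharpening won't behave quite so neatly.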


Light efficiency could easily be improved by binning the dumb Bayer pattern and adopting a more logical 3 pixel cluster approach.

 

Why do sensor manufacturers insist on throwing away sensitivity by having twice as many green sensors as red or blue? This simply means that the gain of the red and blue channels has to be boosted twice as much as the green channel.

 

The usual lame, and totally illogical, explanation is that the human eye is most sensitive to green light, but this is like saying that someone who's abnormally sensitive to pain ought to have more pain in their lives!

 

Hello camera makers! You're using RGB sensors, the clue is in counting the letters. There are THREE of them in case you're in any doubt, not 4.

 

Using a 3-pixel cluster grouping would also automatically improve the effective resolution of a sensor. And if you still can't figure it out, look at the arrangement of phosphor dots on a CRT screen.
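For reference, here is a minimal sketch of the standard RGGB Bayer tile the post above is objecting to - layout only, with no demosaicing, just to show the two-greens-per-tile arrangement:

```python
import numpy as np

# One standard Bayer tile: two green sites, one red and one blue per 2x2 block.
tile = np.array([["R", "G"],
                 ["G", "B"]])

# Repeat it across a small sensor patch to see the mosaic.
patch = np.tile(tile, (4, 4))
for row in patch:
    print(" ".join(row))
```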


>>> Put simply, halving the spacing of the pixels cuts their area by a factor of four, meaning that they are four times more susceptible to noise all things being equal.

Signal increases proportionally to pixel area, and (shot) noise to the square root of area, so the SNR would increase by a factor of 2 by quadrupling the pixel area - all other things assumed to be equal.
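Spelling that out as a worked relation (shot-noise-limited case only; read noise and the other sources mentioned elsewhere in the thread are ignored):

```latex
% Shot-noise-limited approximation: signal S scales with pixel area A,
% photon shot noise N with the square root of the collected signal.
S \propto A, \qquad N \propto \sqrt{S} \propto \sqrt{A}
\quad\Longrightarrow\quad
\mathrm{SNR} = \frac{S}{N} \propto \sqrt{A}
\quad\Longrightarrow\quad
\mathrm{SNR}(4A) = 2\,\mathrm{SNR}(A)
```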


There are twice as many green channels as red or blue because that approximates the relative sensitivity of the human eye.

 

While being a good talking point, the "all else being equal" rule is the most broken rule in the book. I have a D2h and a D2x. The latter has three times the number of pixels, yet has better color, less noise and a lot better resolution. By all reports the D3 and D300 are better yet, while having the same nominal resolution as the D2x.


>>> Check the theories against actual test results. At least when going from 8MP to 10MP on crop bodies the general consensus was that Canon (and others) did it without a noise increase.

That's because most people go with the simple "larger pixels produce lower noise" mantra. But that notion reflects only one of many noise sources that drive system performance, and it ignores many other factors in the sweep. Also, many people who talk freely and make judgements about what drives sensor performance usually are not engineers and have no real-life experience in designing sensor-based systems. The end result is pearls of wisdom and mantras that are often wrong.

Wonder if people are similarly opinionated on the latest techniques in the field of neurosurgery - there's a lot of info on the net...

www.citysnaps.net

By having more green sensors, the sensitivity to green is boosted without adding gain, hence noise, to that channel. Green is the color in which the human eye is most sensitive to detail (and noise), so the Bayer pattern is doubly effective by placing half the detail in the green channel.
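As a rough illustration of that sensitivity argument, the Rec. 601 luma weights (one common approximation of perceived brightness) put well over half the luminance signal in the green channel:

```python
# Rec. 601 luma weights: Y = 0.299 R + 0.587 G + 0.114 B.
# Green carries most of the perceived brightness, which is the usual
# justification for sampling it at twice the density of red or blue.
luma_weights = {"R": 0.299, "G": 0.587, "B": 0.114}

for channel, weight in luma_weights.items():
    print(f"{channel} contributes {weight:.1%} of luma for a neutral grey")
```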

  • 2 years later...
You don't need anything bigger than 10 megapixels. What really counts is the size of the individual pixels. For instance, the Nikon D70s has an individual pixel size of 7.8 microns and the Nikon D700 has 8.45 microns. That isn't much of a difference for the D700, which has 12.3 megapixels. Ask the salesperson or the manufacturer for the size of the individual pixels so you know what you are getting. If they don't know, then don't buy.
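If the pixel-pitch figure isn't published, it can be roughly estimated from the sensor width and the horizontal pixel count. A small sketch - the sensor dimensions and resolutions below are approximate and only for illustration:

```python
# Rough pixel-pitch estimate: sensor width divided by horizontal pixel count.
# Dimensions and pixel counts are approximate, for illustration only.
cameras = {
    "Nikon D70s (APS-C)":      (23.7, 3008),   # sensor width in mm, pixels across
    "Nikon D700 (full frame)": (36.0, 4256),
}

for name, (width_mm, pixels_across) in cameras.items():
    pitch_um = width_mm / pixels_across * 1000  # mm -> micrometres
    print(f"{name}: ~{pitch_um:.1f} micron pixel pitch")
```

Those estimates land close to the 7.8 and 8.45 micron figures quoted above.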
