
The Nature of Noise




 

<blockquote>

<p>Earlier:</p>

<ul>

<li>"<em>... images that start on 4x5 film are taken with cameras that have relatively low pixel density ...</em> "</li>

</ul>

<ul>

<li>"<em>... online, in magazines ... in exhibitions ... digital images and large prints ... dynamic range and subtlety of color ... wonderful dynamic range and colors too ... I've seen with my own eyes ...</em> "</li>

</ul>

<ul>

<li>"<em>... dynamic range and color [from different sensor pixel densities] all else equal ..."</em> </li>

</ul>

<ul>

<li><em>"... I am simply curious ...</em> "</li>

</ul>

</blockquote>

<p>Thanks for your insight, Lobalobo.</p>

<p>I've seen this dialogic process many times before -- it takes three or more iterations for the original poster to get to what's really on their mind. Not "<em>The Nature of Noise,</em>" but more like, "<em>Where do the color gamut, dynamic range, and signal-to-noise ratio in a presentation system come from?</em>" And now I can address that (though I'll write off any "4x5 film with low pixel density" as a typo, and "all else equal" while changing "only" sensors is an impossibility, so I'll ignore that, too). </p>

<p>What you are looking at is the color gamut, dynamic range, and signal-to-noise ratio <em><strong>of the presentation medium</strong></em> -- screen or print. As such, no one has any way of assessing from it the color gamut, dynamic range, and signal-to-noise ratio of the medium used to capture the original image (let alone the lighting, color gamut, and dynamic range of the original subject scene), nor can anyone use the final presentation form itself to assess the intermediate system used in between to modify and deliver that original capture to the final presentation medium.</p>

<p>In other words, you're asking, "<em><strong>I see pictures that pop, is it the camera?</strong> </em> ", and the age-old answer is, "<em><strong>No.</strong> </em> "</p>

<p>Any camera (and I mean ANY camera) can present a captured image to an image editing system that then manipulates and tweaks the image's color gamut, dynamic range, and signal-to-noise ratio in ways untraceable to the original camera system, and then hand that intermediate image off for printing or display, which, of course, is then limited not only by the output color gamut, dynamic range, and signal-to-noise ratio of the display medium, but also by the display environment lighting, and by the eye and brain of the beholder (where color accuracy is not assessable, and off-white becomes white after a moment of viewing anyway, for instance).</p>

<p>As for downsampling adding back anything from the original scene that was missing, I think not. As for downsampling adding or reducing anything at all, it probably does so only by averaging or by more complex algorithms that differ from one programmer to the next -- hence the competition out there for our dollars -- but whatever the method, we play with it for a pleasing <em>resulting</em> color gamut, dynamic range, and signal-to-noise ratio, and that has nothing to do with original capture accuracy. Yes, the workflow to get to that target is easier if each component is more accurate, or more punchy, or more of whatever the artist likes, but after the image is handed off at each stage, not much forensics can accurately identify whether what we are seeing compares more or less accurately to the original scene, if that even matters. Heck, even Ansel Adams tweaked his output images to make his own desired impression where the system itself would otherwise have laid out a flat and boring image without his manipulations! </p>

<p>Where does pixel density fit in here, other than as a moving range of performance that changes with each new generation of sensor technology, season to season? Same with film, I guess, but who buys old film anymore when there's new film? Who buys cameras with old sensors anymore when there are new cameras with new sensors? And what does it matter when we can fix it in Photoshop after all?</p>

<p>Thanks for clarifying, and I've learned a great deal researching the side-track tangents on the way here. It's been a very provocative and productive thread. Thank you.</p>




<p>PS - Lobalobo, what do you think of the <em><strong>DxO</strong> </em> and <em><strong>Zeiss </strong> </em> information resources presented in the links we shared? Do they help expand your understanding of what you might be seeing on presentation screens and in prints?</p>



<p>Thanks, the DxO sites are helpful on the general question, but not on my specific inquiries about dynamic range and color. I haven't looked at the Zeiss link yet, but thanks for that.</p>

<p>Regarding the substance of your most recent posts, for which I am sincerely grateful, I do take your point that a digital imaging system can produce any outcome within its range regardless of the input. But this proves too much. In theory, I could take a number two pencil and write down a series of "0s" and "1s" that, when processed by a computer and output device, would closely resemble the Mozart symphony I'm listening to or the sunset I'm looking at (or either, modified as I wish). That's in theory, but neither I nor any other human with a pencil knows how to write down the "0s" and "1s" in the right sequence, so we use microphones and cameras (and other equipment and related software) instead. What this debate is about, I take it, is how much better the microphones or, here, cameras are at knowing what to feed the digital imaging system.</p>

<p>Turning to that question, I'm reminded that Richard Feynman once said that even quantum mechanics can be explained in simple, intuitive terms. I think of that quote when I teach (law), and I'm convinced that what is true of quantum mechanics is true of digital photography as well. So here are the two simple, intuitive explanations I've heard for why large pixels produce better dynamic range and better color differentiation, followed by my questions:</p>

<p>- On dynamic range: large pixels don't fill up with light (i.e., "clip") as fast as small ones, thus with larger pixels it's possible to expose for the shadows (i.e., let in a lot of light) without clipping the highlights. My question is whether downsampling from a high number of small pixels can replicate this effect. I'm guessing the answer is "no" because a clipped highlight (or black shadow) has no information to reveal upon downsampling. And I didn't think this had anything to do with noise. But I wasn't sure on either count.</p>

<p>- On color: large pixels have less noise, and so a standard "red," for example, is going to be recorded by the camera closer to standard red when the pixels are larger. (By "recorded by the camera" I mean that the camera records data instructing the imaging device what color to produce, if that color is within the gamut of the imaging device.) With smaller pixels, the noise will either move the color from red towards some other color altogether, say green, or effectively mix in gray in the wrong proportion. My question is whether downsampling can fix the problem. I think not, because once the noise confuses the sensor as to what it has seen, so to speak, that information can't be recovered. But again, I'm not sure. (And yes, I know that the final imaging device can be told to produce whatever color the user wants, but the prospect of doing this manually brings me back to the listener or viewer with a number two pencil writing down "0s" and "1s".)</p>
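<p>Neither question has to stay hypothetical. Below is a minimal numpy sketch of both, under invented assumptions (a toy 12-bit sensor that clips at 4095, Gaussian noise standing in for all noise sources, and a 2x2 block average standing in for downsampling). It only illustrates the intuition, it is not anyone's actual camera pipeline: averaging cannot bring back a highlight that every photosite has already clipped, but it does pull a noisy, unclipped "red" patch back toward its true value.</p>

<pre>
import numpy as np

rng = np.random.default_rng(0)
FULL_WELL = 4095  # toy 12-bit clip point (an assumption, not any real camera's)

def capture(true_signal, noise_sigma):
    """Simulate a sensor read: add Gaussian noise, then clip to the sensor's range."""
    noisy = true_signal + rng.normal(0, noise_sigma, true_signal.shape)
    return np.clip(noisy, 0, FULL_WELL)

def downsample2x2(img):
    """Average each 2x2 block of pixels into one output pixel."""
    h, w = img.shape[:2]
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, -1).mean(axis=(1, 3))

# 1) A highlight brighter than the clip point: every photosite saturates,
#    so the 2x2 average is still 4095 -- the lost detail does not come back.
highlight = np.full((4, 4, 3), 6000.0)       # "true" scene value beyond the sensor's range
clipped = capture(highlight, noise_sigma=50)
print(downsample2x2(clipped).max())          # ~4095: still clipped after downsampling

# 2) An unclipped "standard red" patch with heavy per-pixel noise:
#    block averaging shrinks the random color error by roughly 2x (sqrt of 4 samples).
red = np.zeros((64, 64, 3))
red[...] = (3000.0, 500.0, 500.0)            # made-up raw values for a red patch
noisy_red = capture(red, noise_sigma=200)
err_full = np.abs(noisy_red - red).mean()
err_down = np.abs(downsample2x2(noisy_red) - downsample2x2(red)).mean()
print(err_full, err_down)                    # downsampled error is roughly half
</pre>

<p>So on these toy numbers, at least, a clipped pixel stays clipped, while the color error of an unclipped patch does shrink with downsampling; whether that shrinkage is worth the lost resolution is a separate argument.</p>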


<blockquote>

<p>On dynamic range: large pixels don't fill up with light (i.e., "clip") as fast as small ones, thus with larger pixels it's possible to expose for the shadows (i.e., let in a lot of light) without clipping the highlights.</p>

</blockquote>

<p>Well that's simple and intuitive. However it occurs to me that an extension of this idea would lead one to believe that the bigger the pixel, the larger the dynamic range. I don't think that experience bears that out. Also, it seems to me that both large and small pixels fill up with light at the same rate (when you express rate as a function of their total capacity).</p>

<p>However it does make sense that a large pixel, filled to some small percent (x) of its capacity would give you a cleaner signal than a small pixel filled to that same percent. Now we're back to talking about noise, and I don't think you can discuss dynamic range without taking noise into consideration.</p>
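<p>For what it's worth, here is the back-of-the-envelope shot-noise arithmetic usually offered for that last point, with made-up full-well numbers (60,000 electrons for a large photosite, 15,000 for one a quarter of the area), ignoring read noise and dark current. At the same fractional fill the large pixel simply holds more electrons, and since photon shot noise grows only as the square root of the count, its signal is cleaner.</p>

<pre>
import math

def shot_noise_snr(full_well_electrons, fill_fraction):
    """Signal-to-noise ratio of a photosite limited only by photon shot noise
    (a toy model that ignores read noise and dark current)."""
    electrons = full_well_electrons * fill_fraction
    return electrons / math.sqrt(electrons)  # equals sqrt(electrons)

big, small = 60_000, 15_000  # hypothetical full-well capacities, not real specs
for fill in (0.01, 0.10, 1.00):
    print(f"fill {fill:4.0%}   big-pixel SNR {shot_noise_snr(big, fill):6.1f}   "
          f"small-pixel SNR {shot_noise_snr(small, fill):6.1f}")

# Both pixels clip at the same fractional exposure, but the big pixel's signal
# is cleaner at every level -- which is the noise-floor side of dynamic range.
</pre>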


<p>Noise is the uncertainty in the data. Put another way, noise is the error in a given data value. The noise for a given data value is the difference between the recorded value and the true, but unknown value. Artifacts are not noise because artifacts are defined as non-random errors.<br>

Noise is inherent to the data. It can not be reduced or eliminated (but the misnomer noise reduction does sell software).<br>

Noise can be averaged (filtered). Data values with less uncertainty are averaged with data values with more uncertainty. The information content of the low noise data values is (must be) lowered in order to reduce the uncertainty in the high noise data values. Efficient algorithms used with skill often produce a useful compromise that improves the presentation of the data. But the total noise (uncertainty in the data) is not reduced. The total uncertainty in the image remains the same. If you can figure out how to increase the total information content of a data set after it has been recorded, you will become richer than Bill Gates.<br>

Understanding the effects of noise in the data during digital image processing is complicated, as several types of noise are present. The Bayer demosaicing algorithms and other processes also average the data. Compromises must be made. However, the fundamentals of information theory still apply. The noise content of an image cannot be reduced or eliminated.</p>
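<p>A tiny illustration of the compromise described above (my own toy example, not the poster's): a simple box filter on a noisy one-dimensional "scene" makes the measured pixel-to-pixel scatter smaller, but it pays for that by smearing the sharp edge, so the total uncertainty about the original scene has not actually gone down.</p>

<pre>
import numpy as np

rng = np.random.default_rng(1)

# A 1-D "scene": a flat dark region followed by a sharp step up.
scene = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
noisy = scene + rng.normal(0, 20.0, scene.size)

# "Noise reduction" by averaging neighbours: a simple 5-tap box filter.
kernel = np.ones(5) / 5
smoothed = np.convolve(noisy, kernel, mode="same")

flat = slice(10, 90)   # well inside the flat region
print("scatter before filtering:", noisy[flat].std())     # about 20
print("scatter after filtering: ", smoothed[flat].std())  # about 20/sqrt(5), looks cleaner

edge = slice(97, 103)  # right at the step
print("edge error before:", np.abs(noisy[edge] - scene[edge]).mean())
print("edge error after: ", np.abs(smoothed[edge] - scene[edge]).mean())
# The flat area looks less noisy, but the error at the edge grows: the averaging
# traded information about the step for a lower-looking noise figure.
</pre>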

 


<blockquote>- On dynamic range: large pixels don't fill up with light (i.e., "clip") as fast as small ones, thus with larger pixels it's possible to expose for the shadows (i.e., let in a lot of light) without clipping the highlights. My question is whether downsampling from a high number of small pixels can replicate this effect. I'm guessing the answer is "no" because a clipped highlight (or black shadow) has no information to reveal upon downsampling. And I didn't think this had anything to do with noise. But I wasn't sure on either count.</blockquote>

 

<p>I've wondered about the reasoning behind this as well. I have heard it said that the larger the photosites, the larger the dynamic range. But as Mike points out, this isn't the be-all and end-all of the story. I also wonder whether the photosites ARE actually bigger. The 5D has a lower pixel density than the 1DsIV, but does that mean its photosites can actually handle MORE light, or are they just bigger in area?</p>

 

<p>On your question of whether downsampling could improve dynamic range, I would have to think the answer is a definite NO. As you speculated yourself, if the pixel is clipped in the first instance, no amount of downsampling is going to change that.</p>


<p>Mike Blume said:</p>

<p>"Well that's simple and intuitive. However it occurs to me that an extension of this idea would lead one to believe that the bigger the pixel, the larger the dynamic range. I don't think that experience bears that out."</p>

<p>There are other factors, but all else equal I believe that it is true, the larger the pixels the better the dynamic range.</p>

<p>"Also, it seems to me that both large and small pixels fill up with light at the same rate (when you espress rate as a function of their total capacity)."</p>

<p>Yes, but unless the camera's software knows to combine them into a single virtual pixel (which, I take it, is what the new Fuji EXR technology does), they will all clip, I think.</p>


<p>I make noise reduction software and I think I will chime in here.<br>

1- Better dynamic range and lower noise go hand in hand. Between clipping and black, it is noise that determines dynamic range. I would consider them the same thing. (As long as you're dealing with linear sensors, i.e. digital cameras, and not film.)<br>

2- Downsampling does reduce noise (and therefore improves dynamic range), because averaging values together reduces the random variation between them (see the sketch after this post).<br>

This is also how image stacking works.<br>

3- Noise reduction plugins really do reduce a small amount of noise. How is this possible? It's because our physical world has certain properties (due to the physics of light and the nature of most objects) that statistically skew what the image should be. The ideal values for an image are not randomly distributed. Most/all noise reduction plugins will incidentally create values that are slightly closer (on average) to what the image should be, were it free of noise.<br>

Try this:<br>

Take images shot at two different ISOs, one very low (A) and one very high (B). Image C will be B with noise reduction applied.<br>

Do a difference composite to compare A-B and A-C. You should see that A-C is less different than A-B; it is closer to what an ideal image would be.<br>

4- The practical answer:<br>

You can ignore a lot of the math and statistical mumbo jumbo because it doesn't really matter. Humans don't see the world in terms of signal to noise ratios or standard deviations. There is a lot of sophisticated image processing in our brains that determines whether something looks 'right' or doesn't. We have models on how good something "ought" to be, but often these models aren't very good at predicting things and are therefore not that useful. <strong>At the end of the day, one of the most useful approaches is to simply look at an image and see if it looks right.</strong><br>

a- The noise reduction plugins are the way to go (in my biased opinion). They (pretty much) can always help.<br>

b- You can't polish a turd! If an image is too noisy, the NR plugins cannot rescue it.</p>
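<p>Point 2 and the image-stacking remark are easy to check numerically. Here is a minimal sketch with simulated Gaussian noise on a synthetic frame (nothing from any actual plugin): averaging N frames, which is statistically the same thing a downsample's block average does with N pixels, cuts the random error by roughly the square root of N.</p>

<pre>
import numpy as np

rng = np.random.default_rng(2)

# A synthetic "true" frame and a stack of noisy exposures of it.
true_frame = rng.uniform(50, 200, size=(256, 256))
sigma = 25.0  # made-up per-pixel noise level
stack = true_frame + rng.normal(0, sigma, size=(16, 256, 256))

for n in (1, 4, 16):
    averaged = stack[:n].mean(axis=0)          # stack (or block) average of n samples
    residual = (averaged - true_frame).std()   # random error that remains
    print(f"frames averaged: {n:2d}   residual noise: {residual:5.2f}")

# Expect roughly sigma, sigma/2, sigma/4: the noise falls as 1/sqrt(N),
# which is the improvement downsampling or stacking can buy.
</pre>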


<p>Glenn says: "Between clipping and black, it is noise that determines dynamic range."</p>

<p>I get the overall point, but definitionally, isn't dynamic range, loosely put, the distance between clipping and black?</p>


<p>No, dynamic range is being able to distinguish the separations of tone that make up shadow detail and highlight detail, given the limits of an 8-bit, 255-gray-level video system, and it's even more limited in a print.</p>

<p>Only so many of those 255 levels in each RGB channel are devoted to making up the tonal differences in these extreme areas. When you start seeing flat, foggy-looking shadow regions combined with the stippled texture of noise, the width of the dynamic range and the perception of detail start to take a hit.</p>


<p>I said: "but definitionally, isn't dynamic range, loosely put, the distance between clipping and black?" Tim responded: "No, dynamic range is being able to distinguish separation of tone that make up shadow detail and highlight detail given the limits of a 8 bit 255 gray level video system and even more limited in a print."</p>

<p>Ok, but consider the following thought experiment: A wall contains, say, 24 equally wide vertical strips placed edge-to-edge within the field of view of two cameras. Strip 1 is Black and Strip 24 is White, and strips 2 to 23 are shades of gray that are progressively lighter moving from the first strip to the last. Imagine that the sensor in Camera 1 is exposed for the middle strips and produces an image that shows Strips 1-5 pure Black and Strips 20-24 pure White, while the sensor in Camera 2 exposed in the same way produces an image that shows only Strips 1-2 as pure Black and only Strips 23-24 as pure White. Are you saying that the sensor in Camera 2 does not capture a wider dynamic range than the sensor in Camera 1?</p>
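<p>That thought experiment is easy to act out numerically. Here is a minimal sketch with made-up numbers, in which each "camera" is reduced to nothing more than a usable exposure window with a hard clip at each end: the camera whose window brackets more of the strips renders fewer of them as pure black or pure white, which is exactly the "wider dynamic range" reading.</p>

<pre>
# Toy model of the 24-strip wall: each camera is reduced to a usable exposure
# window; anything below the window renders pure black, anything above pure white.
strips = [i / 23 for i in range(24)]  # strip 1 = 0.0 (Black) ... strip 24 = 1.0 (White)

def render(strip_values, low, high):
    """Clip scene values to a camera's usable window, then normalise to 0..1."""
    rendered = []
    for v in strip_values:
        v = min(max(v, low), high)
        rendered.append((v - low) / (high - low))
    return rendered

def count_crushed(rendered):
    blacks = sum(1 for v in rendered if v == 0.0)
    whites = sum(1 for v in rendered if v == 1.0)
    return blacks, whites

camera1 = render(strips, low=4 / 23, high=19 / 23)  # narrower usable window
camera2 = render(strips, low=1 / 23, high=22 / 23)  # wider usable window
print("Camera 1 crushed (black, white):", count_crushed(camera1))  # (5, 5): Strips 1-5 and 20-24
print("Camera 2 crushed (black, white):", count_crushed(camera2))  # (2, 2): Strips 1-2 and 23-24
</pre>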


<p><strong>No, dynamic range is being able to distinguish the separations of tone that make up shadow detail and highlight detail, given the limits of an 8-bit, 255-gray-level video system, and it's even more limited in a print.</strong><br>

I think most people would define dynamic range differently?<br>

Also, I personally wouldn't worry about quantization error caused by 8-bit coding if the image has noise in it, since the noise will naturally act as dithering and you won't see any problems with quantization error or banding.</p>
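<p>The dithering point is also easy to demonstrate with a toy sketch (an invented 8-bit pipeline, not any real camera's): quantizing a slow ramp to 8 bits produces visible steps, but adding about half a code value of noise before quantization breaks the steps up, and after the kind of local averaging the eye does, the dithered version tracks the original ramp far more faithfully.</p>

<pre>
import numpy as np

rng = np.random.default_rng(3)

# A very gentle ramp spanning only a few 8-bit code values.
ramp = np.linspace(100.0, 104.0, 4096)

def to_8bit(values):
    return np.clip(np.round(values), 0, 255)

banded = to_8bit(ramp)                                    # quantised with no noise: a staircase
dithered = to_8bit(ramp + rng.normal(0, 0.5, ramp.size))  # ~0.5 code values of noise as dither

def blockwise_error(signal, block=64):
    """Mean |block average - true ramp|: a stand-in for what the eye averages locally."""
    means = signal.reshape(-1, block).mean(axis=1)
    truth = ramp.reshape(-1, block).mean(axis=1)
    return np.abs(means - truth).mean()

print("banded error:  ", blockwise_error(banded))    # block means sit on the steps
print("dithered error:", blockwise_error(dithered))  # block means hug the ramp
# The noise costs a little pixel-level precision but removes the visible banding.
</pre>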

