Is Underexposing a RAW Format Shot Equivalent to Shooting It with the Corresponding Higher ISO?


Question as stated.

 

If yes, why do RAW processors not have the ability to move exposure compensation more than 2 stops or so?

 

 

If not, why not? Does setting a higher ISO cause some different kind of processing in camera, in the amplifier or other circuitry, relative to merely underexposing a shot?

 

Thank you in advance for enlightenment...


When the analog signal is passed through the ADC, you have analog gain control you can use. This is basically how cameras handle changing ISO: by adjusting the gain. If you bump the ISO up, the camera increases the gain and amplifies the signal passing through the ADC (noise and all). Once you have a file you can no longer go "back through" the ADC in any sense, so you're still working with the same data.
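To make that concrete, here is a minimal toy sketch in Python. The 12-bit ADC, the made-up scene values, and the simple noise model are all assumptions for illustration, not any real camera's pipeline; it just shows the difference between analog gain applied before quantization and a digital push applied after it.

import numpy as np

# Toy model: the same dim scene digitized two ways.
# All numbers here are illustrative assumptions.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 0.02, size=100_000)      # dim scene, as a fraction of full scale
noise = rng.normal(0.0, 0.001, size=scene.shape)  # sensor/read noise, also ahead of the ADC

def adc(analog, bits=12):
    """Quantize an analog value (0..1 of full scale) to integer counts."""
    levels = 2 ** bits
    return np.clip(np.round(analog * (levels - 1)), 0, levels - 1).astype(int)

# Higher ISO: gain is applied to the analog signal (noise and all)
# *before* quantization, so the scene is spread over more ADC counts.
high_iso = adc((scene + noise) * 4)               # e.g. two stops of analog gain

# Underexpose and push: the ADC quantizes the weak signal first,
# then the raw converter multiplies the counts by 4.
pushed = adc(scene + noise) * 4

print("distinct values, analog gain :", len(np.unique(high_iso)))
print("distinct values, digital push:", len(np.unique(pushed)))

In this toy model the only difference is quantization (the pushed file ends up with far fewer distinct shadow values, i.e. coarser steps); on many real cameras analog gain also lifts the signal above noise added after the amplifier, which is a further reason an in-camera ISO bump and a software push aren't strictly identical.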

Carl, David,

 

Thank you for your concise and insightful answers. So prompt too!

 

BTW, I wonder if the analog circuitry has some kind of front-end noise reduction built in, similar to Dolby in audio? Or would such an approach distort the color values? I'm no engineer, merely curious.


Dolby solves the problem of noise on the recording medium, which in this case is digital, so there really isn't any noise.

 

You'll hear many an audio engineer say: if it goes in like shit, it comes out like shit. Meaning that if your microphone (or in our case, the sensor) is noisy, you'll never be able to really clean it up.


If I recall correctly, Dolby was originally introduced to reduce tape hiss, which it did effectively, but (again, if I recall correctly) at the expense of destroying fine coloration in the music as well. As far as I can see, digital image noise reduction does the same: programs like NeatImage can remove fine details too, such as hair and pores in portraits.

 

I guess my question is whether the noise can be more effectively removed at the analog stage with image sensors.

 

Again, I'm no engineer, just curious.

 

Incidentally, is digital noiseless? I know digital processes typically have a higher signal-to-noise ratio, but there is plenty of redundancy built into the encoding to get rid of whatever creeps in...


"As far as I can see, digital image noise reduction does the same-programs like NeatImage can remove fine details too, such as hair and pores in portraits"

 

As a satisfied (and picky!) user of Neat Image I don't agree. If someone can post side-by-side comparisons that demonstrate the loss of significant detail then I might be persuaded, but it's something I've not seen with my own shots.


Er, I did say 'can remove' rather than 'does remove'.

 

I use NeatImage quite happily all the time, but even with the most careful profiling it's easy to see that a smidgen of detail is removed too; this is apparent in portraits at 100% magnification.


Using software, it is damn near impossible to filter out noise without damaging other high-frequency detail. Noise Ninja, Neat Image, and all the other new noise reduction utilities have improved this somewhat with new algorithms, but they still remove some image information.
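As a tiny toy illustration of that trade-off (the step edge, the noise level, and the 3-tap box filter below are made-up stand-ins, far cruder than what Neat Image or Noise Ninja actually do):

import numpy as np

# Smoothing reduces noise but also softens genuine high-frequency detail.
rng = np.random.default_rng(1)
edge = np.repeat([100.0, 200.0], 50)               # a clean, sharp edge
noisy = edge + rng.normal(0.0, 10.0, size=edge.shape)

kernel = np.ones(3) / 3.0                          # crude low-pass "noise reduction"
smoothed = np.convolve(noisy, kernel, mode="same")

flat = slice(5, 45)                                # flat region away from the edge
print("noise std before:", round(float(np.std(noisy[flat] - edge[flat])), 1))
print("noise std after :", round(float(np.std(smoothed[flat] - edge[flat])), 1))
print("edge step before:", round(float(noisy[50] - noisy[49]), 1),
      " after:", round(float(smoothed[50] - smoothed[49]), 1))
# The noise drops by roughly 1/sqrt(3), but the once-sharp 100-unit step
# is now smeared across neighbouring pixels.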

 

It's just the same as how you can't interpolate an image and get a picture that looks exactly the same or that would print just as sharp at a larger size, something a lot of photographers seem to wrongly believe, perhaps because they wasted so much money on Genuine Fractals.

 

Mani, in-camera noise reduction before the ADC could be looked at as the cutoff of the ADC, I suppose. It doesn't digitize below a certain level or above another certain level, so where they know the sensor's response will just be noise, the signal is ignored/replaced with black.
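If it helps to picture that, here is a tiny hypothetical sketch of the idea; the black and white thresholds are invented numbers, not values any particular camera uses:

import numpy as np

def clamp_counts(counts, black_level=64, white_level=4095):
    """Counts below the assumed noise floor become pure black; counts above saturation are capped."""
    counts = np.asarray(counts)
    out = np.where(counts < black_level, 0, counts)
    return np.minimum(out, white_level)

print(clamp_counts([12, 64, 300, 5000]))   # -> [   0   64  300 4095]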


It doesn't fully answer the question, but here's an example, talking about boosting exposure on an image from a 10D:

 

 

In the dark end of the histogram you're only getting a range of 128 levels, compared with 2048 shades in the high end. If you add 2 stops of exposure then you're stretching those 128 levels across a 512-level range. You'll start seeing noise.
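The arithmetic behind those numbers, assuming a linear 12-bit raw file (4096 levels), works out roughly like this:

# Rough levels-per-stop arithmetic for a linear 12-bit raw file.
# Exact figures vary by camera; this just shows where 2048 and 128 come from.
total_levels = 4096
for stop in range(1, 6):
    levels = total_levels // (2 ** stop)   # each stop down gets half as many levels
    print(f"stop {stop} below clipping: ~{levels} levels")

# Pushing the darks 2 stops multiplies the raw values by 4, so ~128
# distinct shadow levels get spread across a ~512-level range, leaving
# gaps and making the shadow noise much more visible.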

 

It's certainly a way to fix a shot that's a stop underexposed, and it can save an image that's 2 stops underexposed, but you might not like what you get.

 

Here's a comparison of two images that I shot 3 stops apart while bracketing the scene. The one on the left was adjusted in Capture1 by 2.5 stops, so they're as close to being the same as I can get.

 

3 stop compare: http://www.itsanadventure.com/postimages/3stopcompare.jpg

