

Van Camper,

 

I was understanding you completely until I caught this from your previous post...

 

"...first stop records 1/2 the information...it should mean the first stop is twice as bright

as the other..."

 

From my understanding, brightness and information are two completely different things. I regard information to be clean, defined and separated tonal detail that isn't dark or murky, whether it's bright, midtone or dark. Basically, definition.

 

From the RAW files I've just started working with, I get all of this exposing to the right down to the shadow regions, meaning NO NOISE. However, the image opens up with its default settings looking quite flat and somewhat bright. But zooming in on areas that should be close to black as I saw the scene, I don't see a lot of noise, just a foggy-looking dark area. When the RAW file is gamma encoded with its correct-looking scene contrast, I still don't see any noise.

 

Now maybe it's because these are brightly lit midday outdoor scenes I'm shooting, and maybe with studio shots there would be more noise, but then you'd have to increase exposure for the reduced lighting situation. And so wouldn't this add a mathematical variable to the calculation of what's recorded, fidelity-wise? I'm not a mathematician, and I do find the Reichman article hard to grasp because he doesn't show with images how scene detail like the brick wall image posted gets mapped to 256 high-bit levels of tone.

 

I mean once all the nooks and crannies, bumps and gashes, as in the brick wall, are recorded, regardless of whether it's light or dark, the detail is there. The user just has to linearly lighten or darken or add contrast to make it more pronounced. How many tonal levels does one need to do this? That brick wall is a finished rendering as shot. It showed all the detail that any human eye could discern because it was shot 30 feet away at 35mm. There was just as much finished detail in the shadows as well.


"I was understanding you completely until I caught this from your previous post...

...first stop records 1/2 the information...it should mean the first stop is twice as bright as the other..."

-------------------------------------

 

Hi Tim, I was referring to Reichman's comment where he mentions the "brightest" first stop (Reichman put it in brackets). A section of his article is reproduced below:

 

"A 12 bit image is capable of recording 4,096 (2^12) discrete tonal values. One would think that therefore each F/Stop of the 5 stop range would be able to record some 850 (4096 / 5) of these steps. But, alas, this is not the case. The way that it really works is that the first (brightest) stop's worth of data contains 2048 of these steps ? fully half of those available.

Why? Because CCD and CMOS chips are linear devices. And, of course, each F/Stop records half of the light of the previous one, and therefore half the remaining data space available. This little table tells the tale.

Within the first F/Stop, which contains the Brightest Tones: 2048 levels available
Within the second F/Stop, which contains Bright Tones: 1024 levels available
Within the third F/Stop, which contains the Mid-Tones: 512 levels available
Within the fourth F/Stop, which contains Dark Tones: 256 levels available
Within the fifth F/Stop, which contains the Darkest Tones: 128 levels available" - M. Reichman
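For reference, here is a minimal sketch (mine, not from Reichman's article) of the arithmetic his table is built on, assuming an ideal 12-bit linear ADC and a 5-stop scene range; it simply counts the raw code values that fall inside each successive stop down from saturation.

```python
# Count raw code values per stop for an assumed 12-bit linear ADC over 5 stops.
BITS = 12
STOPS = 5

max_code = 2 ** BITS          # 4096 total code values
upper = max_code
for stop in range(1, STOPS + 1):
    lower = upper // 2        # one stop down halves the signal, and so the code range
    print(f"Stop {stop}: codes {lower}..{upper - 1} -> {upper - lower} levels")
    upper = lower
```

Run as-is it reproduces the 2048 / 1024 / 512 / 256 / 128 figures quoted above.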

 

You mentioned there was just as much finished detail in the shadows as well. This is how I find my images. I have a lot of images where a small segment of the scene has highlights and the other 80% has values of zone 5 and lower, and what I see is detail, but with noise introduced in the lower shadows - though for different reasons than Bruce Fraser describes (his article is basically a carbon copy of what Reichman had said, but more graphical). He goes on to say: "You may be tempted to underexpose images to avoid blowing out the highlights, but if you do, you're wasting a lot of the bits the camera can capture, and you're running a significant risk of introducing noise in the midtones and shadows. If you underexpose in an attempt to hold highlight detail, and then find that you have to open up the shadows in the raw conversion, you have to spread those 64 levels in the darkest stop over a wider tonal range, which exaggerates noise and invites posterization."

Well, I do not believe the noise is due to degradation with each successive stop (fewer tonal values); I believe it's due to the noise that any electronic equipment exhibits (we cannot eliminate the problem, although a lot of equipment tries filtering it). As your shadows get darker and darker your camera has to work harder and harder to capture real information. This is where noise becomes a problem with all electronic devices. A radio telescope operator must always deal with noise from the equipment and noise naturally occurring in space. Is it caused by a mathematical progression? No, it is electronic noise. If you have 2 units of noise from a piece of equipment, it is insignificant when the volume is set to 130 decibels, but as soon as the volume is set at, say, 10 or below, those 2 units become significant (the same 2 units apply across the entire scale 1, 2, 4, 8, ..., 2048, 4096). Turn up the ISO and the noise gets turned up as well; now it is 20 units of noise added to the highs and lows and everything in between. At ISO 1600 it becomes noticeable even in the upper tones (barely), but it doesn't take long until it shows itself in the middle and lower tones. This is why noise on our stereo amplifiers was always heard only at the lower listening volumes, where it had a significant effect on the clean portion of the music produced.
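A rough sketch of the signal-to-noise point being made here: with a fixed amount of electronic (read) noise, the same noise floor matters more and more as the signal halves with each stop down. The noise figure of 2 code values is an illustrative assumption, not a measured camera value.

```python
# Illustrative only: constant read noise vs a signal that halves per stop.
READ_NOISE = 2.0              # assumed constant noise, in raw code values
signal = 2048.0               # signal at the top of the brightest stop (12-bit scale)

for stop in range(1, 6):
    snr = signal / READ_NOISE
    print(f"Stop {stop}: signal ~{signal:.0f}, SNR ~{snr:.0f}:1")
    signal /= 2               # one stop less exposure halves the signal
```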

 

The biggest problem I have with Reichman's theory is the following comment he made: "each F/Stop records half of the light of the previous one, and therefore half the remaining data space available". This is poor logic; you cannot use the SAME mathematical progression to mean two different things at the same time. This is a gross error on his part. In the same sentence he uses the mathematical progression to refer to 1/2 (or doubling) of the light remaining, and also uses the same scale to mean 1/2 the NUMBER of tonal values remaining (data space). The scale cannot mean two different things at the same time. I believe this is where all the confusion began. Worse, he subtracted between two side-by-side points on the scale and forces us to accept that this subtracted value has some meaning. It doesn't, and you cannot subtract one from the other! The difference between 1024 and 2048 refers to a doubling/halving ONLY (a 100% difference). Every other point in this equidistant interval scale also refers to a doubling/halving ONLY, yet he converted it to a non-interval scale that is not equidistant in value (the values in his table change from 2048, 1024, 512, etc.). That you cannot do. This is now non-interval data, not equidistant, and it no longer describes a doubling effect. The math assumptions he made are wrong. From this error, the remainder of his argument is devalued. The premises, the math used for his theorem, have fallen apart.

 

We all have an opinion, and I agree with noise existing, but not for the same reasons. So I believe in exposing to the right because the sensor isn't working hard (and of course we like extra shadow detail). Wedding photographers are an unfortunate bunch who rarely get the chance to expose to the right (white gown). Yet the DSLR sensor has been bringing in the butter without exposing to the right. So in theory it doesn't hurt to do so, but in practice you can't because of an important highlight or time constraints. I would rather burn through bracketing a few shots off in 5 seconds than waste 2 minutes checking and analyzing my histogram for R/G/B (and I hate taking my prescription glasses out). You would be missing a lot of great images at the wedding. The histogram is nice to look at when you have the time.

 

http://www.adobe.com/digitalimag/pdfs/linear_gamma.pdf summarizes (Bruce Fraser)

http://www.luminous-landscape.com/tutorials/expose-right.shtml (Reichman article)

 

Sorry for the long post. It is not an easy topic to explain. There are so many variables, and worse, none of us are experts. So stop worrying about what you hear; just go and shoot, and bracket to be safe.


I'm at work and don't have time to read all of your post Van, so forgive me if I seem to totally ignore something you said above. I thought I might just try and give a brief and (hopefully) clear reason why I think Reichmann and Schewe are right. But first, to try and address one of the points you keep mentioning: applying the doubling progression to the f-stop and the histogram. It really isn't as weird as you think it is. F-stop measurement is a logarithmic progression, i.e. add 1 f-stop and you DOUBLE the exposure; reduce 1 f-stop and you HALVE the exposure. Now in the case of the histogram you are applying a logarithmic function to a linear histogram. There is nothing at all to say you can't do this. But when you do, the effect of a logarithmic function on a linear function will be non-linear, i.e. underexpose by 1 stop in a 5-stop dynamic-range scene and you reduce exposure by HALF, not 1/5.

 

I think the main point of Reichmann's argument is to do with underexposing. Say you have a scene that shows the same Dynamic Range that the camera can represent. In this case, in a correctly exposed shot, the histogram would perfectly fill the range 0-255 in an 8-bit image. If you underexpose by 1 stop, you halve the exposure. Therefore, in a correctly exposed shot, level 255 becomes level 128 when underexposed by 1 stop, and everything above 128 has zero data in it. So it's clear to see if you underexpose by 1 stop you are chucking out half the available levels of the scene.
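A minimal numeric sketch of the claim in the paragraph above, assuming a perfectly linear sensor and an 8-bit linear encoding for simplicity (real raw files are 12- or 14-bit, so the counts scale up accordingly):

```python
import numpy as np

# Scene radiances from black (0.0) to a value that just saturates the sensor (1.0).
scene = np.linspace(0.0, 1.0, 10_000)

correct = np.round(scene * 255).astype(int)          # ETTR exposure: uses codes 0..255
under_1ev = np.round((scene / 2) * 255).astype(int)  # one stop under: signal halved

print(len(np.unique(correct)))    # ~256 distinct levels used
print(len(np.unique(under_1ev)))  # ~129 distinct levels used (nothing above 128)
```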

 

What might complicate this theory, and this is something I haven't heard referred to before, is that the post-capture histogram is not a linear representation of the scene before us. It's had a gamma correction applied to it, which affects darker regions more so than brighter regions. As Tim mentioned, the linear raw data of a scene is significantly stacked to the left. In this case I figure you are actually losing MORE than half the levels of a scene if you underexpose by 1 stop.
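To make the gamma point concrete, here is a rough sketch using a plain 1/2.2 gamma encode as a stand-in for the camera's JPEG tone mapping (an assumption; real cameras use their own curves). It shows that the encoding expands the dark end far more than the bright end.

```python
GAMMA = 2.2  # assumed display gamma, not any specific camera's curve

for stops_down in range(0, 6):
    linear = 0.5 ** stops_down            # top of each stop, relative to saturation
    encoded = linear ** (1.0 / GAMMA)     # simple gamma encode
    print(f"{stops_down} stops down: linear {linear:.3f} -> encoded {encoded:.3f}")
```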


"It really isn't as weird as you think it is. f-stop measurement is a logarithmic progression. ie. add 1 f-stop, and you DOUBLE the exposure. reduce 1 f-stop and you HALVE the exposure"

 

 

I have been reading H&D curves for 35 years; I understand that. But also be aware it is interval data of EQUAL SPACING. Sure, each stop is a halving/doubling of brightness, but it is a measure of brightness between 2 points by a factor of 2x (not a description of "tonal values" between 2 points, as when Reichman subtracts 4096 - 2048 to equal 2048 remaining tones). That is an incorrect interpretation. If you read it in log(10), each stop is 0.3 logs. In other words, 1, 2, 4, 8, 16 is the same as 0, 0.3, 0.6, 0.9, 1.2, and the log scale more clearly shows the relationship of a doubling of exposure.
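For anyone who wants to check the log relationship mentioned here, a two-line Python check of the same numbers:

```python
import math

exposures = [1, 2, 4, 8, 16]
# Each doubling of exposure adds about 0.301 on a base-10 log axis.
print([round(math.log10(e), 3) for e in exposures])   # [0.0, 0.301, 0.602, 0.903, 1.204]
```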

 

"if you underexpose by 1 stop, you halve the exposure. Therefore, in a correctly exposed shot, level 255 becomes level 128 when underexposed by 1 stop, and everything above 128 has zero data in it. So it's clear to see if you underexpose by 1 stop you are chucking out half the available levels of the scene. "

 

Incorrect, level 255 (pure white) does not become 128 with a 1-stop decrease. Level 128 is zone 5, 255 is zone 9 (a 4-stop difference). The zone system covers zones 0-9 (brightness range 0-255) to represent the typical range of most films (therefore the same 0-255 range of brightness levels is used on digital cameras). You also do not throw 1/2 your data away by underexposing 1 stop. The histogram has shifted to the left and the shape remains the same (the # of pixels for each tone is still the same, except what was zone 8 is now zone 7).

 

 

"What might complicate this theory, and this is something I haven't heard refered to before, is that the post-capture histogram is not a linear representation of the scene before us. It's had a gamma correction applied to it, which affects darker regions more so than brighter regions. As Tim mentioned, the linear raw data of a scene is significantly stacked to the left"

 

Not complicated at all if you realize the same data can be reported in different ways (logs, exponential values, etc.). The key is they mean the same thing but are represented differently. Once the exposure is made, the data is fixed. In RAW the file is dark, and obviously this shows on a histogram to the left. When you apply a steeper slope to those same values mathematically (a gamma correction, meaning a slope change), the brightness and contrast increase as seen on the LCD screen histogram (as a JPEG), and everything moves to the right on the histogram to represent brighter tones.

 

The bottom line: the 1/2 declining formula is not correct. The histogram does not show a change in the peaks (measuring totals for each tone) as you continue to apply the 1/2 formula. Underexpose one stop and you see the histogram slide over by one stop to the left; the shape is unchanged. ETTR helps reduce noise, but remember you must increase exposure (using the shutter or the f/stop ring). Do you want to lose depth of field, or shutter speed in wind? Noise shows in the shadows, and it can be cleaned up with Noise Ninja, etc. I can live without it; wedding photographers do.

 

I think the following chart can clear a few things up. Although it is for film (Ansel Adams, The Negative), digital is still similar to film (except for the toe/shoulder). In this chart the curve shown could be a representation of a JPEG image, while in RAW the same image would be a very low-sloped curve (low gamma).


I don't think anyone is listening anymore, so I'll save my energy for when we next take this argument up in a newer thread. But I will just make a couple of points.

 

You emphasise that f-stop measurement is "interval data of EQUAL SPACING". I'm not really sure what the point is. But it should be pointed out that it is only equal spacing because it is represented on a log scale. If it was represented on a linear scale it wouldn't be equal spacing.

 

You state that it is a measure of brightness and not tonal values, but I would have thought these are the same thing. The brightest thing in a (8-bit) digital file will be represented by the tonal value of 255. And likewise down the brightness scale until you get to tonal value 0 (zero brightness).

 

You talk about Log(10). I'm pretty sure it is a Log(2) scale, i.e. each successive f-stop is a doubling of exposure, not a 10x increase.

 

Unfortunately when it comes to the zone system, I have to confess I am a digital geek, and have done most of my serious photographic learning post-film. So I'm not really sure how the zone system relates to this argument.

 

Finally, I want to throw in another variable to my argument, which will probably confuse the issue even more. My understanding is that the raw data at some stage (I think post-AD conversion) undergoes a black point offset. That is, the bottom X levels, being primarily composed of noise, are clipped to black (zero). So when you underexpose by 1 stop, you now have to throw this point (as well as the gamma correction) into the mix to determine how the final histogram will change with a 1 stop underexposure.
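A hedged sketch of the black-point offset described above: raw codes at or below some offset are pushed to zero before further processing. The offset value of 128 is purely illustrative; real cameras use their own model-specific values.

```python
import numpy as np

BLACK_LEVEL = 128                        # assumed offset, in 12-bit raw code values

raw = np.array([40, 130, 500, 2048, 4095])
corrected = np.clip(raw - BLACK_LEVEL, 0, None)   # bottom levels clipped to black
print(corrected)                         # [   0    2  372 1920 3967]
```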


How 'bout that, someone is still listening!

 

Emre, I'm not sure if your answer was directed at me, but I'm all for ETTR. I was just throwing a few other ideas into the mix. As for debating, until Van and others buy our argument on ETTR we will have to keep on debating.

 

One point of correction: On further thought I now see that my comment about more than 1/2 the levels being thrown out due to the gamma encoding was not right.


It is not a question of choice. Sensors have a linear output or they don't (and they do!) Whether people understand the ramifications of this is not my problem. Any debate has to start from the transfer function of the sensor. I searched this page and counted on one hand the number of times the word "sensor" was mentioned before my post. This suggests a lack of understanding of the digital process.

>> [bernie] You talk about Log(10). I'm pretty sure it is a Log(2) scale. ie. each successive f-stop is a doubling of exposure, not a 10x increase <<

 

Bernie, you're not being clear on what "it" is. What Van said (correctly) is this: "If you read it in log(10), each stop is 0.3 logs." Of course, you could express it any way you like, but the traditional method of graphing film response is the base-10 log of exposure vs film density. Density is formally defined using a base-10 log, so using the same numeric method on the exposure axis keeps the playing field level.

 

People who have been involved in photography at a certain technical level tend to talk about exposure changes in terms of base 10 logs where changing by 0.301 doubles or halves exposure. Most photographers like to refer to f-stop changes. In the Zone System, the zones in the original scene correlate to full f-stop equivalents. Someone else might like CIE colorimetry values. It seems sensible to stay with some sort of established system, though, just so other people in the trade can understand you.


I thought the first linear graphic shown in the Bruce Fraser pdf pretty much made it clear for me what Bernie's been pointing out.

 

The only issue I have is it's hard to prove or demonstrate the effect this has on editing a RAW image. I don't see any posterization in the regions that are supposed to have the fewest levels to work with, namely the shadow regions.

 

However, just on sheer faith alone, trusting that sensor processes really do distribute scene data this way, it makes sense to ETTR because we want the majority of data that can be seen by the eye, which mainly resides in the 3/4-tone to highlight regions, to be distributed within the majority of the 12-bit levels.

 

It would be interesting to do a visual test using a shot of a wide dynamic range scene and see if one f-stop darker off ETTR would show any noticeably significant increase in posterization and noise.
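Short of shooting the test, one rough way to simulate it is to count how many distinct levels survive in a dark patch when the exposure is pulled back one stop before quantization. This is a pure simulation under assumed values (a noiseless 12-bit ADC, a made-up shadow patch), not a substitute for the real-world comparison.

```python
import numpy as np

rng = np.random.default_rng(0)
shadow = rng.uniform(0.01, 0.05, 100_000)            # a dark patch, linear scene values

def capture(linear, stops_under=0, bits=12):
    """Quantize a linear exposure to integer raw codes after pulling exposure down."""
    codes = np.round(linear * 0.5 ** stops_under * (2 ** bits - 1))
    return codes.astype(int)

ettr = capture(shadow, stops_under=0)
under = capture(shadow, stops_under=1)

# The underexposed patch ends up spread over roughly half as many raw codes.
print(len(np.unique(ettr)), len(np.unique(under)))
```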


Bill, I'm all for convention for simplicity, but not when it's illogical. Whatever the relationship was in the film days, I would argue it's not so useful in the digital world. F-stop changes result in a doubling or halving of exposure. The natural and logical way to represent this is with log base 2. Perhaps back in the day, ye olde slide rules couldn't handle base 2, necessitating base 10? But these days it is illogical to use any other description. Anyway, you're right, you can describe it with whatever base you want, so it doesn't really matter anyway.

 

Bill, you seem to understand the zone system. What are your thoughts on the zone system, ETTR and Van's argument?


Tim, I reckon it's like most of these techno things we debate on these forums. In normal conditions most people would probably struggle to see the difference. But it's probably more important if you are doing very large prints or heavy editing. I don't do large prints, and only sometimes do heavy editing, so it's not really important for me. But being a sciency geek type guy I love understanding why something works, and trying to get the most out of what I've got just for the sake of it.

Hi Bernie, I'll try to explain it the way I see it.

 

"You emphasise that f-stop measurement is "interval data of EQUAL SPACING". I'm not really sure what the point is. But it should be pointed out that it is only equal spacing because it is represented on a log scale. If it was represented on a linear scale it wouldn't be equal spacing."

By interval scale I mean equal changes in amount along the scale (regular intervals). In math it's referred to as interval data (and it is most evident from zones 3-7). Both the log scale (0.3, 0.6, 0.9, ...) and the Exposure Unit scale (a factor of 2x between each point on the scale) are interval scales. There were several other scales in the chart which also mean the same thing (a doubling, or a factor of 2). However, it is the Exposure Unit scale that is creating the problem, because Reichman interprets it incorrectly. I think it would be easier to understand if the exposure unit scale 1, 2, 4, 8, ..., 4096 were written as 100%, 200%, 400%, 800% brighter with each doubling of exposure (meaning each step is a 2x factor with each f/stop). Then it might be clearer that the horizontal axis refers only to EXPOSURE (not the number of tones still available), while the vertical axis refers only to DENSITY (tonality). What's worse is that Reichman made the mistake of using the horizontal axis to define two different variables. First, he used the scale to refer to the exposure axis (a doubling effect), which is correct. However, he made the mistake of also referring to the number of tones remaining by subtracting two points on the horizontal axis (4096 - 2048, he argues, is 2048 tonal values remaining). You cannot assign this interpretation to the axis, and you are also not allowed to subtract two points to mean something else (all intervals are 2x, twice as bright as the previous exposure). Therefore, you cannot keep subtracting between points as you descend down this scale and argue that in the first stop you have 2048 tonal values remaining, and at the other end 64 tonal values remaining. That is an incorrect interpretation of the scale. He also has not shown any proof of how he might be able to apply it (which he can't).

I have been discussing the film curve, but digital must emulate film closely or we would have some very weird results (they all must have a curve starting at the bottom left and ending at the top right). The curves for film and the D200 were found to be very similar at the following forum. Also, RAW is dark, has a low gamma, and therefore shows itself to the far left on the histogram. JPEG has gamma applied (slope), so the image on the LCD is nice and bright (and compressed).

http://www.luminous-landscape.com/forum/lofiversion/index.php/t11221.html

 

"You state that it is a measure of brightness and not tonal values, but I would have thought these are the same thing. The brightest thing in a (8-bit) digital file will be represented by the tonal value of 255. And likewise down the brightness scale until you get to tonal value 0 (zero brightness). "

The problem is you're looking at two different charts. One is the characteristic curve, which shows exposure (brightness) on the horizontal axis and density (tones) on the vertical axis. The other, which you're confusing it with, is the histogram, where the horizontal axis now refers to tones, while the vertical axis refers to the total pixels for each tone along the horizontal axis (from 0-255).

"You talk about Log(10). I'm pretty sure it is a Log(2) scale. ie. each successive f-stop is a doubling of exposure, not a 10x increase. " Check the chart, it shows the log scale in base 10. Each successive f-stop is .3 units to the right on the log 10 scale (a doubling of exposure ->not 10x) . It continues at equal intervals of 2x.


"Any debate has to start from the output characteristics of the sensor. I searched this page and counted on one hand the number of times the word "sensor" was mentioned before my post. This suggests a lack of understanding of the digital process."

 

Hi Emre?. well this holds for Reichman/Knoll. With no research, no proof, they came up with a formula based on wrong interpretation. Ask as much of them. Now here we are trying to prove they are wrong, rather then they should have proved it in the first place. I insist on more information from them. There was no order to the science they proposed. No hypothesis, no step by step presentation to support each step along the way. Shees, how easy it would have been for Einstein if all he had to say was E=MC^2. Why? Because that is the way it is! This was Reichmans supporting evidence! I think until he shows the math/charting/ further evidence, we have no reason to believe his ? diminishing theory. The only benefit of ETTR is noise reduction, and I believe it is due to signal-noise ratio experienced with all electronic gear, not due to 64 tonal values remaining (that approach is wrong). The only documentation he has shown is that if you do not apply ETTR, then you get noise. But he hasn't proven the source of the noise, only that it is there (that I don't argue).

Link to comment
Share on other sites

"Any debate has to start from the output characteristics of the sensor. I searched this page and counted on one hand the number of times the word "sensor" was mentioned before my post. This suggests a lack of understanding of the digital process."

 

Hi Emre. Well, this holds for Reichman/Knoll. With no research, no proof, they came up with a formula based on a wrong interpretation. Ask as much of them. Now here we are trying to prove they are wrong, rather than them having proved it in the first place. I insist on more information from them. There was no order to the science they proposed. No hypothesis, no step-by-step presentation to support each step along the way. Sheesh, how easy it would have been for Einstein if all he had to say was E=mc^2. Why? Because that is the way it is! This was Reichman's supporting evidence! I think until he shows the math/charting/further evidence, we have no reason to believe his 1/2 diminishing theory. The only benefit of ETTR is noise reduction, and I believe it is due to the signal-to-noise ratio experienced with all electronic gear, not due to 64 tonal values remaining (that approach is wrong). The only documentation he has shown is that if you do not apply ETTR, then you get noise. But he hasn't proven the source of the noise, only that it is there (and that I don't argue).


Van, I am struggling to follow your argument. I'm still not sure what the 'interval data of equal spacing' thing is all about. And I don't know what density has to do with it.

 

"The problem is you're looking at two different charts. One is the characteristic curve, which shows exposure (brightness) on the horizontal axis and density (tones) on the vertical axis."

 

I'm not looking at any chart with density on it. What does density have to do with it?

 

"The other, which you're confusing it with, is the histogram, where the horizontal axis now refers to tones, while the vertical axis refers to the total pixels for each tone along the horizontal axis (from 0-255)."

 

I stand by what I originally said, which was: "The brightest thing in a (8-bit) digital file will be represented by the tonal value of 255. And likewise down the brightness scale until you get to tonal value 0 (zero brightness)".

 

"I think until he shows the math/charting/further evidence, we have no reason to believe his 1/2 diminishing theory."

 

The evidence is this: the sensor (happy, E?) is a linear measuring device. Double the brightness and you double the amount of photons it senses and records (within its upper sensing limit). Halve the brightness and you will halve the amount of photons it senses and records. Now, halve the exposure (i.e. drop one f-stop), and you halve the number of photons it senses and records. Hence, an object in the scene which at correct exposure just saturates a section of the sensor will only half-saturate it when underexposed by 1 stop. Full saturation is registered at 255 on the (8-bit) histogram. Half saturation is registered at 128 on the histogram. Therefore, underexpose by 1 stop, and you lose (roughly) half the levels available to capture that image. I wonder whether you are getting confused by what this technique, which Reichmann and Knoll described, is referring to. It isn't trying to divide up the histogram of an already captured image and say that the brightest stop of its histogram covers 2048 (in 12-bit) levels. What it is referring to is the difference between a correctly exposed image (in terms of ETTR) and one which is underexposed by X stops.


"Full saturation is registered at 255 on the (8-bit) histogram. Half saturation is registered at 128 on the histogram"

 

You're looking at the histogram; instead look at the characteristic curve chart I gave above. On this chart, decrease exposure 1 stop and you go from zone 8 down to zone 7. Zone 8 is 128 Exposure Units (white), while 18% grey is zone 5 (a 3-stop difference). You're confusing the histogram's horizontal axis (0-255, scaled 1, 2, 3, 4, 5, 6... describing tones) with the characteristic curve's horizontal axis (0-10,000, scaled 1, 2, 4, 8, 16... describing exposure). The horizontal axis on the histogram is NOT an exposure unit scale (it is a tonal scale), and for tones on the characteristic curve you must look at the vertical axis (density). When you talk about exposing, you must always look at the characteristic curve, because the histogram only shows summation data (the # of pixels receiving the same exposure for a particular tone along the bottom axis).

 

"I'm not looking at any chart with Density on it. What does density have to do with it?"

 

Density has nothing to do with it on the histogram, but it does on the characteristic curve (vertical axis), which is the one you should be looking at when discussing exposure (because density refers to tonality). An increase in exposure along the horizontal axis (on the characteristic curve) corresponds to a certain increase in density (vertical axis) on film (after development). This density is what gives us our tones, and varying development (a slope change, also referred to as a gamma change) can alter the tonality (contrast). Film and digital record this same relationship in the same way, starting with the curve sloping from the bottom left out towards the top right. You cannot change this relationship; if you do, you will not have tones representing a scene correctly. They both have a straight-line portion (linear); only the toe/shoulders vary.


I reckon you need to forget what you knew from the film days (for this argument), and just concentrate on the relevant digital relationships. I've never heard of density in digital terminology. There is no need for it. The standard histogram fully represents the relationship between exposure, tonality and brightness. Surely you can't argue with the statement that on a linear sensor a point light source which just saturates the sensor will be represented by level 255 in the histogram, and the same source exposed by half will be represented by 128 or thereabouts.

 

I just took two shots, one underexposed by 1 stop. Hopefully my workflow was correct here. They are linear TIFFs. This isn't true linear raw data, as white balance has been performed and it has been converted to a colour space, so this will throw some channels out. But if the workflow is correct, this shows that by underexposing by 1 stop you reduce the most saturated pixels from level 195 to level 101. In this case, throwing 94 levels of shadow detail out the back door.

[attached: histogram screenshots of the two exposures]
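For anyone wanting to repeat a check like this, a small sketch of how the brightest level in each linear TIFF could be read out. The filenames are placeholders, not files from this thread, and it assumes 8-bit files that Pillow can open directly.

```python
import numpy as np
from PIL import Image

# Placeholder filenames: substitute your own pair of linear TIFF exports.
for name in ["normal_exposure.tif", "one_stop_under.tif"]:
    img = np.asarray(Image.open(name))
    print(name, "brightest level in the file:", int(img.max()))
```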


Bernie,

 

If I could help you and myself clarify what you just posted...

 

What you've done with the 1 f-stop darker exposure, from my understanding, is move viewable detail that would've been rendered with more levels down into the shadow regions that are allotted fewer levels.

 

If you were to apply a lightening curve to get the scene to look normal, you'd probably now see more noise and posterization making up the dark lines in the slats (whatever that is you captured) than you would with the lighter version.

 

I hope I'm understanding this correctly.


Tim, I think I might be starting to doubt myself. What you say about lightening the darker exposure and posterization is correct, but you're coming at it from the wrong way in relation to ETTR. In relation to ETTR, the lighter exposure would actually be 1 stop over, so it would be this image which would require a tone curve to darken it to look like the other exposure. Now this is where I am doubting myself: what is the difference between reducing exposure 1 stop at capture vs reducing it 1 stop in linear raw? In both cases you will be trying to fit 2X levels (as captured in ETTR) into X levels (or thereabouts). Both cases will suffer from quantization (rounding) errors. Perhaps the 'at capture' exposure reduction would be better, due to the fact that quantization is only occurring once (in the A/D converter), as opposed to linear raw where quantization would have occurred twice (once in the A/D, and once again in the raw converter). But would the difference really be that great? I think I need someone from the ETTR camp to come and slap some sense back into me...
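A small numeric sketch of the question being raised: halving exposure before the A/D conversion (quantize once) versus capturing at full exposure and halving the already-quantized raw values (quantize twice). It assumes an ideal, noiseless 12-bit ADC, so it only illustrates the rounding-error part of the comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.uniform(0.0, 1.0, 100_000)      # linear scene values, 1.0 = saturation
MAX = 2 ** 12 - 1

quantize = lambda x: np.round(x * MAX) / MAX

at_capture = quantize(scene / 2)             # exposure halved, then digitized once
in_raw = quantize(quantize(scene) / 2)       # digitized at full exposure, halved later

print(np.abs(at_capture - scene / 2).mean()) # single rounding error
print(np.abs(in_raw - scene / 2).mean())     # two roundings, slightly larger error
```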

With the lack of highlights in the scene you posted, ETTR would be hard to gauge at the time of exposure. I would think setting the camera to program mode, or preferably aperture priority, would let the camera's metering decide the exposure level when shooting RAW.

 

Usually, from a scene that dark (assuming that's how it actually was), the camera's metering will tend to capture it much lighter than what is seen by the eye, going by what I get on my Pentax K100D.


Hi Bernie, sorry I couldn't get back sooner (things to do).

 

"I reckon you need to forget what you knew from the film days (for this argument), and just concentrate on the relevant digital relationships. I've never heard of density in digital terminology."

 

Well, the histogram is not a substitute for sensitometry (the behaviour of light on film or sensors). Histograms, as I said, just summarize the number of pixels exposed for each tone between 0-255, nothing more; expose one less f/stop and everything just moves over to the left by 1 stop. As for why introduce density... because with every exposure something else takes place: a build-up of density on the film or charge on the sensor, and this is our tones. So you have an X and Y axis situation. If you do not believe it, then visit this website where they plotted curves for film and a DSLR. His comments: "I undertook these experiments to evaluate the characteristic curve of a typical transparency film to that of a Nikon D200 camera RAW image rendered with Adobe Camera Raw and default settings, except for a shadow setting of zero, rather than the default of 5. As is evident, the curves for film and the digital camera are quite similar. Both have a knee and shoulder and a central linear segment. The digital capture has a wider dynamic range, giving meaningful values from log exp of 0 to -3.5, whereas the film flattens out at about log exposure of -2.5 or so. The concepts that we learned for exposure of transparency film carry over into digital". See their info in detail at:

http://www.luminous-landscape.com/forum/lofiversion/index.php/t11221.html

 

If there is no close relationship between film and digital, then why is it that my handheld lightmeter works, the zone system works, many use the DSLR as a substitute for a Polaroid shot, and others use it to even judge exposure for their large format cameras?

 

"Surely you can't argue with the statement that on a linear sensor a point light source which just saturates the sensor will be represented by level 255 in the histogram, and the same source exposed by half will be represented by 128 or thereabouts. "

 

Wrong. As I mentioned before, there are 4 stops in between values 128 and 255 on the histogram (128 is zone 5, 255 is zone 9; check my chart above). The horizontal axis on the histogram is scaled 1, 2, 3, 4, 5..., while on the characteristic curve it is scaled 1, 2, 4, 8, 16 to represent a doubling (a factor of 2x, or a 100% increase between stops). So zone 9 is 255 on the histogram, zone 8 is 223, zone 7 is 191, zone 6 is 159, zone 5 is 127, assuming roughly a 32-point change between zones. Go to the bottom slider on the histogram and slide it to these points (while viewing something white) and you can eyeball it close enough to see the shades for zones 6, 7, 8, 9. If you look at your CURVES chart in PS you will also see the straight line running from bottom left to top right with intersecting points at zone 0, zone 3, zone 5, zone 7, zone 9. You can see that a 1-stop decrease in exposure will not get you from 255 to value 128. Also, looking at the histograms you provided, they DO NOT show a 1-stop change (you made a mistake somewhere). Please look at the tonal scale bar along the bottom (showing tones 0-255). You will see that your lightest tone (right side of the histogram) appears to be around zone 5-6, while the other exposure appears to be around zone 3. This is a 2-3 stop range. The values 101 and 195 suggest about zone 4 and zone 7 (remember each stop on the 0-9 scale is about 32 points apart).
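The zone-to-level mapping being used in the paragraph above, written out as a tiny sketch (it simply assumes the stated 32-level spacing per zone on the 0-255 scale):

```python
# Zones spaced ~32 levels apart on an 8-bit histogram, as described above.
for zone in range(9, 4, -1):
    print(f"zone {zone}: level {255 - 32 * (9 - zone)}")   # 255, 223, 191, 159, 127
```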

 

"The standard histogram fully represents the relationship between exposure, tonality and brightness" ??.Actually it shows little, only x and y axis for tonal value and qty. It also doesn't show a exposure unit scale, log scale, or a zone scale which would clarify things. The histogram gives us no idea of the shape of the curve and more importantly gives no clue to shoulder/toe performance?.. this is why were having problems. This is the first step Reichman needs to do (set up a chart) to clarify. Where is his homework?

 

Regarding noise, I found this bit of information to support why I believe noise is due to the equipment (noise-to-signal ratio) and not his 1/2 rule, which I do not believe in:

"CCDs used in astronomy are similar to those used by the general public, but are generally monochrome and cooled with liquid nitrogen so as to reduce the noise caused by heat." (http://en.wikipedia.org/wiki/Digital_photography)

 

I want to also point out that your histograms DO NOT support the 1/2 theory rule that Reichman proposes. If the brightest first stop has 1/2 the tones, then why does your histogram not show it? Look at the peaks, which represent quantity (the # of pixels exposed for each tone). On your histograms the zone 7-9 values (representing the brightest tones) have the lowest peaks, while zones 3-6 show the bulk of all the data (the highest peaks).

 

