
Will cameras ever match our eyes' dynamic range?



No.
They have not.
They have not produced a sensor with the dynamic range of film yet.
The digital waveform is a series of steps, whereas the analogue waveform is smooth.
The eye can adjust to different light levels.

The brain has the advantage of having two eyes.

This gives the brain the advantage of processing all the information.


I don't know why not. One could control the dynamic range of film by development, so why can't something similar be done with a sensor? The shadowed side of a person's face in bright sunlight might fall on zone 3, so one would overexpose two stops to raise it to zone 5 and under-develop to drop the sunlit flesh tones back to zone 6, and the result was as seen by the eye. The same would apply to shadows cast by trees across grass or under bushes.
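For anyone who hasn't worked with the zone system, the arithmetic above is just whole-stop bookkeeping: raise everything at exposure, then pull the highlights back down in development. A minimal sketch, treating one zone as one stop and modelling under-development (crudely) as a shift applied only to the highlight tone:

```python
# Zone-system bookkeeping: one zone = one stop (a doubling of exposure).
# "Overexpose two stops" raises every tone two zones at capture;
# under-development then pulls the highlights back down. Applying the
# development shift only to the highlight tone is a simplification,
# just enough to reproduce the example above.

def place_zone(metered_zone, exposure_comp_stops, development_shift=0):
    """Zone a metered tone lands on after exposure compensation
    and a (simplified) development adjustment."""
    return metered_zone + exposure_comp_stops + development_shift

shadowed_face = place_zone(3, +2)       # zone 3 shadow opened up two stops -> zone 5
sunlit_flesh  = place_zone(6, +2, -2)   # zone 6 flesh tone, +2 stops, pulled back by N-2 development
print(shadowed_face, sunlit_flesh)      # 5 6
```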
James G. Dainis

"They have not produced a sensor with the dynamic range of film yet."

That very much depends on what and which film you're talking about. Digital certainly gives me shadow detail and highlight detail that I could never get with slide films.

The 'steps' argument is also true of sound recording. But you can keep your scratchy vinyl records; I know I've kept mine, but not because they're analogue.

 


"They have not produced a sensor with the dynamic range of film yet."

Imatest shows current DSLRs at 12.5-13.5 stops. That's equal to or better than Portra.

Matching the human eye? I think that will require sensors that read out 2-3 frames in one exposure: either read rows at different ISOs while sacrificing some resolution (Magic Lantern can do this now), or read out the sensor electronically 2-3 times at different ISOs while the shutter is open.
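To make the second idea concrete, here is a rough sketch of how two readouts of the same exposure at different gains might be combined. The function, the gain figure, and the clipping threshold are all assumptions for illustration; this is not how any particular camera or Magic Lantern build actually does it.

```python
import numpy as np

def merge_dual_iso(low_iso, high_iso, gain, clip=0.98):
    """Combine two readouts of the same exposure (hypothetical sketch).

    low_iso  -- frame read at base ISO: clean highlights, noisier shadows
    high_iso -- frame read at base ISO * gain: cleaner shadows, clipped highlights
    Both are linear arrays scaled 0..1 and assumed perfectly aligned.
    """
    high_rescaled = high_iso / gain   # bring the amplified read back to the base-ISO scale
    usable = high_iso < clip          # pixels the amplified read did not clip
    return np.where(usable, high_rescaled, low_iso)

# e.g. merged = merge_dual_iso(read_iso100, read_iso800, gain=8.0)
```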


The actual dynamic range of the eye is quite limited, but it does have "pixels", the rods and cones, that work at different light levels. The apparent dynamic range we see is synthesised in the brain, where bright highlights and dark shadows, acquired separately, are merged into an image that is presented to our consciousness: automatic HDR!

The other side of the automatic HDR that takes place in our brains is that we cannot, by effort of will, turn it off. The only way to see things without this ever-intrusive HDR is to take a direct photograph and look at that.

 


Your eye is adaptive, through an automatic diaphragm (the pupil) and auto ISO (visual purple at the low end and fatigue at the high). In a digital camera, the diaphragm alone adds 6 stops to the 12 or so you get from an M9 sensor, and another 6 through amplification (ISO). How much "dynamic range" do your eyes have after being dilated for an exam? Astronomers routinely photograph celestial objects far too dim to be seen with the eye, by several orders of magnitude. It's not unusual to take usable photographs with a digital camera in light so dim you can't read the controls.

In a single exposure, slide film goes from black to transparent with a subject range of 6 to 8 stops. Color negative film is slightly better at 8 to 10 stops, and B&W about 12. Again, the M9 (or a Nikon or Canon) can equal the B&W range of capture, but in color.
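Since stops are powers of two, the figures above translate directly into contrast ratios. This is just arithmetic on the numbers quoted in this thread, nothing measured:

```python
# Each stop doubles the light, so n stops of range = 2**n : 1 contrast.
def contrast_ratio(stops):
    return 2 ** stops

print(contrast_ratio(12))            # 4096      -- roughly the single-exposure sensor range quoted above
print(contrast_ratio(12 + 6 + 6))    # 16777216  -- with ~6 stops of pupil and ~6 of adaptation added
print(contrast_ratio(6), contrast_ratio(10))   # 64, 1024 -- slide film vs. color negative, per the post
```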

That said, film is more interesting. We lose interest in things that are just too perfect. Heifetz never missed a note or an intonation, but people thought more of Kreisler, who frequently got a bit off the page.


Aside from a lens system to focus an image on the photoreceptors/sensor, your eye (and visual system) has very little in common with a camera and sensor. Your eye only has high resolution in a very small area directly in the center--as you move away from this area, the acuity and amount of color information decrease very rapidly. The reason you see a sharp-looking world with detail in both the bright and dim areas of a scene is because, as Edward Ingold said, your eye is constantly re-focusing and adjusting aperture and sensitivity as you scan the scene, and your brain puts all those pieces together into the world you see.

 

Rather than a camera taking a single photo, a closer (though far from perfect) analogy to how your visual system works would be a camera with a telephoto lens scanning around a scene rapidly taking a series of photos (adjusting aperture, ISO, and focus for each shot) and a very-powerful computer assembling all those shots into a panoramic HDR image. In real time.


+1, except that I believe HDR as we see it in a photo is not like what the brain assembles. The brain maintains contrast levels.

Which is why comparing a photo's technical numbers to what our eyes see and our brain computes is really a waste of time. Go with your gut. If it doesn't look right, it isn't. If it does look right, it is.


Below is me standing on my front porch in front of my front door. This is what the camera sees. A person opening the door would see me, not that dark shadow, and instantly say, "Hello, James." So why is that? Using a spot meter, the dark shadows under the bushes outside fall on zone 2. My face under the porch roof also falls on zone 2. It would be zone 6 if I were standing back out in the sun.

If my face in that situation is the same dark tone as the shadows under the bushes, how can film or a sensor record otherwise? Using HDR, or opening up four stops for the face exposure, would do it, as would fill flash. But why can't the camera see what the brain sees?

[Attached image: 00csPe-551666484.jpg]

James G. Dainis

James: The camera only sees that because you stopped down the aperture, reducing the amount of light hitting the sensor, so that the outside scene is neither over- nor underexposed. If you opened the aperture, the sensor would "see" your face but the outside lighting would be blown.

The camera can only see at one aperture (or shutter speed) at a time. The eye continually changes its aperture, the iris, so that it can see over ranges greater than the camera/lens/sensor combination. The brain then takes all these various views and combines them to interpret a scene that has a wide lighting latitude. HDR attempts to do what the brain does, but it's limited. That's why HDR shots don't look natural even though the full range of lighting is captured with multiple shots.


Look at this:

[Image: http://jdainis.com/optical_ill.jpg]

Both squares A and B are the same tone. (Copy and paste one onto the other in Photoshop to see.) Square B looks like a zone 6, equal to my face value, and square A looks equal to a zone 3 bush shadow. So why does it work here and not in my figure-in-a-doorway photo?

I have seen this set up on a floor using tiles and a green tube, with lighting casting a similar shadow. It works the same. There was no opening or closing of apertures to raise B from a zone 3 up to a zone 6 while A stayed at zone 3.
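Instead of copying and pasting in Photoshop, a few lines of Python with Pillow will sample both squares directly. The pixel coordinates below are placeholders; you'd pick a point inside each square after opening the image yourself.

```python
from PIL import Image

# Sample one pixel inside square A and one inside square B of the
# checker-shadow illusion. Coordinates are placeholders, not the real positions.
img = Image.open("optical_ill.jpg").convert("L")   # greyscale, so each pixel is a single tone value
a = img.getpixel((120, 100))   # assumed point inside square A
b = img.getpixel((180, 200))   # assumed point inside square B
print(a, b)                    # the two values come out the same (or very nearly so)
```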

James G. Dainis

"The camera only sees that because you stopped down the aperture reducing the amount of light hitting the sensor so that the outside scene is neither over or underexposed. If you opened the aperture, the sensor would "see" your face but the outside lighting would be blown. "

 

I know that. Tell me why, if both the face and the bushes are at the same zone via a spot-meter reading, my brain sees them as entirely different, much like squares A and B above. And if that same-zone-but-different-appearance effect can be seen in the illusion above, why not on a sensor? Take a photo of the illusion above and it would look the same.

 

It doesn't matter how great the dynamic range of the human eye/brain is; the fact remains that the bush shadow value and the face value are the same, as checked by a spot meter.

James G. Dainis

When you print, you can render the darkest part of the film as black and the lightest as white, even though the density of the film has a much smaller range than the recorded scene, and the print a smaller range yet.

In the photo in question, there is no more detail in the shadow of the bushes than in the face of the subject. It's possible that detail could be brought out by adjusting the tonal scale of the image, especially if the source were digital or negative film.
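For what it's worth, "adjusting the tonal scale" in an editor or raw converter is, at its simplest, a levels-style remap. A minimal sketch of that idea, not any particular program's algorithm:

```python
import numpy as np

def levels(image, black, white, gamma=1.0):
    """Simple levels adjustment on a linear float image (values 0..1):
    map `black` to 0 and `white` to 1, with an optional gamma > 1
    lifting the midtones to pull detail out of the shadows."""
    stretched = np.clip((image - black) / (white - black), 0.0, 1.0)
    return stretched ** (1.0 / gamma)

# e.g. open up the shadows of a dark conversion:
# brighter = levels(img, black=0.02, white=0.95, gamma=2.2)
```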

The opposite is also true. In this image, taken with an M9, I had to "burn in" the foreground to produce a silhouette appearance. In the raw image, the people were clearly identifiable against the sunlit background.

[Image: http://d6d2h4gfvy8t8.cloudfront.net/17865948-md.jpg]


The Canon 6D has a built-in HDR mode in which three shots are taken in quick succession at different exposures and combined into a single output. This gives an effective increase in dynamic range over that of the sensor. In daylight the process is sufficiently rapid that the camera can be hand-held for static shots.

But IIRC I read somewhere that the dynamic range of the human eye is probably even greater than this would give.
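You can do something similar after the fact with a hand-held bracket and exposure fusion. A minimal sketch using OpenCV's Mertens fusion, which blends the brackets directly rather than building a true HDR radiance map; the file names are placeholders, and this is not the 6D's in-camera algorithm.

```python
import cv2

# Three hand-held brackets, e.g. -2 / 0 / +2 EV (placeholder file names).
frames = [cv2.imread(name) for name in ("under.jpg", "normal.jpg", "over.jpg")]

# Align the frames first (the camera was hand held), then fuse them.
cv2.createAlignMTB().process(frames, frames)
fused = cv2.createMergeMertens().process(frames)   # float image, roughly 0..1

cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```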


"There are parts of the shadow areas under the bushes that are black and you can't see details. So maybe you metered the lighter areas so the comparison is faulty."

 

Could be, but now we are just nitpicking. My Pentax V spotmeter had the needle pointing to 7 on the areas under the bushes and pointing to 7 on my face under the porch roof. I called them zone 2, but I could just as easily have called them zone 1. It would be my choice where I wanted to place them, but the point is that the bush shadow and my face value are the same brightness. Both reflect the same intensity of light. What the camera recorded was the truth. For a person opening the door, my face does not appear black but four or five zones lighter, while the bush shadows stay black in the background. That is similar to the optical illusion I posted above. Square B appears lighter because the brain wants it to appear lighter. To the person opening the door, my face appears lighter because the brain wants it to appear lighter. The camera captures truth; what the eye sees is an illusion.

 

Create a sensor that can capture what the eye sees? Rather, create a sensor that captures what the brain is thinking.

James G. Dainis

" For a person opening the door, my face does not appear black but four or five zones lighter while the bush shadows stay

dark"

 

That's easy: it's because your mind is directing your brain to "brighten up" the area where your face is, so that when looking directly at it you will know who is at your door and can recognize you. If you concentrated on the fully sunlit area to either side of the head, the head would appear as something closer to a silhouette until you shifted your attention back to the face.


An often-asked question, even by you, Harry! Here are a couple of old threads I pulled up:

http://www.photo.net/casual-conversations-forum/00WF6I

http://www.photo.net/casual-conversations-forum/00ZL4D?unified_p=1

I suppose I'd add the following to what I've already written on the subject:

Although no camera will ever see the way the human eye sees (nor would it be desirable for it to do so, because we truly would not like or understand the presented image, or process it ourselves in any meaningful way), it should still be possible in theory to produce an image with dynamic range mimicking the highest-contrast real-world scenes (e.g. glistening, sunlit snow on spruce trees, with deep, dark shadows beneath). Then our visual systems would be able to extract the needed information as though we were looking at the real thing. For this to happen, our photographic technologies would need to vastly outperform our biological visual systems. But yes, I believe this is possible in theory. These are the more daunting technical obstacles that might stand in our way:

 

- Lens flare. Even the best of our lenses can limit shadow detail in the presence of extreme highlight content.

- Sensor dynamic range. To take a "full DR" photograph, one would have to expose for the deepest shadows. Be prepared for slow shutter speeds you might not like. There will always be a need to capture a certain number of photons to create an image, and photons from extremely dim subjects are few.

- Reflection from the sensor. To capture the sort of dynamic range we would require, sensor assemblies will need to be of a new, highly non-reflective design, either through near-100% capture of light by the sensor (would be nice, but I'm not holding my breath on that one!) or through capture, behind the sensor, of all light the sensor misses. Reflections from the sensor tend to find their way back to it and muddy the deepest shadows.

- Output medium. Once you capture such an image, how are you going to display it? Dynamic range is already severely choked in paper prints, and those prints will never be much better. The only way to achieve this feat would be with monitors (including possibly projectors, although you have the lens problem again on the output side). In theory, one might be able to do this with distant generations of LED monitors (not LED-backlit LCD screens, but true LED, such as the current organic LED monitors, only better).

Lastly, there will always be SOME subject (e.g. a lump of coal sitting in a snowy field, with the full sun glaring into the lens INSIDE the frame) that will present a greater challenge. It will always be possible to contrive a scene that exceeds our technical limitations. Currently, our cameras and output media are perfectly capable of representing the full dynamic range of many scenes. Let's not forget that. While we might be able to reach for higher-dynamic-range sensors, that still leaves the question of what we're going to do with all that dynamic range. The weakest link in our imaging technology is presently not the sensor; rather, it's the output medium. I think this has always been the case.
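To put the output-medium gap in the same units used earlier in the thread, here are some rough contrast ratios converted to stops. The ratios themselves are ballpark assumptions, not measurements:

```python
import math

# A medium with contrast ratio C can show log2(C) stops between its
# deepest black and its brightest white.
def stops(contrast_ratio):
    return math.log2(contrast_ratio)

print(round(stops(100), 1))       # ~6.6 stops  -- a reflective print at roughly 100:1 (assumed)
print(round(stops(1000), 1))      # ~10 stops   -- a typical LCD monitor at roughly 1000:1 (assumed)
print(round(stops(100000), 1))    # ~16.6 stops -- an OLED panel with very deep blacks (assumed)
```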


Sara, I'm glad you brought that up. I was thinking of the problem of blocked highlights with paper prints. Suppose a shaft of sunlight were to fall on a bride's dress while she is posing in the shade. Looking at the negative, one can see some fine lacework in that dense black area of the dress. On a print, that area would show only as pure white. Trying to burn in that area with some extra enlarger exposure would result in the lace showing, but on a now-gray patch of the white dress. The dynamic range of the paper is only ten stops or zones. Having greater dynamic range in the film or sensor capture would just result in those blocked highlights when making prints.

 

We are back to the illusion of reality. People see that bride in a white dress with a shaft of sunlight on it, and they also see the lace in that sunlit area. White dress, white sunlit area with lace. The camera, which is recording the reflected-light reality, sees, depending on the exposure setting, either a gray dress with a white spot showing the lace, or a white dress with a white spot showing no lace.

 

Could the dynamic range of the paper or display medium be increased? Things go from black to white. Despite what the detergent commercials say, there is no whiter than white.

James G. Dainis
