
Help understanding film vs digital response to light intensity


tianxusky


Hey everyone,

I've been doing extensive research into this area for personal interest as well as for digital video color grading, but am really struggling.

 

My question is: how does film really respond to light intensity compared to digital sensors? I understand that human response to light intensity is very complicated, but can be simplified as non-linear, often logarithmic. What about film, and digital? How do they respond to light? When a film negative is exposed, how is the data recorded? Is it logarithmic, and how? How do digital sensors record light? Is it linear?

 

What I am seeing everywhere is H&D graphs, which are density vs. log exposure graphs, showing S-curves. But these are log graphs; moreover, they are log-log graphs, since both density and log exposure are logs. Why do we measure density logarithmically in relation to opacity and light transmittance? Where can I find a normal, arithmetic film exposure vs. light intensity graph that can clarify things?

I know that the human response to light graph (perceived brightness vs. light intensity) looks like an r-shape. What about film and digital sensors? Also, I read that digital sensors try to output values of 0-255 according to the S-curve from a film H&D (density vs. log exposure) curve; why is this? I'm just confused about all of this! There are a lot of questions, but I would appreciate any answer to any of them!

 

[three images attached]

 

Thanks!


Why do we measure density logarithmically in relation to opacity and light transmittance?

 

- Because you'd need a very big piece of graph paper to show an Exposure v Density (opacity) curve in linear space with any meaningful amount of refinement.

 

In any case, the transform is very easy using a spreadsheet. Converting LogE (log lux-seconds) back to lux-seconds is a simple base-10 antilog function: 10^Log10E.

 

Much the same for density, except the opacity scale runs from 0 (absolute transparency = 0.0D) to 1 (totally opaque = infinite D). So the function is: Opacity = 1 - (1/10^D).
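In Python rather than a spreadsheet, the two transforms look like this (a quick illustrative sketch):

```python
def loge_to_lux_seconds(log_e):
    """Antilog: turn a log10 exposure value back into lux-seconds."""
    return 10 ** log_e

def density_to_opacity_fraction(density):
    """Map density D (0 to infinity) onto a 0-1 'fraction blocked' scale:
    D = 0 gives 0 (fully transparent); D -> infinity approaches 1 (opaque)."""
    return 1 - (1 / 10 ** density)

print(loge_to_lux_seconds(-2.0))         # 0.01 lux-seconds
print(density_to_opacity_fraction(1.0))  # 0.9: a density of 1.0 blocks 90%
```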

 

A typical linearised film curve shows a near-exponential rise in opacity that tails away to an almost straight horizontal line, with very little further rise however much exposure increases. A curve that's nearly useless for visually comparing or evaluating differences between films or development procedures.

 

Digital sensor output OTOH is pretty much linear, with exposure increase causing a directly proportional increase in current or voltage. However, such a response is impossible to represent on any practical output medium beyond a very short subject-brightness range, and would look very dull if a wide SBR was linearly compressed into a much smaller range. Therefore a gamma (log) function or S-shaped curve is imposed on the sensor output in order to give a more visually pleasing image that fits within the output device's dynamic range and colour space.
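A minimal sketch of imposing such a gamma on linear sensor values (a simple power law for illustration; real camera tone curves are more elaborate):

```python
def encode_gamma(linear, gamma=2.2):
    """Compress a linear sensor value (0..1) with a power-law 'gamma'
    so that mid-tones get more of the output range."""
    return linear ** (1 / gamma)

# A mid-grey patch reflecting ~18% of the light ends up near the middle
# of the output range instead of buried in the shadows.
print(round(encode_gamma(0.18), 3))
```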

Edited by rodeo_joe|1

Wow thanks so much!

I couldn't figure out how to convert the log graph, so I drew one; is it right? Is the film response vs. light intensity graph (arithmetic scales) more or less the same as human perceived brightness vs. light intensity? And would the digital sensor response look like a straight line? (attached image)

 

I understand why a digital sensor would respond linearly to light, it just counts how many photons are hitting it.

I understand why human response isn't linear, pupil dilation in dark areas, cone and rod distribution in the eye, plus other stuff, so it's said to be logarithmic.

But why doesn't film respond to light linearly? Apart from the inertia point and saturation point, shouldn't it be just like digital sensors, responding linearly to the photons that hit the silver halide crystals? Is it because once a molecule is exposed, it can't be exposed again, so there are fewer and fewer molecules available as photons accumulate? But surely that doesn't matter when you're exposing at fractions of a second? Is it something to do with the way developing works?

 

Why don't we just present linearly-recorded images of the world, with light values as they really are, and let our eyes distribute the tones, like the way we see reality? Is it because we cannot reproduce light at the range of intensities of reality, our monitors only emit certain range of light values, so if we don't stretch our camera values, they'll all be produced in a narrow intensity range, and look very flat? If this is correct, why don't we stretch it out linearly in proportion, but impose an S-curve on it?

 

When we use the waveform scope in Premiere Pro or DaVinci, what exactly do the IRE values / 0-255 values show?

I understand it is logarithmic: on a scale of 0 (black) to 100 (white), 50 isn't actually 50% reflectance, so how should we understand it? Given the different dynamic ranges of cameras and monitors, this all becomes so complicated, so I thought I should get a grasp of the basics first. Thanks, any insights would be appreciated; I'm struggling to find answers :D

 

[three images attached]

Edited by tianxusky

The last graph appears to be roughly the right shape, but the y axis scale is incorrect and inverted.

 

Opacity is the linearised equivalent of density. I.e. the greater its value the less light is passed. Opacity also has a value scale running from zero to one - it's not expressed as a percentage.

 

The graph you've constructed would only be true for reversal film, which doesn't have the same response curve as the more commonly shown curves for negative materials.

 

A reversal film effectively has a combination of curves, one from the first development, and an inverse and higher 'gamma' one from the second and colour development. In fact it actually has three curves, one for each of the cyan, yellow and magenta dye-forming layers.

 

In order to simplify things, I would first look at a B&W negative curve, which gives a much more straightforward and 'purer' representation of how a silver-halide emulsion responds to light. Such a curve can be found here on page 4 of the datasheet.

 

"When we use the waveform scope on Premiere Pro or Davinci, what exaclty do the IRE values/0-255 values show?"

 

Having never used those programs, I have no idea what the 'waveform scope' shows. However, the values 0 to 255 are commonly used to show the pixel value in a digital image. In an 8 bit per channel colour space, zero represents black, or zero brightness, and 255 represents pure white, or the maximum brightness level of an RGB channel. Again, such a scale is non-linear and will almost certainly have had a log function imposed on it.
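The "0-255 with a log function imposed" idea can be made concrete with the sRGB transfer function, the encoding most 8-bit RGB images actually use (a minimal sketch):

```python
def srgb_encode_8bit(linear):
    """Encode a linear-light value (0..1) to an 8-bit sRGB code value.
    Standard sRGB transfer function: linear very near black,
    roughly a 1/2.4 power law elsewhere."""
    if linear <= 0.0031308:
        v = 12.92 * linear
    else:
        v = 1.055 * linear ** (1 / 2.4) - 0.055
    return round(v * 255)

# 18% linear grey lands around code value 118, not 46 (= 0.18 * 255),
# which is exactly the mid-tone lift a log-like curve provides.
print(srgb_encode_8bit(0.0), srgb_encode_8bit(0.18), srgb_encode_8bit(1.0))
```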

Edited by rodeo_joe|1

Why don't we just present linearly-recorded images of the world, with light values as they really are....

 

- So many questions!

 

The light from the sun provides an illumination that can exceed 120,000 Lux at the Earth's surface. There's no viewing device that can come anywhere near to that brightness. There's also no surface that reflects zero light. Therefore any practical device for showing an image must compress real-world brightness values into a smaller range. If this is done linearly, it just doesn't look right; losing the impact of the original scene. Therefore we have to artificially add in the log function that our eye and brain automatically applies when viewing a scene with a very wide brightness range. This log function effectively fools the eye-brain combo into believing it's seeing a wider brightness range than it really is.
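A toy sketch of that compression, assuming a purely logarithmic mapping from scene luminance onto the display range (real tone curves are more sophisticated, and the 120,000 lux ceiling is just the figure quoted above):

```python
import math

def log_tonemap(luminance_lux, black=1.0, white=120_000.0):
    """Map scene luminance onto a 0..1 display range on a log scale,
    so each doubling of light gets an equal share of the output."""
    lum = min(max(luminance_lux, black), white)
    return math.log10(lum / black) / math.log10(white / black)

# Full sun, an indoor room (~300 lux) and deep shade (~10 lux) all land
# at usable, well-separated display values instead of crushing the low end.
for lux in (120_000, 300, 10):
    print(lux, round(log_tonemap(lux), 2))
```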

 

OK. I'm sure that's an over-simplification, but I'm not an expert in vision science, and a fuller explanation would take many thousands of words, quite a few illustrations and pages of complex maths.

 

Google is your friend, guide and mentor. I'm sure the answers to all your questions are already out there if you look for them. Failing that, any decent library.

Edited by rodeo_joe|1

Thanks very much, super super helpful! This is my first time posting in this community and I'm delighted to receive such a detailed and kind response. Thank you.

With this background, I can now go on to research waveforms and digital cameras and their log systems without being even more confused!

 

Since this post is about understanding film response to light, I looked into exactly why film responds logarithmically, but couldn't find an answer, so I can only speculate. For anyone who might be interested, or who can offer better insight, I've written up my thoughts, as I noticed there aren't many threads about this online:

 

It might be to do with the fact that silver halides, once exposed to photons (units of light) that reduce them to pure silver specks (which form the latent image), cannot 'record' another photon exposure, so with increasing number of photons, there is a decreasing probability of hitting another silver halide, so film response slows down, because there is less and less material to be exposed with increasing exposure.

Or maybe it's because when a photon hits a silver halide, it splits the silver and halide, ejecting an electron which needs to travel and find a silver ion (a silver atom with a missing electron) to attach to, before turning it into a pure silver atom; if it doesn't find one in time, it might return to form the silver halide it originally was a part of. Maybe with increasing exposure, and an increased number of ejected electrons flying about, something happens to the probability of finding silver ions, because there are also equally more silver halide bits for the electrons to return to, spread out across the crystal lattice.

Or maybe it's to do with development. Developers turn the remaining unexposed silver halides into pure silver specifically in areas with already-reduced silver (silver specks formed by exposure), in a way amplifying the latent image. But developers work faster in areas with more silver specks, slower in areas with fewer; this makes sense, as areas with more silver specks have had more exposure, so need to be developed more. But are they developed in proportion to the original speck amount? As developer reduces silver halides, there will be increasingly more pure silver, maybe causing exponentially faster developing. But with areas of less silver, developers may take a long time to accumulate enough silver to increase development rate. This suggests that the development phase is not linear either.

I might be really wrong about this, and would need a scientist to explain this properly, but essentially, all this put together might somewhat explain why film doesn't just count photon by photon like a digital sensor does, and it's good enough for me :)


so with increasing number of photons, there is a decreasing probability of hitting another silver halide, so film response slows down, because there is less and less material to be exposed with increasing exposure.

 

- That's about the size of it, except it takes more than one photon to produce a development site within a given halide crystal.

 

Once a crystal is made developable, it's an 'all or nothing' situation, and during development the whole of the crystal tends to be reduced to filamentary silver, which incidentally is ejected from the crystal to end up adjacent to the original crystal site.

 

Also, halide crystals are of varying size and orientation, with consequent different chances of being hit by photons, and with different contributions to aggregate density after being reduced.
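For what it's worth, the multi-photon threshold mentioned above is already enough to produce a saturating response. A toy model (the 4-hit threshold and the rate constant are made-up illustration values, not film data): if photon hits per crystal are Poisson-distributed with a mean proportional to exposure, the fraction of crystals made developable rises slowly, then steeply, then flattens out.

```python
import math

def fraction_developable(exposure, r=4, k=1.0):
    """Fraction of crystals receiving at least r photon hits, where the
    hit count is Poisson-distributed with mean k * exposure."""
    mean = k * exposure
    # P(X >= r) = 1 - sum over n < r of e^-mean * mean^n / n!
    p_less = sum(math.exp(-mean) * mean**n / math.factorial(n) for n in range(r))
    return 1 - p_less

# Doubling exposure each step: the response accelerates, then saturates.
for e in (0.5, 2, 4, 8, 32):
    print(e, round(fraction_developable(e), 3))
```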

...in a way amplifying the latent image.

 

- Exactly! It's a chemical amplification process.

But are they developed in proportion to the original speck amount?

 

- No, as previously stated, halide crystal reduction is all-or-nothing. Actually much more 'digital' than a so-called digital sensor, which accumulates charge in an analogue fashion. That charge is only converted to a digital signal external to the sensor photosites, and after amplification.

As developer reduces silver halides, there will be increasingly more pure silver, maybe causing exponentially faster developing. But with areas of less silver, developers may take a long time to accumulate enough silver to increase development rate.

 

- No, there's little evidence for 'contagious' development. Although some developers take advantage of a 'physical development' effect, whereby dissolved silver (created by addition of a chemical solvent - e.g. sodium sulphite) is plated out onto the latent image.

 

To the best of my knowledge, the exact mechanisms and interactions on a molecular scale during development are still not fully understood, and are now not likely to be given the obsolescence of the technology.

 

Look for papers by Dr. Mees of Eastman Kodak laboratories for greater insight into the working of the film process.

Edited by rodeo_joe|1

You ask some good questions and I think you're on the right track. Before answering your post I actually tried to call a colleague, an inorganic chemist who is a reliable source of information for any kind of random discussion I want to have, and who I often bounce stuff like this off of, but I'm guessing he's not back yet from Christmas break. I also tried another colleague who is both a physical chemist and a photographer (although he's not touched film in 10+ years), so I'll do my best to wing it.

 

It might be to do with the fact that silver halides, once exposed to photons (units of light) that reduce them to pure silver specks (which form the latent image), cannot 'record' another photon exposure, so with increasing number of photons, there is a decreasing probability of hitting another silver halide, so film response slows down, because there is less and less material to be exposed with increasing exposure.

 

Sounds reasonable enough to me so far.

 

Or maybe it's because when a photon hits a silver halide, it splits the silver and halide, ejecting an electron which needs to travel and find a silver ion (a silver atom with a missing electron) to attach to, before turning it into a pure silver atom; if it doesn't find one in time, it might return to form the silver halide it originally was a part of.

 

This does seem a bit of a stretch. When the first halides turn to silver atoms, it would REASONABLY be expected that they would take their electron from the bromide (or more generally the halide, but probably bromide) ion to which they are bonded (or at least close to in the lattice). Of course, that leaves you with a single unbonded bromine, which is going to be quite reactive and want to find another to at least form Br2. It's not inconceivable to me that this could cause the formation of a second nearby atom of Ag metal.

 

As developer reduces silver halides, there will be increasingly more pure silver, maybe causing exponentially faster developing. But with areas of less silver, developers may take a long time to accumulate enough silver to increase development rate. This suggests that the development phase is not linear either. I might be really wrong about this, and would need a scientist to explain this properly, but essentially, all this put together might somewhat explain why film doesn't just count photon by photon like a digital sensor does, and it's good enough for me :)

 

I'm not totally sure about this one.

 

First of all, development is more or less governed by first-order kinetics, where the rate is directly proportional to reactant concentration (so concentration falls off exponentially with time). Since the developer is reducing AgBr to Ag metal, higher concentrations of AgBr would tend to lead to higher rates of development in that location. Of course, we also observe this to some extent in the phenomenon of "local exhaustion" of developer. It's why over-agitation during developing can increase contrast, and also why, with the right developer, we can use "stand development" and get (relatively) low contrast.

 

You have me thinking, though, and I want to talk to some folks and try to get a better answer.


P.S.

I think that in order to understand the non-linearity of density, you have to fully appreciate the digital (all-or-nothing) nature of halide crystal reduction.

 

Imagine a small area of film containing just one, near transparent, halide crystal. When reduced to opaque metallic silver, that area of film undergoes a near infinite increase in optical density. A true quantum-leap.

 

It's only when many such microscopic areas of film are aggregated and averaged that we get an illusion of shades of density. What we're really seeing is a dithered collection of black specks with transparent spaces in between. The statistics of randomness come into play, and tell us that if the averaged area is big enough, we'll never get a completely opaque covering of specks. And equally, that the slightest sign of an image requires a quantifiable jump from zero density to some detectable density. Except of course that there's background chemical noise in the form of a base fog to overcome.
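That dithering argument can be simulated: scatter opaque specks at random over a patch of cells and measure the average transmittance. (A crude sketch with made-up numbers; real grains overlap and vary in size.)

```python
import random

def average_transmittance(grain_count, area_cells=10_000, seed=42):
    """Drop opaque specks into random cells of a patch; transmittance is
    the fraction of cells left clear. Each cell is all-or-nothing, yet
    the aggregate reads as a smooth 'grey'."""
    rng = random.Random(seed)
    covered = set(rng.randrange(area_cells) for _ in range(grain_count))
    return 1 - len(covered) / area_cells

for grains in (0, 1_000, 5_000, 20_000):
    print(grains, round(average_transmittance(grains), 3))
```

Even with twice as many specks as cells (20,000 on 10,000), the patch still transmits roughly e^-2, about 14%, of the light: random placement piles specks on top of one another, so a completely opaque covering is never reached.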


Up until the late 1800s, exposure and its effect (film blackening) were little understood. This was a period of great inquiry, and two men, Ferdinand Hurter and V.C. Driffield, researched, published, and received medals from the Royal Photographic Society in 1898.

 

Engineering of the era accomplished much, but the math was challenging -- no calculators, no computers, only pencil and paper and, thank the gods, the slide-rule. The marvelous "slip-stick" solved complex math using logarithmic notation: addition and subtraction of logs substituted for multiplication and division. I went to school in the sixties and I used one every day.

 

We measure the Intensity of light passing through developed film, its Transmission = the amount of light that gets through divided by the total light energy that hits the film.

 

We measure Opacity = total light Intensity that hits the film divided by the amount that gets through.

 

Thus O = I/T. As an example: if T = 85%, then the opacity is 1/0.85 ≈ 1.176.

 

Now D = density: D = log O = log(I/T).
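The arithmetic above, worked through in Python:

```python
import math

transmission = 0.85            # 85% of the light gets through
opacity = 1 / transmission     # O = I/T
density = math.log10(opacity)  # D = log10(O)

print(round(opacity, 3), round(density, 3))  # 1.176 0.071
```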

 

H&D used logs because all other engineers used logs. It was widely understood that human hearing (dB, the decibel) and human vision could be charted using logarithmic curves. Also, logarithmic graphs use less paper than linear graphs, and some (including me) say they are elegant while linear graphs are awkward.

 

A logarithmic graph of film blackening due to exposure and developing resembles half of a bell curve. We divide this curve into regions: toe, straight-line, shoulder, and the region of solarization. We experiment, making films and developers, and measure and graph. We can glean all manner of data: we can calculate a contrast value, determine what paper grade to use, what enlarger exposure, what color filters to mount on the camera or enlarger. Take heart, you can learn how to do this stuff in about 10 or 12 years.


We measure Opacity = total light Intensity that hits the film divided by the amount that gets through.

 

Thus O = I/T. As an example: if T = 85%, then the opacity is 1/0.85 ≈ 1.176.

 

Now D = density: D = log O and D = log I/T

 

- Sorry Alan, but opacity isn't defined that way, since such a scale would go to infinity, and therefore would not be graphically representable on any sensible scale. The usual scale used for opacity is either a direct percentage of light blocked, 100(1 - T/I), or a normalised scale with a maximum of 1: 1 - T/I.

 

We do indeed need the direct function I/T in order to calculate density, but its tendency to infinity makes the direct opacity function impractical for graphical representation. Hence the use of its logarithmic form, density.


@rodeo_joe|1 -----

 

Transmission is usually called “T” and is expressed as a percent. T = the amount of light that gets through any area divided by the total amount of light that hits that area.

 

Opacity = Total amount of light that hits any area divided by the amount of light that gets through that area. O = 1/T

 

If my original post was in error – this should right the wrong. Don’t worry, I will catch on in the next 55 years or so.


Alan, the problem isn't with your method of working out opacity, it's with representing opacity in any meaningful way!

 

I spent several hours reverse-plotting one of Kodak's H&D curves into spreadsheet data and linearising the result. I used the simple little Windows program ScanIt. The reverse-engineered graphs follow:

 

Original H&D curves - [graph attached]

Linearised, representing opacity as a percentage of light blocked - [graph attached]

Linearised, using opacity as antilog density - [graph attached]

As can be seen, both methods of representing opacity are fairly useless. The first because it compresses the higher densities, and the second because the lower densities are badly represented.
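The linearising transform itself is simple enough to sketch in Python with made-up (log exposure, density) sample points (an illustrative S-curve only, not the Kodak datasheet values):

```python
# Hypothetical (log10 exposure, density) points roughly tracing an S-curve;
# NOT taken from any real datasheet.
hd_points = [(-3.0, 0.25), (-2.0, 0.40), (-1.0, 1.00), (0.0, 1.90), (1.0, 2.30)]

linearised = [
    (10 ** log_e,                 # exposure in lux-seconds
     100 * (1 - 10 ** -density))  # opacity as % of light blocked
    for log_e, density in hd_points
]

for exposure, opacity_pct in linearised:
    print(f"{exposure:10.4f} lux-s  {opacity_pct:5.1f}% blocked")
```

Notice how the whole lower half of the curve collapses into the first fraction of a percent of the linear exposure axis, which is exactly the problem the graphs above show.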

 

And because the exposure scale has also been linearised, we're really only seeing what happens at higher exposure levels, with half of the exposure range compressed into about the first 1% of the X axis.

 

Fundamentally, that's why we use logarithmic scales: neither of the linearised graphs is at all useful in visually representing both the low exposure/density and high exposure/density parts of a film curve on one plot.

 

Given a big enough piece of graph paper, a meaningful linear representation might just be possible, but we're looking at ranges of several hundred to one for opacity, and several thousand to one for exposure. So a linear representation just isn't sensible.

 

Darn! Just noticed I didn't change the labelling of the Y axis. On the second graph it should be labelled 'Opacity %'.

Edited by rodeo_joe|1

Yes.

 

First, the range of light intensity, from full sun to open shade (indirectly lit by the sky), is already fairly large, and then add in the different reflectance of light objects vs. dark ones. The intensity range can vary over a factor of 1000, or more. A factor of 1024 is 2 to the 10th power (convenient in f/stops or shutter-speed steps).

 

Next, consider Beer's law. The reduction in light going through an absorbing medium is exponential in thickness. Consider a more common case in the real world: it is a sunny day, and you put on sunscreen. The bottle says SPF 30. It reduces the amount of UV that gets to your skin by a factor of 30. But you put on twice the recommended amount (and it isn't so easy to measure the amount). You now have two layers that each reduce the UV intensity by a factor of 30, so the result is a factor of 900 decrease.

 

Note also that SPF 30 does not have 3 times the UV-absorbing chemical of SPF 10, but instead log(30)/log(10), or about 1.5 times more.
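That Beer's-law arithmetic can be checked numerically; a small sketch treating SPF simply as the reciprocal of the transmitted UV fraction (a simplification of how sunscreen ratings actually work):

```python
import math

def attenuation(absorbant_amount):
    """Beer's law: the transmitted fraction falls exponentially
    (base 10 here) with the amount of absorbant in the light path."""
    return 10 ** -absorbant_amount

def spf(absorbant_amount):
    """Protection factor = reciprocal of the transmitted fraction."""
    return 1 / attenuation(absorbant_amount)

print(spf(1.0))                  # one 'unit' of absorbant gives SPF 10
print(spf(2 * math.log10(30)))   # two layers of SPF 30 stack to SPF ~900
print(round(math.log10(30), 2))  # SPF 30 needs ~1.48x the absorbant of SPF 10
```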

 

Anyway, since absorption is exponential in the amount of absorbant, it is usual to use a logarithmic scale for optical absorption, commonly a base-10 log. (I don't know why sunscreens don't use this system.) The light absorption by a filter is exponential in the amount of absorbant, and likewise that of developed film in the amount of silver present (or color dye for color films).

If developed silver were linear in exposure, then optical density (the log of the ratio of light going through) would be linear, such that transmittance (light going through) decreases exponentially.

 

If all silver grains are the same size, then they have a somewhat equal likelihood of absorbing enough light to be developed. This, then, results in a very high-contrast film.

Mixing silver nitrate and potassium bromide tends to result in a large variation in the size of the silver bromide grains formed, large ones more sensitive than small ones. The timing and mixing concentrations can be adjusted to generate the appropriate grain-size distribution.

The result, then, is a film sensitive over a wide range of intensities, and that can give a wide range of optical absorption (on a log scale).

 

Note, then, the complication for digital sensors: the output is linear in light intensity, not the logarithmic sensitivity that we desire. One needs to be able to process a large voltage range to image a scene of ordinary intensity differences.

We can look at objects in direct sun, or indoors lit by small lamps. We expect them to photograph the same, over such a large intensity range.

-- glen

