Film Negative Invert and Processing in L* Gamma FAQ

Discussion in 'Digital Darkroom' started by dmitry_shijan, Mar 23, 2021.

  1. As for those strange 25,600 ISO tests - at insane ISO speeds (ISO 25,600 is 8 stops of underexposure relative to base ISO) the sensor generates noise across the whole tonal range, so those tests are just useless here.
  2. It comes from manufacturers' data sheets, which are published online. These values reflect subjects with a normal (i.e., 6:1) contrast ratio. Higher values refer to high-contrast resolution targets (i.e., 1000:1).

    249 ppi > 3x 80 lpi

    Most 24 MP cameras have anti-aliasing filters, which reduce the effective resolution by about 30%. High-resolution Sony cameras, 42 MP and 61 MP, do not have an AA filter. AA filters effectively eliminate moiré patterns in common subjects. Moiré occurs with high-resolution sensors too, but the patterns are very small and relatively unobtrusive. A notable exception would be fabrics, where repetitive patterns occur throughout a large area. In my experience, a Leica M9-P, 18 MP, no AA, is consistently sharper than a 24 MP camera (I have several) with AA, using the same lenses.
  3. Let's go through this sloooowly.

    1. Film grain does not move over time, therefore any 'temporal noise reduction' is useless against it.

    2. The noise contribution from a modern CMOS-sensor digital camera is negligible compared to film grain - unlike the old CCD film-scanner sensors.

    3. Multi-shot stacking does nothing to reduce film grain, and almost nothing to reduce the already very low level of digital noise.

    Full frame from 100 ISO negative film -
    100% crop from above frame -
    That's film dye-clouds ('grain') you can see, not digital noise.

    Here's a much closer, optically enlarged, look at the dye clouds in the above picture -

    And here's the camera noise at the same magnification. Using a neutral density filter in place of the film, and with the same post-processing done on it -
    See any noise worth a damn? Because I don't.

    If you want to reduce grain noise, then use a small aperture on your copying lens to induce diffraction. Or just throw the lens slightly off-focus.

    Multi-shot stacking is a complete waste of time. It does nothing to reduce 'grain' and imperceptibly little to reduce already near-invisible digital noise.

    And don't waste your time re-inventing the filmholder. Just get a decent negative carrier from an old enlarger.
  4. If you believe that, then you obviously haven't read the specification - found here.

    It has a small linear section in the dark region that's fudged into a gamma 3 log curve. As I pointed out several posts ago.
  5. Use of a fully electronic shutter would be essential to avoid motion at the pixel level for multi-sampling. The fact that grain was blurred in previous examples clearly indicates that motion occurred. That said, there is no justification for multi-sampling when copying color negatives (or slides). Noise is not a significant factor at low ISO values (< 25600).
  6. Make your mind up.
    Is this phantom digital noise in the shadows or the highlights?
  7. My stand system is based on 15mm aluminum rods and blocks used in cinema camera systems. It is very short and very stable. I also use the electronic shutter. Some samples may be blurred due to poor alignment between camera and film. The samples with the man in the grey blazer and the people in colorful clothing with backpacks are sharp enough.

    If you don't like stacking - don't use it. I agree that at higher resolutions it may be nearly useless, because the noise structure is much smaller than the film grain.
    At smaller resolutions it makes a visible difference and helps to clean up artifacts. The noise is still visible even on normally exposed frames.
    Do the tests. Shoot 5 frames in burst mode, and show/hide the layers one by one. You will see some digital noise dancing in each frame.
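    For anyone who wants to try this experiment numerically rather than with layers, here is a minimal pure-Python sketch (the function name and the toy 2x2 "frames" are illustrative) of what burst stacking actually does: a per-pixel median across aligned frames suppresses temporal sensor noise, while any pattern that is identical in every frame - such as film grain - passes through untouched.

```python
import random
import statistics

def median_stack(frames):
    """Per-pixel median across a burst of aligned frames.

    Temporal (per-shot) sensor noise varies from frame to frame, so the
    median suppresses it; a fixed pattern such as film grain is the same
    in every frame and survives unchanged.
    """
    return [
        [statistics.median(frame[y][x] for frame in frames)
         for x in range(len(frames[0][0]))]
        for y in range(len(frames[0]))
    ]

# Tiny synthetic demo: a constant "grain" pattern plus random read noise.
random.seed(0)
grain = [[100, 140], [90, 160]]   # fixed pattern, identical in every shot
frames = [
    [[p + random.gauss(0, 3) for p in row] for row in grain]
    for _ in range(5)
]
stacked = median_stack(frames)
# The stacked result converges toward the grain pattern (noise reduced);
# the grain itself is untouched - both sides of the argument above.
```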

    L* gamma is not a log curve. I have no idea why I should care which parts of which formula it uses. It works perfectly with film scans, and that is enough for me. During inversion it produces "symmetrical", uniform results that require a minimum of additional manual tonal adjustment in the final photos. Do your own tests. If you don't like it, don't use it.
    Some background for geeks, from Digital Photography - Marcel Patek: Monitor gamma: "As one can see, the L-star response is clearly linear in all brightness ranges. This means that doubling the value of RGB always changes value of L by the factor of two or that by stepping the RGB values by e.g., 10 points will change the L values by a constant increment (in this case by 100/255*10 = 3.9). Also, since incremental changes in L are perceptually uniform, changes from dark to bright values in a synthetic grayscale (RGB form 0-255) will be perceived as smooth and uniform."

    That optical enlargement test example looks nice.
    Here is my optical-crop example of the same frame. This is the largest magnification I can manage on my bellows. This optical crop is virtually equal to the full frame scanned at about 16,000 x 24,000 = 384 MP.

    So yep, those are colored, saturated film grain seeds. And I guess that colored grain look must somehow be preserved as a starting point in smaller-sized scans.

    Problem 1:
    With 26 MP scans there is a conflict between the film's saturated grain particles and digital noise that is nearly the same size at that magnification. As a result, some saturated aliasing and moiré artifacts overlap the film grain. Probably I need to experiment with camera scans at higher optical resolution and stitching.
    Problem 2:
    Anti-aliasing filters in raw processing apps are very different from basic chroma noise reduction filters, and they remove digital aliasing artifacts as well as all the saturated seeds of film grain. Surprisingly, this does not depend much on scan size or film grain size. Saturation loss in the film grain particles is visible in the 26 MP example as well as in the optically enlarged "virtual 384 MP scan". Probably I need to search for better processing workflows.
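    A rough sketch of why chroma noise reduction desaturates grain "seeds" (function names and the one-pixel 1-D "seed" below are illustrative; real raw converters use far more sophisticated filters): blurring only the chroma planes spreads an isolated colored speck's saturation over its neighbours while leaving its luminance intact, so colored grain turns gray.

```python
def rgb_to_ycocg(r, g, b):
    """Lossless-style YCoCg split: Y is luma, Co/Cg are chroma."""
    co = r - b
    tmp = b + co / 2
    cg = g - tmp
    y = tmp + cg / 2
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    tmp = y - cg / 2
    g = cg + tmp
    b = tmp - co / 2
    r = b + co
    return r, g, b

def box_blur(values, radius=1):
    """Simple 1-D box blur with edge clamping."""
    n = len(values)
    return [sum(values[max(0, i - radius):min(n, i + radius + 1)]) /
            len(values[max(0, i - radius):min(n, i + radius + 1)])
            for i in range(n)]

def chroma_denoise(row):
    """Blur only the chroma planes of a row of RGB pixels; keep luma."""
    ycocg = [rgb_to_ycocg(*px) for px in row]
    ys  = [p[0] for p in ycocg]
    cos = box_blur([p[1] for p in ycocg])
    cgs = box_blur([p[2] for p in ycocg])
    return [ycocg_to_rgb(y, co, cg) for y, co, cg in zip(ys, cos, cgs)]

# A single saturated red "grain seed" in a gray row:
row = [(128, 128, 128)] * 3 + [(200, 90, 90)] + [(128, 128, 128)] * 3
out = chroma_denoise(row)
# The seed's luma survives, but its red/blue imbalance (saturation) is
# spread over the neighbours and weakened - the desaturation of colored
# grain described above.
```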

    Last edited: Apr 9, 2021
  8. Examine the L-star function:

    C' = 1.16*C^(1/3) - 0.16 for C ≥ 0.008856
    C' = 9.033*C for C < 0.008856

    If it's not logarithmic, then why is there a power of 1/3 (gamma 3) in the above function?

    It's not much different from the non-linear part of the sRGB function:
    C' = 1.055*C^(1/2.4) - 0.055

    The bogus term 'linear' refers to an attempt to produce a similar brightness variation for a set interval of RGB pixel values. An aim that's never going to work over more than a very limited set of brightness values.

    Even with a meagre 25 digital levels between doubling/halving brightness intervals (i.e. 25 levels per stop), we could only represent just over a 10 stop range in an 8 bit image with such a system. This would show severe posterisation and banding in the brightest tones.

    The L* gamma 3 allows about 60 levels per stop, which would limit the range to no more than about 4 stops if followed exactly. Hence the linear region, which extends the range considerably, while sacrificing the 'ideal' of a fixed bit-value per stop.

    Here's a short table of L* RGB values versus normalised brightness. Calculated according to the published formula above.
    1.0 = 255 (obviously)
    0.5 = 195
    0.25 = 147
    0.125 = 108
    0.0625 = 78
    0.03125 = 54
    0.01563 = 34
    Does that look in any way 'linear'?
    By any sensible definition of the word.
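    The table can be reproduced, give or take a digital level of rounding, directly from the formula quoted above. A minimal sketch (the function name is illustrative):

```python
def lstar_encode(c):
    """CIE L* transfer function, normalised to 0..1: the cube-root
    segment with the short linear toe quoted earlier in the thread."""
    if c >= 0.008856:
        return 1.16 * c ** (1 / 3) - 0.16
    return 9.033 * c

# Normalised linear brightness (halving per row = one stop) -> 8-bit L*.
# The shrinking gap between successive rows is the point being argued:
# the spacing per stop is nowhere near constant, i.e. not "linear".
for c in (1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625):
    print(f"{c:.5f} -> {round(255 * lstar_encode(c))}")
```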
  9. I really don't get what you are attempting to prove or deny with those copy-pasted numbers. If a gamma-based curve uses a segment based on some formula, that doesn't mean it should automatically be called a log gamma. It is simply incorrect to use the term "log gamma" for L* or for sRGB.
    There is a special name for that specific curve: L* (or L-star, or sometimes lightness gamma). Please use the correct term and don't confuse people.
  10. It occurred to me that image stabilization can be unstable, causing a shift of two or more pixels between shots, and sometimes within a shot. My digital Nikons were notorious for sudden shifts when long lenses were used on a tripod at slow shutter speeds. I have never seen that with Sony A7s, but it's a possibility.
  11. Oh, wriggle wriggle.
    L star, or L* (as I correctly called it), isn't a true gamma curve, and I completely agree. But it isn't linear either, and its transform certainly involves a nonlinear cube-root segment.

    If you actually read through my posts, you'll see I've never called L* a logarithmic curve. It's a chimera cooked up in some cheese-dream factory somewhere. Neither linear nor logarithmic. Neither fowl nor beast.

    What the heck is your argument? And why is L* such a good thing?

    I'm getting perfectly good scans and inversions using both sRGB and ProPhoto (gamma 1.8) profiles applied to the RAW input.

    Give me one compelling reason why L star is better. Not just a few random example pictures, but some good technical reason(s), that work reliably over several film types, exposure and lighting conditions. Then L* might be worth considering.

    Otherwise, cooking up some hybrid working space with the tone-curve from this, and the colour co-ordinates from that and goodness knows what white-point, just sounds slightly deranged! Where's the logic behind it?
  12. The camera stand and base plate are all metal and very stable. There is no shake at all, even at the pixel level, even with the mechanical shutter.
    The setup is still not finished, so currently I just use the film holder from an old scanner and place it on top of a cheap furniture LED panel. This still produces a lot of focus variation across the frame.

    Stand Parts:
    SmallRig Multi-purpose Cheese Plate 1092
    SmallRig Super lightweight 15mm RailBlock v3 942
    SmallRig Baseplate with Dual 15mm Rod Clamp 1674
    15mm Rods

    Focusing system parts:
    Novoflex dual rail MINOLTA Auto Bellows
    RafCamera RMS female to M39x1 male thread adapter with M42x1 front female thread (manufactured based on my idea)
    RafCamera 18mm clamp to RMS male thread adapter for Minolta 5400 DPI scanner lens
    RafCamera Novoflex Minolta Bellows front plate with M39x1 (manufactured based on my idea)
    RafCamera Novoflex Minolta Bellows rear plate with M39x1 (manufactured based on my idea)
    Macro Extension Tubes M42 (used as a lens hood)
    Lens Mount Adapter Ring M39 Lens to Fujifilm

  13. Now who's cutting and pasting?
    Do the maths. The above is clearly not true.
    Mine's bigger!:p
  14. rodeo_joe|1, take your pills please.
  15. digitaldog (Andrew Rodney):

    "since incremental changes in L are perceptually uniform".

    It is not quite as perceptually uniform as some would believe.
    ColorWiki - Lab is warped
  16. digitaldog (Andrew Rodney):

    Lab assumes that hue and luminance can be treated separately and it assumes that hue can be specified by a wavelength of monochromatic light. Yet numerous experimental results indicate that this is not the case. For example, Purdy's 1931 experiments indicate that to match the hue of 650nm monochromatic light at a given luminance would require a 620nm light at one-tenth of that luminance. This is known as the Bezold–Brücke shift (see Wikipedia: Bezold–Brücke shift).
    Lab doesn't address this shift in the perception of hue as light intensity changes.

    Lab assumes that hue and chroma can be treated separately, but again, numerous experimental results indicate that our visual perception of hue varies with the purity of color.

    Mixing white light with a monochromatic light does not produce a constant hue, but Lab predicts it does, and this is particularly noticeable in Lab modeling of blues. This is the cause of the blue-purple shift often reported.
  17. If you read further, you may see that in that article Marcel Patek notes that mathematically the L* response is not exactly the same as Lab lightness. As I understand it, the L* gamma was designed to emulate the Lab lightness response inside RGB.
    I got the idea to use L* gamma for negative inversion a few years ago. It started from some partially unlucky attempts to invert negatives in the Lab color model. I noticed that when I transformed a scanned image from the input scanner color profile to Lab and inverted the L, a and b channels, I got a very uniform tonal result compared to the same tests in sRGB or gamma 2.2. But the problem was color. Lab is huge, and the camera input profile coordinates transformed to Lab seemed to fly too far away and started producing subjectively incorrect results after inversion (Lab is warped). So I thought: hey, I remember there was a special L* gamma somewhere in my memory; maybe it would help to get the best of both the RGB and Lab worlds. And yep, it did the magic. The tonal result after inversion was exactly the same as in Lab, but colors were inverted as expected, without surprises, as long as the working color space was large enough to contain the film gamut.
    I am not a mathematician and I don't do deep scientific research. For me it is enough that I see no difference with the naked eye in tonality between an inversion in real Lab and one in L* gamma. So L* gamma gives me a reference point for when my inversion is more or less symmetrical.
    L* also makes it easier to control contrast in the shadows and highlights with curves, because it provides less aggressive tonal compression in those areas compared to sRGB or pure gamma 2.2.
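    A minimal numeric sketch of what "inverting in a given gamma" means (function names are illustrative, and this ignores the orange mask and per-channel film curves entirely): the linear data is encoded with the chosen transfer curve, then the encoded values are complemented, so the choice of curve decides where the inverted tones land.

```python
def encode_gamma22(c):
    """Plain power-law gamma 2.2 encode, normalised 0..1."""
    return c ** (1 / 2.2)

def encode_lstar(c):
    """CIE L* encode, normalised 0..1 (cube-root segment + linear toe)."""
    return 1.16 * c ** (1 / 3) - 0.16 if c >= 0.008856 else 9.033 * c

def invert_in(encode, linear_value):
    """'Invert in gamma X': encode the linear value with curve X, then
    complement the encoded value (255 - v, in 8-bit terms)."""
    return 1.0 - encode(linear_value)

# The same linear negative density lands on different positive tones
# depending on which curve the inversion is performed in:
shadow = 0.05
print("gamma 2.2:", round(255 * invert_in(encode_gamma22, shadow)))
print("L*       :", round(255 * invert_in(encode_lstar, shadow)))
```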
  18. digitaldog (Andrew Rodney):

    All you have to understand is this: incremental changes in L(star) are NOT perceptually uniform.
  19. I really don't know what else to say about the L* gamma question. I have done inversion tests in different gammas, compared the results, and I like how the image looks in L* gamma. I really don't care whether it is perceptually uniform or not. A negative inverted in L* gamma just feels more "real-life-like" compared to sRGB gamma. It also produces the best separation between bright skin tones and extreme highlights, so the image looks more 3D-like.
    L* gamma is just an option that I suggest based on subjective comparison tests. Feel free to do your own inversion tests in gamma 2.2, or in sRGB gamma, or in whatever you want; compare, and use the gamma that better fits your subjective look and feel.
  20. A few days ago I discovered another interesting variable in the negative processing workflow. The exposure clipping level is always somehow linked to the color space.

    There is the original linear raw data, which was monitored with raw histograms and was shot unclipped.

    When you transform it to some large color space like ProPhoto RGB and check the histograms, it usually looks OK and is also unclipped.

    But if you transform the same raw source to the tiny sRGB color space, you get strong clipping. The original data is still there, so you need to bring down Exposure in the raw editor. This is no problem with raw when non-destructive color management is used.

    You can also do the same "raw-like" exposure trick with linear TIFF files from scanners if you use non-destructive color management. For example, there is an Exposure tool in PhotoLine, and you can apply it in real time to the source linear image before all color space transformations.
    But in Photoshop or other apps that provide destructive color management, it is impossible to bring back that clipped data. As a result, negatives inverted and AutoLeveled in sRGB start to look even worse.

    So this is just another argument to always work in wide color spaces.

    Examples. (Iridient Developer is useful for this illustration because it provides non-destructive color management and allows quickly previewing the histogram as if transformed to different color spaces in real time):

    Camera input color space and linear gamma non color managed - no clipping

    Transformed from camera input color space to ProPhoto L gamma - no clipping

    Transformed from camera input color space to sRGB L gamma - clipping in red channel

    Exposure in raw adjusted by -0.5 EV to bring back the "hidden" clipped data.
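    The exposure-before-clipping point can be sketched numerically (toy code with illustrative names, not any real raw pipeline): scaling in linear light before the clip preserves data that clipping first would destroy for good.

```python
def clip01(v):
    """Clip a channel value into the 0..1 range of the output space."""
    return max(0.0, min(1.0, v))

def develop(linear, exposure_stops=0.0):
    """Apply an exposure adjustment in linear light *before* clipping,
    as a non-destructive raw pipeline does."""
    return clip01(linear * 2 ** exposure_stops)

# A red-channel value that overshoots a small output space like sRGB:
red = 1.30

destructive = develop(clip01(red))        # clipped first: detail is gone
nondestructive = develop(red, -0.5)       # -0.5 EV first: detail is kept

# Once clipped, no negative exposure can bring the original value back:
recovered_late = develop(clip01(red) * 2 ** -0.5)
```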
