
L* gamma custom-made ICC profile

OK. Where does one get that?

A Google search for L* gamma comes up with nothing photo-related at all.

 

Please explain exactly which processing packages you're using and recommending. You've mentioned about a dozen so far.

It seems to me that a profile would have to be created for each film, and since Kodak alone changed their negative film dyes about once a year, that's a lot of profiles to produce, mostly without the benefit of any colour reference in the picture.

 

Do you have any real pictures to show us? Other than a random scattering of objects that we have no idea what they actually look like?

First of all, your non-mask-corrected source is clipped in the red channel, so you're working with an already broken image from the start. Delete it and re-scan without clipping.

But if the camera exposure is reduced to include (unnecessary) shadow detail in the red channel, then the blue channel's highlights bang up against the left-hand side of the graph!

 

If you want all of the RGB histograms comfortably included, then the easiest way is to remove the bias between them - the mask base colour.
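Numerically, cancelling the mask base colour is just a per-channel division (a minimal numpy sketch; the base values here are hypothetical, as if sampled from the unexposed film rebate):

```python
import numpy as np

# hypothetical orange-mask base colour, sampled from the unexposed film rebate
base = np.array([0.82, 0.55, 0.30])            # linear R, G, B

neg = np.array([[0.82, 0.55, 0.30],            # the film base itself
                [0.41, 0.28, 0.15]])           # a denser image area

balanced = neg / base                          # per-channel bias removed
print(balanced[0])                             # the base is now neutral: [1. 1. 1.]
```

This is the same operation that optical filtration or the camera white balance performs in the analogue domain.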

 

And if a profile can correct the mask, then it can surely correct a slight defect in the rendering of yellows.

Edited by rodeo_joe|1
Link to comment
Share on other sites

LOL, you are a really strange person.

I explained how to build an L* gamma ICC profile in the first posts and provided links to three sources.

You didn't read it.

And now you ask for an explanation of L* gamma.

 

Sorry, no more explanations for you. Do whatever you want and be happy.

Link to comment
Share on other sites

If something was already clipped during scanning or digitising, it can't be recovered. I wasn't talking about some magical recovery of data missing from the source images. I'm talking about existing data that gets clipped by the AutoLevels tool during editing and can then be recovered. When you apply AutoLevels you always clip some data, because that's how AutoLevels works. In 32-bit you can bring that original data back in one click with the "Output Levels" slider. In 8- or 16-bit you need more manual work with the "Input Levels" sliders in each channel.
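The 32-bit recovery trick can be seen in a small numpy sketch (the level endpoints 0.05 and 0.90 are made-up stand-ins for whatever AutoLevels would detect):

```python
import numpy as np

x = np.array([0.02, 0.50, 0.98])          # channel data before AutoLevels
lo, hi = 0.05, 0.90                       # made-up AutoLevels input points

stretched = (x - lo) / (hi - lo)          # some values now fall outside 0..1

clipped = np.clip(stretched, 0, 1)        # 8/16-bit integer: out-of-range data is gone
recovered = stretched * (hi - lo) + lo    # 32-bit float: the inverse mapping restores it
print(recovered)                          # original data intact
```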

Please make sure you read the information before posting statements like these.

Why do Autolevels always clip data? I'm using Epson Scan on Epson V600 and V850 scanners.

Edited by AlanKlein
Link to comment
Share on other sites

- AutoLevels clips data because it needs to sacrifice some amount of highlights and shadows to detect where each color channel starts and ends.

- There is a "Clip" setting in AutoLevels. It defines the amount of clipped highlights and shadows.

- It is possible to set "Clip" to 0.0 and apply AutoLevels without clipping, but then colors will usually be detected slightly incorrectly, because instead of normal, well-defined colors, AutoLevels will latch onto garbage and artifacts in the deep shadows and extremely bright highlights. This is why the "Clip" setting in AutoLevels usually should be set somewhere between 0.01 and 0.1.

- AutoLevels is a very delicate step in the workflow. Simple images with bright colors usually don't require much tuning, but with images that contain fog, dim scenes, large areas of sky or unusual lighting, you may need to experiment with different "Clip" settings in AutoLevels until you see subjectively pleasing colors.

- If you are OK with slightly clipped shadows and extreme highlights after AutoLevels, you may keep the image as is and skip the dynamic-range recovery and extra contrast steps. Even with slightly clipped highlights and shadows, the dynamic range of the final processed film negative outperforms any digital photo.
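The effect of the "Clip" setting can be simulated with a percentile stretch (a sketch on synthetic data; real AutoLevels implementations differ in detail):

```python
import numpy as np

rng = np.random.default_rng(0)
channel = rng.uniform(0.2, 0.7, 100_000)       # synthetic channel data
channel[:50] = 0.95                            # a few bright artifact pixels

def auto_levels(ch, clip=0.1):
    """Stretch the channel so that `clip` percent of pixels clips at each end."""
    lo, hi = np.percentile(ch, [clip, 100 - clip])
    return np.clip((ch - lo) / (hi - lo), 0, 1)

# clip=0.0: the artifacts define the endpoint, so the stretch is too weak
weak = auto_levels(channel, clip=0.0)
good = auto_levels(channel, clip=0.1)          # artifacts ignored, full stretch
```

With clip=0.0 the stretch reaches only as far as the artifact pixels allow; with a small clip the real image data fills the full range.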

 

As for the Kodak Ektar 100 negative: just scan it the same as any other film and process it as described.

I have never scanned films with Epson scanners, so I can't help much with the settings, but based on my V10 reflective scanner, I guess this is how it should work:

- Turn off color management and scan as positive film. Even with color management disabled, Epson scanners can't scan to linear gamma, but scanning to gamma 2.2 will also work more or less OK.

- In the Epson Scan app, if possible, adjust the Exposure settings manually and make sure the red channel isn't clipped in the histogram preview.

- Open the scanned image in a graphic editor and ASSIGN the Epson input ICC color profile (in Photoshop, for example: Edit -> Assign Profile...). The Epson input ICC profile is usually located in the system profiles folder and named something like Perfection V100 ref or Perfection V100 film. I'm not sure which one will work better here. It could be that Perfection V100 film is designed for specific negative-invert tasks in the Epson Scan app, so it is a good idea to start with the Perfection V100 ref ICC profile.

- Now CONVERT the image from the scanner input ICC profile to the ProPhotoRGB L* working profile (in Photoshop, for example: Edit -> Convert to Profile...)
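The ASSIGN/CONVERT distinction matters: assigning only changes how the existing numbers are interpreted, while converting rewrites them. A sketch using Pillow's ImageCms, with built-in profiles standing in for the Epson and L* profiles (which would really be loaded from the files named above):

```python
from PIL import Image, ImageCms

img = Image.new("RGB", (8, 8), (200, 120, 40))
src = ImageCms.createProfile("sRGB")    # stand-in for the Epson input profile
dst = ImageCms.createProfile("LAB")     # stand-in for the L* working profile

# ASSIGN: tag the image with a profile; pixel values stay untouched
img.info["icc_profile"] = ImageCms.ImageCmsProfile(src).tobytes()

# CONVERT: actually transform the pixel values between the two profiles
converted = ImageCms.profileToProfile(img, src, dst, outputMode="LAB")
print(img.getpixel((0, 0)), converted.mode)
```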

 

If you don't want to generate an L* working ICC profile manually in the apps described in the first post, you can simply download Elle Stone's Well-Behaved ICC Profiles package here: ellelstone/elles_icc_profiles (click the green button named "Code" and select Download ZIP).

For some reason Elle Stone uses the name "LargeRGB" instead of "ProPhotoRGB", and the name "labl" instead of L* (or L-star) gamma. More info is described here: Elle Stone's well-behaved ICC profiles and code

To avoid possible compatibility problems I suggest using the V2 ICC profile type rather than V4. So in that package look for LargeRGB-elle-V2-labl.icc, or simply download the single ICC profile here: https://github.com/ellelstone/elles_icc_profiles/raw/master/profiles/LargeRGB-elle-V2-labl.icc

Put it in the system profiles folder.

 

Next, follow the workflow described in the first posts of this thread...

Edited by dmitry_shijan
Link to comment
Share on other sites

OK. I finally managed to find where this "L* gamma" came from. It's the tone curve used in ECI-RGB version 2.0. It would have helped to mention that earlier. It's actually not a true gamma curve: like sRGB, it uses a linear region (up to bit value 23), with an offset fudge to cover the join, and to no real purpose. The upper part of the curve actually uses a steep gamma of 3, so it is far from linear.
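For reference, the curve in question is just the CIELAB lightness function normalized to 0..1 (a sketch using the standard CIE constants):

```python
def lstar_encode(y):
    """CIE L* lightness curve: y is linear luminance in 0..1, result is 0..1."""
    eps, kappa = 216 / 24389, 24389 / 27     # standard CIE constants
    # linear toe below eps, roughly a cube root above (gamma 3 when inverted)
    L = kappa * y if y <= eps else 116 * y ** (1 / 3) - 16
    return L / 100

print(lstar_encode(0.18))    # mid grey: about 0.495, i.e. L* roughly 49.5
```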

 

The ECI RGB v2 primaries are the same as NTSC but with a white point of D50. This offers no gamut advantage over AdobeRGB as far as I can tell. You gain slightly with cyans, but lose out on yellows and blues. Plus AdobeRGB uses a simple gamma 2.2 power law, which is easier to correct for.

However, I like Ektar 100 and would like to try it again. What's the best way to scan it to get the best color results?

Ektar doesn't have the best aligned curves.

Ektar-100.jpg.68f76c4aba0ce53791a990719715af05.jpg

If the offset is removed thus -

Ektar-100-2.jpg.7752ff9fa848090da4dcff04a563f3b3.jpg

You can see that the yellow dye image is still too contrasty, and when inverted gives rise to excessive blue in the highlights.

IMG_20210329_110513.jpg.356a94828e6dae94c905bf53cddd51cf.jpg

Therefore, after inversion the blue channel curve will have to be tamed, or both the green and red pulled up to match the blue.

 

In fact most of Kodak's negative films appear to have a weak masking that doesn't match the cyan and magenta dyes well to the yellow layer.

 

Profoto-100-1.jpg.d5cb9aa09da5995e5449bf639be679fb.jpg Profoto-100-2.jpg.7e58a42790c2f2d2a6f739748fb730a5.jpg IMG_20210329_110430.jpg.c637ef70a8236792a9e23bce163d0044.jpg

 

Fuji's Reala Superia is a bit better -

Fuji-Reala-Superia-1.jpg.ad1617f1449d87beb8a6a928d190372e.jpg Fuji-Reala-Superia-2.jpg.bd8feec0804d64628d2fca24f4995c34.jpg IMG_20210329_110551.jpg.9be6a1a19d2aa4d48fcf11d4c1310d70.jpg

 

I think the curve inversions give a better idea of what you're up against once the negatives are in the digital realm.

 

The shifting of the curves to overlay one another, as shown above, is exactly what cancelling the mask base colour does. This can be achieved by optical filtering, as previously 'discussed' - or dismissed out of hand for no good reason - or by using the camera white balance. The only downside to using the camera WB is that it usually requires a very low colour temperature to be set, which may increase noise in the blue channel.

 

What doesn't happen is that the function of the mask is impaired. It's simply like adding a neutral density to the cyan and magenta images to bring them more into line with the yellow. And then if the overall exposure is increased the least-dense parts of the negative will be aligned and all three curves can be exposed to the right to maximise bit-depth and tonal distribution.

Edited by rodeo_joe|1
Link to comment
Share on other sites

L* gamma encoding was kind of a 'rage' on the European side of the CMS gamma fans. This post from way back (Tuesday, March 11, 2008 8:46:58 PM), from a color scientist at Adobe, is a good read:

 

L* is great if you're making copies. However, in most other scenarios, L* out is vastly different from L* in. And when L* out is different from L* in, an L* encoding is very inappropriate, as illustrated below.

Let me provide an example for video. Let's say you have a Macbeth chart. On set, the six gray patches would measure around L* 96, 81, 66, 51, 36, 21.

Assuming the camera is Rec.709 compliant, using a 16-235 digital encoding, and the camera is set for the exposure of the Macbeth chart, the video RGB values would be 224, 183, 145, 109, 76, 46.

On a reference HD TV monitor they should reproduce at L* 95.5, 78.7, 62.2, 45.8, 29.6, 13.6. If say 2% flare is present on the monitor (for example at home), the projected values would be different again, here: 96.3, 79.9, 63.8, 48.4, 34.1, 22.5.

As you can see, L* out is clearly not the same as L* in. Except for copiers, a system gamma greater than 1 is a required feature for image reproduction systems aiming to please human eyes. For example, film still photography has a much higher system gamma than video.

Now, if you want an L* encoding for the video, which set of values would you use: 96, 81, 66, 51, 36, 21 or 95.5, 78.7, 62.2, 45.8, 29.6, 13.6? Either is wrong, when used in the wrong context. If I need to restore the scene colorimetry for visual effects work, I need 96, 81, 66, 51, 36, 21. If I need to re-encode the HD TV monitor image for another device, say a DVD, I need 95.5, 78.7, 62.2, 45.8, 29.6, 13.6.

In this context, using an L* encoding would be utterly confusing due to the lack of common values for the same patches. (Like using US Dollars in Canada.) Video solves this by not encoding in L*. (Admittedly, video encoding is still somewhat confusing. Ask Charles Poynton.)

When cameras, video encoders, DVDs, computer displays, TV monitors, DLPs, printers, etc., are not used for making exact copies, but rather for the more common purpose of pleasing rendering, the L* encoding is inappropriate as it will be a main source of confusion.

Are you planning to encode CMYK in L*, too?

Lars

Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)
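Lars's round trip is easy to reproduce numerically (a sketch assuming the Rec.709 OETF, 16-235 quantization, and a plain gamma 2.4 reference display, so the decoded figures land close to, but not exactly on, his monitor numbers):

```python
def Y_from_Lstar(L):                  # inverse CIELAB lightness
    return ((L + 16) / 116) ** 3 if L > 8 else L * 27 / 24389

def Lstar_from_Y(Y):                  # CIELAB lightness
    return 116 * Y ** (1 / 3) - 16 if Y > 216 / 24389 else Y * 24389 / 27

def rec709_oetf(y):                   # Rec.709 camera transfer function
    return 4.5 * y if y < 0.018 else 1.099 * y ** 0.45 - 0.099

patches = [96, 81, 66, 51, 36, 21]    # on-set L* of the six gray patches
codes = [round(16 + 219 * rec709_oetf(Y_from_Lstar(L))) for L in patches]
print(codes)                          # -> [224, 183, 145, 109, 76, 46]

# decode on a gamma 2.4 reference display and measure L* again:
shown = [round(Lstar_from_Y(((c - 16) / 219) ** 2.4), 1) for c in codes]
print(shown)                          # every value is lower: L* out != L* in
```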

Link to comment
Share on other sites

Note that "gamma" and "color space" are separate parts of an image's encoding. With the tools described earlier you can generate a custom working ICC profile with any gamma (tonal curve) you like and with any color space you like.

 

L* gamma is just a real-world, lightness-based tonal curve. It allows you to work in the RGB color model while getting the same "symmetrical", "physically based" tonal response as Lightness in the Lab color model.

This is why it is so unique and useful for film negative inversion.

 

An ICC profile with L* gamma is for image editing only; it should never be used in final exported 8-bit JPEGs.

Link to comment
Share on other sites

With the tools described earlier you can generate a custom working ICC profile with any gamma (tonal curve) you like and with any color space you like.

Well, that's what's confusing. You mention both a Prophoto profile import and L* gamma in the same sentence.

 

Where's the evidence that L* gamma actually improves anything?

 

All of the sample pictures of the child eating a strawberry show acceptable colour. The only variation is a slight difference in tone curve, which could easily be compensated for given access to the original 16 bit files. And none of the variations shown address the ugly posterised red skin tone in the shadow of the metal railing. Note - not the red scabs on the boy's arm.

 

The posterisation is probably a function of the original camera exposure, or simply a characteristic of the film used. So no digital tone curve is going to help with that.

 

What's needed are comparisons across a wide range of film types and subjects. Not just a couple of examples that have taken goodness knows how long to process.

 

Have you tried your recorded actions on a variety of scanned negatives? And if so, what were the results? Please show.

Link to comment
Share on other sites

Here is also an idea of how to emulate scanner multisampling with a DSLR.

There is a well-known, very effective, lossless temporal noise reduction method. It is based on multiple similar images blended with "Mean", "Median" or other special blending modes. It is well described in this article: Pat David: Noise Removal in Photos with Median Stacks (GIMP/G'MIC & Imagemagick)

 

This stacking option is also available in Photoshop but, as usual, it is tucked away in a rather hidden and confusing place:

File -> Scripts -> Load Files Into Stack -> check "Create Smart Object"

Layer -> Smart Objects -> Stack Mode -> Mean

 

This stacking option is also available in Affinity Photo, but currently not in PhotoLine. I hope the PhotoLine developers will add it in the future.

 

Mean — averages pixel content across the stack of images. Good for long exposure simulation and noise reduction.

Median — removes pixel content that is not consistent in each image. Suitable for object removal and noise reduction.

Outlier — exposes pixel content that differs in each image: great for sequence composites.

Maximum — uses the maximum pixel values from each image. Can be used for creative exposure blending where the subject is lighter than the background.

Minimum — uses the minimum pixel values from each image. Suitable for exposure blending where the subject is darker than the background.

Range — indicates areas that change across the image stack. Good for analyzing what has changed between each image.

Mid-Range — uses the middle pixel values from each image. Can be used to increase tonal range if used with bracketed exposures.

Total — produces the total value of pixels from each image. Usually results in overexposure, but can be used to lighten very underexposed imagery.

Standard Deviation — analytical: measures the distribution of information between the images. Useful for object removal as it clearly indicates areas that will be averaged out with a Median operator.

Variance — analytical: as Standard Deviation, indicates how pixel values are spread between images. More intense distributions are shown very clearly.

Skewness — analytical: highlights edge detail and indicates the intensity of pixel value distribution. Can be used to determine tonal and spatial differences between images.

Kurtosis — analytical: detects the peakedness of an image. A brighter result represents low noise levels and a tonal uniformity (most pixels at dominant gray level). Darker results represent greater noise and less tonal uniformity (more pixels further away from dominant gray level).

Entropy — analytical: represents the number of bits required to encode information in the stack. Could be used with stacked video frames (within the same scene or shot).

 

So in short:

1. You just need to quickly shoot 5-10 similar copies of the same film. Continuous Shooting (Burst Mode) will do the trick. Make sure your setup is stable and you don't move the camera or film during shooting.

2. Process the RAW files to TIFFs as described earlier, but don't invert or do any other processing yet.

3. Stack the images into one file with the "Mean" blending mode and save it as a single TIFF.

4. Process the negative with the workflow described earlier in my posts.
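Step 3 can also be done outside Photoshop with a few lines of numpy (a sketch; synthetic frames stand in for the TIFF copies, and file I/O is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.1, 0.9, (64, 64, 3))     # stand-in for the true film image

# simulate 10 burst-mode copies, each with independent sensor noise
frames = np.stack([clean + rng.normal(0, 0.03, clean.shape) for _ in range(10)])

mean_stack = frames.mean(axis=0)               # "Mean" stack mode
median_stack = np.median(frames, axis=0)       # "Median" stack mode

print(np.abs(frames[0] - clean).mean())        # single-frame error
print(np.abs(mean_stack - clean).mean())       # roughly 1/sqrt(10) of that
```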

 

This will clean up all possible digital noise without touching the film grain structure, and will make your source file denser at the pixel level.

 

This method should also be far less risky than HDR stacking, because it does not change the original tonal relations taken from the linear sensor data.

 

And a quick test to prove my theory. Here is a crop of an inverted and processed film negative. This sample was scanned with a camera and underexposed more than usual, to amplify the noise and show the camera's limits. This is a 400% scaled crop, but I can see the difference at 100% as well.

 

QhrfESn.jpg

XwpPAPr.jpg

lelGUVe.jpg

JnGsNhm.jpg

Link to comment
Share on other sites

Looks great if you want slightly more blurry 'scans'.

 

The digital noise isn't an issue even with a single frame, since it's at a level of about 1/10th of the grain/dye-cloud 'noise' that multi-shot scanning does nothing to reduce. - Unless, as in the first example above, there's some obvious image shift between shots.

 

Why not just increase the illumination level to get noise down? That's if it's even an issue to start with.

Link to comment
Share on other sites

Since my last post I've processed nearly 1000 negatives using the "NegativeLabPro" plugin for Lightroom. About 1/3 of my time is spent "scanning" film strips with a Sony A7Riv camera and Nikon 55/2.8 Macro, and 2/3 of the time straightening and cropping the images before processing. I used a Lumecube LED panel set to 5500K, with a CRI of 95, and a sampled WB to match in the camera. The negatives are a mix of Fuji Reala and Fuji Superia 400.

 

The results were surprisingly consistent, regardless of the emulsion, lighting, exposure level and environment. There were none I had to reject based on color balance. That said, the colors are not as bold as you would get from a minilab; they are actually quite close to Sony colors, that is to say "accurate". I spent roughly 4 hours on this project, averaging about 250 images/hour scanning, and about the same time in Lightroom. If better processing software comes along, I still have the negatives in RAW format.

Edited by Ed_Ingold
  • Like 1
Link to comment
Share on other sites

Temporal noise reduction doesn't blur any details unless you stack 100-1000 images.

From my observation, Median produces slightly higher micro-contrast than Mean, but the difference is nearly invisible.

Increasing the illumination level won't help much. Inspecting an IT8.7 test slide frame, I can see that a very small amount of digital noise always starts somewhere around the middle grey patches. Sure, it is nearly invisible, but it is always there. To get rid of all noise with illumination you would need to increase exposure a lot, and so clip a huge amount of dynamic range.

And this is not only about noise; it is about pixel density and pixel quality. By stacking multiple 14-bit "relaxed" Bayer-pattern images you get solid, dense, real 16-bit color data.

From my personal tests, a 5-image stack is OK, but a tiny amount of noise in the deepest shadows is still there. 10 stacked images look perfect, but take more time to shoot and process. Stacking more than 10-20 images is probably useless for film scans. It surely depends on the camera sensor, though: a lower quality sensor means more noise, and so more stacking is needed.

Also don't forget that stacking has strongly diminishing returns. The difference between 1 and 5 images always looks huge; the difference between 10 and 100 stacked images may be nearly invisible.
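Those diminishing returns follow directly from averaging statistics: noise in a mean stack falls as 1/sqrt(N), so each doubling of frames buys less (a quick numeric check on synthetic data):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.05                                      # per-frame noise level
shots = rng.normal(0.5, sigma, (100, 50_000))     # 100 noisy copies of a flat patch

for n in (1, 5, 10, 100):
    measured = shots[:n].mean(axis=0).std()
    print(n, round(measured, 4), round(sigma / n ** 0.5, 4))  # measured vs theory
```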

Link to comment
Share on other sites

And here are some interesting resolution and artifact tests.

In the past I was lucky (actually not so lucky) to use a MINOLTA DiMAGE SCAN 5400II. It is a great scanner, but it had a global problem with non-uniform backlight stripes, described here: Backlight stripes in MINOLTA DiMAGE Scan 5400 II or what? (too bad the uploaded photos are gone over time)

After many attempts to modify the LEDs or diffuse that defect away, I gave up on it. I decided to take the lens off that scanner and build a camera scanning system.

At the same time I had access to the smaller MINOLTA DiMAGE SCAN Elite II model, so I was able to take some shots from it for comparison.

 

For this test I built input ICC profiles for the camera and both scanners, based on a Kodak IT8.7 Scanner Calibration Target from Affordable IT 8.7 (ISO 12641) Scanner Color Calibration Targets.

Profile Type: Single gamma + Matrix

The camera scan was debayered in Iridient Developer with the Anti-Aliasing setting at 2.

The custom contrast camera curve was removed. No noise reduction, no sharpening, and no other adjustments were applied to the RAW file. 5 frames were stacked in Median mode.

A poor-quality consumer furniture LED panel was used as the backlight.

 

The scans from the scanners are in linear gamma with color management disabled.

 

All images were processed with my workflow described earlier:

Transform from the camera input ICC profile to the ProPhotoRGB L* gamma ICC profile -> Invert -> Apply RGB AutoLevels -> Recover the clipped data from RGB AutoLevels -> Contrast.

No custom white point picker.

This is a rather complicated frame I use specially for tests. It is very scratched and has a lot of extremely saturated colors.

 

E9j4owI.jpg

XpplCb5.jpg

nXbSJ0i.jpg

Link to comment
Share on other sites

And some anti-aliasing tests. With anti-aliasing turned OFF, the image looks very close to the source from the scanner: a huge amount of rainbow patterns and dots over the film grain structure, and aliasing artifacts around high-contrast white dust and scratches. This probably means that scanners use very basic internal debayering and don't do any anti-aliasing reduction.

Too much anti-aliasing filtration also doesn't look good. Film negatives are very sensitive to small changes in the source image, so even a small amount of extra filtration may decrease global saturation in the final processed image.

So after some tests I can suggest:

Anti-Aliasing: 2 (instead of the default 3)

No luma/chroma noise reduction.

No sharpening.

And of course it all depends on the camera and sensor technology. Sensors with OLPF filters need less AA filtration. Normal Bayer sensors need more filtration than Fuji X-Trans sensors.

 

pK5yNXY.jpg

Wan8iRb.jpg

Link to comment
Share on other sites

This probably means that scanners use very basic internal debayering and don't do any anti aliasing reduction.

Scanners almost invariably use a tri-linear sensor. They don't need de-Bayering because they don't use a Bayer CFA to start with.

 

The exceptions are some of Nikon's scanners that use an unfiltered CCD linear sensor and rapidly switch red, green, blue and IR LEDs to get the colour and defect-map channels.

 

Scanners are also prone to aliasing. It's an interaction between the regular digital sampling spatial frequency and the irregular 'grain' pattern.

 

Do some homework on how this stuff works.

From my personal tests 5 images stack is OK, but some tiny amount of noise in deepest shadows is still there

So, that would be the brightest parts of the negative that show most noise?

How does that work?

Edited by rodeo_joe|1
Link to comment
Share on other sites

Yep, "debayering" is probably not the correct word for single- or triple-row linear scanners, but they still do some sensor-to-image pixel processing and usually show pretty hard aliasing.

And yep, the brightest parts of the negative will show the most digital noise, because negatives are inverted. And the most digital noise is in the blue channel, because of the orange color cast in negatives.

Link to comment
Share on other sites

And yep, the brightest parts of the negative will show the most digital noise, because negatives are inverted.

LOL.

Where does this noise originate then?

If it's in the camera or scanner, then that's very strange behaviour; to have noise that's super-proportional to the signal level.

 

Use your eyes! That multi 'scan' composite is just blurred compared to the single shot version. To the point that even the grain isn't visible.

 

And why would you ever underexpose a camera copy to start with?

Edited by rodeo_joe|1
Link to comment
Share on other sites

Add fake noise to that image again and it will subjectively not look blurred :) It is a known visual effect: more noise adds a fake feeling of a more detailed image.

I'm just doing some tests here and trying to amplify certain effects. I hope that eventually I'll be able to get a more focused and grainy camera scan.

It seems you are confused about everything in this world. I really tried to explain every basic thing to you here.

Link to comment
Share on other sites

It seems you are confused about everything in this world.

Yes, well somebody is.

Here's what happens when you invert digital noise:

 

A Kodak greyscale and colour swatch shot at 25,600 ISO with all noise reduction turned off -

Frame.thumb.jpg.de5f05d4aa7de4b0e14f7219e69711b5.jpg

Can you see much noise in the two lightest patches?

 

A tighter crop of the greyscale -

Noise-strip.thumb.jpg.4a5997eebe2b67f546c65adbdf79d917.jpg

And a negative inversion of the above -

Inverted-Noise.thumb.jpg.64a59b82679f411d2b008da9577275eb.jpg

Note that what was the brightest step is now practically noise-free.

 

How it works is: Noise is indeed more visible in darker tones. However, noise also reduces with exposure. The nett result is that, after inversion, the darker tones get less noisy.

 

The noise is in fact more visible in mid tones, where the two effects of noise perception and real percentage noise cross over.
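That crossover can be checked with a toy model: additive sensor noise plus gamma encoding makes shadows visibly noisier, while inversion merely flips the tones and leaves the noise amplitude untouched (the noise figures here are illustrative, not measured from any real camera):

```python
import numpy as np

rng = np.random.default_rng(0)

def capture(exposure, read_noise=0.01, n=100_000):
    """Linear exposure with additive sensor noise, then gamma 2.2 encoding."""
    linear = np.clip(exposure + rng.normal(0, read_noise, n), 0, 1)
    return linear ** (1 / 2.2)

bright = capture(0.8)              # well-exposed patch
dark = capture(0.02)               # deep-shadow patch
print(bright.std(), dark.std())    # shadow noise is far larger after gamma encoding

inverted = 1 - bright              # inversion: a dark tone that keeps its LOW noise
print(inverted.std())              # identical to bright.std()
```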

 

But nobody in their right mind would use a digital camera set at 25,600 ISO and with its noise reduction turned off to copy negatives!

 

Incidentally, the 2.1D 'brightness range' of the above greyscale is almost exactly the same as the density range found in a colour negative. The cyan, yellow and magenta dye images each only have a brightness range of just over 100:1, which won't tax the ability of any digital camera worthy of the name.

Edited by rodeo_joe|1
Link to comment
Share on other sites

In multi sampling, you seem to lose a lot of edge detail. IMO, it's more important to retain sharpness than avoid noise/grain. I'm not convinced you are seeing noise, and not grain, in your illustrations, with the exception of Rodeo's shots of a color chart.

Edited by Ed_Ingold
Link to comment
Share on other sites

Once again: my examples are not 100% sharp because I don't yet have a proper film holder to produce decent frames. I'm currently working on a quality film holder project that may be useful to many people in the future.

 

Digital noise in digital sensors - in shadows.

Film negative - Inverted.

Digital noise in the film negative becomes visible - in the lights.

All my current examples are with NEGATIVES.

 

Diffused light - softer film grain look, but also nearly invisible dust and scratches.

Collimated light (usually used in film scanners) - sharper film grain look, but all dust and scratches become extremely visible.

 

Defocused or soft lens - softer film grain, but the same amount of digital noise.

 

26MP is not enough resolution to capture the sharpest possible film grain details.

 

My examples are only about 2 stops underexposed from the native camera ISO of 160.

Link to comment
Share on other sites

This is a recent example of negative conversion. It was done with a Sony A7Riv (61 MP), Nikon 55/2.8 Macro lens, and Nikon ES-2 film holder. The software was Lightroom, with the NegativeLabPro plugin. You can see dye clouds in the magnified sample, but no signs of noise. You will have to enlarge the sample panel at least 4x to see individual pixels. The film, as I recall, is Fuji Reala. The scene is the Charles Bridge in Prague.

 

Full Image

_7R41446.thumb.jpg.849c06cd52d6624c764f37f4e44e30a5.jpg

 

Pixel = Pixel Sample

Sample-1.jpg.db769f144cb6fa9eccb1b8d5f656bf27.jpg

Edited by Ed_Ingold
Link to comment
Share on other sites
