Everything posted by shijan

  1. I know what an unprocessed RAW file looks like; PhotoLine can also show the image in raw pattern mode like this. And in reality those are black-and-white pixels. They are usually shown as an RGB pattern only for a more human-friendly preview:
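That "RGB pattern" preview can be sketched in a few lines: each sensor pixel is monochrome, and the preview simply routes its value into the color channel matching its filter. A minimal numpy sketch, assuming an RGGB mosaic layout (the actual pattern depends on the camera):

```python
import numpy as np

def bayer_preview(mosaic: np.ndarray) -> np.ndarray:
    """Colorize a monochrome RGGB Bayer mosaic for preview.

    Each sensor pixel holds a single intensity value; for display it is
    placed into the R, G, or B channel that matches its filter color.
    No demosaicing (interpolation) is done here.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3), dtype=mosaic.dtype)
    rgb[0::2, 0::2, 0] = mosaic[0::2, 0::2]  # R sites
    rgb[0::2, 1::2, 1] = mosaic[0::2, 1::2]  # G sites on even rows
    rgb[1::2, 0::2, 1] = mosaic[1::2, 0::2]  # G sites on odd rows
    rgb[1::2, 1::2, 2] = mosaic[1::2, 1::2]  # B sites
    return rgb

# Tiny synthetic mosaic just to exercise the function.
mosaic = np.arange(16, dtype=np.float32).reshape(4, 4)
preview = bayer_preview(mosaic)
```

Every output pixel has exactly one non-zero channel, which is why such previews look like a fine red/green/blue checkerboard.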
  2. I don't know how to name that correctly, but looking at how it works in PhotoLine, I assume there is always some sort of reserved empty space in the debayered source image, and it is usually trimmed under the hood. PhotoLine operates with the imported raw file at a very low level, so you can see things that are impossible to see in other apps. For example, here is how a processed raw file looks when you open it in PhotoLine. Now we can turn off that auto-generated contrast curve: And next we can go even deeper and bring back all the trimmed empty data. This is totally useless empty data, so I guess other raw converters usually autodetect and trim it by default.
  3. Also, it seems every raw processor uses its own detection and trimming of the "empty" data range in the debayered raw file, so the same "flat" scene-referred export from different raw processors usually has slightly different brightness.
  4. From https://www.color.org/scene-referred.xalter - "The linear_RIMM-RGB_v4.icc profile contains a linear CCTF. It should be used when a linear scene-referred encoding is desired, but only with 16 (or more) bits per channel." - "Open the raw file in ACR. Note that the color rendering controls should be left at zero or linear in order to produce a scene-referred image." - "Set the Workflow Options Space to ProPhoto RGB and the Depth to 16 Bits/Channel." - "Open the converted (to scene-referred) raw file into Photoshop" - "Edit -> Convert to Profile and CONVERT to the linear_RIMM-RGB_v4.icc profile using the relative colorimetric rendering intent." As for me, this all looks the same as a simple export to a custom-made ProPhoto RGB with Linear gamma ICC profile directly from ACR :)
  5. I read deeper, and it seems the workflow described in that article is rather useless today. It was designed for Photoshop CS3 (2007), so it assumes ACR uses Process Version 2003 or 2010. Also, as I remember, in those days ACR could export only to sRGB, Adobe RGB and ProPhoto RGB profiles, so they created that "hack" to convert the exported image in Photoshop from ProPhoto RGB gamma 1.8 back to ProPhoto RGB gamma 1.0. Current versions of ACR can export directly to any installed ICC profile, so you can just switch to Process Version 2003 or 2010, reset all adjustments to 0 and export to a custom-made ICC profile, for example to "native" scene-referred ProPhoto RGB gamma 1.0 or to ProPhoto RGB L* gamma if your workflow requires it. To be honest, all those Lightroom/ACR hacks are very depressing, and I can't see any point in using them when there are a lot of other apps that can do a proper "flat" scene-referred export in a much simpler and more user-friendly way. It is a real mystery why it is so complicated for Adobe to simply add an option in ACR to remove the built-in contrast curve, as other raw editors normally can:
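For reference, the numeric core of that old "hack" is just a transfer-curve swap between gamma 1.8 and gamma 1.0 on the same ProPhoto primaries. A minimal sketch with plain power functions (the real RIMM/ROMM ICC curve also has a small linear segment near black, ignored here):

```python
import numpy as np

def gamma18_to_linear(encoded: np.ndarray) -> np.ndarray:
    """Undo ProPhoto RGB's gamma 1.8 encoding -> linear (gamma 1.0)."""
    return np.clip(encoded, 0.0, 1.0) ** 1.8

def linear_to_gamma18(linear: np.ndarray) -> np.ndarray:
    """Re-encode linear values with gamma 1.8."""
    return np.clip(linear, 0.0, 1.0) ** (1.0 / 1.8)

# The Photoshop "Convert to Profile -> linear_RIMM-RGB" step is,
# per channel, equivalent to the first function above.
encoded = np.array([0.0, 0.5, 1.0])
roundtrip = linear_to_gamma18(gamma18_to_linear(encoded))
```

Since the primaries don't change, the conversion is lossless per channel (at 16 bits), which is why exporting straight to a linear ProPhoto profile from modern ACR gives the same result in one step.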
  6. This is really useful info. By the way, I just compared those "flat" exported files processed in Lightroom/ACR with Process Version 2003 or 2010 and with all adjustments turned off, and they still don't look the same "flat" as real "flat" untouched exports from other raw editors. So that scene-referred image export may probably be the only technically correct way to go for Lightroom/ACR film scan exports. Overall I don't use Lightroom/ACR, so I may not understand it as well as I understand other tools.
  7. When it comes to negative processing, it is about: subjective comparison; understanding which variable causes changes in the final look; understanding whether a change was caused by a technical mistake or was just something related to the actual film look. A lot of things are subjective, and a lot of conclusions depend on personal technical level. Sometimes conclusions may look logical at some level, but they turn out to be incorrect in reality. Film produces a huge amount of saturation, so it requires a really large digital color space to work normally. Adobe RGB is rather small and can't fit film colors without clipping.
- Processing in Lightroom is incorrect. With normal processing in Lightroom/ACR it is impossible to output a "flat" uncorrected tiff file usable for proper invert and processing.
- You just refuse to test, compare, learn and try technically correct options. You also post a huge amount of incorrect info and incorrect conclusions based on it. This only confuses people.
- Batch convert from raw to tiff: one click on an action + a few mouse scrolls + grey point pick + contrast LUT. I already posted videos here. If some photos with some specific settings look ok by luck, it doesn't mean that things are ok.
- When you "align" channels you formally also reduce exposure a lot. This protects colors from clipping in the tiny Adobe RGB space. "Aligning" channels is a bad way to do it, because it distorts the original relations between colors, and the image looks incorrect after invert. Depending on the input profile quality, film color complexity and amount of saturation, the difference may be more or less visible. The unadjusted version is simply brighter overall, and so produces clipping in the tiny Adobe RGB space. Some tiny amount of colors may be clipped, but it may not be too visible on a low resolution histogram. This is why it is always better to monitor clipping on the film frame, not on the actual image.
- I provided the workflow and all possible explanations here. Test it, try it, use it or don't. I really don't care. some magic LOL
  8. It was the official Kodak spec for the ICC profile named "ProPhoto RGB". It consists of the ProPhoto RGB primaries and gamma 1.8. Some apps just use the ProPhoto RGB primaries with a different gamma. For example, for processing Lightroom/ACR internally transforms the debayered data to ProPhoto RGB primaries with linear gamma, transformed in the end to sRGB gamma, I guess (formally they name this Melissa RGB). Or, for example, in my workflow I use ProPhoto RGB primaries with L* gamma. It is ok to use any custom combination of primaries (color space) and gamma for your needs.
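A small sketch of the "same primaries, different gamma" idea: the tone curve is applied per channel to linear values and says nothing about the primaries. The curve below is the standard IEC 61966-2-1 sRGB formula; applying it to ProPhoto-primaries linear data gives what the post calls Melissa RGB (my reading of the post, not an official Adobe spec):

```python
import numpy as np

def srgb_encode(linear: np.ndarray) -> np.ndarray:
    """Standard sRGB transfer curve (IEC 61966-2-1), per channel.

    Works on linear values in ANY primaries -- encoding with this curve
    does not change which color space the values live in.
    """
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

encoded = srgb_encode(np.array([0.0, 0.18, 1.0]))
```

Swapping in a pure 1.8 or 2.2 power curve, or the L* curve, against the same primaries gives the other combinations mentioned in the post.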
  9. Can you provide a raw example? You are just doing something wrong somewhere in your workflow that causes that strange saturation. I really have no idea why it looks like this in your example. Here is a link to my RAW file example if someone wants to play with it: X-E4_DSF0397 TEST.RAF The idea of this thread was to provide a well-behaved and stable starting point and explain the sources of all possible variables and problems here. If things are done technically correctly, the image should look nice from the start.
  10. Yep, ACR can't show a linear raw source histogram. It can only show histograms based on the color space selected in the export options. So rodeo_joe|1 missed the info about which color space was selected in the export options there.
  11. Also, here is another side by side example of why it is a bad idea to randomly align channels or "remove the orange mask" by adjusting WB in RAW. The relations between the original colors just become distorted, and you get over/under saturation and hue shifts. To exclude variations, this example was made with the original factory Fujifilm X-E4 camera input profile. Due to the low quality LEDs both test examples look ugly, but the colors in the image with the orange mask removed in RAW look even more broken.
  12. By the way, it appears that there IS a way to export the source image without the additional built-in contrast curve from Lightroom/ACR! Just switch to Process Version 2003 or 2010 and reset all sliders to 0. Lightroom/ACR can also do color management, so you may export your scan directly to ProPhoto RGB with L* gamma or to any other color space/gamma you like.
  13. "applying the same actions to the unaligned histogram (mask uncompensated) version, we get this" You just got saturation clipping here because some colors don't fit to sRGB or AdobeRGB color space. But when you adjust WB and align channels your extreme colors start to fit better to those tiny sRGB or AdobeRGB color spaces and so no clipping in final image. Problem in Lightroom - it is not too clear sometimes how exact it use color management. It do all processing natively inside ProPhotoRGB but same as other raw editors it show histograms clipping level based on color space selected in export options. "Not only do the histograms change position, but their relative height and shape changes too." This is because Lightroom can't remove custom contrast camera curve (actually it can but need some hack for this), so exported images are always distorted with some sort of random additional contrast from start. This may add additional artifacts during invert and autolevels. By the way, are you Invert and AutoLevels directly in Lightroom or later in Photoshop? It's more an art than a science only when instead of searching tech mistakes in color management, people attempt to adjust and fix those mistakes during additional color correction. If you like to share raw source of that flower here, i can show you that it is easy to get unaligned histogram (mask uncompensated) and same time have no any saturation clipping. And same time in this scenario colors will look better from start because relations between colors will not be distorted by some random WB adjustmnent.
  14. Here is also a side by side comparison example of how a custom made camera input profile works and how it helps to fix problems with a low quality backlight. Examples processed only with Invert + AutoLevels. No grey point adjustment or any sort of "snap neutral midtones" used here. As usual, these are extreme examples from some point of view. I would never suggest anyone use basic low quality LEDs as a film scan backlight. And here they are with the Grey Point picked from the same area in both images:
  15. Ok, here I mean "raw" as a source image that is debayered to RGB but not yet color managed. Sure, sRGB is the output color space for the web. sRGB was shown here just for test purposes, to see the difference in exposure clipping in a more obvious way.
  16. A few days ago I discovered another interesting variable in the negative processing workflow: the exposure clipping level is always somehow linked to the color space. Explanation: there is original linear raw data that was monitored with RAW histograms and was shot unclipped. When you transform it to some large color space like ProPhoto RGB and check the histograms, it usually looks ok and also unclipped. But if you transform the same raw source to the tiny sRGB color space, you get strong clipping. The original data is still there, so you need to bring down Exposure in the raw editor. There is no problem with raw when non-destructive color management is used. You can also do the same "raw-like" exposure trick with linear tiff files from scanners if you use non-destructive color management. For example, there is an Exposure tool in PhotoLine, and you can apply it in realtime to the source linear image before all color space transformations. But in Photoshop or other apps that provide destructive color management it is impossible to bring back that clipped data. As a result, your negatives with invert and AutoLevels applied in sRGB start to look even worse. So this is just another argument to always work in wide color spaces. Examples (Iridient Developer is useful for this illustration because it provides non-destructive color management and allows you to quickly preview the histogram as if transformed to different color spaces in realtime):
Camera input color space and linear gamma, non color managed - no clipping
Transformed from camera input color space to ProPhoto L* gamma - no clipping
Transformed from camera input color space to sRGB L* gamma - clipping in the red channel
Exposure in raw adjusted to -0.5 to bring back the "hidden" clipped data.
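The order-of-operations argument here reduces to a toy numeric example (made-up values, not real raw data): whether the Exposure adjustment is applied before or after the clip decides if highlight detail survives.

```python
import numpy as np

# Linear source data; 1.3 stands for a highlight that fits the source
# encoding but exceeds the smaller output space's 0..1 range.
source = np.array([0.2, 0.8, 1.3])

# Destructive order: clip first (e.g. already baked into an sRGB file),
# then reduce exposure by -0.5 EV -- the highlight detail is gone.
destructive = np.clip(source, 0.0, 1.0) * 2 ** -0.5

# Non-destructive order: reduce exposure -0.5 EV on the linear source
# first, then clip -- the 1.3 highlight now fits and survives.
non_destructive = np.clip(source * 2 ** -0.5, 0.0, 1.0)
```

After the destructive path, the third value is just the clipped 1.0 scaled down; after the non-destructive path it still carries the original highlight information.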
  17. I really don't know what else to say about the L* gamma question. I did invert tests in different gammas, compared the results, and I like how the image looks in L* gamma. I really don't care if it is perceptually uniform or not. A negative inverted in L* gamma just feels more "real life like" compared to sRGB gamma. It also produces the best separation between bright skin tones and extreme highlights, so the image looks more 3D-like. L* gamma is just an option that I suggest based on subjective comparison tests. Feel free to do your own invert tests in gamma 2.2, in sRGB gamma, or in whatever you want; compare, and use the gamma that fits your subjective look and feel better.
  18. If you read further, you may see that in that article Marcel Patek notes that mathematically the L* response is not exactly the same as Lab lightness. As I understand it, L* gamma was designed as an attempt to emulate the Lab lightness response inside RGB. I got the idea to use L* gamma for negative inverts a few years ago. It started from some partially unlucky attempts to invert negatives in the Lab color model. I noticed that when I transform a scanned image from the input scanner color profile to the Lab model and do the invert with the L, a, b channels, I get a very uniform tonal result compared to the same tests in sRGB or gamma 2.2. But the problem was with color. Lab is huge, and camera input profile coordinates transformed to Lab seem to fly away too far and start to produce subjectively incorrect results after invert (Lab is warped). So I thought: hey, I remember from long ago that there was a special L* gamma; maybe it helps to get the best from both the RGB and Lab worlds. And yep, it did the magic. The tonal result after invert was exactly the same as in Lab, but colors were inverted as expected, without surprises, as long as the working color space was large enough to contain the film gamut. I am not a mathematician and I don't do deep scientific research. For me it is enough that I don't see with the naked eye any difference in tonality between an invert in real Lab and in L* gamma. So L* gamma gives me some sort of reference point where my invert is more or less symmetrical. L* also makes it easier to control contrast in shadows and highlights with curves, because it provides less aggressive tonal compression in those areas compared to sRGB or pure gamma 2.2.
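For reference, the L* curve being discussed is the CIE lightness function applied per RGB channel and scaled to 0..1. A sketch with the standard CIE constants:

```python
import numpy as np

def lstar_encode(linear: np.ndarray) -> np.ndarray:
    """CIE L* lightness function used as a per-channel RGB 'gamma'.

    Same formula as the L channel of Lab, scaled from 0..100 to 0..1.
    """
    linear = np.clip(linear, 0.0, 1.0)
    eps = (6 / 29) ** 3          # ~0.008856, cube-root/linear split point
    kappa = (29 / 3) ** 3        # ~903.3, slope of the linear segment
    L = np.where(linear > eps,
                 116 * np.cbrt(linear) - 16,
                 kappa * linear)
    return L / 100.0

def lstar_decode(encoded: np.ndarray) -> np.ndarray:
    """Inverse of lstar_encode."""
    L = np.clip(encoded, 0.0, 1.0) * 100.0
    eps = (6 / 29) ** 3
    kappa = (29 / 3) ** 3
    return np.where(L > kappa * eps,
                    ((L + 16) / 116) ** 3,
                    L / kappa)

mid_grey = lstar_encode(np.array([0.18]))[0]
```

Note how 18% grey lands almost exactly at the middle of the encoded range (~0.495), which is one way to see why the curve feels tonally symmetrical for inverts.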
  19. The camera stand and base plate are all metal and very stable: no shake at the deepest pixel levels, even with the mechanical shutter. It is still not a finished setup, so currently I just use a film holder from an old scanner and place it on top of a crappy furniture LED panel. This still produces a lot of focusing variation across the frame.
Stand parts:
SmallRig Multi-purpose Cheese Plate 1092
SmallRig Super lightweight 15mm RailBlock v3 942
SmallRig Baseplate with Dual 15mm Rod Clamp 1674
15mm Rods
Focusing system parts:
Novoflex dual rail
MINOLTA Auto Bellows
RafCamera RMS female to M39x1 male thread adapter with M42x1 front female thread (manufactured based on my idea)
RafCamera 18mm clamp to RMS male thread adapter for Minolta 5400 DPI scanner lens
RafCamera Novoflex Minolta Bellows front plate with M39x1 (manufactured based on my idea)
RafCamera Novoflex Minolta Bellows rear plate with M39x1 (manufactured based on my idea)
Macro Extension Tubes M42 (used as a lens hood)
Lens Mount Adapter Ring M39 Lens to Fujifilm
  20. I really don't get what you are attempting to prove or deny with those copy/pasted numbers here. If a gamma-based curve uses some part based on some formula, it doesn't mean that it should automatically be named Log gamma. It is simply incorrect to use the definition "Log gamma" for L* or for sRGB. There is a special name for that specific curve: L* (or L-star, or sometimes Lightness gamma). Please use the correct definition and don't confuse people.
  21. My stand system is based on aluminum 15mm rods and blocks used in cinema camera systems. It is very short and very stable. I also use the electronic shutter. Some samples may be blurred due to poor alignment between camera and film. The samples with the man in the grey blazer and the people in colorful wear with backpacks are sharp enough. If you don't like stacking, don't use it. I agree that at higher resolutions it may be near useless, because the noise structure is much smaller than the film grain. At smaller resolutions it makes a visible difference and helps to clean up artifacts. The noise is still visible even on normally exposed frames. Do the tests: shoot 5 frames in burst mode and show/hide the layers one by one. You will see some digital noise dancing in each frame. L* gamma is not a Log curve. I have no idea why I should care which parts of which formula it uses. It works perfectly with film scans, and this is enough for me. During invert it produces "symmetrical" uniform results that require a minimum amount of additional manual tonal adjustments in the final photos. Do your tests. If you don't like it, don't use it. Some background for geeks here, Digital Photography - Marcel Patek: Monitor gamma: "As one can see, the L-star response is clearly linear in all brightness ranges. This means that doubling the value of RGB always changes value of L by the factor of two or that by stepping the RGB values by e.g., 10 points will change the L values by a constant increment (in this case by 100/255*10 = 3.9). Also, since incremental changes in L are perceptually uniform, changes from dark to bright values in a synthetic grayscale (RGB form 0-255) will be perceived as smooth and uniform." That optical enlargement test example looks nice. Here are my optical crop examples of the same frame. This is the largest magnification that I can do on my bellows. This optical crop is virtually equal to a full frame scanned at near 16000 x 24000 = 384MP. So yep, those are colored saturated film grain seeds.
And I guess that colored grain look must somehow be preserved as a starting point in smaller sized scans. Problem 1: with 26MP scans there is a conflict between saturated film grain particles and digital noise that is near the same size at that magnification, and as a result, some saturated aliasing and moire artifacts overlap with the film grain. Probably I need to experiment with camera scans at higher optical resolution and stitching. Problem 2: anti-aliasing filters in raw processing apps are very different from basic chroma noise reduction filters, and they remove digital aliasing artifacts as well as all the saturated seeds of film grain. And surprisingly, it doesn't depend much on scan size and film grain size: saturation loss in film grain particles is visible in the 26 MP example as well as in the optically enlarged "virtual 384MP scan". Probably I need to search for better processing workflows.
  22. As for those strange 25,600 ISO tests: at insane ISO speeds (25,600 ISO is 8 stops underexposed) the sensor generates noise across the whole tonal range, so those tests are just useless here.
  23. I don't insist that stacking is a must-have option for every scan. Median stacking is probably the last thing you need to do in the camera scanning and negative invert process. You will never see the difference between all those extreme resolutions and stacking on 10x15 prints or when downscaled to HD or 4K monitor resolution. But once again: film grain is a static texture in multiple scans of the same frame; digital noise is a dynamic texture, and it changes randomly in every frame. Median stacking removes the moving noise structure and doesn't touch the static film grain structure. I have no idea where those "Reala and Velvia barely deliver 80 lpi" numbers came from, how and when it was tested, and why I should believe that. I did my own tests, and I see that a 42MP camera scan provides more details in cloth textures than a 26 MP camera scan of a crappy old faded scratched negative shot on a basic consumer film roll with a basic 50mm lens. Also, don't confuse the specs: 2x, 4x are rather abstract definitions. 4x of what? 2x megapixels is not the same as 2x pixel resolution. Measurement in megapixels is rather stupid, but we live in this reality. To a consumer, 42MP sounds like about 2x more than 26MP, WOW!!! But in reality, 42MP vs 26MP is just a small increase in linear resolution. To increase linear resolution 2x from 26MP you need 103MP:
11 MP = 4032 x 2688
22 MP = 5646 x 3964
26 MP = 6240 x 4160
42 MP = 7920 x 5328
61 MP = 9504 x 6336
103 MP = 12480 x 8320
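The static-grain vs dynamic-noise claim is easy to simulate (synthetic data here, not a real scan): a per-pixel median across burst frames suppresses the component that changes per frame while leaving the component common to all frames untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Film grain": a fixed random texture, identical in every exposure.
grain = rng.normal(0.0, 0.05, size=(100, 100))
scene = 0.5 + grain

# Nine burst exposures of the same frame, each with fresh sensor noise.
frames = np.stack([scene + rng.normal(0.0, 0.05, size=scene.shape)
                   for _ in range(9)])

# Per-pixel median across the stack: the static grain survives,
# the frame-to-frame noise is strongly attenuated.
stacked = np.median(frames, axis=0)

# Residual error vs the noise-free (grain-only) scene.
single_err = np.abs(frames[0] - scene).mean()
stacked_err = np.abs(stacked - scene).mean()
```

In this toy setup the stacked result sits much closer to the grain-only scene than any single frame does, which is exactly the effect described above.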
  24. How can you be sure that it is not some amount of digital noise partially overlapping the film grain structure? To see where digital noise starts in your camera, just do stacking tests with any regular sharp and finely detailed digital photo (not a film scan). Or just shoot a few frames in burst mode and inspect them one by one to see where the moving digital noise structure starts to become visible. With the same basic test you can confirm or deny your "details loss" theory. Digital noise is not individual pixels; it is a dynamic structure, from some point of view similar to film grain. I never suggested using Log gamma here; L* gamma is not a log gamma. I provided side by side tests showing 42MP vs 26 MP vs 11 MP. There is a difference in fine detail visibility between 42MP and 26 MP, and that was a sample shot taken with basic consumer film and a rather soft 50mm portrait lens. Can we see your measurements where the A7RIV has more than 4x the resolution of film?