Film scans as 'flexible' as digital captures?

Discussion in 'Digital Darkroom' started by willy_boots, Nov 11, 2010.

  1. For a few months now I've been scanning on my Epson V500 and I've been having fun getting back into film -- especially medium format. I like to scan big files, high resolution and 48-bit. Something I've noticed when fiddling around in Lightroom is that when I tweak the colours the changes happen in very large increments, larger than when I'm adjusting a DNG from my D90. For example, I adjust white balance and a few notches warmer brings on a relatively larger blast of yellow than when I adjust a DNG. Why is this? Are scans less flexible during post-production than raw digital files?
    And when should one adjust colour temperature, curves, and saturation would you say? Do most people do it during the scan?
     
  2. I've never used Lightroom for editing, so I am not sure how the interface works with scans, but I have generally found that film scans (and I have done them for over 10 years) have what seems to be much more resilience in post than digital captures, whether MF or DSLR. I don't think the separation is as large as it was even 5 years ago, but it is noticeable if one works on a number of different files.
    I think you would see the difference working in PS, but as I said, I don't work with Lightroom, and possibly that interface does not mesh as well with that type of file.
     
  3. Scanned or digital raw - it makes no difference. Once the image is in the computer there are three kinds of pixels - red, green and blue. It doesn't matter where they came from. You don't actually see a raw file in Photoshop. What you see is a TIFF or PSD representation of what's in the raw file.
     
  4. Scanned or digital raw - it makes no difference. Once the image is in the computer there are three kinds of pixels - red, green and blue. It doesn't matter where they came from.​
    I am afraid I have to disagree with this. Digital raw files are in the camera’s color space, whatever that may be. Scans will not be in the film’s color space, but rather a standard color space. The transformation between the two color spaces may be nonlinear, in which case information would be lost.
    Since digital raw files postpone this color space conversion until they are converted to regular images, they are more flexible than scans. Note that neither one actually contains complete information about the color of the light that went into the camera.
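    The information loss described here can be sketched numerically. Below is a minimal Python illustration; the 3x3 matrix is invented for the example and is not any real camera's profile. A saturated color in a wide "camera" space is matrix-converted to a smaller working space, clipped to its gamut, and can then no longer be recovered by inverting the matrix.

```python
import numpy as np

# Hypothetical 3x3 matrix mapping a wide "camera" space to a
# smaller working space (values invented for illustration).
M = np.array([[ 1.6, -0.4, -0.2],
              [-0.3,  1.5, -0.2],
              [ 0.0, -0.3,  1.3]])

camera_rgb = np.array([0.95, 0.10, 0.05])   # a deeply saturated red

working_rgb = M @ camera_rgb                # convert...
clipped = np.clip(working_rgb, 0.0, 1.0)    # ...and clip to the gamut

# Trying to go back: after clipping, inverting the matrix no
# longer reproduces the original camera values.
recovered = np.linalg.inv(M) @ clipped
print(working_rgb, clipped, recovered)
```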
     
  5. Willy, when you see the controls on the RHS of the LR screen that produce these large changes, are you in the "Library" or "Develop" module?
    Your phrase, "a few notches warmer", makes me suspect you might be using the "Quick Develop" panel of the Library module. You will have better controls in the full-blown "Develop" module.
    Tom M
     
  6. For example, I adjust white balance and a few notches warmer brings on a relatively larger blast of yellow than when I adjust a DNG. Why is this? Are scans less flexible during post-production than raw digital files?
    And when should one adjust colour temperature, curves, and saturation would you say? Do most people do it during the scan?​
    I think that the difference is that Lightroom changes its color correction for raw files depending on the white balance settings. The scans are already color corrected, so the white balance controls change the white balance and nothing else. Since the two cases will be running different code, it is also possible that the Lightroom developers just neglected to normalize the white balance controls to the same scale for both.
    In terms of white balance, scans are less flexible than digital raw files; see my post above. Negative film tends to have significantly greater dynamic range than digital sensors, though. Positive film tends to have only slightly greater dynamic range than digital sensors.
    If you can get white balance, curves and saturation close to what you want during the scan, that would probably be better for a 24-bit (8 bits per channel) scan. For 48-bit scans I would probably recommend getting only white balance close, and leaving a linear tone curve and unmodified saturation.
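    The 24-bit vs 48-bit advice can be demonstrated with a quick sketch. The gamma-0.4 curve below is just a stand-in for a strong post-scan adjustment: pushing an 8-bit scan hard leaves visibly fewer distinct display tones than pushing the same data scanned at 16 bits per channel.

```python
import numpy as np

# A smooth gradient, quantized to 8 and to 16 bits per channel.
ramp = np.linspace(0.0, 1.0, 4096)
eight   = np.round(ramp * 255) / 255
sixteen = np.round(ramp * 65535) / 65535

def strong_curve(x):
    # A heavy tone-curve push, e.g. brightening shadows.
    return x ** 0.4

# Count distinct output levels after the adjustment is
# re-quantized to 8 bits for display.
levels8  = len(np.unique(np.round(strong_curve(eight)   * 255)))
levels16 = len(np.unique(np.round(strong_curve(sixteen) * 255)))
print(levels8, levels16)   # the 8-bit source loses tones (posterization)
```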
     
  7. david_henderson www.photography001.com
    Personally I do not spend much time whilst scanning to try and colour match the original. I find that's easier and better done in an editing programme afterwards. Both PS and Lightroom are IMO far more sophisticated tools than the software that flatbed scanners run on.
    Are scans more flexible? It's difficult to compare like with like. Your V500 files are likely to be much bigger than most digital cameras' outputs, so slower to edit. There is a tendency to push around large scanned files that in reality contain no more detail (and maybe less) than a DSLR image, because flatbeds usually overclaim considerably on effective resolution. It may be that this puts limits on the print size of a scan that a good DSLR might better. On the other hand it might be the other way round if you have (say) a 6MP DSLR vs a great scan from medium format. You'd need to ask a much more specific question to get a useful answer.
     
  8. I am afraid I have to disagree with this. Digital raw files are in the camera’s color space, whatever that may be.
    True, to a certain extent. However, whatever you are seeing on the screen consists of red, green and blue pixels. Color balance only alters the relative intensity of these colors. At that point, the conversion is already accomplished and a color space assigned.
    In Lightroom, all adjustments are non-destructive regardless of format (e.g., raw, TIFF or JPEG), making it feasible to try different scenarios. However, I find no difference in the behavior of controls between scans and digital images, whether in Lightroom or Photoshop.
    This has nothing to do with dynamic range. Scanning levels the playing field. Besides, the dynamic range of capture is separate from that of the film. Negative film compresses the dynamic range of the subject nearly 2:1. On the other hand, a high-contrast reversal film like Velvia expands the range by nearly the same factor. Dynamic range of capture of the latest generation of DSLRs is comparable to the best negative color film and four to five stops better than reversal film.
     
  9. I've noticed this jumpy behavior editing Raw files opened in Photoshop straight from ACR.
    I tend to agree mostly with what Joe C said; I'd like to know where he gets proof of his claims, just out of curiosity, not that I dispute what he said. But to add to what he says: I base what I state on pure observation, plus the fact (told to me by Adobe program engineers) that ACR's and Lightroom's tools are scaled/tuned/designed to act on high-bit linear data by way of parametric instructions/algorithms. The preview you see in ACR/LR is just a reasonable facsimile of this behavior, since the original Raw data is never being touched.
    Since a scan has already been converted to gamma-encoded pixel data, the scaling and behavior of the tools within the preview reflect this. The reason this also happens on Raw files opened from ACR/LR into Photoshop is that Photoshop's tools are acting on the gamma-encoded output (ProPhoto RGB, Adobe RGB, sRGB) chosen in the Raw converter.
    Photoshop is a pixel editor and ACR/LR are parametric instruction editors.
    This is my guess based on my connecting the dots from what I've read in the Adobe forums and the same observation made by Willy.
    I wonder what the behavior of the previews would be if Willy scanned a high bit linear version of the image in his Epson software and edited directly in Photoshop. In my scans of 35mm negatives, it's not as jumpy, but still not as fluid and smooth as editing high bit Raw digital camera files in ACR/LR.
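    A rough sketch of why tools scaled for linear data can feel "jumpy" on gamma-encoded scans. The pure 2.2 power here is a simplification of the sRGB curve, and the 1.25 "slider step" is invented: the same multiplier moves a mid-gray noticeably further when applied to already-encoded values than when applied to linear values first.

```python
# Simple power-law gamma (an approximation of sRGB, assumed for
# illustration only).
def encode(x): return x ** (1 / 2.2)

linear = 0.18    # scene-linear mid-gray
step = 1.25      # a modest "slider" multiplier

# Correct order: adjust the linear value, then encode for display.
display_from_linear = encode(linear * step)

# Same multiplier applied directly to already-encoded data,
# as happens when the source is a gamma-encoded scan.
display_from_encoded = encode(linear) * step

print(display_from_linear, display_from_encoded)
```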
     
  10. However, whatever you are seeing on the screen consists of red, green and blue pixels. Color balance only alters the relative intensity of these colors. At that point, the conversion is already accomplished and a color space assigned.​
    The white balance only alters the relative intensity of these three entire color channels, yes. Color correction does more than that, affecting different colors differently. With the scans, the color correction has already been done in the files. With the digital raw files the color correction has not been done yet, rather the software performs it before showing you the output and it can be redone from scratch if you change the color space.
    Dynamic range of capture of the latest generation of DSLRs is comparable to the best negative color film and four to five stops better than reversal film.​
    I imagine that some negative film still does better, but granted they have gotten the read noise down so that some DSLRs can manage around 14 stops of dynamic range at their minimum ISO.
    I'd like to know where he gets proof of his claims, just out of curiosity, not that I dispute what he said.​
    For the details about Lightroom I am partly speculating and partly half remembering articles I had read about Lightroom using a varying combination of two different camera profiles to correct white balance. But it is a matter of fact that the digital raw files are in the camera’s native color space while the scans are not.
     
  11. That is some imagination to think that any DSLR can achieve 14 stops of dynamic range at any ISO. Please share which one has done so . . . ;-)​
    The Nikon D7000 allegedly has 13.87 EV of dynamic range at ISO 100. I am willing to round that off to 14. http://www.dxomark.com/index.php/en/Camera-Sensor/All-tested-sensors/Nikon/D7000
    You also have this wrong as the film itself is the "RAW" material . . .​
    I was describing the scan of the film, not the film itself.
     
  12. Far exceeding the much more expensive D3x, not likely - 9 -> http://www.dpreview.com/reviews/nikond3x/page21.asp
    According to DxOMark, the dynamic range of the D7000 is very slightly better than that of the D3X at ISO 100. It is possible that Nikon is gaming the test conditions or that DxOMark has a poorly designed test, but without some evidence of either one I see no reason to dismiss the test results.
    The price of something is no guarantee that it exceeds cheaper items at every metric.
     
  13. DXO's scale are all elevated compared to DPREVIEW's.​
    DPReview's methodology for testing dynamic range does not appear to be very good. They use the brightness control when they test raw files (ACR Best: Exp. -1.10 EV, Blacks 0, Brightness 125, Contrast 0, Curve Linear). If ACR’s brightness control is the same as that of Lightroom, then it is a nonlinear transform. Color correction is probably also in effect. Neither of those two things seems conducive to accurate test results.
    By comparison, DxO claims to take the R, Gr, Gb and B channels directly from the raw file and analyze them separately. That seems like a more reasonable method for a dynamic range test than relying on Adobe Camera Raw without even zeroing out the brightness control.
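    For what it's worth, the dynamic-range figure being argued about is just a base-2 log of the ratio between the saturation signal and the noise floor. A sketch with invented sensor numbers (not measurements of any real camera):

```python
import math

# Dynamic range in stops: log2 of the full-well (saturation)
# signal over the read-noise floor, both in electrons.
# Values are invented for illustration.
full_well = 40000.0   # electrons at saturation
read_noise = 2.5      # electrons RMS

stops = math.log2(full_well / read_noise)
print(round(stops, 2))   # about 13.97 stops
```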
    Perhaps you need to learn the tools your using better?​
    Your initial post was rude, unhelpful and misspelled the word “you’re”.
     
  14. Trouble is DxOMark doesn't care about how the pictures look. Most photographers would not want to drag out 14 stops from a digital file. Just because it can be measured doesn't mean it will look good, and it does not mean that someone could ever achieve that with a real-world image.
     
  15. What does the discussion on dynamic range claims between film and digital have to do with the OP's pointing out the abrupt behavior of LR's tools on his scans?
     
  16. What does the discussion on dynamic range claims between film and digital have to do with the OP's pointing out the abrupt behavior of LR's tools on his scans?​
    It has nothing to do with his question about Lightroom's behavior, but may be relevant to his topic and question about whether film scans or digital raw are more ‘flexible’.
    I apologize for responding to the troll posts, though.
     
  17. The white balance only alters the relative intensity of these three entire color channels, yes. Color correction does more than that, affecting different colors differently.
    Let's think about that a bit. There are only three colors. Adjusting the white balance affects the same channels as any other color adjustment, only within a more limited regime. In any color adjustment, there are only two degrees of freedom plus luminance. Theoretically, white balance refers to the color temperature of a black body radiator, in which all three channels are affected in a predictable manner. In practice only the red and blue channels are used, increasing one while decreasing the other so as to maintain a constant luminance, leaving the green channel untouched. A true black body emits less light (i.e., less luminance) as the temperature decreases, hence all three channels. We could do that, then increase the luminance (all three channels proportionately), but the results are mathematically the same.
    Lightroom makes this adjustment much more intuitive than Photoshop, since it is done in degrees Kelvin, equivalent to the light source at the time of exposure, to restore a daylight color balance. It is easier to program in C than in assembler, and to calculate celestial mechanics using vectors rather than Cartesian coordinates. Sometimes nomenclature makes all the difference.
    Returning to the OP's question and inferences, raw images contain as much information as collected. Some of this information is lost once you execute the conversion by closing ACR or exporting from Lightroom. You also lose information from a film scan when you make in-process adjustments. It is possible to retain "raw" film scans then make adjustments later. The main advantage of this is to try different conversion scenarios without the time needed to redo the actual scan. IMO, it's better to do it right the first time. Time and real estate are among the things they're not making more of these days ;-)
     
  18. Has anyone else noticed that the OP has never made a return appearance?
    Tom M
     
  19. Theoretically, white balance refers to the color temperature of a black body radiator in which all three channels are affected in a predictable manner.​
    No, white balance has nothing directly to do with color temperature. All white balance refers to is the ratio of the different color channels. Ideally for a white object, after white balancing, the red, green and blue channels would all be equal. Using the color temperature as one of the white balance control inputs sometimes makes it easier to correct the white balance, but is not actually necessary.
    The white balance only alters the relative intensity of these three entire color channels, yes. Color correction does more than that, affecting different colors differently.
    Based on your response, I am not sure that my point there was clear. What I was trying to emphasize was that for a scan the red channel is typically sRGB red or Adobe RGB red or perhaps Prophoto RGB red, whereas for a digital raw file it is the camera’s red. There is a fundamental difference between the two, and converting from one color space to another color space may lose information.
    It is possible to retain "raw" film scans then make adjustments later.​
    Of course it is, but to preserve all of the information contained in the film it is necessary to save the actual film. The scans of the film contain less information than the film does. It is the film itself that contains roughly the same amount of information as the digital raw file does, not the scan.
    Not only may it be better to correct white balance during scanning (when it may take place before color correction does) than to do so after scanning, but it may be better still to use filters to correct white balance when shooting film. Actual film could in theory be separated into its individual color channels, making the filters unnecessary, but it would be complicated to ensure this was done correctly.
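    The point that white balance is just a ratio of the captured channels, with no color temperature required, can be shown in a few lines (the pixel values are invented for illustration):

```python
import numpy as np

# White balance as pure channel ratios: scale R and B so that a
# patch known to be white comes out with equal channels.
white_patch = np.array([0.82, 0.61, 0.44])   # warm-lit white, linear RGB

gains = white_patch[1] / white_patch          # normalize to green
image = np.array([[0.82, 0.61, 0.44],         # the white patch itself
                  [0.40, 0.30, 0.20]])        # some other pixel

balanced = image * gains
print(balanced)   # first row becomes equal R = G = B
```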
     
  20. This certainly is a humbling thread. I couldn't follow most of it!
     
  21. No, white balance has nothing directly to do with color temperature.
    Color temperature is one way of achieving white balance in an intuitive fashion. Not many images have a true white or neutral grey area unless you deliberately insert one into the scene. Nor is white balance always desirable. In an example we have all experienced, color under a canopy of trees is strongly biased toward green (depending on the season). The scene will look fairly natural if we use 5600K as the basis for color balance, whereas a true white balance will look decidedly magenta. If we were photographing china for a catalog in such an uncontrolled environment, white balance might be an option.
    What I was trying to emphasize was that for a scan the red channel is typically sRGB red or Adobe RGB red or perhaps Prophoto RGB red, whereas for a digital raw file it is the camera’s red.
    Actually, the scan is what it is. The color space is assigned to the raw values by the scanning software and the "numbers" adjusted accordingly. A color-managed program like Photoshop reads the embedded color space and displays the colors accordingly. Within broad limits, one color space, sRGB, Adobe RGB or ProPhoto RGB, will look the same as another. Adjustments you make to that image are independent of the color space.
    Actual film could in theory be separated into its individual color channels, making the filters unnecessary
    Once you scan the film, the color channels are all the same, regardless of any differences between films. In theory, red is red, green is green and blue is blue. How your monitor (or printer) displays those colors in a specified environment is resolved using a profile, in which the RGB ratio is a function of luminosity.
    It is the film itself that contains roughly the same amount of information as the digital raw file does, not the scan.
    I'm an unrepentant digital fan with a lot of film experience. Yet I find this statement pedantic, if not completely unfounded. A lot depends on the film and the digital camera. Some aspects of digital capture are superior to film, and in other aspects the roles are reversed. Suffice to say they are different, and each has its place. There is no high ground in the digital v film debate, as there are no winners in Tic-Tac-Toe ;-)
     
  22. Adjustments you make to that image are independent of the color space.​
    This is mostly true, but white balance is an exception since it relies on adjusting the ratio of the particular channels that were captured. Adjusting three different channels in the color corrected version does not have the same effect.
    Once you scan the film, the color channels are all the same, regardless of any differences between films.​
    That is precisely the problem. Different films may contain different information about the colors in the actual subject of the photograph, but the scanner is only capable of capturing colors in its own color space, presumably followed by a conversion to a standard color space. By going through the scanner’s color space as an intermediate step, some information is lost, particularly since this transformation from film color space to scanner color space is nonlinear.
    The color transformation performed by the scanner to get to a standard color space may be mostly linear, but the downstream tools do not know what that transformation was, and therefore cannot undo it.
     
  23. True, to a certain extent. However, whatever you are seeing on the screen consists of red, green and blue pixels. Color balance only alters the relative intensity of these colors. At that point, the conversion is already accomplished and a color space assigned.​
    The major difference is that the scan is a baked rendering, while the raw RGB representation at the current stage is one possibility for rendering. Keep in mind, raw data is like digital clay, essentially grayscale data awaiting rendering based on what the user applies in the converter. There are some advantages to true, trilinear RGB data from a scanner as opposed to interpolated color (RGBG). But in terms of rendering, there’s a bit more data at hand with raw data than from a chrome (a neg could be considered closer to scene-referred).
    With raw, there is some underlying processing color space assumption. Every converter can assume whatever the designer wishes even before this data is converted into the processing color space for the converter (for example, Adobe RGB (1998) for Aperture, ProPhoto primaries with a 1.0 TRC for ACR/LR). With Bibble, Raw Converter, etc., it's not clear what color space is being used for processing; then at some point, they all allow you to select a color space encoding after conversion.
    Let's think about that a bit. There are only three colors.​
    At the raw’s rawest stage, there are not three colors. That’s an important consideration. There’s a photon count behind some kind of RG or B filter. But that hasn’t become a color just yet.
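    That "photon count behind a filter" stage can be sketched: a raw file is one grayscale plane, and color triples only appear at the demosaic step. Here is a crude nearest-neighbor demosaic of a toy RGGB mosaic (the counts are invented, and no real converter works this simply):

```python
import numpy as np

# A raw file is a single grayscale plane of counts; each site sits
# behind one filter of an RGGB Bayer pattern. Toy 4x4 mosaic:
raw = np.array([[10, 20, 12, 22],
                [30, 40, 32, 42],
                [11, 21, 13, 23],
                [31, 41, 33, 43]], dtype=float)

h, w = raw.shape
rgb = np.zeros((h, w, 3))

# Nearest-neighbor "demosaic" within each 2x2 RGGB cell: crude,
# but it shows that color only appears at this step.
for y in range(0, h, 2):
    for x in range(0, w, 2):
        r = raw[y, x]
        g = (raw[y, x + 1] + raw[y + 1, x]) / 2
        b = raw[y + 1, x + 1]
        rgb[y:y + 2, x:x + 2] = (r, g, b)

print(rgb[0, 0])   # one reconstructed color triple
```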
     
  24. To the OP:
    What file format are your scans in? JPEG scans will have the same problem as JPEGs from a digital camera: lack of colour gamut within which to make seamless alterations.
    Try scanning to something with a larger colour gamut, like VueScan's "RAW" scan TIFF file @ 48 bits. Then play with that in Photoshop/Lightroom, etc.
     
  25. Let us hope that our newly elected officials can argue the relative merits of legislation as passionately, and hopefully as "well informed" (using FACTS not just their opinions) to change the course of our healthcare and national defense initiatives! Very interesting discussion.
     
  26. Mark, please leave politics out of photo.net, especially since you do not seem to be well informed on the subject.
    IMO the OP may be causing some of his own problems. To the OP, I would suggest scanning at the scanner's sweet spot for resolution (about 1800 to 2400 ppi) and not the highest setting in the software, the reason being that the scanner software is just interpolating data beyond the sweet spot. I have no suggestions about bit depth on the Epsons, but 48-bit may not be optimal. Also, you want to make sure you are using a good color space, like Adobe RGB. In these cases your scanning software may not be doing the best job of providing what you are asking of it.
    Next, I like to scan based upon the histogram, trying to get it as even as possible along both axes. Spikes on either axis mean less variation in tones. IMO leave the color correction and 'exposure' to post scan.
    I'm not a fan of Epsons; IMO if at all possible I'd suggest getting a better scanner.
     
