Here is also an idea for how to emulate scanner multisampling with a DSLR. There is a well-known and very effective lossless temporal noise reduction method: blend multiple similar images with "Mean", "Median" or other special blending modes. It is well described in Pat David's article "Noise Removal in Photos with Median Stacks (GIMP/G'MIC & Imagemagick)".

This stacking option is also available in Photoshop, but as usual it is hidden in a rather confusing place:
File -> Scripts -> Load Files Into Stack -> check "Create Smart Object"
Layer -> Smart Objects -> Stack Mode -> Mean

It is also available in Affinity Photo, but currently not in PhotoLine. I hope the PhotoLine developers will add it in the future. Affinity Photo's stack modes:

Mean — averages pixel content across the stack of images. Good for long exposure simulation and noise reduction.
Median — removes pixel content that is not consistent in each image. Suitable for object removal and noise reduction.
Outlier — exposes pixel content that differs in each image: great for sequence composites.
Maximum — uses the maximum pixel values from each image. Can be used for creative exposure blending where the subject is lighter than the background.
Minimum — uses the minimum pixel values from each image. Suitable for exposure blending where the subject is darker than the background.
Range — indicates areas that change across the image stack. Good for analyzing what has changed between each image.
Mid-Range — uses the middle pixel values from each image. Can be used to increase tonal range if used with bracketed exposures.
Total — produces the total value of pixels from each image. Usually results in overexposure, but can be used to lighten very underexposed imagery.
Standard Deviation — analytical: measures the distribution of information between the images. Useful for object removal, as it clearly indicates areas that will be averaged out with a Median operator.
Variance — analytical: like Standard Deviation, indicates how pixel values are spread between images. More intense distributions are shown very clearly.
Skewness — analytical: highlights edge detail and indicates the intensity of pixel value distribution. Can be used to determine tonal and spatial differences between images.
Kurtosis — analytical: detects the peakedness of an image. A brighter result represents low noise levels and tonal uniformity (most pixels at the dominant gray level). Darker results represent greater noise and less tonal uniformity (more pixels further away from the dominant gray level).
Entropy — analytical: represents the number of bits required to encode the information in the stack. Could be used with stacked video frames (within the same scene or shot).

So in short:
1. Quickly shoot 5-10 similar copies of the same film frame. Continuous Shooting (Burst Mode) will do the trick. Make sure your setup is stable and you don't move the camera or the film during shooting.
2. Process the RAW files to TIFFs as described earlier, but don't invert and don't do any other processing yet.
3. Stack the images into a single file with the "Mean" blending mode and save the result as a single TIFF.
4. Process the negative with the workflow described earlier in my posts.

This will clean up all possible digital noise without touching the film grain structure and will make your source file denser at the pixel level. This method should also be far less risky than HDR stacking, because it does not change the original tonal relations taken from the linear sensor data.

And a quick test to prove my theory. Here is a crop of an inverted and processed film negative. This sample was scanned with the camera and was underexposed more than usual to amplify the noise and find the camera's limits. This is a 400% scaled crop, but I can see the difference at 100% as well.
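For anyone curious what these stack modes actually compute: each non-analytical mode is just a per-pixel statistic taken down the stack of aligned frames. A minimal numpy sketch (the frame count, image size and pixel values here are made up purely for illustration):

```python
import numpy as np

# Toy "burst": 5 aligned frames of a 4x4 grayscale image (synthetic values).
rng = np.random.default_rng(42)
stack = rng.uniform(0.0, 1.0, size=(5, 4, 4))   # shape (N, H, W)

mean      = stack.mean(axis=0)                  # "Mean": noise reduction / long exposure
median    = np.median(stack, axis=0)            # "Median": drops inconsistent pixels
maximum   = stack.max(axis=0)                   # "Maximum": lightest pixel wins
minimum   = stack.min(axis=0)                   # "Minimum": darkest pixel wins
rng_mode  = maximum - minimum                   # "Range": how much each pixel changed
mid_range = (maximum + minimum) / 2.0           # "Mid-Range": middle of the extremes
total     = stack.sum(axis=0)                   # "Total": sums values, tends to overexpose
std_dev   = stack.std(axis=0)                   # "Standard Deviation": analytical spread
```

The same per-pixel idea extends to the analytical modes (Variance, Skewness, Kurtosis, Entropy); they are just higher-order statistics of the same per-pixel distributions.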
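A quick way to sanity-check the noise-reduction claim without a camera is to simulate the burst. Averaging N frames with independent noise reduces the noise level by roughly sqrt(N), while the underlying (clean) image content is untouched. This synthetic sketch assumes numpy; the frame count of 8 and noise level of 0.05 are arbitrary stand-ins, and in practice the frames would be your linear TIFFs loaded from disk:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(64, 64))        # stand-in for the "true" frame
frames = [clean + rng.normal(0.0, 0.05, clean.shape)  # 8 noisy copies = burst shots
          for _ in range(8)]

mean_blend = np.stack(frames, axis=0).mean(axis=0)  # "Mean" stack of the burst

# RMS error vs. the clean frame, before and after stacking:
noise_single = np.sqrt(((frames[0] - clean) ** 2).mean())
noise_mean = np.sqrt(((mean_blend - clean) ** 2).mean())
print(noise_single / noise_mean)                    # roughly sqrt(8) ≈ 2.8
```

Because the Mean is a linear operation, it also preserves the original tonal relations of the linear sensor data, which is why it is safer than HDR-style merging.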