
Posts posted by joe_c5

  1. <blockquote>

    <p>Because we might disagree about technology?</p>

    </blockquote>

    <p>Because, in the words of another recent poster who I believe captured the spirit of the problem, I think you are a “nasty bully”. I am sorry, but such behavior does not deserve “deference”.</p>

    <blockquote>

    <p>Such extremes, why not just not come here anymore?</p>

    </blockquote>

    <p>Very last post, and then I intend to do just that.<br>

    If you recall, I was gone for <em>years</em>. I'm staying gone this time, unless I get an email notification of your lifetime ban. Good day, all.</p>

  2. <blockquote>

    <p>Doesn't matter, you've got scanner RGB. Or a Quasi-Device Independent RGB working space (ProPhoto RGB etc).</p>

    </blockquote>

    <p>We have already established that scanner RGB is fine. Converting to a working space is not ideal, since one might wish to use different working spaces for different eventual output targets.<br>

    If not for the ubiquitous nonlinear tone curves, I would prefer to always work directly in the output space, eliminating the possibility of having to deal with out-of-gamut colors.</p>

    <p>And finally, please argue with somebody else. You asked why somebody might want an ugly image, I told you why I might want a dim image, and you simultaneously accused me of not answering the question and implied that I gave a stupid answer to it.</p>

    <p>In my legally protected opinion, the Photo.net moderators should ban you from Photo.net. If my saying so violates the terms of service in any way, then they should ban <em>me</em> from Photo.net.</p>

  3. <blockquote>

    <p>I don't know what you mean by a 'dim scan in the scanners color space' and further. The scan either is dim as you point it or it's not. We're just talking about RGB values in a defined color space so the numbers have a scale. If it's dim, why keep it dim if that's not the goal for representing the original from the scanner?</p>

    </blockquote>

    <p>Dim meaning “definitely not clipped”. The closer it gets to 100% brightness, the higher the odds of clipping a pixel somewhere by mistake. It needn’t be very dim. The primary goal is not to clip it, since making it brighter is doable and unclipping it is not.</p>
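
    <p>As a minimal sketch of why clipping is the thing to avoid (hypothetical 16-bit values, not from any particular scanner): once a pixel clips at the encoding maximum, no later scaling can separate it from its neighbors, while a deliberately dim pixel can simply be multiplied back up.</p>

    import numpy as np

    MAX = 65535
    # Hypothetical "true" highlight detail, brighter than the chosen white point.
    true_signal = np.array([60000, 70000, 80000])

    clipped = np.minimum(true_signal, MAX)       # white point set too high
    dim = (true_signal * 0.7).astype(np.uint16)  # scanned 30% darker instead

    print(dim / 0.7)   # ~[60000 70000 80000]: brightening recovers the detail
    print(clipped)     # [60000 65535 65535]: no multiplier can undo the clip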

    <blockquote>

    <p>The numbers and color spaces can be different and appear exactly the same in terms of color and tone. Yes, a linear scan <strong>without</strong> a profile defining that scale looks dark and ugly. I've got plenty of examples I could share showing that as soon as you <em>Assign</em> the proper profile, the dark image no longer appears dark (because <strong>it never was</strong>!). A lack of color management made it look dark. So I'm not clear on what you're saying and would love to see two low rez examples of this with proper tagged profiles.</p>

    </blockquote>

    <p>I understand this, but I highly doubt that every scanner in existence is even capable of tagging scanned images with its native color profile. Many of them will only offer standard color spaces, and some will not tag the output at all. You seem to agree that scans tagged with the scanner’s profile are better than scans converted to and tagged with some other profile?</p>

    <blockquote>

    <p>But again, the question goes unanswered: <br /><em>Why would anyone want an image, unspoiled by adjustments unless it produced the best quality image? </em><br /><em>How is an ugly 16-bit image a better start than a better looking 16-bit image?</em><br /><br />You've got a scanner and an original and presumably you want to match or improve the scanned version of the original. Presumably you're color managed such the display, the scanned data and the scanner driver or other app shows numbers correctly. You can set the scanner to give you a proper appearance or you can do it somewhere else. Assuming you've done your homework and the scanner driver provides the tools you need (much like a raw converter you'd select for that kind of data), <strong>why</strong> produce anything in the scanner RGB color space, it's native color space and gamut, that <strong>doesn't</strong> look as close to your goal as possible?</p>

    </blockquote>

    <p>I gave three examples. One was the color space, on which I believe we agree. Another was clipping, where you appear to disagree about precisely how dim the scan should be, but probably agree that it should not be clipped? No robust engineering design relies on something being exactly perfect; there always needs to be design margin.</p>

    <blockquote>

    <p>Yes, color space conversions cause data loss and we've accounted for that with high bit but no reason to do so if it's pointless, nor other edit. You can't have anything but scanner RGB initially and converting it to anything <strong>but</strong> the output color space is just another conversion. Leave the data in scanner RGB tagged as such in high bit, now you've got the most data and it at least looks as you desire. If you feel not altering the scanner software gives you more data but an ugly color appearance, you're going to edit the data to make it look good anyway so it's moot. You can pay me with data loss now or later. There's no free lunch why not use the scanner driver and produce the best data at the get go?</p>

    </blockquote>

    <p>The thing is that the output color space may not be known at scan time, or multiple output color spaces may be needed for different applications. All I had been recommending with respect to color was leaving the color space as native, and I recognized that not all hardware or software has good built-in support for a color managed workflow.</p>

    <p>Finally, my third example was the tone curve. If available, I would want 16-bit (or even slightly higher) <em>linear</em> data, similar to but not identical to what is available from a digital camera. Nonlinear tone curves are unnecessary until required by an output profile or used for an edit.</p>

    <p>Edit: I suppose, as you pointed out, that there is nothing ugly about the linear native color space. That leaves the clipping, for which I maintain that it is safer to have the scan be too dim by an amount equal to the tolerance of its brightness than to set the white point exactly and risk clipping a hot spot somewhere.</p>

    <p>TLDR: Clipping is usually bad.</p>

    <p>Sampling theory actually says that scanning the 400ppi print at 400ppi is sufficient, but some margin is a good idea, so 800ppi sounds like plenty unless the scanner’s anti-aliasing filter is terrible. Also note that the edges of each printed pixel might contain spatial frequencies above even 800 line widths per inch, not that there is necessarily any point in capturing them.</p>

    <p>I would expect the photo paper itself to be capable of higher resolution per mm (let alone per picture height) than the film. I am having difficulty finding a reasonable assumption for the modulation transfer function (MTF) of a typical enlarger lens, though. Without some known, low enlarger MTF, I would assume that the enlarger resolution per picture height will be on the order of the film resolution per picture height and therefore only lower the final resolution slightly.</p>
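
    <p>For a rough feel of how the stages combine, one common rule of thumb (an approximation I am assuming here, not a measured result) adds the component blurs in quadrature, so the system resolution R satisfies 1/R² = Σ 1/rᵢ². A sketch with hypothetical numbers in line pairs per picture height:</p>

    import math

    def combined_resolution(*parts):
        """Combine per-component resolutions (lp per picture height)
        using the 1/R^2 = sum(1/r_i^2) rule of thumb."""
        return 1 / math.sqrt(sum(1 / r ** 2 for r in parts))

    # Hypothetical: film ~1500, enlarger lens ~2000, paper ~6000 lp/ph.
    print(round(combined_resolution(1500, 2000, 6000)))  # ~1177 lp/ph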

    <p>The better the MTF of every step in the process is known, the more accurately digital sharpening could restore the image toward 100% MTF. That adds significant noise, though, and might also unavoidably add sharpening halos; I would have to think about the latter.</p>

  5. <blockquote>

    <p>Why would anyone want an image, unspoiled by adjustments unless it produced the best quality image? <br /> How is an ugly 16-bit image a better start than a better looking 16-bit image?</p>

    </blockquote>

    <p>Some transforms are nonlinear and can therefore be irreversible; clipping and many color space conversions, for example. A slightly dim scan in the scanner’s color space may look terrible compared to a brighter scan converted to AdobeRGB, but the former retains all data while the latter throws some out.</p>

    <p>Applying tone curves adds quantization noise, and a tone curve is also irreversible if you don't know exactly what it was. At 16-bit, the quantization may not matter, but not knowing what the tone curve was in order to get back to linear data can be critically important. Even for the <em>de facto</em> standard of sRGB, the tone curve might be true sRGB with a linear ramp followed by a 2.4 gamma curve, or it might be a 2.2 gamma curve (which it isn’t supposed to be, but lots of hardware and software plays fast and loose), with significant differences in the shadows between the two.</p>
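
    <p>For reference, here is a quick sketch of the difference between those two curves (the piecewise sRGB curve is the standard definition; the sample values are only illustrative):</p>

    def srgb_encode(x):
        """Official sRGB curve: linear ramp near black, then a 2.4-power segment."""
        return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

    def gamma22_encode(x):
        return x ** (1 / 2.2)

    for lin in (0.001, 0.01, 0.18, 0.5):
        print(lin, round(srgb_encode(lin) * 255), round(gamma22_encode(lin) * 255))
    # The curves nearly agree in the midtones but diverge in the deep shadows,
    # so decoding with the wrong assumption shifts shadow values noticeably.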

    <p>As for scanning resolution, I think I would recommend a minimum of 1200dpi with the assumption that the photos are 4″x6″ prints from 35mm film. Some films have some modulation transfer function remaining out past 100 line pairs per mm. If you have different sized prints, different sized film, or more information than I do about the MTF of your film and/or lenses, you might adjust that minimum scanning resolution somewhat. Scanning at the scanner’s native optical resolution would be preferred if you have the storage space and spare processing time.</p>
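
    <p>The 1200dpi figure falls out of those assumptions directly; a quick sanity check of the arithmetic (print and film dimensions as assumed above):</p>

    MM_PER_INCH = 25.4
    enlargement = (6 * MM_PER_INCH) / 36        # 6 inch print edge from a 36mm frame, ~4.23x
    print_detail_lp_mm = 100 / enlargement      # ~23.6 lp/mm surviving on the print
    print_detail_lp_inch = print_detail_lp_mm * MM_PER_INCH  # 600 lp/inch
    print(2 * print_detail_lp_inch)             # Nyquist: 2 samples per line pair = 1200.0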

    <p>Finally, note that most of the time when people downsample images, it is done incorrectly and will darken some bright edges and small highlights. If you plan to scan at a very high resolution and reduce the resolution later, you probably want to take this into account and rescale in linear color space.</p>
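
    <p>A minimal sketch of what rescaling in linear color space means in practice, assuming sRGB-encoded 8-bit input and a simple 2x box filter (NumPy only, to keep it self-contained):</p>

    import numpy as np

    def srgb_to_linear(c):
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(c):
        return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

    def halve_linear(srgb8):
        """2x box-filter downsample performed in linear light.
        srgb8: HxWx3 uint8 array with even height and width."""
        x = srgb_to_linear(srgb8 / 255.0)
        x = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
        return (linear_to_srgb(x) * 255 + 0.5).astype(np.uint8)

    # A white pixel next to a black one: averaging the encoded values gives 128,
    # visibly too dark; averaging in linear light gives 188, the correct grey.
    demo = np.array([[[255] * 3, [0] * 3], [[255] * 3, [0] * 3]], dtype=np.uint8)
    print(halve_linear(demo)[0, 0])  # [188 188 188]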


  6. <blockquote>

    <p>This is what raw data looks like:</p>

    </blockquote>

    <p>That is what raw data <em>from a typical digital camera </em>looks like. Data from a scanner probably does not use a Bayer filter format, but it might have some other arrangement of color filters. For a flatbed scanner there would probably also be some duplicated data as the scanner head moves (the Epson V700 seems to use a 6 line CCD).</p>

    <p>Since there would probably be significantly more data than in TIFF format, I doubt there is much demand for scanning to an actual raw format, so I would not expect many scanners, if any, to support it.</p>

  7. <p>I use both Adobe Lightroom (it may technically be called Photoshop Lightroom, but I don’t think it is subscription only yet) and the GNU Image Manipulation Program (GIMP). The name of the latter is a travesty; somebody really ought to fork it <em>just</em> to change the name.</p>

    <p>GIMP cannot match Photoshop’s feature set, and two particularly glaring omissions are 16-bit-per-channel support and adjustment layers, both still under development. There is almost no CMYK support either; I think soft proofing to a CMYK profile is about it. All that said, at a price of free compared to the cost of a Photoshop subscription, GIMP competes quite well on value for some use cases, or even some business cases. It would probably be worth trying it out for an hour or so.</p>

    <p>Similarly, if you are editing digital photographs, especially in quantity, I recommend trying the 30-day free trial of Lightroom. I dislike the hidden nonlinear tone curve in recent versions of Lightroom, but there is a built-in preset inverse tone curve to get back to either roughly linear or exactly linear (I did not test which of the two was the case).</p>

  8. <blockquote>

    <p>Eric. You and Tim should conspire together to write a book on color management. It would be a fun read.</p>

    </blockquote>

     

    <blockquote>

    <h3>Forum Posting Guidelines</h3>

    <p> …<br>

    13. Though it shouldn't be necessary to ask this, it nevertheless is. Please treat other users with courtesy and respect, even if you disagree with them. Photo.net will not tolerate users who are insulting or abusive to others (see the photo.net <a href="/info/terms-of-use">Terms of Use</a>, especially section #4 "Conduct of Users")</p>

    </blockquote>

  9. <p>The profile is relevant because it mostly determines both the accuracy of a display and the differences between displays.</p>

    <p>For example, on a Dell U2412M, which is a budget 24″ IPS monitor (it may even have been specifically derided earlier in this thread), one profile using a shaper+matrix created from 1250 measurements has a maximum ΔE of 3.043870, while another profile using an XYZ look up table created from the <em>same</em> 1250 measurements has a maximum ΔE of 0.997169.</p>

    <p>Again, this is the same monitor with the same calibration and the same measurement data, but a better profile results in better color accuracy.</p>

    <p>It also seems like acceptable accuracy for most purposes. The sRGB gamut makes this particular monitor a poor choice for some print work but it is disingenuous to imply that no inexpensive monitor could possibly meet a professional’s needs.</p>

    <p>So it is likely not so much that the Apple Cinema display is iffy, but instead that its <em>profile</em> is iffy. It is even possible that the reference monitor’s profile is iffy, either instead, or as well.</p>

    <p>A common method for determining that sort of thing is to keep improving the input until no significant change can be detected in the output. In this case, making the profile better and better should eventually stop significantly improving the color accuracy, at which point the profile can be considered not to be the cause of the observed color mismatch.</p>
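
    <p>For context, the ΔE figures above are distances between requested and measured colors in L*a*b*. A minimal version of the calculation (CIE76, the simplest variant; the profiling report may well use ΔE94 or ΔE2000 instead, and the patch values below are hypothetical):</p>

    import math

    def delta_e_76(lab1, lab2):
        """CIE76 color difference: Euclidean distance in L*a*b*."""
        return math.dist(lab1, lab2)

    # Hypothetical patch: the color the profile requested vs. what was measured.
    requested = (50.0, 20.0, -10.0)
    measured = (50.5, 21.0, -9.2)
    print(delta_e_76(requested, measured))  # ~1.37; a profile is judged by its worst patch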

  10. <blockquote>

    <p>What is that supposed to mean? I asked earlier so I don't think I'm going to get an answer. Calibrated to what, and how? The point of color management, profiling and calibration of this device.</p>

    </blockquote>

    <p>Rather than asking him a question that you already know the answer to for your particular Apple Cinema display, have you considered either:</p>

    <ol>

    <li>Answering the yes-or-no question</li>

    <li>Posting what it was calibrated to, and how?</li>

    </ol>

    <p>As to your question, I would be interested in whether the profile used a look up table, and how many color samples were used to generate it. I don't care about the calibration at all unless it is pathological or restricts the gamut.<br>

    <br />If a profiled monitor displays colors inside of its gamut that are off by more than the quantization noise, that is the fault of either the profile or something else in the color management system.</p>

  11. <p>I'm going to regret posting, but in many cases all that is important about a display is how well it is <em>profiled</em>. For some images with a wide gamut, it may be important that the monitor also have a wide gamut.<br>

    <br />Calibration, and in particular some of the features allowing calibrations to be quickly changed, might possibly save some time, but in general it will <em>not </em>make images displayed with a look up table based profile significantly more accurate.</p>

    <p>Consider how few printers are <em>calibrated</em> for different papers or inks. Most are only <em>profiled</em> for different papers or inks. I don't read about a whole lot of concern that the printers are not being recalibrated.</p>

     

  12. <blockquote>

    <p>I think you can achieve almost the same with the sRGB color space. The sRGB non-linear correction curve is very closely approximated with 2.2 gamma.</p>

    </blockquote>

    <p>It is close to 2.2 gamma, but as you pointed out, an input profile created from the sRGB ColorChecker made images slightly darker.<br>

    <br>

    If ArgyllCMS's colprof could be forced to assume a shaper with an sRGB tone curve, then one could get more or less an identity profile while still using sRGB. ArgyllCMS is open source, so it shouldn't be hard to do if it were important. That wouldn't help with the camera's nonlinear tone curve, though, so there would still be other weird effects when using a camera jpeg.</p>

  13. <blockquote>

    <p>You are right. But it seems like -u is ignoring -U - I have the same results for -u enabled with and without -U. And both are darker than original image.</p>

    </blockquote>

    <p>Ok, it looks like I misinterpreted the effect of -u; it seems to be the same as -U 1. With either setting, I got only a very slightly darker result for either Colorchecker.tif or RGB16Million.tif. If you consider that too dark, then the problem seems to be the difference between the sRGB tone curve and the 2.16 gamma that colprof is picking as the best fit.</p>

    <blockquote>

    <p>What do you mean? How can I create ICC profile given a DNG profile? The only option I found was <a href="http://dcp2icc.sourceforge.net/" rel="nofollow" target="_blank">dcp2icc</a> utility, but its results were unpleasant.</p>

    </blockquote>

    <p>I mean that the results of using an ICC profile should be fine. I had only used DNG profiles as an example of using any kind of profile directly on a RAW format image.</p>

    <p>I tried converting everything to AdobeRGB to avoid issues with the sRGB tone curve, and now the resulting profile seems to be extremely close to the identity profile you were looking for, within around 1 sRGB point for ColorChecker.tif. Using ImageMagick's convert:<br />
    convert -profile "sRGB Color Space Profile.icm" -profile AdobeRGB1998.icc ColorChecker.tif colorchecker2.2.tif<br />
    scanin.exe -dioap ColorChecker2.2.tif ColorChecker.cht ColorChecker.cie<br />
    colprof.exe -v -y -qh -bn -ag -U 1 ColorChecker2.2<br />
    tifficc -v -i ColorChecker2.2.icm ColorChecker2.2.tif colorchecker2.2corrected.tif</p>

    <p>So it seems to be incredibly important when creating ICC profiles to avoid nonlinearity unless you have enough patches to correct for it. Since sRGB has a nonlinear tone curve (different from gamma 2.2), I recommend that you avoid it until after the color correction is complete. While a gamma curve is also nonlinear, ArgyllCMS's colprof seems to be designed to deal well with gamma curves.</p>

    <p>If you use jpegs from the camera instead of raw files, you are likely to run into the same issue with the camera's nonlinear tone curve, even if you have a jpeg in AdobeRGB. Jpegs could still be at least reasonably color corrected, but it is likely to take a lot more than 24 patches.</p>

  14. <blockquote>

    <p>1000?! What are the color charts with that huge number of patches?..</p>

    </blockquote>

    <p>I don't know of any targets intended for cameras with more than around 100, so I guess that mostly rules out lookup table based profiles. For printers, for example, targets can be as many patches as one has time to measure. For a regular Colorchecker, I recommend -ag or maybe even -aG.</p>

    <blockquote>

    <p>As I understood, -u key has no impact in non-cLUT modes, like -aG, doesn't it?</p>

    </blockquote>

    <p>The -u switch seemed to make a difference in -ag when I tried it. I no longer saw the extra step "Fixup matrix for white point" in the verbose output.</p>

    <blockquote>

    <p>But I need to integrate a color correction in my custom application and I don't know of a way to apply DNG profiles (in .dcp-files) to RAW files programmatically. That's why I'm dealing with ICC profiles.</p>

    </blockquote>

    <p>Starting with RAW files is the important part; whether the profile is DNG or ICC shouldn't really matter. Starting with jpeg would still work, but would have a lot more nonlinearity to deal with; with jpeg I would recommend using enough patches that -as worked well.</p>

     

  15. <p>I can get what seem like acceptable results on the RGB16Million.tif image using:<br>

    colprof.exe -v -y -qh -bn -ag -u ColorChecker</p>

    <p>One difference is using -ag instead of -ax, to generate a gamma+matrix profile instead of an XYZ lookup table. The 24 patches of the colorchecker are just too few to get accurate shapers, let alone an entire lookup table. With more patches, particularly more dark patches, shaper+matrix might be a reasonable choice. I would not recommend trying to create a lookup-table-based profile with fewer than around 1000 patches, though the exact number will vary with the device and the choice of patch colors.</p>

    <p>The other difference is -u instead of -U 1.0952, to completely prevent scaling of the white point to the lightest patch of the colorchecker. With -U it was scaling the white point and then sort of scaling it back to not exactly the same place it started. With an actual photograph, you might need both -u so that it did not think the whitest patch was pure white and -U to correct the exposure.</p>

    <p>You may get better results color correcting a raw file in one step than correcting a jpeg which has already been color corrected once. With an actual colorchecker target, I get quite good results using X-Rite's Colorchecker Passport software with Lightroom, though I don't like Lightroom 4. Colorchecker Passport uses Adobe's DNG profiles instead of ICC profiles.</p>

    <p>Finally, keep in mind that with pretty much any camera on the market, either consumer or professional, even after color correction some colors will still tend to be wrong. When colors are particularly important, for example product photography, the colors often need to be corrected by hand.</p>

    <p>To get the same exposure on the crop sensor in your example, you would use the same shutter speed on both the full frame and the crop cameras. Exposure is a measure of light per unit of area. Since the 2x crop sensor has only one quarter as much surface area, it collects only one quarter as many photons compared to an actual 600mm f ⁄ 4 lens on a full frame 35mm camera. In this example, the crop camera will therefore have twice as much photon shot noise relative to the signal.</p>

    <p>To get an image that is photon-for-photon identical (including depth of field) to 600mm f ⁄ 4 at 1 ⁄ 600s and 400 iso on a full frame camera, the 2x crop camera would need 300mm f ⁄ 2 at 1 ⁄ 600s and 100 iso.</p>
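
    <p>A quick check of that arithmetic, under the usual simplifying assumptions (the aperture diameter fixes the light gathered from the scene, and photon shot noise is the square root of the photon count; the counts below are hypothetical):</p>

    import math

    def aperture_diameter_mm(focal_mm, f_number):
        return focal_mm / f_number

    # Equivalent pair: same 150mm entrance pupil, so same total light and DoF.
    print(aperture_diameter_mm(600, 4), aperture_diameter_mm(300, 2))  # 150.0 150.0

    # Same f-number instead: the 2x crop collects 1/4 the photons.
    photons_ff = 4_000_000               # hypothetical count, full frame
    photons_crop = photons_ff / 4        # 2x crop, same exposure
    snr_ff = math.sqrt(photons_ff)       # shot-noise-limited SNR = sqrt(N) = 2000.0
    snr_crop = math.sqrt(photons_crop)   # 1000.0, so relative noise doubles
    print(snr_ff / snr_crop)             # 2.0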

  17. <p>I have determined experimentally that attempting to rebut similar statements tends to be unproductive, so I will not do so.</p>

    <p>I caution others that this point of view appears similar to the ones that caused the Dark Ages. Most software vendors are constantly striving to improve their products, and I suspect that includes Adobe.</p>

    <p>Finally, since it appears nobody else on Photo.net can or will back me against such views, I am afraid I do not fit in here and intend to head elsewhere.</p>

    <p>Farewell to all.</p>

    <p>Most digital cameras capture fewer than 65,535 electrons per subpixel, making 16-bit linear a good format indeed for digital raw images, since in principle the encoding then adds no quantization error beyond what is already present in the electron counts. So far, the limited dynamic range of the read circuitry has made 14-bit linear adequate for most cameras, where the 15th and 16th bits would have been mostly noise.</p>
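
    <p>A rough sketch of the "extra bits are mostly noise" argument, with hypothetical full-well and read-noise figures (real sensors vary):</p>

    import math

    full_well_e = 60000   # hypothetical full-well capacity, electrons
    read_noise_e = 3.0    # hypothetical read noise, electrons RMS

    for bits in (14, 16):
        step_e = full_well_e / 2 ** bits        # electrons per code value
        quant_noise_e = step_e / math.sqrt(12)  # RMS quantization noise of that step
        print(bits, round(step_e, 2), round(quant_noise_e, 2))
    # At 14 bits the quantization noise (~1.1 e) is already below the read
    # noise (~3 e), so two more bits mostly digitize noise more finely.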

    <p>Also, XYZ space is not the same as LMS (Long Medium Short) space, which should be better for matching since it is based more directly on the human eye than XYZ is. I believe that the ICC is aware of the limitations of XYZ space and is discussing eventually moving to a better connection space.</p>

    <p>Finally, editing linear data appears to be as straightforward as using 32 Bits/Channel in Photoshop. I admit that this change in workflow is not free, but it seems to be practical in many situations.</p>

  19. <blockquote>

    <p>in any case the 2.2 gamma mid-grey value should be 186, not 187. Also, the jpeg file format carries colour space information which may alter how it appears on other people's monitors.</p>

    </blockquote>

    <p>Both of these were deliberate: the value of 187 because the tone curve is sRGB instead of gamma 2.2 (they are slightly different), and the color space information because it may improve accuracy with a profiled monitor.</p>
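
    <p>For anyone checking the arithmetic, a two-line sketch (standard curve definitions; linear 50% grey encodes to about 187.5 under sRGB versus about 186.1 under pure 2.2 gamma, consistent with choosing 187 over 186):</p>

    srgb_grey = (1.055 * 0.5 ** (1 / 2.4) - 0.055) * 255  # ~187.5
    gamma22_grey = 0.5 ** (1 / 2.2) * 255                 # ~186.1
    print(round(srgb_grey, 1), round(gamma22_grey, 1))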

  20. <p> Disclaimer: I am not a professional photographer.</p>

    <p>Consider a macro lens, which typically has lower distortion and the option of closer focusing if necessary, though you may not need either of those things.</p>

    <p>For many purposes, one of those two lenses you mentioned should take acceptable photos of sunglasses. Background, lighting and color correction are likely to be more important than camera and lenses in determining the quality of the photographs.</p>

     

  21. <p>Making sure that Sharpening in the Picture Control Settings is set to 0 should help at least slightly if that is not already the case. Lowering the Contrast a bit might also help.</p>

    <p>For black and white portraits, the manual suggests setting Filter Effects to Green as an option to soften skin tones. I do not know if it would improve them or not, but it may be worth testing. Another option for black and white would be to shoot in color and convert to black and white in post processing.</p>

    <p>I am afraid I am not familiar enough with the lenses to make any recommendations.</p>

  22. <p>It does not seem that Lightroom 3 supports the E-5 yet. I expect that they will at some point.</p>

    <p>One particularly determined individual claims to have used exiftool and ExifToolGUI to edit the exif data from “E5” to “E-30”, including the hyphen, and then to have been able to import the files as a temporary workaround.</p>

  23. <blockquote>

    <p>I remember the rise and fall pixel timing issues with CRT's as the argument against using eyeball calibrators that relied on raster line blending targets and as a determiner of actual gamma.</p>

    </blockquote>

    <p>I think it is at least reasonable if the light and dark lines are on separate CRT scan lines. It was when vertical lines or cross hatch patterns were used on a CRT that the pixel rise and fall times tended to cause large errors.</p>
