Posts posted by joe_c5

  1. <p>Consider using JavaScript instead of Flash for the slideshow on the front page. I personally prefer to use Flash only when the same results cannot be achieved without it.</p>

    <p>And as mentioned above:</p>

    <ul>

    <li>The gallery controls in the upper right-hand corner do not work because there are elements on top of them.</li>

    <li>The way the viewed gallery images stick outside of the rectangle but sit behind the logo seems a little strange and might not be intended.</li>

    </ul>

  2. <p>Using a ColorChecker or similar color reference card will often help get most colors close enough, but there is no method to get all colors absolutely correct without making lots of changes by hand, either from memory or while looking at the scene.</p>
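
    <p>As a sketch of why: such workflows usually end in one global correction, for example a 3x3 matrix fitted by least squares to the patch values. All of the numbers below are invented for illustration, not real measurements, and this is only one common approach, not any particular product’s method.</p>

    ```python
    import numpy as np

    # Hypothetical measured patch values straight from the camera, and the
    # corresponding reference values. Real charts have 24 patches; three
    # are shown to keep this short, and none of the numbers are real.
    measured = np.array([[0.40, 0.22, 0.15],    # "dark skin" as captured
                         [0.70, 0.48, 0.40],    # "light skin" as captured
                         [0.25, 0.30, 0.50]])   # "blue sky" as captured
    reference = np.array([[0.45, 0.32, 0.27],
                          [0.77, 0.58, 0.50],
                          [0.36, 0.48, 0.61]])

    # Solve measured @ M ~= reference for a single 3x3 matrix M. One global
    # matrix is the whole point: it can get most colors close, but it
    # cannot make every color match exactly.
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

    def correct(pixels):
        """Apply the fitted matrix to an (..., 3) array of linear RGB."""
        return np.clip(pixels @ M, 0.0, 1.0)
    ```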

    <p>In theory a photographer could take spectrophotometer measurements of a few particularly important colors at the shoot so as not to have to rely on memory, but that would usually be too inconvenient to be worthwhile.</p>

  3. <p>Sorry, to me that was the full set of instructions. I used GIMP, but the gist ought to apply to any software, and a rough scripted equivalent follows the list. In more detail:</p>

    <ol>

    <li>I used the lasso tool to draw a selection around the dress, keeping the boundary inside the car but well clear of the dress, except at the sill, where I drew slightly inside the dress.</li>

    <li>With the dress selected I opened the Hue/Saturation control panel.</li>

    <li>I clicked on “B” for blue.</li>

    <li>I moved the Hue slider from 0 to around 32.</li>

    <li>I clicked OK.</li>

    </ol>
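
    <p>For anyone scripting rather than clicking, here is a rough Python equivalent of those steps, assuming the lasso selection has already been turned into a boolean mask. The hue band chosen for “blue” is a guess at what GIMP’s B range covers.</p>

    ```python
    import numpy as np
    from PIL import Image

    def shift_blue_hue(path, mask, degrees=32):
        """Rotate the hue of blue-ish pixels inside a selection mask,
        similar in spirit to GIMP's Hue/Saturation tool limited to "B".
        mask: boolean array with the same height and width as the image."""
        hsv = np.asarray(Image.open(path).convert("HSV")).astype(np.int16)
        h = hsv[..., 0]
        # PIL stores hue as 0-255, so blue (~240 degrees) lands near 170.
        blueish = (h > 140) & (h < 200) & mask
        shift = round(degrees * 255 / 360)
        hsv[..., 0] = np.where(blueish, (h + shift) % 256, h)
        return Image.fromarray(hsv.astype(np.uint8), "HSV").convert("RGB")
    ```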

  4. <p>Resolution? No. It might improve very slightly with sophisticated software and slight camera movements between frames, but not necessarily even then.</p>

    <p>The dynamic range could be significantly higher, which might improve the overall quality of the image. If the highlights had been at or near clipping at the same time as the shadows were noisy, the composite image could be visibly better. People do this all the time, and the technique is termed HDR (High Dynamic Range). It is nearly useless for moving objects, though.</p>
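
    <p>As a sketch of the merging step (assuming the frames are already linearized, aligned, and normalized to [0, 1]), one simple approach scales each exposure to a common radiance estimate and gives clipped pixels zero weight:</p>

    ```python
    import numpy as np

    def merge_hdr(frames, exposure_times):
        """Merge aligned linear-light exposures into one high dynamic
        range image. frames: list of float arrays in [0, 1];
        exposure_times: the shutter time of each frame in seconds."""
        acc = np.zeros_like(frames[0], dtype=np.float64)
        weight = np.zeros_like(acc)
        for frame, t in zip(frames, exposure_times):
            # Trust mid-tones; ignore pixels near clipping or the noise floor.
            w = ((frame > 0.01) & (frame < 0.99)).astype(np.float64)
            acc += w * frame / t   # divide by time to estimate scene radiance
            weight += w
        return acc / np.maximum(weight, 1e-9)
    ```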

    <p>Without knowing specific lenses and cameras, there is no way to tell whether the composite image would be better or worse overall than a single photo from a larger format camera.</p>

  5. <p>The cause may have to do with the fact that your eyes can see the difference between blue and violet light, but many cameras cannot. Similarly the camera may be able to see the difference between violet and magenta in cases where your eyes cannot.</p>

    <p>The car may have been magenta instead of violet, meaning that it reflected red light too, so that the camera may have captured both blue and red. If the dress was violet, then the camera may have captured only blue.</p>

    <p>Fixing this would be more difficult, and would probably require making a selection or mask to avoid changing the car and then adjusting the hue of blue in the dress. Here is a quick and dirty version in about a minute, but doing it carefully should not take that much longer.</p> [attached image: 00XhFt-302995584.jpg]

  6. <blockquote>

    <p>I wonder how this happens? I'm guessing the portion of the 1.8 gamma curve was shaped in such a way that couldn't be done using curves cuz' I really gave it a go and almost gave up until I tried the method above.</p>

    </blockquote>

    <p>If that is really what happened, then my best guess would be that the profile was converting between the sRGB tone curve and gamma 1.8, whereas the curves could only easily convert between gamma 2.2 and gamma 1.8, which is a different transformation. A sufficiently accurate curve could replicate this, though I do not know if the curves dialog box would allow it.</p>

    <p>The sRGB tone curve is linear in the deep shadows, followed by an offset and scaled gamma 2.4 segment that stays close to a plain gamma 2.2 curve for the remaining tones.</p>
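
    <p>Written out, the curve looks like this (the constants are from the sRGB specification):</p>

    ```python
    def srgb_encode(linear):
        """sRGB tone curve for a linear value in [0, 1]: a straight-line
        segment in the deep shadows, then an offset gamma-2.4 segment
        that tracks a plain gamma-2.2 curve fairly closely."""
        if linear <= 0.0031308:
            return 12.92 * linear
        return 1.055 * linear ** (1 / 2.4) - 0.055
    ```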

  7. <p>I agree with Jack that gamma is unnecessary for processing. Not all operations even correctly take the gamma into account, particularly resizing images in many software packages. This can require extra steps to be done by hand.</p>
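
    <p>Resizing is the classic example. Here is a sketch of the extra steps done by hand, assuming sRGB input: decode to linear light, resample there, then re-encode. Only a simple 2x box downsample is shown to keep it self-contained.</p>

    ```python
    import numpy as np

    def srgb_to_linear(v):
        return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(v):
        return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

    def downsample_2x(srgb_pixels):
        """Halve an image in linear light. Averaging gamma-encoded values
        directly darkens fine high-contrast detail, which is the artifact
        gamma-naive resizing produces.
        srgb_pixels: float (H, W, 3) array in [0, 1], H and W even."""
        lin = srgb_to_linear(srgb_pixels)
        h, w = lin.shape[:2]
        blocks = lin.reshape(h // 2, 2, w // 2, 2, 3)
        return linear_to_srgb(blocks.mean(axis=(1, 3)))
    ```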

    <p>Since most displays and probably at least some printers are expecting gamma encoded data, it would still need to be in a gamma space for final output. As long as video bandwidth is still at a premium, I do not see displays moving to linear encoding, either.</p>

  8. <p>A lab could produce good results from Adobe RGB images <em>if their process was set up for it</em>. If not, the results would probably look undersaturated.</p>

    <p>The most important thing for getting good prints back is to give the lab whatever format they ask for. That might reasonably be sRGB, Adobe RGB, or their own printer’s color space.</p>


    <ol>

    <li>I am afraid I cannot specifically recommend a product. For starters, you could not be certain that their process would not change even if you did come up with a profile. If you still wished to attempt it, you would probably need a spectrophotometer, and software designed to build a profile based on spectrophotometer readings of a printed test chart.</li>

    <li>This is not how monitor profiles are meant to be used. The intended use is so that images in any color space can be displayed as close to correctly as possible on your monitor. You are not supposed to change your monitor profile for different purposes (with the possible exceptions of white balance or brightness, but even then I doubt many people go to the trouble of using multiple profiles). To get the effect you are looking for, the intended solution is to soft proof to the printer profiles to see roughly what they would look like after printing.</li>

    <li>If you print using the correct profile for your printer and the lab prints using the correct profile for their printer, you should get at least very roughly the same colors and brightness. Gamut would probably match if the original image were sRGB or if the printers had similar gamuts. Depending on what rendering intent was used during printing, the white balance may vary a little with different papers and printers. White balance varies with lighting anyway, and does not by itself indicate either one is wrong, only different. In theory you could even use one printer to soft proof another printer, but in practice I do not know if anybody does that.</li>

    </ol>

    <p>The ColorChecker and ColorChecker Passport software seem pretty good for correcting the output of the camera, but have nothing to do with the monitor matching the prints or the prints matching each other.</p>

  9. <p>The native resolution of the Epson 3800 print driver is 360 ppi. It might even honor the “finest detail” setting for color images, which would raise the native resolution to 720 ppi.</p>

    <p>It is very likely that the detail difference is real between 200 ppi and 360 ppi printed on an Epson 3800. The variation in when people decide the difference becomes visible is why I listed the wide range of 150 ppi to 600 ppi.</p>

  10. <p>I am afraid there is no single answer as to how large you can print. A compact digital camera could perhaps reasonably be used to print a billboard.</p>

    <p>That said, one of many possible standards for high quality images would be 300 pixels per inch. That would be roughly 18.7x12.5″, well within the capabilities of your printer. Depending on the lens used, close inspection might show quality issues in the corners even at that size. I expect most people would put the limit for prints that stand up to close inspection somewhere between half and twice that size.</p>
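
    <p>The arithmetic is just pixels divided by ppi. For example, assuming pixel dimensions of 5616x3744 (about 21 megapixels):</p>

    ```python
    # Print size at a given ppi is pixels divided by ppi.
    width_px, height_px = 5616, 3744   # assumed pixel dimensions
    ppi = 300
    print(f"{width_px / ppi:.1f} x {height_px / ppi:.1f} inches")
    # -> 18.7 x 12.5 inches
    ```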

    <p>For one point of comparison, Ansel Adams made many 8x10″ contact prints, where the negative was the same size as the print with no enlargement. The less you feel the need to match that level of quality, the larger you can print.</p>

  11. <blockquote>

    <p>Adjustments you make to that image are independent of the color space.</p>

    </blockquote>

    <p>This is mostly true, but white balance is an exception since it relies on adjusting the ratio of the particular channels that were captured. Adjusting three different channels in the color corrected version does not have the same effect.</p>

    <blockquote>

    <p>Once you scan the film, the color channels are all the same, regardless of any differences between films.</p>

    </blockquote>

    <p>That is precisely the problem. Different films may contain different information about the colors in the actual subject of the photograph, but the scanner is only capable of capturing colors in its own color space, presumably followed by a conversion to a standard color space. By going through the scanner’s color space as an intermediate step, some information is lost, particularly since this transformation from film color space to scanner color space is nonlinear.</p>

    <p>The color transformation performed by the scanner to get to a standard color space may be mostly linear, but the downstream tools do not know what that transformation was, and therefore cannot undo it.</p>

  12. <p>If this is a Mac, pressing Command-Shift-3 saves a screenshot of the entire screen as a PNG file on the desktop, while Command-Control-Shift-3 copies the screenshot to the clipboard instead. Command-Shift-4 lets you capture only a portion of the screen by drawing a box, and pressing the spacebar after Command-Shift-4 lets you pick a particular window instead of drawing a box.</p>

    <p>If despite the Apple Cinema Display it is a PC, the Print Screen button would put a screenshot on the clipboard.</p>

  13. <blockquote>

    <p>Theoretically, white balance refers to the color temperature of a black body radiator in which all three channels are affected in a predictable manner.</p>

    </blockquote>

    <p>No, white balance has nothing directly to do with color temperature. All white balance refers to is the ratio of the different color channels. Ideally for a white object, after white balancing, the red, green and blue channels would all be equal. Using the color temperature as one of the white balance control inputs sometimes makes it easier to correct the white balance, but is not actually necessary.</p>
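
    <p>In code the definition is almost trivial. Here is a sketch that derives per-channel gains from a patch known to be neutral, with no color temperature anywhere:</p>

    ```python
    import numpy as np

    def white_balance(img, neutral_patch_rgb):
        """img: (..., 3) linear RGB array. neutral_patch_rgb: the average
        RGB of an area that should be gray. Scaling each channel so that
        patch comes out equal is all white balance does."""
        r, g, b = neutral_patch_rgb
        gains = np.array([g / r, 1.0, g / b])   # normalize to green
        return img * gains
    ```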

    <blockquote>

    <p><em>The white balance only alters the relative intensity of these three entire color channels, yes. Color correction does more than that, affecting different colors differently.</em></p>

    </blockquote>

    <p>Based on your response, I am not sure that my point there was clear. What I was trying to emphasize was that for a scan the red channel is typically sRGB red or Adobe RGB red or perhaps Prophoto RGB red, whereas for a digital raw file it is the camera’s red. There is a fundamental difference between the two, and converting from one color space to another color space may lose information.</p>

    <blockquote>

    <p>It is possible to retain "raw" film scans then make adjustments later.</p>

    </blockquote>

    <p>Of course it is, but to preserve all of the information contained in the film it is necessary to save the actual film. The scans of the film contain less information than the film does. It is the film itself that contains roughly the same amount of information as the digital raw file does, not the scan.</p>

    <p>Not only may it be better to correct white balance during scanning (when it may take place <em>before </em>color correction does) than to do so after scanning, but it may be better still to use filters to correct white balance when shooting film. Actual film could in theory be separated into its individual color channels, making the filters unnecessary, but it would be complicated to ensure this was done correctly.</p>

  14. <p>I am afraid I am not familiar enough with either NX2 or Elements to directly answer your questions. However, if you can post four images, I can tell you which of the two is accurate and perhaps take a guess at how to get the other one to be accurate as well:</p>

    <ol>

    <li>An image that does not match between NX2 and Elements</li>

    <li>A screenshot of the image as seen in NX2</li>

    <li>A screenshot of the image as seen in Elements</li>

    <li>Any image with your monitor profile embedded. The image itself does not matter at all; I am looking for the monitor profile. Under Mac OS X it is possible that the screenshots will already contain the monitor profile.</li>

    </ol>

    <p>It should then be possible to determine which of the two programs correctly transforms from the image’s color space into the monitor’s color space.</p>

  15. <blockquote>

    <p>What does the discussion on dynamic range claims between film and digital have to do with the OP's pointing out the abrupt behavior of LR's tools on his scans?</p>

    </blockquote>

    <p>It has nothing to do with his question about Lightroom's behavior, but may be relevant to his topic and question about whether film scans or digital raw are more ‘flexible’.</p>

    <p>I apologize for responding to the troll posts, though.</p>

  16. <blockquote>

    <p>DXO's scale are all elevated compared to DPREVIEW's.</p>

    </blockquote>

    <p>Dpreview's methodology for testing dynamic range does not appear to be very good. They use the brightness control when they test raw files (ACR Best: Exp. -1.10 EV, Blacks 0, Brightness 125, Contrast 0, Curve Linear). If ACR’s brightness control is the same as Lightroom’s, then it is a nonlinear transform. Color correction is probably also in effect. Neither of those two things seems conducive to accurate test results.</p>

    <p>By comparison, DxO claims to take the R, Gr, Gb and B channels directly from the raw file and analyze them separately. That seems like a more reasonable method for a dynamic range test than relying on Adobe Camera Raw without even zeroing out the brightness control.</p>

    <blockquote>

    <p>Perhaps you need to learn the tools your using better?</p>

    </blockquote>

    <p>Your initial post was rude, unhelpful and misspelled the word “you’re”.</p>

  17. <blockquote>

    <p>Far exceeding the much more expensive D3x, not likely - 9 -> <a rel="nofollow" href="http://www.dpreview.com/reviews/nikond3x/page21.asp" target="_blank">http://www.dpreview.com/reviews/nikond3x/page21.asp</a></p>

    </blockquote>

    <p>According to DxOMark, the dynamic range of the D7000 is very slightly better than that of the D3X at ISO 100. It is possible that Nikon is gaming the test conditions or that DxOMark has a poorly designed test, but without some evidence of either one I see no reason to dismiss the test results.</p>

    <p>The price of something is no guarantee that it exceeds cheaper items on every metric.</p>

  18. <blockquote>

    <p>That is some imagination to think that any DSLR can achieve 14 stops of dynamic range at any ISO. Please share which one has done so . . . ;-)</p>

    </blockquote>

    <p>The Nikon D7000 allegedly has 13.87 EV of dynamic range at ISO 100. I am willing to round that off to 14. http://www.dxomark.com/index.php/en/Camera-Sensor/All-tested-sensors/Nikon/D7000</p>
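
    <p>For context, figures like that come from an engineering definition: the base-2 log of the ratio between the saturation signal and the noise floor. A sketch, with assumed sensor numbers:</p>

    ```python
    import math

    full_well_electrons = 15000   # assumed saturation capacity
    read_noise_electrons = 1.0    # assumed read-noise floor
    dr_stops = math.log2(full_well_electrons / read_noise_electrons)
    print(f"{dr_stops:.2f} stops")   # -> 13.87 stops
    ```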

    <blockquote>

    <p>You also have this wrong as the film itself is the "RAW" material . . .</p>

    </blockquote>

    <p>I was describing the <em>scan </em>of the film, not the film itself.</p>

  19. <p>Steve’s examples are posted at such a low resolution that diffraction would not become visible until somewhere near f ⁄ 64.</p>

    <p>Diffraction blur does not appear suddenly at an exact aperture, but rather gradually. The aperture at which it appears will depend on one or more of the following:</p>

    <ul>

    <li>Pixel pitch of the sensor (or grain size of the film)</li>

    <li>Antialiasing filter</li>

    <li>Final output resolution</li>

    <li>Final output size</li>

    <li>Final output viewing distance</li>

    </ul>

    <p>Assuming 12 megapixels and being able to see every pixel, e.g. 100% zoom, diffraction blur should be visible at very roughly:</p>

    <ul>

    <li>f ⁄ 4 on a 1 ⁄ 1.8″ sensor compact</li>

    <li>f ⁄ 8 on 4 ⁄ 3″</li>

    <li>f ⁄ 11 on APS-C 1.5x or 1.6x crop</li>

    <li>f ⁄ 16 on 35 mm</li>

    <li>f ⁄ 32 on 6x4.5 cm</li>

    <li>f ⁄ 64 on 5x4″</li>

    </ul>

    <p>With more pixels, diffraction blur may be visible at larger apertures; with fewer pixels, diffraction blur may not be visible until smaller apertures. If one is not able to see all of the pixels, e.g. too far away, printed too small, then diffraction blur will depend on the smallest features that are visible instead of the size of a pixel.</p>
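
    <p>The apertures in the list above follow from the size of the Airy disk: each combination puts the disk at roughly two to three pixels wide, which is about where blur starts to show at 100% zoom. A sketch, with approximate pixel pitches for 12-megapixel sensors:</p>

    ```python
    WAVELENGTH_UM = 0.55   # green light, in micrometres

    def airy_diameter_px(f_number, pixel_pitch_um):
        """Airy disk diameter (to the first minimum) in pixels:
        d = 2.44 * wavelength * f-number."""
        return 2.44 * WAVELENGTH_UM * f_number / pixel_pitch_um

    # Approximate pixel pitches, assumed for illustration:
    for fmt, pitch_um, f in [("1/1.8-inch compact", 1.9, 4),
                             ("APS-C", 5.5, 11),
                             ("35 mm", 8.5, 16)]:
        print(f"{fmt}: {airy_diameter_px(f, pitch_um):.1f} px wide at f/{f}")
    ```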

    <p>To avoid visible diffraction blur, shoot at apertures larger than these. Exactly how much larger will depend on the factors listed above. The link above, http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm, seems like an excellent place to get more accurate numbers.</p>

  20. <blockquote>

    <p>However, whatever you are seeing on the screen consists of red, green and blue pixels. Color balance only alters the relative intensity of these colors. At that point, the conversion is already accomplished and a color space assigned.</p>

    </blockquote>

    <p>The white balance only alters the relative intensity of these three entire color channels, yes. Color correction does more than that, affecting different colors differently. With the scans, the color correction has already been done in the files. With the digital raw files the color correction has not been done yet, rather the software performs it before showing you the output and it can be redone from scratch if you change the color space.</p>
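
    <p>In matrix terms the distinction is compact: white balance is a diagonal (per-channel) scaling, while color correction has off-diagonal terms that mix channels. The numbers here are purely illustrative:</p>

    ```python
    import numpy as np

    pixel = np.array([0.40, 0.35, 0.20])   # one linear RGB pixel, made up

    # White balance: each output channel depends only on the same input
    # channel, so the matrix is diagonal.
    wb = np.diag([1.30, 1.00, 1.65])

    # Color correction: off-diagonal terms mix the channels, which is how
    # it can affect different colors differently.
    cc = np.array([[ 1.60, -0.45, -0.15],
                   [-0.30,  1.55, -0.25],
                   [ 0.05, -0.50,  1.45]])

    print(wb @ pixel)   # only the relative channel intensities change
    print(cc @ pixel)   # a change no per-channel scaling could replicate
    ```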

    <blockquote>

    <p>Dynamic range of capture of the latest generation of DSLRs is comparable to the best negative color film and four to five stops better than reversal film.</p>

    </blockquote>

    <p>I imagine that some negative film still does better, but granted they have gotten the read noise down so that some DSLRs can manage around 14 stops of dynamic range at their minimum ISO.</p>

    <blockquote>

    <p>I'ld like to know where he gets proof of his claims, just out of curiosity, not that I dispute what he said.</p>

    </blockquote>

    <p>For the details about Lightroom I am partly speculating and partly half remembering articles I had read about Lightroom using a varying combination of two different camera profiles to correct white balance. But it is a matter of fact that the digital raw files are in the camera’s native color space while the scans are not.</p>

  21. <blockquote>

    <p>If you were the one about to send a book to them. Would you just do every correction on s-RGB or would you soft proof to their profile and then send as S-RGB, even if the image does not look as good as when you corrected it on Adorama Pix profile?</p>

    </blockquote>

    <p>I would probably just send them the sRGB images and let them make the adjustments for their printers. They have both more information than I have and more experience with the specific printer. Unless I was dissatisfied with how a photo book turned out, there would be little to no incentive to spend my time to achieve the same results they offer at no extra cost.</p>

  22. <blockquote>

    <p>Although as previously mentioned the monitors might not display this Profile/color space here is the same image converted to Adorama Pix profile.</p>

    </blockquote>

    <p>It seems that the profile is enormous at 1.47 MB and is not attached, so unless people download the profile and assign it to the second image, the color data in that image is meaningless.</p>

  23. <blockquote>

    <p>For example, I adjust white balance and a few notches warmer bring on a relatively larger blast of yellow than when I adjust a DNG. Why is this? Are scans less flexible during post-production than raw digital files?<br /> And when should one adjust colour temperature, curves, and saturation would you say? Do most people do it during the scan?</p>

    </blockquote>

    <p>I think that the difference is that Lightroom changes its color correction for raw files depending on the white balance settings. The scans are already color corrected, so the white balance controls change the white balance and nothing else. Since the two cases will be running different code, it is also possible that the Lightroom developers just neglected to normalize the white balance controls to the same scale for both.</p>

    <p>In terms of white balance, scans are less flexible than digital raw files; see my post above. Negative film tends to have significantly greater dynamic range than digital sensors, though. Positive film tends to have only slightly greater dynamic range than digital sensors.</p>

    <p>If you can get white balance, curves and saturation close to what you want during the scan, that would probably be better for a 24-bit (8 bits per channel) scan. For 48-bit scans I would probably recommend getting only white balance close, and leaving a linear tone curve and unmodified saturation.</p>
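
    <p>The 24-bit versus 48-bit advice is about editing headroom. As a sketch of why: apply a strong brightening curve to shadow tones quantized at 8 and at 16 bits per channel, and count the distinct output levels that survive.</p>

    ```python
    import numpy as np

    def strong_curve(x):
        """An aggressive brightening curve applied after the scan."""
        return x ** 0.4

    shadows = np.linspace(0.0, 0.1, 1000)          # darkest 10% of the range
    at_8bit = np.round(shadows * 255) / 255        # as an 8-bit scan stores it
    at_16bit = np.round(shadows * 65535) / 65535   # as a 16-bit scan stores it

    # Distinct 8-bit output levels after the curve: the 8-bit source
    # posterizes badly, while the 16-bit source barely loses anything.
    print(len(np.unique(np.round(strong_curve(at_8bit) * 255))))    # ~27
    print(len(np.unique(np.round(strong_curve(at_16bit) * 255))))   # ~103
    ```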
