Posts posted by ethervizion

  1. <p>You are misunderstanding the basic concept of layers. You're thinking of them as additive processing steps (Lightroom works like this, but it's not called layers). In reality, layers are like layers of transparency film (like the old days of overhead projectors). The reason you only see your layers individually is that they have no transparency! If you take one layer which is completely opaque and lay it on top of another, you will only see the top layer.<br>

    As mentioned, the basic editing process that you are following is best achieved with adjustment layers. But, even in Photoshop, these only go so far. You're better off doing non-destructive editing in a program like Lightroom.<br>

    In summary, traditional layers are really only useful if they have some transparency to show what's beneath them. For example, if you only want to selectively sharpen, you could make a copy of the layer, sharpen the one below, and then "erase" (make transparent) the portions of the upper layer where you want to reveal the sharpened image content from the lower layer. If this type of selective editing is not what you're trying to achieve, then essentially layers are not the right tool for what you want to do.</p>
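To make the transparency-film analogy concrete: opacity in layer compositing is just the standard "over" blend. A minimal Python/numpy sketch (hypothetical single-pixel layers, not Photoshop's actual internals):

```python
import numpy as np

# Two single-pixel "layers": top is fully opaque red, bottom is blue.
top = np.array([1.0, 0.0, 0.0])      # RGB of the upper layer
bottom = np.array([0.0, 0.0, 1.0])   # RGB of the lower layer

def composite(top_rgb, alpha, bottom_rgb):
    """Standard 'over' operator: alpha=1 hides the bottom layer entirely."""
    return alpha * top_rgb + (1.0 - alpha) * bottom_rgb

print(composite(top, 1.0, bottom))   # fully opaque top: only red shows
print(composite(top, 0.5, bottom))   # partially "erased": blend of both
```

With alpha = 1 the lower layer contributes nothing, which is exactly why a stack of opaque layers only ever shows the topmost one.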

  2. <p>For what it's worth, here's my philosophy on archiving and backup:<br /> 1) The reliability of <em>all</em> storage media should be considered a random variable. Regardless of manufacturers' claims, nobody will replace your data when it's lost.<br /> 2) Given 1), any archiving strategy should involve verification of the archive/backup at <em>regular, frequent intervals</em>.<br /> 3) Verifying a large collection of CDs/DVDs is impractical.<br /> 4) Given all of the above, my archive/backup strategy involves two sets of hard drives: i) one set running in a dedicated server (sitting next to my regular desktop), upon which daily incremental backups are performed (additionally, the HDs in the server are mirrored using RAID-1 so that I can easily recover from the failure of one of the HDs) -- this server can run diagnostics at a regular interval so I know if there's any data degradation; ii) a set of external HDs which I store offsite and do full backups to once a month.<br /> 5) If I were more paranoid than I already am, I would do the offsite backup once a week.</p>
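The verification in point 2) can be as simple as a checksum manifest that is rebuilt and compared at each interval. A rough Python sketch (SHA-256 manifest; the function names and layout are my own, not any particular backup tool):

```python
import hashlib
import os

def checksum(path, chunk=1 << 20):
    """SHA-256 of a file, read in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    """Map every file under root to its digest; store this with the archive."""
    return {p: checksum(p)
            for dirpath, _, names in os.walk(root)
            for p in (os.path.join(dirpath, n) for n in names)}

def verify(manifest):
    """Re-hash every file and report any that changed or vanished."""
    return [p for p, digest in manifest.items()
            if not os.path.exists(p) or checksum(p) != digest]
```

Run `verify` on your schedule (daily on the server, monthly on the offsite drives) and any silent degradation shows up as a non-empty list.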
  3. <p>If you're shooting RAW, then the in-camera setting is nearly irrelevant (e.g., it affects the in-camera histogram, but not much else since the image itself has not been rendered into a colour space yet). In terms of output from LR, it depends on what you're outputting for. For web, go sRGB for greater compatibility. For printing, it depends on the lab -- use a good lab and find out what they want as input.</p>
  4. <p>It is inadvisable to output ProPhoto JPEGs. ProPhoto is a wide-gamut colour space, and JPEG only supports 8 bits per channel. Spreading such a wide gamut over an 8-bit representation may produce visible banding in tonal gradations. Furthermore, outputting a JPEG that will ultimately have its colour space converted is unwise. In general, JPEG should only be used as a final output.<br>

    Regarding your reference to Trey Ratcliff's method, it's not a big deal to use the JPEG as input to HDR processing because you will be combining multiple images which compensates for the limitations of JPEG.<br>

    Suggestion: when you output from Photoshop, save it as a 16-bit ProPhoto TIFF file. This will be kept as your "master" which you can print from. For web output, convert to sRGB, then 8-bit, then output JPEG. If you don't want the storage space requirements of a 16-bit TIFF, then you should probably convert to AdobeRGB or sRGB before final storage in some 8-bit format (e.g., TIFF or JPEG).</p>
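To see why a wide gamut squeezed into 8 bits invites banding, compare how many distinct code values a narrow tonal range receives at 8 vs. 16 bits. A small Python illustration (the 30% range is an arbitrary stand-in for ordinary scene colours occupying a small slice of a very wide gamut):

```python
import numpy as np

# A smooth gradient covering only 30% of the encoding range, as happens when
# ordinary scene colours occupy a small slice of a wide gamut like ProPhoto.
gradient = np.linspace(0.0, 0.3, 10000)

levels_8bit = np.unique(np.round(gradient * 255)).size    # JPEG-style 8 bit
levels_16bit = np.unique(np.round(gradient * 65535)).size  # 16-bit TIFF

print(levels_8bit, levels_16bit)
```

At 8 bits the whole gradient collapses onto a few dozen distinct levels, which is where visible banding comes from; at 16 bits every sample stays distinct.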

  5. <p>Ellis probably got all the points, but I would just summarize to say that film and digital sensors are just two different recording media. Neither is "real"; each has its own unique characteristics. Film is analog, but still "samples" the light by way of discrete grain; it does not have infinite spatial resolution. Nor does it have infinite gamut and dynamic range. A "real" raw photo would make a recording of every single photon that entered the camera lens -- obviously an impossibility.</p>
  6. <p>When you ask "can we," do you mean "is it technically possible," or "is there currently software that does this?"<br /> <br /> If you mean the former, then the answer is yes, it is technically possible. The algorithms typically used to match images and stitch the edges would have to be translated into the mosaic (raw) domain. There is more and more interest in doing processing in the mosaic domain, and this application is a pretty good idea.<br /> <br /> If you mean the latter, then the answer is that I don't think there is currently any software that does this.<br /> <br /> Note to people who say that raw files are not images yet: this is not exactly true. They are not full RGB images, but they are (typically) RGB mosaic images (sometimes they are not RGB, but they are almost always mosaic). A lot of processing can be done directly on this data before de-mosaicing to a full RGB image (and this is how raw processors such as ACR and Aperture work).</p>
  7. <p>@Tim Lookingbill: you're looking at the wrong spec. This describes the coded (compressed) data format. Like almost all modern lossy coding schemes, the algorithm operates in the luminance/chrominance space (like YUV or YCbCr).</p>

    <p>What you are looking for relates to the container format. While I have not looked at the container format spec., I'm sure that it supports specifying an arbitrary ICC colour profile, likely via an EXIF tag. This should be a non-issue.</p>

    <p>What I'd be more interested to know is whether it supports 16 bpp, which would give it a wider scope of usage compared to JPEG. This is where JPEG2000 excels, but it never got much traction (I suspect mostly because browsers didn't support it).</p>

  8. <p>For JPEG, it is technically impossible to know the exact size the file will be until the compression has actually been performed. Programs that tell you what it will be before you've saved the file are actually performing the compression internally (without saving) to see what it will be. This is why, in Photoshop Save for Web, after you change the quality, there is a processing delay before you get the result. If you're looking for a feature that allows you to enter a file size and then produces it directly (without internally iterating through a series of quality values and checking the result, which would be very slow), it will never happen. It could only be done in approximation, and sometimes that approximation will be way off (depending on image content).<br>

    As an aside, this feature is actually possible with JPEG2000, and other wavelet-based embedded codecs. They produce progressive code (compressed data) streams that can be truncated to the size that you desire.</p>
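What such programs do internally can be sketched as a search over quality settings, performing a full compression at each probe. A Python sketch using Pillow (the function name and target size are hypothetical; this is an illustration of the iterate-and-measure approach described above, not any product's actual code):

```python
from io import BytesIO

from PIL import Image  # Pillow; third-party


def jpeg_at_size(img, target_bytes, lo=1, hi=95):
    """Binary-search JPEG quality to approximate a target file size.

    As noted above, each probe must perform a full compression pass,
    because the resulting size cannot be known in advance.
    """
    best = None
    while lo <= hi:
        q = (lo + hi) // 2
        buf = BytesIO()
        img.save(buf, format="JPEG", quality=q)
        if buf.tell() <= target_bytes:
            best, lo = buf.getvalue(), q + 1  # fits: try higher quality
        else:
            hi = q - 1                        # too big: lower quality
    return best  # None if even the lowest quality exceeds the target
```

This only gets near the target, never exactly on it, and the number of probes (here at most 7) is the price of JPEG's lack of an embedded, truncatable stream.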

  9. <p>It's important to determine whether the moiré is coming from the original image (due to the limited resolution of the sensor and the high frequency pattern in the shirt and/or the demosaicing algorithm) or due to some poorly controlled resizing during processing. I would suggest taking the original raw photo in Aperture and exporting a full res JPEG, without any sharpening or other editing (i.e., whatever Aperture defaults to -- sorry, I don't know Aperture, so I can't be more specific). Next, view the JPEG at 100%. If the moiré is there, then there's not a whole lot you can do except: a) choose a different raw processing software (i.e., one that uses a different demosaicing algorithm); b) remove the moiré in post-processing.<br>

    If the moiré does not appear in the full res JPEG, then it is being introduced during your processing. You should look at where in the process you are doing any resizing and sharpening.</p>


  10. <p>I find it very unethical that you would capture and disseminate the photo of someone who wished not to be captured. Regardless of what the law says you can do (the law has little to do with ethics), don't you find something wrong with using a photo where your main subject did not approve? It's one thing if the subjects are unaware (though there are still strong ethical concerns in those cases), but in this case the subject was aware and objected. I'm not one to go around trying to give people lessons on morals, but I do find this rather shameful -- your "right" to take his photo does not supersede his right to privacy. Whether you believe in karma or not, you might change your view if you suddenly end up being the objecting subject one day. I hope that someone has a camera handy the next time you're in a compromising situation...</p>
  11. <p>If the monitor has two inputs (e.g., DVI and VGA), you could easily use the same monitor on both computers. Use one computer on one input, and the other computer on the other input, and select the input on the monitor as needed. I do this, and use Input Director (free) to use the same mouse and keyboard (the computers must be networked, as you plan to do). Works very well.</p>
  12. <p>Correct. It will generally depend on the "sophistication" of the receiving party. If you're not sure about this and don't know what software they're using, then it may be best to apply the rotation to the image data and resave.<br>

    Note, this only applies to non-raw formats (JPEG, TIFF, etc.). For raw, you'll never apply the rotation directly to the image data (the image data always remains unaltered) -- just to an exported version (JPEG, TIFF, etc.). But, all raw viewers I'm aware of will recognize the rotation EXIF tag and display the image with correct rotation.</p>

  13. <p>The correct rotation is simply an EXIF tag. Since the image data itself is not rotated, in order to view the correct rotation, you need to use viewing software that recognizes the EXIF tag and displays the correct rotation. Whatever you're using to view the images on the PC is not doing this. Either use different viewing software on the PC that does utilize this EXIF tag, or use software that will apply the rotation to the image data and resave the image. This will allow you to view the images with the correct rotation using any software. However, it has the drawback of altering the original images.</p>
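If you'd rather script the "apply the rotation and resave" step than depend on viewer support, Pillow can bake the Orientation tag into the pixel data. A sketch (file paths are hypothetical; note that, as said above, this alters the image file):

```python
from PIL import Image, ImageOps  # Pillow; third-party


def bake_rotation(src_path, dst_path):
    """Apply the EXIF Orientation tag to the pixel data and resave,
    so any viewer (EXIF-aware or not) displays the image upright."""
    with Image.open(src_path) as img:
        # exif_transpose rotates/flips per the Orientation tag (0x0112)
        # and removes the tag from the result to avoid double rotation.
        upright = ImageOps.exif_transpose(img)
        upright.save(dst_path)
```

Writing to a separate `dst_path` keeps the original untouched, which sidesteps the drawback mentioned above of altering the original images.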
  14. <p>What Brett and Mirek said. Especially about being wary about the comparison between sRGB and Adobe RGB. Simply put, Adobe RGB can represent some "extreme" colours that sRGB can't. But, unless you have a wide gamut monitor, your monitor cannot display those colours anyway. In other words, if you convert a given image back and forth between sRGB and Adobe RGB, you're likely to see no difference whatsoever on your monitor (or at least a very minor difference). Now, this doesn't mean that colour space doesn't matter, but it just means that you're probably not going to encounter too many scenarios where it's a problem at this stage in the game.<br>

    Simple workflow suggestion:<br>

    1. Shoot in raw and don't worry about the in-camera selection of colour space. As noted, this setting will affect the histogram, but unless you're very closely relying on this during shooting, then it won't matter.<br>

    2. Do your editing in a raw processor like Lightroom or Adobe Camera Raw. At this point, you still don't care about colour space. These raw processors operate internally on a wide gamut space with 16-bits per channel. In other words, the software is ensuring that you're operating in a space that can represent any possible colour you care about (whether your monitor can display it is another question).<br>

    3. For final output:<br>

    a) Web: set the output colour space to sRGB -- this will give you the best chance of others seeing approximately the correct colour (at least better than any other colour space choice).<br>

    b) Print: it depends on the output device and the process. If a service is doing the printing, see what they want. If they don't know, then pick a new service or just use sRGB. If you want a representation that is most flexible without any loss, then you could do 16-bit ProPhoto RGB (but only if it's TIFF). If it will be a JPEG output, then it must be 8-bit (i.e., limited precision) and you should use a colour space that most closely matches the output device. But again, if you're not sure if the process is fully colour managed, then you should generally fall back to sRGB.<br>

    And to answer your original question: your editing process will not be any different whether you're working in sRGB or Adobe RGB.</p>
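The round-trip claim above can be checked numerically: converting an in-gamut colour from sRGB to Adobe RGB and back through the shared XYZ space returns essentially the same value. A Python/numpy sketch on linear-light values (the matrices are rounded published figures; gamma encoding is omitted for brevity):

```python
import numpy as np

# Approximate linear-light RGB -> XYZ matrices (D65 white point).
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
ADOBE_TO_XYZ = np.array([[0.5767, 0.1856, 0.1882],
                         [0.2974, 0.6274, 0.0753],
                         [0.0270, 0.0707, 0.9911]])

def srgb_to_adobe(rgb):
    """Convert linear sRGB to linear Adobe RGB via the shared XYZ space."""
    return np.linalg.inv(ADOBE_TO_XYZ) @ (SRGB_TO_XYZ @ rgb)

def adobe_to_srgb(rgb):
    """Inverse conversion, back through XYZ."""
    return np.linalg.inv(SRGB_TO_XYZ) @ (ADOBE_TO_XYZ @ rgb)

# An unremarkable in-gamut colour survives the round trip unchanged,
# which is why converting back and forth shows no visible difference.
c = np.array([0.5, 0.3, 0.2])
round_trip = adobe_to_srgb(srgb_to_adobe(c))
```

The difference only appears for "extreme" colours near the gamut boundary, where Adobe RGB values map outside the [0, 1] range in sRGB and must be clipped -- exactly the scenario described above.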

  15. <p>As Brett noted, if you shot raw, it wouldn't have made any difference. And as Mark noted, it probably still wouldn't make a difference in JPEG. The real question is, what difference does it make to YOU? Being more professional is not about selecting settings that some "pro" says you should use. Being professional is about taking full technical and creative control over the process. Not to be too harsh, but not knowing why you should be doing something one way or another means that you're not in control.</p>
  16. <p>This is actually Moiré, plus the effect of the Bayer pattern colour filter used in the sensor, plus the raw processor (demosaicing algorithm). A raw image only has one colour component (R, G, or B) per pixel location and the raw processor (in camera or on computer) interpolates (demosaics) the remaining colour components. When you have a high frequency pattern like that, the interpolation produces false colours due to being forced to use non-similar pixels as input from the surrounding region. However, some demosaicing algorithms are better than others in mitigating this effect (as well as appropriate choice of additional processing components and parameters, such as sharpening). I'm wondering what raw processor you used and if you tried others? While it seems that Patrick has been able to mitigate the effect using post-processing, it would be interesting if you can reduce or remove it from the beginning using an appropriate raw processor.</p>
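The false-colour mechanism can be simulated in a few lines: sample a neutral high-frequency stripe pattern through an RGGB mosaic, and the colour channels disagree even though the scene is grey. A toy Python/numpy sketch (a deliberately crude global estimate, not any camera's actual demosaicing algorithm):

```python
import numpy as np

# A neutral grey scene: 1-pixel vertical stripes, finer than the sensor's
# colour sampling pitch (hypothetical 8x8 crop).
h, w = 8, 8
scene = np.tile([1.0, 0.0], (h, w // 2))

# RGGB Bayer masks: each photosite records only one colour channel.
r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
g_mask = ~(r_mask | b_mask)

# The scene is neutral, so every photosite records the scene luminance.
mosaic = scene

# Crude "demosaic": estimate each channel from the mean of its own samples.
# Real algorithms interpolate locally, but face the same aliasing problem.
r = mosaic[r_mask].mean()   # red sites land only on bright stripes
g = mosaic[g_mask].mean()   # green sites see both stripe phases
b = mosaic[b_mask].mean()   # blue sites land only on dark stripes
```

A perfect reconstruction of a grey scene would give r == g == b; here the channels come out wildly unequal, which is the false colour you see in the shirt.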
  17. <p>Dave: "Release Candidate" simply refers to the final stage of development, before it is declared the final release version. It is still a beta product. It may refer to a full version or an update/upgrade. In this case, it is the v.3.2 update -- you must first have v.3 installed, and then install this update. But, unless you need a specific feature, you should wait for the final release and stick with v.3 for now.</p>
  18. <p>Others will likely have more to say on this, but I'll start off by saying that printing digital B&W is a whole science and art on its own. There are no easy answers, but note that even though your file may have only had gray information, you're working in an RGB system (JPEG represents the data in RGB, and the printer will involve an RGB/CMYK process). From my limited experience, using such a system will almost always produce a colour cast of some sort. If this is not tolerable, you need to use a specialized process, which likely involves black-only inks or some other specialized ink system.</p>