
<i>"The NEF from the scanner is still baked. It's not raw."</i><br><br>You're right and you're wrong, Andrew. 'Raw' is such a hype. Everything is 'baked'.<br>'Raw' is what you get out of a machine without you having had a chance yet to mess with it. No more. That doesn't mean it is in any way untouched or unprocessed.<br>When the machine applies a pattern that has yet to be removed for the image to be 'enjoyable', raw may also mean not yet fit for consumption.<br>So yes: the NEF you get out of the scanner is baked, in that it is already fit for consumption (there was no need to remove a pattern), and also because it is the result of whatever the machine itself did that had an effect on the image. And it is indeed 'raw' as well, because it is what the machine produces without further ado.<br><br>You are using the word as if it is a synonym for 'the best possible (as far as that scope mentioned earlier is concerned) data set'. It can be better than the result of doing things to the original data, yes. And there is a big difference between, say, a 16-bit data set and an 8-bit data set that was put through JPEG torture. White balancing raw only works better if it contains a 'wider' data set than another 'processed' file format. But that's not an inherent quality of 'raw' vs 'baked', i.e. there is no reason why only 'raw' files can hold that wider data set.<br>It would be rather pointless to have files that have yet to be turned into something fit for consumption unless the result of 'cooking' is in some way less than what the raw data set holds in potential. The benefit of such files then should be in the choice they offer in how we want the unavoidable limitation to work out. So we can perhaps safely assume that there is a (big? I don't know how big) difference between the original (a much better word than 'raw') and changed data. Else no one in his or her right mind would bother with 'raw' files.
But it is not an essential difference between 'raw' and 'processed' data, but rather a difference between the data set one type of device can create and the data set other devices can display.<br><br>Parametric editing indeed isn't limited to either raw or baked data. It's no more than a fancy name for storing the original data set as well as everything you did to mess it up.<br><br>Or, in short: samples of a new jargon, coined by a new community, to say simple things in a confusing way. Nothing new under the sun. ;-)<br><br>Anyway, the Nikon Scan software does produce 'raw' files that allow parametric editing.<br>I still don't know why I would ever want to make use of that.


I'm all for scanning only once, but there's the issue of the resolution of the source material. I understand scanning at a higher resolution than the source, for a multitude of reasons, but if (if) the source is a print with 300dpi, then I don't see the benefit of scanning at higher than 600. There won't be more detail nor less aliasing.

 

Scanning according to the size of the desired output is what was traditionally done with source material that had a greater resolution than the scanner could capture. Here it's the opposite.


<blockquote>

<p>'Raw' is such a hype.</p>

</blockquote>

<p>Agreed. So let's be clear.<br>

Non-rendered raw camera data, as I've shown, is vastly different data, and the resulting image one produces uses a vastly different method than a raw scan (set at some undefined setting that we are told is raw), which is baked RGB values.<br>

Whether you use instructions to alter baked RGB values or instructions to render new RGB data, those two processes are not the same nor equal. Again, back to the JPEG vs. camera raw editing abilities with respect to white balance, due in large part to the difference between a baked WB and one that doesn't yet exist, to be rendered once defined!</p>

Author "Color Management for Photographers" & "Photoshop CC Color Management" (pluralsight.com)


<blockquote>

<p>but if (if) the source is a print with 300dpi, then I don't see the benefit of scanning at higher than 600.</p>

</blockquote>

<p>Those values are kind of meaningless. You've got an original of a fixed size and you've got a max number of real pixels you can scan along that length. How you divide up the values in inches or any other metric can take place later. Give me all the real data if the goal is an archive of what I just scanned. </p>



<i>"Agreed. So let's be clear."</i><br><br>The difference you are talking about is the difference between a Bayer pattern data set and a data set having another pattern. Colour values arranged in different ways. Of course you do things differently when you are dealing with different 'original material'. It is not a reason, there is no reason, to reserve the word 'raw', or 'unprocessed', for either one of them. A raw scan is as raw, or baked, as the raw camera data.<br><br>That JPEG vs camera raw thing? Yes, if we start out assuming that one is already messed up, the other still waiting to be messed with, there is a difference between one being already messed up, the other still waiting to be messed with. ;-) It's still not a raw vs processed thing per se.

The print is N inches wide and in each of those inches there are at most 300 discernible points, whether you are looking with your bare eyes or a microscope. That's not meaningless. When the CD spec was defined with 44.1 kHz, that wasn't meaningless. When 'archival' digital was defined with 96 and later on 192 kHz, that wasn't meaningless either. All those values were based on what the usage would be and/or how much detail was there in the original, not on how much was technologically feasible.

One can never capture analog data perfectly in a digital medium, so one does what makes sense: either get enough data to fill some need, or get as much data as is meaningful (in this case, the resolving power of the paper). Beyond that point, there's no benefit to a higher sampling frequency. You might as well increase the size in 'Photoshop'.


<blockquote>

<p>The difference you are talking about is the difference between a Bayer pattern data set and a data set having another pattern.</p>

</blockquote>

<p>One is a data source to <strong>produce</strong> RGB pixels. It's read-only, yet to be baked (rendered). The other <strong>is</strong> <strong>baked</strong>, with an RGB set of values and a color appearance. Big, big difference in data and processing!</p>

<blockquote>

<p>That JPEG vs camera raw thing? Yes, if we start out assuming that one is already messed up, the other still waiting to be messed with, there is a difference between one being already messed up, the other still waiting to be messed with. ;-)<br /></p>

</blockquote>

<p>Rendered RGB data, JPEG or not, is rendered and messed up. Raw, meaning real raw from a camera sensor, is neither. Again, big difference, and you continue to ignore the role WB plays here: try fixing the wrong WB in a TIFF or JPEG versus a raw camera file, where there is as yet no actual baked WB (it's just a piece of metadata).</p>

<p>Do this: set your camera to shoot raw+JPEG. Go inside with a tungsten-like illuminant, and set the camera for daylight WB. Capture both. Now fix both any way you wish (for raw, there's only one way: render it using a raw converter). Get the point?</p>
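The raw+JPEG experiment above can be reduced to arithmetic. The sketch below is a toy model, not any camera's actual pipeline; the function names, values, and gains are invented for illustration. The point it shows: raw WB is a gain applied at render time to untouched linear data, while a baked file has already been encoded and clipped.

```python
# Toy illustration (not any camera's real pipeline): white balance as a
# per-channel gain applied to linear sensor data vs. already-baked 8-bit values.

def wb_on_raw(linear_value, gain):
    # Raw WB: the stored sensor value is untouched; the gain is just
    # metadata applied at render time, so it can be redone losslessly.
    return linear_value * gain

def wb_on_baked(encoded_value, gain):
    # Baked WB: the 8-bit value was already scaled and clipped, so
    # re-scaling it cannot recover detail lost above the clip point.
    return min(255, round(encoded_value * gain))

# A highlight the tungsten-lit channel recorded fine in the raw file:
raw_blue = 0.9                      # linear, unclipped
print(wb_on_raw(raw_blue, 0.5))     # rendered at 0.45, full precision kept

# The same highlight after the in-camera daylight WB pushed red to clip:
jpeg_red = 255                      # clipped in the baked file
print(wb_on_baked(jpeg_red, 0.5))   # 128, but the detail above the clip is gone
```

The baked value saturates at 255, so the highlight detail the raw file still holds simply isn't there to recover in the JPEG.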



<blockquote>

<p>... get enough data to fill some need,</p>

</blockquote>

<p>The need is undefined! The output device and its technology, today or in the future, is unknown. So you scan at the <strong>highest optical resolution</strong> you can and decide what's necessary in the future. That's the workflow of scan once, use many! Unless of course you subscribe to the scan many, use many workflow, which is fine but, for many, very counterproductive. Especially if there's a lot of work to be done editing the image <strong>after</strong> the scan!</p>



<blockquote>

<p>Why would anyone want an image, unspoiled by adjustments unless it produced the best quality image? <br /> How is an ugly 16-bit image a better start than a better looking 16-bit image?</p>

</blockquote>

<p>Some transforms are nonlinear and therefore can be irreversible: many color space conversions, for example, or clipping. A slightly dim scan in the scanner’s color space may look terrible compared to a brighter scan converted to AdobeRGB, but the former retains all data while the latter throws some out.</p>

<p>Applying tone curves adds quantization noise. Those are also irreversible if you don't know exactly what the tone curve was. At 16-bit, the quantization may not matter, but not knowing what the tone curve was in order to get back to linear data can be critically important. Even for the <em>de facto</em> standard of sRGB, the tone curve might be true sRGB with a linear ramp followed by a 2.4 gamma curve, or it might be a 2.2 gamma curve (which it isn’t supposed to be, but lots of hardware and software plays fast and loose), with significant differences in the shadows between the two.</p>
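The shadow difference between the two curves is easy to quantify. A minimal sketch, assuming the IEC 61966-2-1 piecewise sRGB definition versus a plain 2.2 power law, decoding the same deep-shadow 8-bit codes back to linear light:

```python
# Decode a few shadow codes with the true piecewise sRGB curve
# (linear ramp, then a 2.4-exponent segment) and with a plain 2.2 gamma.

def srgb_to_linear(v):              # v in 0..1, IEC 61966-2-1 definition
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(v):           # the "fast and loose" 2.2 power law
    return v ** 2.2

for code in (5, 10, 20):            # deep-shadow 8-bit codes
    v = code / 255
    s, g = srgb_to_linear(v), gamma22_to_linear(v)
    print(f"code {code:3d}: sRGB {s:.6f}  gamma2.2 {g:.6f}  ratio {s / g:.2f}")
```

For the darkest codes the two decodings differ by a large factor, which is exactly why not knowing which curve was applied matters most in the shadows.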

<p>As for scanning resolution, I think I would recommend a minimum of 1200dpi with the assumption that the photos are 4″x6″ prints from 35mm film. Some films have some modulation transfer function remaining out past 100 line pairs per mm. If you have different sized prints, different sized film, or more information than I do about the MTF of your film and/or lenses, you might adjust that minimum scanning resolution somewhat. Scanning at the scanner’s native optical resolution would be preferred if you have the storage space and spare processing time.</p>

<p>Finally, note that most of the time when people downsample images it is done incorrectly, and will darken some bright edges and small highlights, so if you plan to scan at a very high resolution and then reduce the resolution later, you probably wish to take this into account and rescale them in linear color space.</p>
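The darkening effect can be shown with just two pixels across a sharp edge. This sketch assumes a simple 2.2 power law as the transfer curve (real images use the full sRGB curve and real resampling uses proper filters, but the principle is the same):

```python
# Averaging two gamma-encoded pixels directly vs. averaging them in
# linear light and re-encoding (the physically correct result).

GAMMA = 2.2                       # stand-in transfer curve for this example

def to_linear(v):
    return v ** GAMMA

def to_encoded(v):
    return v ** (1 / GAMMA)

bright, dark = 1.0, 0.1           # encoded values across a sharp edge

naive = (bright + dark) / 2                                   # wrong: gamma space
correct = to_encoded((to_linear(bright) + to_linear(dark)) / 2)  # linear light

print(f"naive: {naive:.3f}  linear-light: {correct:.3f}")
```

The naive average comes out noticeably darker than the linear-light one, which is the darkened-edges artifact described above.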

<p> </p>


<p>The discussion is progressing nicely where it concerns colour, but I feel we hit a wall some time back where it concerns resolution. It would be nice if we could get unstuck; this is a matter I'd be making use of as well (I have a number of prints to scan but haven't yet decided on the workflow).</p>

<p>1. I think it's been established by now that the <strong>purpose</strong> of the scanning is undefined. That's why it doesn't make much sense to say 'it depends on what you want to do with it'. That would be unavoidable if the available tools weren't enough to fill all possible needs. But there has been no evidence that they aren't. There is no point in using a higher sampling frequency than needed in order to capture all the detail. <strong>If</strong> you are getting all the detail, you don't need to scan again, no matter whether 10 years from now you have a printer that is 10 times better. And if you get a scanner that resolves 10 times more, will you then redo the scans because the first ones were not done at that resolution, even though the original resolution was enough to capture everything that was capturable?</p>

<p>2. We all know that film can go quite far in terms of line pairs per cm. But here we're talking about prints. Does anyone have hard or soft data on how many lpcm photo paper can achieve? That, <strong>as far as I can understand</strong>, is the only value of relevance. It doesn't quite matter what the original film was if the paper is the limiting factor. A Durst Lambda, <strong>if I'm not mistaken</strong>, prints up to 400 dpi. That would <strong>suggest</strong> that the maximum resolving power of paper is not more than that (otherwise there would be equipment able to print finer, <strong>is there any?</strong>). Per sampling theory, that gives 800 dpi as enough to capture all the detail in a Durst Lambda print, whatever its specifics. Is there a flaw in this reasoning? Do museums scan prints with 4800 dpi or more just because their scanners claim to have that resolution?</p>

<p>Aiming needlessly high is the first step towards quitting.</p>
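For what it's worth, the arithmetic in point 2 works out as follows. This is a sketch, taking the 400 dpi Durst Lambda figure and the 2x sampling factor from the post itself; both are the post's assumptions, not measured values:

```python
# Scan resolution per the thread's reasoning: sample at the print's own
# detail rate times a 2x safety factor, then see what that means in pixels.

def scan_ppi(print_detail_ppi, margin=2):
    # 2x margin over the print's resolving power, per the post's argument.
    return print_detail_ppi * margin

def scan_pixels(width_in, height_in, ppi):
    # Pixel dimensions of the resulting scan.
    return round(width_in * ppi), round(height_in * ppi)

ppi = scan_ppi(400)                  # Lambda-class print -> 800 ppi scan
print(ppi, scan_pixels(6, 4, ppi))   # a 6x4 inch print -> (4800, 3200) pixels
```

Anything beyond that, by this reasoning, adds file size without adding print detail.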


<blockquote>

<p>Why would anyone want an image, unspoiled by adjustments unless it produced the best quality image? <br />How is an ugly 16-bit image a better start than a better looking 16-bit image?<br>

----<br>

Some transforms are nonlinear and therefore can be irreversible: many color space conversions, for example, or clipping. A slightly dim scan in the scanner’s color space may look terrible compared to a brighter scan converted to AdobeRGB, but the former retains all data while the latter throws some out.</p>

</blockquote>

<p>You've raised some interesting points while not answering the question.<br>

Agreed about various transforms. In high bit, it's moot; that's why we have high-bit capture and editing. I don't know what you mean by a 'dim scan in the scanner's color space' and further. The scan either is dim, as you put it, or it's not. We're just talking about RGB values in a defined color space, so the numbers have a scale. If it's dim, why keep it dim if that's not the goal for representing the original from the scanner?<br>

The numbers and color spaces can be different and appear exactly the same in terms of color and tone. Yes, a linear scan <strong>without</strong> a profile defining that scale looks dark and ugly. I've got plenty of examples I could share showing that as soon as you <em>Assign</em> the proper profile, the dark image no longer appears dark (because <strong>it never was</strong>!). A lack of color management made it look dark. So I'm not clear on what you're saying and would love to see two low-res examples of this with properly tagged profiles.<br>

But again, the question goes unanswered:<br>

<em>Why would anyone want an image, unspoiled by adjustments unless it produced the best quality image? </em><br /><em>How is an ugly 16-bit image a better start than a better looking 16-bit image?</em><br>

<em><br /></em>You've got a scanner and an original, and presumably you want to match or improve the scanned version of the original. Presumably you're color managed such that the display, the scanned data, and the scanner driver or other app show numbers correctly. You can set the scanner to give you a proper appearance or you can do it somewhere else. Assuming you've done your homework and the scanner driver provides the tools you need (much like a raw converter you'd select for that kind of data), <strong>why</strong> produce anything in the scanner RGB color space, its native color space and gamut, that <strong>doesn't</strong> look as close to your goal as possible?<br>

Yes, color space conversions cause data loss, and we've accounted for that with high bit, but there's no reason to incur it if it's pointless, nor any other edit. You can't have anything but scanner RGB initially, and converting it to anything <strong>but</strong> the output color space is just another conversion. Leave the data in scanner RGB, tagged as such, in high bit: now you've got the most data and it at least looks as you desire. If you feel not altering it in the scanner software gives you more data but an ugly color appearance, you're going to edit the data to make it look good anyway, so it's moot. You can pay me with data loss now or later. There's no free lunch, so why not use the scanner driver and produce the best data at the get-go?</p>



<p>Sampling theory actually says that scanning the 400ppi print at 400ppi is sufficient, but some margin is a good idea, so 800ppi sounds like plenty unless the scanner’s anti-aliasing filter is terrible. Also note that the edges of each printed pixel might possibly contain spatial frequencies above even 800 line widths per inch, not that there is necessarily any point in capturing it.</p>

<p>I would expect the photo paper itself to be capable of higher resolution per mm (let alone per picture height) than the film. I am having difficulty finding a reasonable assumption for the modulation transfer function (MTF) of a typical enlarger lens, though. Without some known, low enlarger MTF, I would assume that the enlarger resolution per picture height will be on the order of the film resolution per picture height and therefore only lower the final resolution slightly.</p>

<p>The better the MTF of every step in the process is known, the more accurately digital sharpening could restore the image toward 100% MTF. That adds significant noise, though, and might also unavoidably add sharpening halos; I would have to think about the latter.</p>


<blockquote>

<p>I don't know what you mean by a 'dim scan in the scanner's color space' and further. The scan either is dim, as you put it, or it's not. We're just talking about RGB values in a defined color space, so the numbers have a scale. If it's dim, why keep it dim if that's not the goal for representing the original from the scanner?</p>

</blockquote>

<p>Dim meaning “definitely not clipped”. The closer it gets to 100% brightness, the higher the odds of clipping a pixel somewhere by mistake. It needn’t be very dim. The primary goal is not to clip it, since making it brighter is doable and unclipping it is not.</p>
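A toy model of that headroom argument, with invented 16-bit values and an illustrative 0.9 exposure factor (neither comes from any real scanner): the dim scan can be rescaled later with nothing lost, while the exactly-exposed scan loses its hot spot permanently.

```python
# Headroom vs. clipping with 16-bit codes. The scanner clips anything
# driven past full scale; a dim scan keeps everything below it.

FULL = 65535

def scan(true_signal, exposure):
    # Model the scan: scale by exposure, clip at full scale.
    return [min(FULL, round(v * exposure)) for v in true_signal]

highlights = [60000, 64000, 70000]        # one spot hotter than expected

dim = scan(highlights, 0.9)               # headroom kept: nothing clips
hot = scan(highlights, 1.0)               # white point set "exactly": hot spot clips

rescued = [round(v / 0.9) for v in dim]   # brighten later: detail intact
print(hot)      # -> [60000, 64000, 65535]: the hot spot is stuck at full scale
print(rescued)  # -> [60000, 64000, 70000]: all three values distinguishable
```

Brightening the dim scan is a lossless rescale at 16 bits; un-clipping the other one is impossible, which is the whole point of the design margin.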

<blockquote>

<p>The numbers and color spaces can be different and appear exactly the same in terms of color and tone. Yes, a linear scan <strong>without</strong> a profile defining that scale looks dark and ugly. I've got plenty of examples I could share showing that as soon as you <em>Assign</em> the proper profile, the dark image no longer appears dark (because <strong>it never was</strong>!). A lack of color management made it look dark. So I'm not clear on what you're saying and would love to see two low-res examples of this with properly tagged profiles.</p>

</blockquote>

<p>I understand this, but I highly doubt that every scanner in existence is even capable of tagging scanned images with its native color profile. A lot of them will only give you standard color spaces and/or won't tag them as such. You seem to agree that scans tagged with the scanner’s profile are better than scans converted to and tagged with some other profile?</p>

<blockquote>

<p>But again, the question goes unanswered: <br /><em>Why would anyone want an image, unspoiled by adjustments unless it produced the best quality image? </em><br /><em>How is an ugly 16-bit image a better start than a better looking 16-bit image?</em><br /><em><br /></em>You've got a scanner and an original, and presumably you want to match or improve the scanned version of the original. Presumably you're color managed such that the display, the scanned data, and the scanner driver or other app show numbers correctly. You can set the scanner to give you a proper appearance or you can do it somewhere else. Assuming you've done your homework and the scanner driver provides the tools you need (much like a raw converter you'd select for that kind of data), <strong>why</strong> produce anything in the scanner RGB color space, its native color space and gamut, that <strong>doesn't</strong> look as close to your goal as possible?</p>

</blockquote>

<p>I gave three examples. One was the color space, on which I believe we agree. Another was clipping, where you appear to disagree about precisely how dim the scan should be, but probably agree that it should not be clipped? No robust engineering design relies on something being exactly perfect; there always needs to be design margin.</p>

<blockquote>

<p>Yes, color space conversions cause data loss, and we've accounted for that with high bit, but there's no reason to incur it if it's pointless, nor any other edit. You can't have anything but scanner RGB initially, and converting it to anything <strong>but</strong> the output color space is just another conversion. Leave the data in scanner RGB, tagged as such, in high bit: now you've got the most data and it at least looks as you desire. If you feel not altering it in the scanner software gives you more data but an ugly color appearance, you're going to edit the data to make it look good anyway, so it's moot. You can pay me with data loss now or later. There's no free lunch, so why not use the scanner driver and produce the best data at the get-go?</p>

</blockquote>

<p>The thing is that the output color space may not be known at scan time, or multiple output color spaces may be needed for different applications. All I had been recommending with respect to color was leaving the color space as native, and I recognized that not all hardware or software has good built-in support for a color managed workflow.</p>

<p>Finally, my third example was tone curve. If available, I would want 16-bit (or slightly higher even, if available) <em>linear</em> data, similar to but not identical to what is available from a digital camera. Nonlinear tone curves are unnecessary until required by an output profile or used for an edit.</p>

<p>Edit: I suppose, as you pointed out, that there is nothing ugly about the linear native color space. That leaves the clipping, for which I maintain that it is safer to have the scan be too dim by an amount equal to the tolerance of its brightness than to set the white point exactly and risk clipping a hot spot somewhere.</p>

<p>TLDR: Clipping is usually bad.</p>


<blockquote>

<p>Dim meaning “definitely not clipped”. The closer it gets to 100% brightness, the higher the odds of clipping a pixel somewhere by mistake</p>

</blockquote>

<p>First time I've heard this (I started scanning in the very early '90s: Leaf 45 and 35, onto ScanMate and Howtek, Imacon, with stints of Nikon, Polaroid and Minolta too).<br>

I understand what clipping is; its view and control are seen all over. Be it in scanning, shooting or anything else, you never want to clip data you want to retain. Dim is a different story altogether! Further, there's saturation clipping. And there's shadow clipping, which is fair game in degree; it's based on how you wish to represent the image. More black clipping = less visible noise, if that's useful.</p>

<blockquote>

<p>I understand this, but I highly doubt that every scanner in existence is even capable of tagging scanned images with its native color profile.</p>

</blockquote>

<p>Then we should ignore it just as we'd ignore an uncalibrated display showing us a ProPhoto RGB image outside a color managed app! Most are able and should operate as I describe. <br>

<br>

I'm sorry, none of your examples clearly explain <strong>why</strong> one would produce undesirable appearing data from the scan stage and instead do it later. The only rational reason I can see doing this is the scanner software is very poor (your bad) and all data being equal at this point (because it is), you're going to do the heavily lifting on all those pixels in Photoshop or elsewhere when it <em>could</em> have been done at the scan stage. And that's still a far cry from non baked, raw camera data that has to be rendered! </p>

<blockquote>

<p>If available, I would want 16-bit (or slightly higher even, if available) <em>linear</em> data, similar to but not identical to what is available from a digital camera.</p>

</blockquote>

<p>You can get that in a decent, controlled scanner <strong>and</strong> it can look lovely, based on your scanner settings and its profile. You'll have to convert to some gamma-corrected space at some point, but high-bit, wide-gamut, linear scanner data with an embedded profile is fine, and the 'rawest' data thus far. And lastly, it's not anything like the data from a digital camera: it's true RGB data (not interpolated RGB), which is actually better!</p>



<blockquote>

<p>The thing is that the output color space may not be known at scan time, or multiple output color spaces may be needed for different applications.</p>

</blockquote>

<p>Doesn't matter, you've got scanner RGB. Or a Quasi-Device Independent RGB working space (ProPhoto RGB etc).</p>



<p>In re "large"<br>

I said</p>

<blockquote>

<p>especially if you save large enough files <strong><em>up to the limits of your scanner</em></strong> [emph added]</p>

</blockquote>

<p>I also indicated, more indirectly, that as much depended on the quality of the original print or slide [blood:turnip].</p>


<blockquote>

<p>Sampling theory actually says that scanning the 400ppi print at 400ppi is sufficient, but some margin is a good idea, so 800ppi sounds like plenty unless the scanner’s anti-aliasing filter is terrible.</p>

</blockquote>

<p>The statement makes no sense to me as stated. 400 ppi <strong>alone</strong> is meaningless; it's just a resolution tag.<br>

I've got a 35mm frame, which is 1.5 inches wide. You're saying 1.5 × 400 PPI, even 800 PPI, is sufficient data? No way.</p>



<blockquote>

<p>Doesn't matter, you've got scanner RGB. Or a Quasi-Device Independent RGB working space (ProPhoto RGB etc).</p>

</blockquote>

<p>Scanner RGB we have already established is fine. Converting to a working space is not ideal since one might wish to use different working spaces for different eventual output targets.<br>

If not for the ubiquitous nonlinear tone curves, I would prefer to always work directly in the output space, eliminating the possibility of having to deal with out-of-gamut colors.</p>

<p>And finally, please argue with somebody else. You asked why somebody might want an ugly image, I told you why I might want a dim image, and you both accused me of not answering the question and simultaneously implied that I gave a stupid answer to the question.</p>

<p>In my legally protected opinion, the Photo.net moderators should ban you from Photo.net. If my saying so violates the terms of service in any way, then they should ban <em>me</em> from Photo.net.</p>


<blockquote>

<p>Converting to a working space is not ideal since one might wish to use different working spaces for different eventual output targets.</p>

</blockquote>

<p>That's not how RGB working spaces work. They are by their very design output-agnostic, based on theoretical display behavior. If you want to avoid one more conversion, stick with Scanner RGB. If you want a well-behaved <em>editing space</em>, examine the Scanner RGB gamut in 3D and pick the RGB working space that matches closest (probably ProPhoto RGB, <strong>especially</strong> if you ever wish to use the ACR engine for processing).</p>

<blockquote>

<p>You asked why somebody might want an ugly image, I told you why I might want a dim image, you both accused me of not answering the question and simultaneously implied that I gave a stupid answer to the question.</p>

</blockquote>

<p>The answer was a bit dim ;-). Make the image appear any way you wish; it's not a discussion of aesthetics but workflow. You like dim, <strong>or</strong> you want to keep from clipping highlights? You can produce both in one archive, in one wide-gamut, output-agnostic space (Scanner RGB or an RGB working space). You use the term <em>brightness</em>, but hopefully understand that's based on perception, not the values within the document you are editing. You're always going to be dealing with some Out of Gamut (OOG) colors.</p>

<blockquote>

<p>In my legally protected opinion, the Photo.net moderators should ban you from Photo.net</p>

</blockquote>

<p>Because we might disagree about technology? I'm not sure we even disagree; I can't understand the rationale for your workflow, and you never explained it. But some of your facts are wrong (like the need to apply differing working spaces based on output). That isn't why they exist, nor how they were designed way back in 1998 with Adobe Photoshop 5 (which, I should point out, I had something to do with in terms of this architecture).</p>

<blockquote>

<p>If my saying so violates the terms of service in any way, then they should ban <em>me</em> from Photo.net.</p>

</blockquote>

<p>Such extremes, why not just not come here anymore?</p>

<p> </p>



<p>Joe, here's <strong>Adobe's</strong> white paper on RGB working spaces:<br>
http://www.adobe.com/digitalimag/pdfs/phscs2ip_colspace.pdf</p>

<blockquote>

<p>Understand that while using a working space that might clip some colors or sending a smaller gamut image to a wider gamut printer may not utilize all the colors possible, using sound color management will still produce acceptable prints. You will not use all the colors you <strong>captured</strong> or <strong>could have</strong> reproduced.</p>

</blockquote>

<p>I'm in total agreement with the author!</p>



<p>I think having a conversation based on mutual respect would be great. Further, as much as I have disagreed, sometimes strongly, with Andrew, there are some here, including Andrew, that have quite a bit of experience and ought to get just a bit of deference.</p>

<p>That said this:</p>

<blockquote>

<p>I plan to do all my post-processing work at a later date with Lr or Ps. Right now, I just want to gather all the available data from my scanned photos.</p>

</blockquote>

<p>is a bad idea. Scanning is an art, not a science. It took me a long time to learn how to get the most out of my scans (and I am a professional drum scanner operator). Without going all the way through the process, you will likely scan everything and then have to rescan half of it again when you figure out how to set up the scanning curves the way you want... to get the results you want.</p>


<blockquote>

<p>Because we might disagree about technology?</p>

</blockquote>

<p>Because, in the words of another recent poster who I believe captured the spirit of the problem, I think you are a “nasty bully”. I am sorry, but such behavior does not deserve “deference”.</p>

<blockquote>

<p>Such extremes, why not just not come here anymore?</p>

</blockquote>

<p>Very last post, and then I intend to do just that.<br>

If you recall, I was gone for <em>years</em>. I'm staying gone this time, unless I get an email notification of your lifetime ban. Good day, all.</p>


Fast-moving thread. Have a short sleep, go to work, and see what happens.<br><br><i>"One is a data source to produce RGB pixels. It's read only. Yet to be baked (rendered). The other is baked with an RGB set of values and color appearance. Big, big difference in data and processing!"</i><br><br>Not at all, Andrew. Both are data sets representing an image. In different formats, yes. The one requires more work to resemble that image than the other. But that's not a fundamental difference.<br>The only difference that could put some sense in the 'raw'-'cooked' divide would be if the 'raw' version of everything held potential to become many different things, while the baked 'cooked' version is what it is, excluding any other possibilities of what it might have been.<br>Such a difference can indeed exist. But neither the fact that camera raw (your chosen archetypical 'raw' format) still contains that pesky Bayer pattern and a scanner 'raw' does not, nor the fact that the first has its RGB values distributed over more data points than a 'straight' RGB file, is about that. That difference is one between a format that needs more work to get at the thing it is supposed to represent and a format that needs less work. Not about one format holding numbers of possible images vs. another format that does not.<br>It (that still-not-fit-for-consumption camera raw) does fit the hyped 'raw' description: it requires that work, call it cooking or baking if you like, to make it resemble what you were expecting: an image. That does not mean, however, that a format that does not need that work fresh out of the camera (or scanner) is already 'cooked'. It is just as 'raw'.<br><br>Maybe this helps to clarify: raw files are very much like bananas. Some are unpalatable in raw state and need to be cooked or baked first. Some are perfectly enjoyable fresh from the 'tree'.<br>Both start out as raw as the other. The fact that one is enjoyable in raw state does not mean it has been cooked on the tree.
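For readers following along, the Bayer point being argued can be made concrete with a deliberately naive sketch: one color sample per photosite that must be demosaiced into RGB pixels, versus a scanner pixel that arrives with all three samples. This is a nearest-neighbor toy on a single RGGB tile; no real converter works this crudely.

```python
# 2x2 RGGB tile of raw sensor values: one number per photosite.
bayer = [[10, 20],    # R  G
         [30, 40]]    # G  B

def demosaic_nearest(tile):
    # Build one RGB pixel from the tile by borrowing each missing channel
    # from the nearest site that sampled it (averaging the two greens).
    r = tile[0][0]
    g = (tile[0][1] + tile[1][0]) / 2
    b = tile[1][1]
    return (r, g, b)

print(demosaic_nearest(bayer))   # -> (10, 25.0, 40): the RGB pixel had to be made
# A scanner pixel, by contrast, is delivered as (r, g, b) with no rendering step.
```

Whether that rendering step makes Bayer data essentially 'rawer', or just a different pattern needing different work, is exactly the disagreement in this thread.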

<blockquote>

<p>The only difference that could put some sense in the 'raw'-'cooked' divide would be if the 'raw' version of everything held potential to become many different things, while the baked 'cooked' version is what it is, excluding any other possibilities of what it might have been.</p>

</blockquote>

<p>That's exactly right and a big difference! It's what distinguishes between the words and the extremes: raw and cooked. </p>

<blockquote>

<p>That difference is one between a format that needs more work to get at the thing it is supposed to represent and a format that needs less work.<br /></p>

</blockquote>

<p>Agreed. </p>


