Interpolation "how to" article now online


The article is well-intentioned. However, I have a few issues with it.

 

You first discount (via mention of step-wise interpolation), and then perpetuate, the oft-mentioned strategy of sharpening at intermediate or different resolutions than your final printed resolution. This, to me, fails a simple sanity test. First, if I do not know what my final resolution will be, why would I sharpen with an eye towards best results at the native resolution, when I will likely need further sharpening if up-sampled? That, after all, is the point of your article. You don't give much of a reason except that it's handy for immediate printing, which is fine, but why do you go further and suggest it as a base for further processing?

 

Moving on from that issue, if I do know what my final resolution will be, just what specific, quantifiable, benefit do I gain by up-sampling to a higher than final resolution, then sharpening, then down-sampling? Have you worked out any math or quantified and compared the resulting errors that result from the various options? You do state that this is analogous to, or a partial substitute for, "fractal sharpening" (as compared to interpolation?). I think that this is a spurious argument, even ignoring the heavily marketed but by now apparently marginalized benefit of a fractal-oriented algorithm.

 

With all due respect: In the absence of proven objective reasoning, one may too-readily engage in superstitious behavior. Instead, in my opinion, one should adopt an Occam's Razor approach, and do the least processing, and only processing that has specific, well-understood benefits. That, I suggest, would lean towards avoiding intermediate sharpening and re-sampling steps.

 

Cheers.


If you don't sharpen fine detail that's lacking in contrast, then you're going to get a larger, soft image that's very difficult to sharpen, because the notion of edges has been lost. An edge here is a gradient whose slope (first-order derivative) is close to atan(1/pixel_pitch/sqrt(2)/2). You want to retain this as you shrink the pixel pitch, so that an edge remains an edge. All other gradients, however, should retain their slope and merely get interpolated to fill the areas between the defining edges.

 

Sharpening here obviously doesn't mean creating halos or "looking sharp" -- the latter usually implies aliased. Aliased images are fine for output, but resizing them also resizes the artifacts. The purpose of sharpening in this context is to bump up the MTF of fine spatial detail a little bit.
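To make "bump up the MTF a little bit" concrete, here is a rough numpy sketch of a plain unsharp mask (my own toy code, not what Photoshop does internally): it adds back a fraction of the difference between the image and a Gaussian-blurred copy, which raises the contrast of detail finer than the blur radius while leaving flat areas alone.

```python
import numpy as np

def unsharp_mask(img, radius=1.0, amount=0.5):
    """Boost fine-detail contrast (the high end of the MTF) by adding
    back a fraction of the difference between the image and a
    Gaussian-blurred copy of it."""
    # Build a 1-D Gaussian kernel covering roughly +/- 3 sigma.
    half = max(1, int(3 * radius))
    t = np.arange(-half, half + 1)
    k = np.exp(-t**2 / (2 * radius**2))
    k /= k.sum()
    # Separable blur: convolve rows, then columns (edges padded by reflection).
    pad = np.pad(img.astype(float), half, mode="reflect")
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, pad)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, blurred)
    blurred = blurred[half:-half, half:-half]
    # img + amount * (img - blurred) raises contrast of detail finer
    # than ~radius; flat regions are left exactly as they were.
    return np.clip(img + amount * (img - blurred), 0, 255)
```

On a step edge this produces a slight overshoot that reads as restored edge contrast; crank `amount` too high and the overshoot becomes a visible halo.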

 

Less is often more, but the least is always nothing.


Grepmat: The intention of the article was to give PHOTOGRAPHERS a methodology whereby they could easily increase the size of an image for printing and maintain image quality at the same time. I specifically AVOIDED any discussions of the math involved that defends my method, as experience tells me the instant one does that, they lose about 95% of the readers.

 

Jan has summed up the reason very accurately with these two statements: If you don't sharpen fine detail that's lacking in contrast (early on), then you're going to get a larger, soft image that's very difficult to sharpen because the notion of edges has been lost; the purpose of sharpening in this context is to bump up the MTF of fine spatial detail a little bit.

 

If people are truly interested in the math, I can go there. BUT explaining how third-order equations generate more accurate results than simple linear equations when interpolating pixels does not strike me as a discussion that will appeal to the bulk of digital photographers. Suffice it to say that we want to alter the transitions near the brightest or darkest pixels more slowly than the middle-toned pixels during an interpolation.

 

I'll toss out a formula (valid for t values -1 through 2) and if there is serious interest, we can discuss how we got there, and where we go from there:

 

x(t) = -2(2a+1)t^3 + 3(2a+1)t^2 - 2at
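For the curious, the endpoints of that cubic are easy to check numerically (this is just a quick verification of my reading of the formula; `a` is its tension parameter): for any `a`, x(0) = 0, x(0.5) = 0.5 and x(1) = 1, while the slope at both endpoints is -2a. So with a negative `a`, the curve changes more slowly near the extremes, which is exactly the slow-transition behavior near the brightest and darkest pixels I described above.

```python
# Jack's cubic: x(t) = -2(2a+1)t^3 + 3(2a+1)t^2 - 2at
def x(t, a):
    return -2 * (2 * a + 1) * t**3 + 3 * (2 * a + 1) * t**2 - 2 * a * t

def slope(t, a, h=1e-6):
    # Central-difference estimate of the derivative x'(t).
    return (x(t + h, a) - x(t - h, a)) / (2 * h)

for a in (-0.5, -0.25, 0.0):
    # Endpoints map onto themselves for any tension value...
    assert abs(x(0.0, a)) < 1e-9 and abs(x(1.0, a) - 1.0) < 1e-9
    # ...and the end slopes are both -2a, so a < 0 eases the curve
    # in and out of the endpoints (shadows/highlights) gently.
    assert abs(slope(0.0, a) - (-2 * a)) < 1e-4
    assert abs(slope(1.0, a) - (-2 * a)) < 1e-4
```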

 

Cheers,

Jack


Jan: I certainly understand the necessity for sharpening, and that it must be done sometime. What I do not see are the benefits from intermediate changes in resolution or sharpening steps.

 

Rephrasing what you are discussing (correct me if I am wrong), the radius of sharpening should be increased according to the degree of enlarging. This is common wisdom and, of course, it makes sense.

 

On aliasing: Enlarging an image will not cause aliasing, nor will unsharp masking. Making assumptions about just what Photoshop is doing, unsharp masking only enhances the contrast of existing edges and does not create new ones; nor is there as much high-frequency information available in an interpolated (enlarged) image as fodder for aliasing anyway.

 

So, friends, uprez only once to your final image size, sharpen only then and adjust the radius according to the amount that you have enlarged the image. Clean and simple.
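In code form, the rule I'm advocating reduces to one trivial helper (a sketch with names of my own invention):

```python
def sharpen_radius(base_radius: float, linear_scale: float) -> float:
    """Unsharp-mask radius for the final print size: the radius that
    was right at native resolution, scaled by the same linear factor
    as the enlargement, so the sharpening footprint stays constant
    in print terms."""
    return base_radius * linear_scale

# Example: a 1.0-px radius at native size, enlarged 2x linearly
# (4x in area), calls for a 2.0-px radius after the single uprez.
assert sharpen_radius(1.0, 2.0) == 2.0
```

Resample once, call this once, sharpen once. Nothing else to tune.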

 

Cheers.


I don't think the case for Jack's method depends on the math as much as on looking at one's own prints.

In my case, at least, I arrive at an acceptable level of sharpening with fewer halos if I apply some "capture sharpening" in addition to "output sharpening." I use the PhotoKit Capture Sharpeners (at reduced opacities) rather than Jack's method, but I'm sure the evidence for (or against) earlier sharpening is visible, not theoretical.

I've said "one's own prints" and "in my case" because (as Andrew Rodney pointed out a while back on this forum) the results of different methods of resing up and of methods/amounts of sharpening interact; and assumptions about print-viewing distance and preferences as to how much sharpening is desirable are going to vary considerably from person to person or job to job. We're dealing with a bundle of interacting judgments that apply to our own output, not across the board.

I'm also puzzled by the objection that one doesn't usually know what the final size and resolution might be. Isn't the opposite most often the case? Whether you're working on an artist's portfolio or a job for a client, don't you ordinarily know the size-resolution parameters?


Kirk: Well said!

~~~

 

Grepmat: You wrote: >>So, friends, uprez only once to your final image size, sharpen only then and adjust the radius according to the amount that you have enlarged the image. Clean and simple.<<

 

While this may sound good to you in theory, it simply does not bear out in practice. I have done innumerable comparisons and suggest you do the same -- actually make some comparison prints using your method and mine before you make such erroneous claims... While it's true you may see little difference between the two methods in a 4x uprez, you will see significant differences in a 16x uprez.

 

The fact is, sharpening slightly during conversion, sharpening again at the native size as you would for print output, and finally sharpening EDGES fractally at the uprez as described in my article will deliver superior results when printed -- Plain and simple ;)

 

If you don't believe me, try it. And then post your comparisons for all of us to judge.

 

Cheers,

Jack


Hello Kirk,

 

I agree that there is no "one size fits all" solution, and that if the photographer (or client, etc.) is satisfied, then all is well. In Jack's case, I'm sure he has honed it until it fits his style and he is happy with it. More power to him.

 

However, he's suggesting that other people take steps that make little sense, in my opinion. For example, unless there is at least a rational explanation for upsampling to 20% over the final print size, then sharpening, then down-sampling, why do it at all? His explanation, that this method is "a simplified form of fractal sharpening" is, if you can pardon me, ridiculous. It has nothing to do with fractals at all, nor does any other benefit to it come to mind.

 

On the issue of not knowing the final print size, that also depends on one's purpose. Of course, if you do know, why bother sharpening it at a resolution that you won't print at anyway? The only explanation he offers is that, in his opinion and by his experience, it works better.

 

Some, if not all, of the steps that one takes in this process are irreversible. That is, information is thrown out at nearly every step: If I upsize and then downsize, I will not wind up with the image I started with. Given that some information is thereby lost, the image will be blurrier or suffer other negative impacts. So, when in doubt, one should be parsimonious and not resample or sharpen twice when once will do, etc., unless there is some well-understood benefit.

 

Cheers.


>> he's suggesting that other people take steps that make little sense, in my opinion. <<

 

And therein lies The Problem. You are expressing only your OPINION -- an opinion apparently reached by you without even having tested the procedure as described... If you actually try it, you will not be so hasty to dismiss it ;)

 

Cheers,

Jack


I have tried at least a simplified version of what you propose. What follows will be two images (if I succeed in posting the images).

 

The first is the original photo after bicubic upsampling by a factor of two (attached here). It is a cosine function in two dimensions. There is normal aliasing in the original image due to its being sampled; the aliasing is blurred in the upsampled image.


This next image is the *difference* between the prior one and the original image upsampled 2.2 times and then downsampled to 2 times (the same size as the prior image). The difference has been enhanced for contrast. This image represents the error (lost information) between upsampling by 2x and upsampling by 2.2x followed by downsampling to 2x.

 

Mind you, this is just a quick and dirty test. Indeed, I might have even made a mistake in doing it. But it is an objective test that demonstrates the degradation in an image due to multiple changes of resolution. Doing the same test with sharpening would be difficult, since different sharpening parameters would be needed for the different upsampling levels, and these might not be equivalent once resampled to the same resolution.
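For anyone who wants to reproduce the flavor of this test without Photoshop, here is a numpy sketch. I use a hand-rolled bilinear resampler rather than bicubic (so the numbers will differ from my images above), but the conclusion it demonstrates is the same: going through an intermediate resolution does not give you the image you would get by resampling once.

```python
import numpy as np

def resize_bilinear(img, scale):
    """Minimal bilinear resampler (a stand-in for Photoshop's bicubic;
    the effect under test -- resampling is lossy -- is the same)."""
    h, w = img.shape
    nh, nw = int(round(h * scale)), int(round(w * scale))
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]; b = img[np.ix_(y0, x1)]
    c = img[np.ix_(y1, x0)]; d = img[np.ix_(y1, x1)]
    return (a * (1 - wy) * (1 - wx) + b * (1 - wy) * wx
            + c * wy * (1 - wx) + d * wy * wx)

# A 2-D cosine test image, upsampled straight to 2x versus
# 2.2x-then-back-down-to-2x. The two results differ: the detour
# through an intermediate resolution changes (loses) information.
y, x = np.mgrid[0:32, 0:32]
img = np.cos(0.5 * x) * np.cos(0.5 * y)
direct = resize_bilinear(img, 2.0)
detour = resize_bilinear(resize_bilinear(img, 2.2), 2.0 / 2.2)
```

A nonzero maximum of `abs(direct - detour)` is the error image described above, before any contrast enhancement.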

 

Cheers.



Here is the same test, but including sharpening. A large amount of sharpening was used, with a one-pixel radius (and 1 or 1.2 in the upsampled-then-downsampled case; it made little difference). Note that it looks similar to before but is, in fact, dramatically worse - like 10 times worse - because much less contrast enhancement was needed. The error is plain without enhancement, which means it will be plain in an ordinary photo. Of course, unlike in the previous test, this might not be considered an "error" but an "improvement," depending on one's opinion. I mean this seriously and not in jest.

 

Cheers.


If I understand what you did, you are comparing the difference between two differently upscaled images, and claiming they are different -- and hence your result is not surprising... Moreover, your test image has no real edges, so it becomes even more academic as it relates to interpolation of most traditional subjects.

 

More important to us as photographers, is which method generates a 4x or 16x PRINT image that most closely resembles the original NON-interpolated PRINT image... If you want to use a standard target, I'd suggest an ISO 12233 test chart for the base image, interpolated to 16x -- at least it has some edges. Then you can compare the results of our two different methods directly on both a standardized image as well as a "real-world" photograph of your own.

 

But I can save myself the trouble of doing this, because I have already done these comparisons multiple times. Instead, I chose to post a real-world example for everybody to follow in the article. Lastly, any results we post here on the web become academic because they don't accurately portray how a final PRINT will look, and that was the point of the article -- making a big print that looks good.

 

Regretfully, I do not have the time (nor interest) for bantering this about with you further. If you disagree with my methods, so be it -- I suggest you simply continue to use your own.

 

As for everybody else, I suggest you at least try my method with an image of your own and compare the results for yourselves -- if you do, you will see it works.

 

Cheers, Jack


Jack: First you challenged me to try it. When I did and presented clear evidence, you first say I'm wrong and/or that my technique is invalid, and then (after a bunch of red-herring arguments) you dismiss the above efforts as irrelevant "banter".

 

I could debunk your objections and red herrings. I could also compare the resulting images back against the original. Those results would provide evidence that your method will also be further from "most resembling the original non-interpolated" image. Would you like me to? I suspect not.

 

P.S., have fun "fractal sharpening."


Wrong wavelength - Jack is broadcasting on the PRINTING channel (he put it in caps), and you just aren't listening to that one. A 'test' with an image with no edges has no bearing on the question of how to avoid sharpening artifacts when res-ing up for printing. You need a test in which PRINTS look same/better/worse.

But please let's just leave it there.


No, Kirk, printing files does not magically invalidate the sort of methodology I discussed; that was one of Jack's red herrings. He is hoping that he can stay in the world of his own subjective opinions, and hide from even discussing the possibility of objective measures and comparisons of quality between various strategies. You can leave it there if you like. I know Jack will.

Sorry, Grepmat. Here's a perfect case where math doesn't give you what you need. I have used Jack's method for a few years now... with the exception of the 120% over-rez step. But in the few samples I ran, his method produces fewer halos, better sharpness, and less noise (sharpening artifacts). That first sharpening step (I use 500%, 0.3 pixels, threshold 2) helps remove the effects of the AA filter over the sensor. This step is used by many photographers to maximize sharpness.
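For reference, those settings translate into a thresholded unsharp mask along these lines (a numpy sketch of the generic operation, not Photoshop's actual code; amount 500% becomes 5.0, and threshold 2 means differences under 2 levels are left alone, which is what protects smooth areas from amplified noise):

```python
import numpy as np

def usm(img, radius=0.3, amount=5.0, threshold=2):
    """Unsharp mask with Photoshop-style parameters (amount as a
    fraction, threshold in 0-255 levels). At so small a radius only a
    pixel's immediate neighbors contribute, so a 3-tap Gaussian
    suffices; the threshold skips low-contrast (noise) differences."""
    k = np.exp(-np.arange(-1, 2) ** 2 / (2 * radius ** 2))
    k /= k.sum()
    f = img.astype(float)
    pad = np.pad(f, 1, mode="reflect")
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, pad)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, blur)
    blur = blur[1:-1, 1:-1]
    diff = f - blur
    diff[np.abs(diff) < threshold] = 0.0   # leave faint detail alone
    return np.clip(f + amount * diff, 0, 255)
```

The huge amount with a tiny radius is the point: it pushes back the very finest detail the AA filter softened, without building wide halos.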

 

Sorry, you lost.

 

Maybe you should leave it there!


So is everybody saying that Photoshop upsampling, even CS's new Bicubic Smoother, doesn't work as well as it could? I wonder if "adaptive resampling" in Durst Lambda and Cymbolic Lightjet printers is better. Bart van der Wolf has already shown that <A HREF="http://www.xs4all.nl/~bvdwolf/main/foto/down_sample/down_sample.htm">ImageMagick beats Photoshop</A> for downsampling. Mind you, in real-world photographic images we are probably talking about minute distinctions.


Dave, you forgot to add "in my opinion." I'm glad it works for you.

 

What I'm suggesting is that it should be possible to quantify the quality of different approaches. That's how all science and engineering is done, including the science of imaging. If you can't measure something, you can't understand it in more than a superficial fashion. Take just about any question in imaging, such as how much anti-aliasing filtration should be applied: the answers were not settled by a popularity contest; they were worked out by scientists, mathematicians and engineers working from first principles and building on established techniques. Go take a look in an imaging journal some time. I suggest SPIE's Journal of Electronic Imaging as being especially relevant. I subscribe to it, and it's fascinating stuff. You will find that the authors in it do a lot of work that looks like what I did above.

 

By the way, Bill, you can safely use the bicubic resampling. It's fine. I was only suggesting that doing it more than once is not a good idea. I see that Dave, at least, tacitly agrees with me on that point.

 

Cheers.


Bill:

 

I think it is safe to say there is no single ideal interpolation algorithm suitable for EVERY image...

 

In fact, if you look at an image posted earlier in this thread, you can see it is essentially a grayscale gradient -- rather, a series of grayscale gradients -- arranged in an attractive geometric shape. By their nature, gradients are fairly linear, so linear interpolation (PS's bilinear, for example) would most likely yield a better result than any 3rd-order (cubic) function would (at least in theory; I have not tested it on that image).

 

However, the typical photographic image rarely has such a conglomeration of smooth, graduated tones. Instead, it has a mix of smooth gradations, harsh edges and softer edges. Moreover, we also have color gradients involved. Each of these would require a different interpolation algorithm to render optimal results. Since it is difficult to target these areas differentially in an image, we are in a sense stuck before we even begin, being limited to one algorithm applied over the entire image.

 

In an uprez, straight bicubic was a bit too harsh (steep) and presented artifacts in smooth tones and jagged edges; bicubic smoother "flattened" the 3rd-order "s-spline" to help these middle-tone areas maintain smoothness, yet it still maintains a steeper rate of change near the ends to help keep edges looking like edges rather than going totally soft. But in itself it is not enough, hence the edge-sharpening step on the over-rez to bring them back.

 

When downsizing, regular bicubic was not strong (steep) enough, and hence we needed to add sharpening at intervals on the way down lest we lose the edges. Bicubic sharper overcomes this by giving us a more aggressive 3rd-order equation for the downrez, helping to better maintain edge delineations.
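The "steepness" idea can be illustrated with the textbook cubic-convolution (Keys) kernel, where a single parameter `a` plays the role I'm describing. To be clear, Adobe has not published what bicubic / smoother / sharper actually do internally, so this is an analogy, not their code: a more negative `a` gives a steeper kernel and stronger over/undershoot at edges.

```python
import numpy as np

def keys_cubic(x, a=-0.5):
    """Keys cubic-convolution kernel; `a` controls how aggressive
    (steep) the interpolation is near edges."""
    x = np.abs(x)
    return np.where(x <= 1,
                    (a + 2) * x**3 - (a + 3) * x**2 + 1,
                    np.where(x < 2, a * (x**3 - 5 * x**2 + 8 * x - 4), 0.0))

def interp_at(p, a, vals=(0.0, 0.0, 0.0, 1.0), pos=(-2, -1, 0, 1)):
    """Interpolate a step (samples 0,0,0,1 at integer positions) at
    fractional position p, by summing sample * kernel(distance)."""
    return sum(v * float(keys_cubic(np.array(p - q), a)) for v, q in zip(vals, pos))
```

Evaluating the interpolated step just outside the transition (at p = -0.5) gives 0.125*a: an undershoot of -0.0625 for a = -0.5 and -0.125 for a = -1.0, so the undershoot (halo) doubles as the cubic gets steeper.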

 

My goal was to find a combination of available functions that work together in a way that is optimal for most "normal" images. As such, there will always be specific images that fall outside the norm, where less or more aggressive actions would generate a better interpolation. Here again, the over-rez and image-specific sharpening step helps one "tweak" the results to favor the characteristics of the image at hand.

 

Cheers,

Jack

