
Posts posted by jackflesher

  1. Spacing the front and rear elements out with shims between them and the shutter is a good suggestion. Also, I would look to older triplet or maybe even Tessar designs to do this with, since they will likely have more uncorrected aberrations to begin with.

     

    You mentioned you can polish and grind. Another thought is to polish off any coatings on the lens elements, increasing flare and dropping contrast.

     

    Lastly, you could probably build your own doublet out of something like an old (inexpensive) Ektar...

  2. Bill:

     

    I think it is safe to say there is no single ideal interpolation algorithm suitable for EVERY image...

     

    In fact, if you look at an image posted earlier in this thread, you can see it is essentially a grayscale gradient -- or rather a series of grayscale gradients -- arranged in an attractive geometric shape. By their nature, gradients are fairly linear, so linear interpolation (PS's bi-linear, for example) would most likely yield a better result than any 3rd-order (cubic) function would (at least in theory it should; I have not tested it on that image).
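
     

    To make the gradient point concrete, here is a minimal numpy sketch of my own (not Photoshop's actual kernels -- the smoothstep-style curve below just stands in for a gentler third-order blend): on a pure ramp, straight linear interpolation reproduces the original exactly, while the smoother cubic-style blend introduces a small ripple around each sample.

     import numpy as np

     # Original "image": a pure 1-D grayscale gradient (a linear ramp).
     ramp = np.linspace(0.0, 255.0, 17)       # 17 coarse samples
     fine_x = np.linspace(0.0, 16.0, 257)     # 16x as many sample positions

     # Linear interpolation reproduces the ramp exactly.
     linear = np.interp(fine_x, np.arange(17), ramp)

     # A "smoother" third-order blend (smoothstep between neighboring
     # samples): flatter near each sample, steeper in between.
     k = np.clip(np.floor(fine_x).astype(int), 0, 15)
     t = fine_x - k
     w = 3 * t**2 - 2 * t**3
     cubic_ish = ramp[k] * (1 - w) + ramp[k + 1] * w

     ideal = fine_x / 16.0 * 255.0
     print("max error, linear:", np.abs(linear - ideal).max())     # ~0
     print("max error, cubic :", np.abs(cubic_ish - ideal).max())  # small ripple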

     

    However, the typical photographic image rarely has such a conglomeration of smooth, graduated tones. Instead, it has a mix of smooth gradations, harsh edges and softer edges. Moreover, we also have color gradients involved. Each of these would require a different interpolation algorithm to render optimal results. Since it is difficult to target these areas differentially in an image, we are in a sense screwed before we even begin, being stuck with only one algorithm to apply over the entire image.

     

    In an uprez, straight bicubic was a bit too harsh (steep) and produced artifacts in smooth tones and jagged edges; bicubic smoother "flattened" the 3rd-order "s-spline" to help these middle-tone areas maintain smoothness, while still maintaining a steeper rate of change near the ends to help keep edges looking like edges rather than going totally soft. But in itself it is not enough, hence the edge-sharpening step on the over-rez to bring them back.

     

    When downsizing, regular bicubic was not strong (steep) enough, and hence we needed to add sharpening at intervals on the way down lest we lose the edges. Bicubic sharper overcomes this by giving us a more aggressive 3rd-order equation for the downrez, helping to better maintain edge delineations.
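
     

    As a rough illustration of the two downsizing approaches, here is a Pillow sketch of my own (not the article's exact Photoshop settings -- Pillow has no Bicubic Sharper, so a post-resize unsharp mask stands in for it; assumes a reasonably recent Pillow):

     from PIL import Image, ImageFilter

     def downsize_stepped(img, target_w, target_h):
         """Repeated ~50% bicubic reductions with a light sharpen between
         steps (the 'old' way described above)."""
         while img.width > target_w * 2:
             img = img.resize((img.width // 2, img.height // 2),
                              Image.Resampling.BICUBIC)
             img = img.filter(ImageFilter.UnsharpMask(radius=0.6, percent=60,
                                                      threshold=2))
         return img.resize((target_w, target_h), Image.Resampling.BICUBIC)

     def downsize_single(img, target_w, target_h):
         """One bicubic pass, then a slightly stronger sharpen; a stand-in
         for Photoshop's Bicubic Sharper, which Pillow does not expose."""
         out = img.resize((target_w, target_h), Image.Resampling.BICUBIC)
         return out.filter(ImageFilter.UnsharpMask(radius=0.8, percent=120,
                                                   threshold=2))

     # src = Image.open("full_size.tif")     # hypothetical file name
     # small_a = downsize_stepped(src, 800, 533)
     # small_b = downsize_single(src, 800, 533)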

     

    My goal was to find a combination of available functions that work together in a way that is optimal for most "normal" images. As such, there will always be specific images that fall outside the norm where less or more aggressive actions would generate a better interpolation. Here again, the over-rez and image-specific sharpening step helps one "tweak" the results to favor the characteristics of the image at hand.

     

    Cheers,

    Jack

  3. If I understand what you did, you are comparing the difference between two differently upscaled images and claiming they are different -- and hence your result is not surprising... Moreover, your test image has no real edges, so it becomes even more academic as it relates to interpolation of most traditional subjects.

     

    More important to us as photographers is which method generates a 4x or 16x PRINT image that most closely resembles the original NON-interpolated PRINT image... If you want to use a standard target, I'd suggest an ISO 12233 test chart for the base image, interpolated to 16x -- at least it has some edges. Then you can compare the results of our two different methods directly on both a standardized image as well as a "real-world" photograph of your own.
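
     

    If someone does want to put numbers on it before making prints, the fairer comparison is to score each method against the same non-interpolated original rather than against each other. Here is a rough Pillow/numpy sketch of my own of that kind of check (the chart file name and the 4x factor are placeholders):

     import numpy as np
     from PIL import Image

     def rms_error(a, b):
         """Root-mean-square pixel difference between two same-size images."""
         fa = np.asarray(a, dtype=np.float64)
         fb = np.asarray(b, dtype=np.float64)
         return np.sqrt(np.mean((fa - fb) ** 2))

     # ground = Image.open("iso12233_chart.tif")          # hypothetical file
     # small  = ground.resize((ground.width // 4, ground.height // 4),
     #                        Image.Resampling.BICUBIC)
     # for name, method in (("bicubic", Image.Resampling.BICUBIC),
     #                      ("bilinear", Image.Resampling.BILINEAR)):
     #     restored = small.resize(ground.size, method)
     #     print(name, rms_error(restored, ground))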

     

    But I can save myself the trouble of doing this, because I have already done these comparisons multiple times. Instead, I chose to post a real-world example for everybody to follow in the article. Lastly, any results we post here on the web become academic because they don't accurately portray how a final PRINT will look, and that was the point of the article -- making a big print that looks good.

     

    Regretfully, I do not have the time (nor interest) for bantering this about with you further. If you disagree with my methods, so be it -- I suggest you simply continue to use your own.

     

    As for everybody else, I suggest you at least try my method with an image of your own and compare the results for yourselves -- if you do, you will see it works.

     

    Cheers, Jack

  4. Kirk: Well said!

    ~~~

     

    Grepmat: You wrote: >>So, friends, uprez only once to your final image size, sharpen only then and adjust the radius according to the amount that you have enlarged the image. Clean and simple.<<

     

    While this may sound good to you in theory, it simply does not bear out in practice. I have done innumerable comparisons and suggest you do the same -- actually make some comparison prints using your method and mine before you make such erroneous claims... While it's true you may see little difference between the two methods in a 4x uprez, you will see significant differences in a 16x uprez.

     

    The fact is, sharpening slightly during conversion, sharpening again at the native size as you would for print output, and finally sharpening EDGES fractally at the uprez as described in my article will deliver superior results when printed -- plain and simple ;)
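
     

    For anyone who wants to see the shape of that workflow outside of Photoshop, here is a rough Pillow sketch of the idea from the native size onward (the slight conversion-time sharpen happens in the raw converter and is not shown; plain bicubic stands in for the fractal uprez, and the radius/percent values are placeholders, not the article's settings):

     from PIL import Image, ImageFilter

     def uprez_with_edge_sharpen(img, scale=4.0):
         """Sketch of the staged idea: sharpen at native size, upsize, then
         sharpen only the edges of the enlarged image."""
         # 1. Output-style sharpen at the native size.
         img = img.filter(ImageFilter.UnsharpMask(radius=1.0, percent=100,
                                                  threshold=3))
         # 2. Upsize in one go.
         big = img.resize((int(img.width * scale), int(img.height * scale)),
                          Image.Resampling.BICUBIC)
         # 3. Edge-only sharpen: blend a sharpened copy back in through an
         #    edge mask so smooth areas stay untouched.
         sharp = big.filter(ImageFilter.UnsharpMask(radius=2.0, percent=200,
                                                    threshold=0))
         mask = (big.convert("L")
                    .filter(ImageFilter.FIND_EDGES)
                    .filter(ImageFilter.GaussianBlur(2)))
         return Image.composite(sharp, big, mask)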

     

    If you don't believe me, try it. And then post your comparisons for all of us to judge.

     

    Cheers,

    Jack

  5. Grepmat: The intention of the article was to give PHOTOGRAPHERS a methodology whereby they could easily increase the size of an image for printing and maintain image quality at the same time. I specifically AVOIDED any discussion of the math that defends my method, as experience tells me the instant one does that, they lose about 95% of the readers.

     

    Jan has summed up the reason very accurately with these two statements: If you don't sharpen fine detail that's lacking in contrast (early on), then you're going to get a larger, soft image that's very difficult to sharpen because the notion of edges has been lost; the purpose of sharpening in this context is to bump up the MTF of fine spatial detail a little bit.

     

    If people are truly interested in the math, I can go there. BUT explaining how third-order equations generate more accurate results than simple linear equations when interpolating pixels does not strike me as a discussion that will appeal to the bulk of digital photographers. I'd rather let it suffice to say that we want to alter the transitions near the brightest or darkest pixels more slowly than the middle-toned pixels during an interpolation.

     

    I'll toss out a formula (valid for x values -1 through 2) and if there is serious interest, we can discuss how we got there, and where we go from there:

     

    x(t) = -2(2a+1)t^3 + 3(2a+1)t^2 - 2at
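
     

    If anyone wants to see what that curve does, here is a quick numpy sketch of my own: with a = -1/2 it collapses to a straight line, and with a = 0 it becomes the familiar "smoothstep" shape -- nearly flat near the endpoints and steeper through the middle, which is exactly the slow-near-the-extremes behavior described above.

     import numpy as np

     def blend(t, a):
         """The third-order blend curve from the post, x(t)."""
         return -2 * (2 * a + 1) * t**3 + 3 * (2 * a + 1) * t**2 - 2 * a * t

     t = np.linspace(0.0, 1.0, 11)
     for a in (-0.5, 0.0, 0.25):
         x = blend(t, a)
         end_slope = (x[1] - x[0]) / (t[1] - t[0])   # rate of change near t = 0
         mid_slope = (x[6] - x[5]) / (t[6] - t[5])   # rate of change near t = 0.5
         print(f"a = {a:+.2f}   end slope ~ {end_slope:.2f}   mid slope ~ {mid_slope:.2f}")

     # a = -0.50 prints a constant slope of 1.00 (a straight line);
     # a =  0.00 is much flatter at the ends and steeper through the middle;
     # larger a values steepen the midtone portion further.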

     

    Cheers,

    Jack

  6. >>State of the art in flatbed is something like a Creo, which will blow away any Imacon on large format<<

     

    But let's NOT forget to mention that the particular model Creo flatbed that would in fact "blow away" the Imacon still requires oil to be used for mounting. And if you're going to bother with oil, you might as well pick up a used Howtek and get real drum scans...

     

    Granted, in my response above I only said "flatbed," and probably should have clarified that a bit further so it was clear I was referring to conventional consumer-grade flatbed scanners, like the Epson 3200.

     

    Cheers,

    Jack

  7. Andrew:

     

    FTR, MTF graphs don't necessarily tell the real story -- moreover, in the specific case of Canon, they are theoretically, not empirically, derived. So if a buyer wants to "learn from the experiences of others," it just shows me they have a certain amount of wisdom...

     

    Unfortunately, it is exactly snippy and banal responses such as yours above that prevent many people from even asking a question here in the first place. Granted, a lot of stupid questions do get asked and may deserve a "try the search function" response, but IMO this question was not one of them.

     

    Cheers, Jack

  8. >>it makes a lot more sense to risk a $60 filter than the front element of a $1000 lens. And that's true even if the lens can be repaired at a cost of $300.<<

     

    Actually, the lenses I was referring to are the big L lenses and run more like $5000... And you CAN'T put a filter on the front of them anyway -- and even if you could it would cost a lot more than $60! If you don't believe me, go price a 105mm B+W UV filter ;) IMO $300 is a relatively minor repair for a $5000 lens should you ever need to do it.

     

    However, my point was to answer the original question about lenses that CAME with protective front elements...

     

    Cheers,

    Jack

  9. Here are mine, in order of relative sharpness wide open. Note that I no longer own the ones asterisked, but have owned them at one time or another and have placed them on this list from memory. As such, I would not argue with someone who currently owns them and would rank them differently.

     

    Superior wide open:

     

    200/1.8; 135/2; 500/4*; 85/1.2 (at f2 this moves up equal or better than the 135/2); 300/2.8*; 400/4 DO; 300/4*

     

    Not bad wide open, but improve to superior upon closing down one stop:

     

    35/1.4; 90TSE; 45TSE; 100 macro*; 50/2.5 macro (from f4 up); 135SF*;

    70-200/2.8 IS; 70-200/4*; 100/2*; 85/1.8*

     

    The following need about 2-3 stops before they get really good, but other than the 24TSE are very usable closed down only one stop:

     

    50/1.4; 24/1.4; 24TSE

     

     

    All IMO, YMMV,

    Jack

  10. >>Do you have an example of what "MUCH BETTER" looks like? TIA.<<

     

    Not handy -- I un-installed all of the "old" converters and don't have them any more, and have long since re-processed all of my important images -- I'd have to dig to find an example...

     

    The biggest change I saw was going from Yarc+ and BreezeBrowser of two years ago to the new ACR at the time. Some of us remember processing "linear" raw files before that to try and glean the most from a digital file... Thank goodness those days are long gone LOL!

     

    However, I could probably whip up an example comparing the current DPP (which I consider a bit lacking) to Bibble (very good) to back my claim... Ideally, though, we would see the same image carefully processed by DPP, ACR (when available), C1 (by someone who has the beta) and Bibble (just released).

     

    Jack
