Super Resolution

Discussion in 'Digital Darkroom' started by peter_sanders|2, May 2, 2011.

  1. Hello. I have been looking into super resolution, but I have only been able to find multi-frame super resolution software. Can anyone recommend a good single-frame super resolution software? Thank you.
  2. I believe Photoshop can handle a maximum resolution of 32,000 x 32,000 pixels in a single picture, provided that your computer system has the resources for it.
  3. Frank, you are wrong; I've got stitched frames that work fine at over 20,000 pixels on the long side.
    Peter: Is this what you are referring to?
  4. Are there any solid, technical articles on SR with credible before and after images as it applies to normal photography, not radar images or high-magnification microscope images mentioned in the poorly written Wiki page?
  5. Ellis, Frank said 32,000 x 32,000 so your 20,000 would fall within that...?
    Though reading the Wiki post and some of the links from there, it doesn't look like 'super resolution' is necessarily about having gigantic pixel counts, more about improving the pixels you have or more intelligent up-rezzing. Would Perfect Resize (formerly known as Genuine Fractals) qualify as a single-frame super-resolution tool?
  6. Since most of us do not have a camera with super resolution, stitching is perhaps the only practical way to make a big, uninterpolated (not up-resized) super resolution picture.
  7. I was imagining that the new Hasselblad sensor with sub-pixel sensor shift is a form of in-camera super-resolution.
    My interpretation of the Wiki article is that stitching wouldn't really be 'Super Resolution', at least not as described -- it's a matter of intelligent application of data already captured, either with multiple frames (such as video) by considering the slightly different adjacent frames, or when uprezzing a single frame.
  8. This is one of the better articles I've found on single image super resolution with examples. No software though.
  9. Can anyone recommend a good single-frame super resolution software?
    Since there were quibbles, quite justifiable too, about the Wikipedia entry on superresolution, I'll try to explain it from an image processing point of view. Single frame superresolution is an estimation or modeling problem. It involves making assumptions about the image that you're trying to increase the resolution of. That is why it works in microscopy.
    • Assume, for example, that the object you're trying to image is a round speck 1/4 the size of a pixel. That is the "model" of the scene.
    • Now, you have two adjacent pixels illuminated, one at 1/2 the brightness of the other. We "estimate" that the center of the round speck is 1/3 of the way along the line from the center of the brighter pixel to the center of the dimmer one.
    • Now we place our model, the 1/4-pixel-sized round dot, at that position in a new, much higher resolution grid of smaller pixels.
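    The estimate in the second bullet works out like this (a toy sketch in Python, not taken from any particular SR package -- the function name and the pixel coordinates are just illustration):

```python
# Toy sketch of the sub-pixel position estimate described above.
# Two adjacent pixels, the second at half the brightness of the first;
# the intensity-weighted centroid lands 1/3 of the way from the
# brighter pixel's center toward the dimmer one's.

def subpixel_centroid(positions, intensities):
    """Intensity-weighted centroid of a 1-D run of pixel readings."""
    total = sum(intensities)
    return sum(p * i for p, i in zip(positions, intensities)) / total

# Pixel centers at x = 0 and x = 1, brightnesses 1.0 and 0.5:
est = subpixel_centroid([0.0, 1.0], [1.0, 0.5])
print(est)  # 0.3333... -- 1/3 of the way toward the dimmer pixel
```

    That 1/3 is a finer position than either pixel alone could tell you -- but only because we assumed a single small round speck in the first place.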
    The only problem with this is if the model doesn't match the reality of the subject, you get a totally fictitious image. A 3/4 pixel sized triangle at the same location could just as easily have caused the same two pixels in the same brightness relationship.
    Theoretically, it's possible to apply modeling like that to a picture of anything. If you recognize pixelated or blurred eyelashes, you can replace them with perfect, higher resolution eyelashes. Same with skin texture. But, in a complex scene, that may mean you have thousands (literally) of different models to sort through, for every detail in every object in the scene. It's not a containable task. You can also try to estimate how a higher resolution version of the scene would look after being blurred by a lens and reduced in resolution.
    The problem with all estimation and modeling is that the reduction of the scene from the essentially infinite resolution of nature to the small, finite resolution of the image is a "many to one" problem. As in the first example, the particular sensor reading could have been caused by a round object, or by a triangular one 3x its size. Many scenes could have created that one image. Hence "many to one", the hardest class of problem to solve. Reversing the process is a matter of guesswork: guess wrong, and you get garbage. There are no "general" solutions; all solutions require "scene specific" knowledge.
    It's possible to "help" the estimation, by coding part of the scene to make it more recognizable. That's also done in microscopy, with weird things done to the illumination so that different types of objects beyond the normal resolution of the optical system will cause larger, more "visible" effects. In a sense, we create scene specific knowledge by adulterating the scene.
    wizfaq superresolution
  10. G.K. Gerber's link is interesting, but I see weird effects similar to what Genuine Fractals showed when I looked at it about a year ago.
  11. Now, it's typically impossible to have a discussion about superresolution without someone bringing up either "fractals" or "wavelets". So, here's a preemptive strike...
    Fractals are a type of estimator that applies to one particular type of detail, a detail with a repeating pattern of smaller versions of itself. That is called "recursion", and that type of detail is found in a very limited number of situations. The Polish-born mathematician Benoît Mandelbrot was the first to apply them to real-world patterns, using them to analyze coastlines. He was also the one who coined the term "fractal". Probably the most useful aspects of nature that can be analyzed with fractals are "bifurcating" structures, things that keep branching off into smaller and smaller versions. I applied it once to analysis of scans of lung tissue. 3D modelers typically use fractals to make realistic looking trees.
    It is theoretically possible to analyze an image, locate some structures that could be modeled with fractals, and create fractal models of just those structures, so that they could be uprezzed infinitely. The problem with this is that the model won't know where to stop, i.e. it will happily make you a tree with 10,000 branches, and if you zoom in on it 10x, it will generate a magnified view of a portion of a tree so "fuzzy" that it could have 10,000,000 branches. But the bigger problem is that not all parts of the image have a fractal texture. So, you'd have an image of highly detailed trees among pixelated rocks and clouds.
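    The "won't know where to stop" problem is easy to see in a toy recursion (a sketch, not the machinery of any actual fractal uprezzing product):

```python
# Toy sketch of the recursive ("bifurcating") structure fractal
# modelers exploit: each branch spawns two smaller branches.
# The depth cutoff is arbitrary -- the model itself has no idea
# where to stop, which is exactly the problem described above.

def count_branches(depth):
    """Number of branch segments in a binary tree of the given depth."""
    if depth == 0:
        return 1                      # a single twig
    return 1 + 2 * count_branches(depth - 1)

print(count_branches(3))   # 15 segments
print(count_branches(12))  # 8191 -- zoom in and it just keeps branching
```

    Nothing in the recursion says whether a real tree has 15 branches or 8,191; that knowledge has to come from outside the model.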
    Remember a program called "Genuine Fractals"? It was recently renamed "Perfect Resize". Several of us in the image processing community have maintained that there don't appear to be fractals actually doing anything in the program.
    Wavelets are just a different type of "generator". Just like fractals, if you can find something that matches a wavelet, you can uprez that something. But more often than not in a real image, there is no match.
  12. How about PhotoAcute? -- Here are some explanations and testing (and even more explanations and testing ;)
    ...but yes, you'd need multiple "identical" shots of the same thing for it to do its magic; guess you can't really create something from nothing...
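    Why the multiple shots help is easy to sketch in one dimension (a toy illustration of the general shift-and-add idea; this is not PhotoAcute's actual algorithm):

```python
# Toy sketch: two 1-D "frames" of the same scene, sampled with a
# half-pixel offset between them, can be interleaved onto a grid
# with twice the sampling density. This is the basic intuition
# behind multi-frame super resolution.

import math

scene = [math.sin(2 * math.pi * k / 16) for k in range(16)]  # "high-res" truth

frame_a = scene[0::2]   # one exposure: samples 0, 2, 4, ...
frame_b = scene[1::2]   # a second exposure, shifted half a coarse pixel

recovered = [0.0] * 16
recovered[0::2] = frame_a   # interleave the two low-res frames
recovered[1::2] = frame_b

print(recovered == scene)   # True -- twice the sampling of either frame alone
```

    With a single frame there is no second set of samples to interleave, which is why the single-frame case falls back on the modeling and guesswork discussed earlier in the thread.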
