
Is it possible to make software like this?



<p>Example: you took a photo with your 16 MP camera. Put it in that software (yet to come). Wow! It became an image taken with a 32 MP camera without losing any quality. This is how it will process the image: it will make a copy of the master file and add these two files together (16 MP + 16 MP = 32 MP). What an idea, isn't it?</p>


<p>If I post the same information twice, it doesn't really double the amount of information. It's just the same information repeated. While it doesn't lose any "quality," it doesn't add any, either.</p>
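A toy sketch in Python (made-up values, purely for illustration) of the point above: duplicating data doubles the byte count, not the information, because the copy is fully predictable from the original.

```python
# Duplicating data adds bytes, not information: the copy is fully
# predictable from the original, so nothing new can be learned from it.
signal = [0, 255, 0, 255]            # a tiny 4-"pixel" strip (made-up values)
doubled = signal + signal            # "16 MP + 16 MP = 32 MP"

assert doubled[len(signal):] == signal   # the second half is an exact repeat
assert set(doubled) == set(signal)       # no new pixel values appear
print(len(doubled))                      # -> 8: twice the pixels, same content
```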

<p>On April 1, a few years ago, I suggested that we use not interpolation, which Ellis referred to, but extrapolation, to create an image that extended past the borders of the initial photograph. That way, we wouldn't have to worry about FX vs DX format in digital photography. If someone decides to take up your idea, please ask them to try mine as well.</p>

<p>In a more serious vein, there is a signal processing technique called <em>autocorrelation</em>, which correlates a signal with shifted copies of itself to find periodically repeating elements in the original signal. Other than that, as Mike said above, you add nothing by making a copy of the same thing. If you want to visualize that, think of a two-pixel black and white camera which captures an image of light and dark as one white pixel and one black pixel. No matter how many copies you make, you won't get a more detailed image.</p>
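The autocorrelation idea can be sketched with NumPy (the period-4 test signal below is made up): the autocorrelation peaks at the lag where the signal repeats.

```python
import numpy as np

# Autocorrelation correlates a signal with shifted copies of itself;
# peaks at nonzero lags reveal periodic structure. The made-up test
# signal below repeats every 4 samples.
x = np.tile([0.0, 1.0, 0.0, -1.0], 8)     # period-4 signal, 32 samples
x = x - x.mean()                          # remove any DC offset

ac = np.correlate(x, x, mode="full")      # full autocorrelation
ac = ac[ac.size // 2:]                    # keep lags 0, 1, 2, ...
ac = ac / ac[0]                           # normalize so lag 0 == 1.0

peak_lag = 1 + int(np.argmax(ac[1:9]))    # strongest peak among lags 1..8
print(peak_lag)                           # -> 4, the signal's period
```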


<p>-- "Can you name the software which can do it without losing any quality?"</p>

<p>Most applications that handle images (like PS) can do this ... but that isn't the point ... with upsampling you do <strong>NOT</strong> gain any information. So, although you can make a 32 Mpix image from a 16 Mpix image, the information stored in that image isn't the same as if you had shot the same scene with a real 32 Mpix camera (given that the lens of the latter camera was good enough, of course). So, don't fool yourself.</p>

<p>Usually, resampling an image is only necessary if you need to match a certain pixel count exactly ... like when you need to print <strong>EXACTLY</strong> at a certain dpi value. In all other cases, you usually don't gain (much) by it.</p>
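A minimal NumPy sketch of this point, using a made-up 2x2 "image": nearest-neighbour upsizing quadruples the pixel count, yet discarding the duplicates recovers the original exactly, so the extra pixels carried no information.

```python
import numpy as np

# Nearest-neighbour "upsizing" then downsizing, on a made-up 2x2 image.
# The round trip is lossless precisely because upsizing added nothing.
img = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)

big = img.repeat(2, axis=0).repeat(2, axis=1)   # 2x2 -> 4x4: pixels duplicated
small = big[::2, ::2]                           # throw the duplicates away

assert big.size == 4 * img.size                 # four times the pixels...
assert np.array_equal(small, img)               # ...but zero new information
```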


<p>What Hector said is what I wanted to know. If that software could copy pixels without degrading or losing quality and make a 32 MP image (just double the original size), then it is most welcome. Rainer - there are at least 3-5 good interpolation programs other than PS that can make the image bigger. But they also lose quality. If such software is possible to make, the megapixel war among CCD and CMOS makers will surely slow down.</p>


<p>Actually, I think we have a slightly different way to express the same thing:</p>

<p>I say: you do not gain anything by upsizing. I mean that although the image is now twice the size (in pixels), it still has the SAME information in it as before.<br>

You say: the image is "losing quality". But what you seem to mean is: "Experts will not find out that the image was actually shot with a 16 MP camera."</p>

<p>Upsizing the image does NOT lose information ... it just doesn't add anything new.</p>


<p>Actually, if an image is resized by any interpolation technique, it is fundamentally changed. The degree to which it is changed depends on the amount of interpolation. A way to test this is to increase an image to a given size and then re-interpolate it back to the original size. Then compare the two images on a pixel-by-pixel basis, and the results will show change. The question then becomes: does change equal loss?</p>
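The round-trip test described above can be sketched in one dimension with NumPy (made-up sample values): linearly interpolate a signal up to a larger size, interpolate back down, and compare pixel by pixel.

```python
import numpy as np

# Interpolate a made-up 1-D "image" up to a larger size, interpolate it
# back down, and compare to the original, pixel by pixel.
orig_x = np.arange(4, dtype=float)          # sample positions 0..3
orig_y = np.array([0.0, 10.0, 0.0, 10.0])   # made-up pixel values

up_x = np.linspace(0, 3, 6)                 # enlarge: 4 -> 6 samples
up_y = np.interp(up_x, orig_x, orig_y)      # linear interpolation

back_y = np.interp(orig_x, up_x, up_y)      # shrink back: 6 -> 4

# The round trip does NOT reproduce the input: interpolation changed it.
assert not np.allclose(back_y, orig_y)
print(np.abs(back_y - orig_y).max())        # largest per-pixel change
```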

<p>Rashed,</p>

<p>I think you misunderstood me. I was making a joke in my first paragraph above. In the second, I described a very specific process with a very narrow purpose -- to discover periodic events in data. Interpolated data provides an estimate, an educated guess as to the points that are created.</p>

<p>It is possible to take half of a 35mm frame and project it on a huge movie screen. You have not added information -- you have only made it possible for more people to watch the movie. If you watch up close, you'll see the loss of quality.</p>


<p>Hector wrote: "I was making a joke in my first paragraph above."</p>

<p>A joke, true, and I did chuckle. But sometimes a joke can also be a good idea. Sometimes when I'm editing, I find myself missing an uncomfortable "nick" in one corner where I would really *like* to crop. This happens sometimes if I must manipulate perspective. Of course the solution is to clone the area in. However, what if we could fill the area with a continuation of the lines and patterns at the edge of the existing image, at least as a first approximation? Then we could touch up the details with cloning. We could use the same approach to "patch over" an area we would like to remove -- like an ex-spouse. I wouldn't mind a tool like that. :-)</p>


<p>Check out "image deconvolution". It does what you suggest to some extent, and it has applications in astronomy, radio astronomy in particular, but I see little sense for it in "terrestrial" photography, except maybe in desperate forensics.</p>

<p>The process you describe does not add any information, nor would it double the number of pixels. You would simply lay one pixel on top of another (otherwise you would get a doubling of the image, which would definitely degrade the quality).</p>

<p>The process for increasing the number of pixels is called "resampling", which can be done in Photoshop and most other editing programs. In resampling, a pixel and adjacent pixels are examined and used to interpolate what value the new pixels should take. The original pixel is replaced and others added, so the true quality is degraded to some extent. The purpose is to make individual pixels too small to see in an enlargement without any noticeable artifacts of interpolation.</p>
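A minimal sketch of such resampling, assuming bilinear interpolation (one common choice) and a made-up 2x2 image: the new centre pixel is manufactured as a weighted average of its neighbours, not captured.

```python
import numpy as np

def bilinear_upsample(img, new_h, new_w):
    """Resample a 2-D array to new_h x new_w by linear interpolation
    along rows, then along columns (i.e. bilinear interpolation)."""
    h, w = img.shape
    xs = np.linspace(0, w - 1, new_w)
    ys = np.linspace(0, h - 1, new_h)
    rows = np.array([np.interp(xs, np.arange(w), r) for r in img])
    return np.array([np.interp(ys, np.arange(h), rows[:, j])
                     for j in range(new_w)]).T

img = np.array([[0.0, 100.0],
                [100.0, 200.0]])            # made-up 2x2 image
big = bilinear_upsample(img, 3, 3)

# The new centre pixel (100.0) is the average of its neighbours --
# an estimate, not a captured value.
print(big)
```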

<p>You can combine two or more images which differ in some material way to make a new image with greater bit depth (not pixel count). In Photoshop, this is called "HDR Merge". If you bracket exposures without moving the camera (i.e., using a tripod), you can combine the images to cover a much larger dynamic range than any one image by itself.</p>
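A toy sketch of the exposure-bracketing idea (made-up radiance values and exposure times, a hypothetical sensor clipping at 255; real HDR merging is far more involved): the long exposure resolves the shadows, the short one the highlights, and the merge covers both.

```python
import numpy as np

# Two captures of the same made-up scene on a sensor that clips at 255.
radiance = np.array([2.0, 50.0, 400.0, 3000.0])   # true scene brightness

def capture(exposure):
    return np.clip(radiance * exposure, 0, 255)   # the sensor clips highlights

short, long_ = capture(0.05), capture(1.0)

# Keep the long exposure where it isn't clipped; elsewhere scale up the
# short exposure. The merge spans a range no single frame could record.
merged = np.where(long_ < 255, long_ / 1.0, short / 0.05)
assert np.allclose(merged, radiance)
```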

<p>A related process combines two or more images focused on different planes to emulate a greater depth of field than otherwise possible. If something moves, the process may fail, however. This can probably be done in Photoshop (e.g., by selective masking and layering), but there is specialized software available.</p>

<p>There is also a scanning technique wherein the film is scanned two or more times and the images combined. This reduces the random noise present in all scans, particularly in dense areas of the film, by simple averaging. As in statistical sampling, the variations due to noise decrease with the square root of the number of passes made. Each pass takes time, and you have to quadruple the number of passes to halve the noise. Consequently, there are diminishing returns above 4 samples.</p>
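The square-root law can be checked with a quick NumPy simulation (made-up noise level; each "scan" is a noisy reading of the same pixel):

```python
import numpy as np

rng = np.random.default_rng(0)
true_pixel = 100.0
sigma = 8.0                        # per-pass noise level (made up)

def averaged_scan(n_passes, trials=100_000):
    """Residual noise after averaging n_passes noisy readings of one pixel."""
    readings = true_pixel + rng.normal(0.0, sigma, size=(n_passes, trials))
    return readings.mean(axis=0).std()

# Residual noise shrinks like 1/sqrt(N): ~8, ~4, ~2 for 1, 4, 16 passes.
for n in (1, 4, 16):
    print(n, round(averaged_scan(n), 2))
```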


<p>Take two images and shift the "camera" by half a pixel from one to the other; then there are also mathematical techniques that allow you to extract sub-pixel information.<br>

The most <a href="http://de.arxiv.org/abs/0807.3673">extreme application</a> of extracting sub-pixel info from a frame I have seen was of the order of 1/100 of a pixel, but of course this was a very specific application: defining the photocenter of an object in a spectrally dispersed frame with respect to the neighbouring pixels/spectral bins.</p>
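The half-pixel-shift idea can be sketched with NumPy (a made-up one-dimensional "scene", and a coarse camera whose pixels each average two adjacent fine samples): interleaving two shifted frames samples the scene at twice the density of either frame alone.

```python
import numpy as np

# A fine made-up "scene", and a coarse camera whose pixels each average
# two adjacent fine samples. Two exposures, shifted by half a coarse
# pixel, sample the scene at twice the density when interleaved.
scene = np.array([0, 0, 9, 9, 0, 0, 9, 9], dtype=float)

frame_a = scene.reshape(-1, 2).mean(axis=1)         # pixels at offset 0
frame_b = scene[1:-1].reshape(-1, 2).mean(axis=1)   # shifted by half a pixel

merged = np.empty(frame_a.size + frame_b.size)
merged[0::2] = frame_a                              # interleave the two frames
merged[1::2] = frame_b

print(merged)   # 7 samples at double density, vs 4 from either frame alone
```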


<p>Simply put: no, software cannot create information that is not there. To get more resolution, you either need more input data (= multiple pictures) or software that "guesses" or interpolates how to fill the gaps. Interpolation algorithms will surely get better, but the fundamental problem remains.</p>

<p>Sarah - <em>"what if we could fill the area with a continuation of the lines and patterns at the edge of the existing image, at least as a first approximation?"</em></p>

<p>This is called "inpainting". Define a border (automatically, by finding a "spot" or "blemish", or manually, by drawing a border), sample textures outside the border, and draw those textures inside the border in a reasonably intelligent way. Here's some reference material, but it may be a bit over your head.</p>

<p><a href="http://www.math.ucla.edu/~imagers/htmls/inp.html">UCLA inpainting links</a></p>

<p><em>Then we could touch up the details with cloning. We could use the same approach to "patch over" an area we would like to remove -- like an ex-spouse. I wouldn't mind a tool like that. :-)</em></p>

<p>The "spot healing brush" in PhotoShop is a crude implementation of inpainting. It's not really suited to doing a whole ex, but it can fill in your blank corner, and knock out some pretty big blemishes (not just zits). Resynthesizer is a slightly more sophisticated inpainter for the GIMP.<br>

<a href="http://www.logarithmic.net/pfh/resynthesizer">Resynthesizer</a></p>

<p>Rashed - <em>"Can you name the software which can do it without losing any quality? Experts will not find out that the image was actually shot with a 16 MP camera."</em></p>

<p>Interpolation works sometimes. It depends on the image source. Some interpolators use <em>superresolution</em> techniques (one of the things that Thomas describes) or <em>deconvolution</em> (another thing he describes). Both of these involve making intelligent guesses as to the nature of the source material. E.g., in astrophotography, if you have four equally bright pixels in a square, you can "guess" that this happened because you had a star at exactly the border of the four, blurred equally onto all four. Then you can generate a higher resolution image with one star at a location between where the original four pixels were. More useful for general photography is "edge finding". If you find edges, you can interpolate between those edges. That way, a leaf, flower petal, hair, lip line, etc. gets larger and still retains its shape, without getting "fuzzy" or "jagged".</p>

<p>Other "deconvolution" techniques assume that the original image actually had more resolution than it currently has, and then try to build a mathematical model of how the resolution got "lost", e.g. from lens blur, motion blur, or diffraction. They then try to reverse the detail-losing process. Again, this is sometimes successful, sometimes not.</p>
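A noise-free toy version of deconvolution in NumPy (made-up signal and blur kernel; with real images, noise forces regularized variants such as Wiener filtering): blur by multiplication in the frequency domain, then undo the blur by division.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = rng.random(32)                     # made-up sharp "image" (1-D)

kernel = np.zeros(32)
kernel[:3] = [0.5, 0.3, 0.2]                # a simple known blur kernel

# Blurring = multiplication by the kernel's spectrum; deconvolution = division.
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))
restored = np.real(np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(kernel)))

print(np.allclose(restored, signal))        # -> True (noise-free case only)
```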

<p>Stash Rusinsky's SAR program does some of this. It's not a panacea: on some images the enlargements look natural, on others, cartoonish.</p>

<p><a href="http://www.general-cathexis.com/">SAR Image Processing</a></p>

<p>There are other "guessing" techniques. Fractal interpolation (as far as I know, not the technique actually used in the program called "Genuine Fractals") tries to figure out what sort of "random process" generated a particular detail in an image, and then generates more detail of that type. I've seen results that are scary: reconstructed hair, irises of eyes, and skin textures that had quite a sense of reality. Much detail in nature is generated by random processes, and this technique works for such things.</p>


<p>I'm still waiting for the machine they used in Blade Runner to zoom into the photo and figure out there was a tiny mirror on the dresser, and in the reflection of the mirror was a gal in a bathtub, and Deckard made a polaroid of the tattoo on her back so they could track her down. That would be some extreme interpolation.</p>

<p>What is the point of creating additional pixels that were not there in the first place? Yes, you can upsize the image in Photoshop, but no additional information is added.<br>

Interpolating or extrapolating is simply creating information that is not there. If I wanted to do that, I might as well start with a blank canvas in Photoshop and start drawing.<br>

Granted, you can take a few shots with different exposures and create an HDR image by merging them together. Maybe that is what the OP wants.</p>

