
Future visions


tapani


<p>Boredom got me thinking about features that might appear in DSLR bodies in the future - not in the next models, but in the "might be possible one day" category. Maybe in a 1D Mk VI or so...</p>

<ul>

<li>Lens-specific correction data in the camera. The camera would know how a given lens distorts at any given settings and correct for whatever it can - pincushion and barrel distortion, vignetting and chromatic aberration at least. This can be done in post-processing software too, but doing it in camera would have many advantages.</li>

<li>Single-shot exposure bracketing. Since a digital exposure is ultimately just a count of the photons hitting each pixel, it might be possible to read intermediate values off the sensor while continuing to let light in. This isn't possible with current sensors, but there's nothing impossible about it in principle that I can see. (Essentially the same thing is already done in multi-minute astrophotography by combining multiple shots.) This would give a radical boost to effective dynamic range.</li>

<li>Multi-focus shots, or the ability to change focus in post-processing. This is theoretically possible by using two sensors slightly apart and some heavy math to compute the direction of the light rays - I believe a sort-of-functioning prototype has already been built. There's also some work being done on multiple-aperture lens constructions. Besides post-exposure misfocus correction, it would allow unprecedented depth of field and the possibility of creating 3D images. (This kind of thing is already done from multiple separate, differently focused shots.)</li>

<li>Virtual extension tube, or the ability to move the effective focal-plane distance from the lens without physically moving anything. This could be done by a technique similar to the above, i.e., by computing where the light rays would hit the sensor if it were at a different distance.</li>

</ul>
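To make the first item concrete, here is a minimal sketch of the kind of per-lens geometric correction a camera could apply, assuming a simple radial polynomial (Brown-Conrady-style) distortion model; the coefficients `k1`, `k2` are hypothetical values the camera would look up for the mounted lens at its current settings:

```python
import numpy as np

def undistort_points(xy, k1, k2, center):
    """Map pixel coordinates through a simple radial polynomial
    distortion model. k1, k2 are per-lens coefficients (hypothetical
    here) that the camera would look up for the current focal length
    and focus distance; negative values correct barrel distortion,
    positive values pincushion."""
    d = xy - center                              # offset from optical centre
    r2 = np.sum(d**2, axis=1, keepdims=True)     # squared radius per point
    factor = 1 + k1 * r2 + k2 * r2**2            # radial scaling factor
    return center + d * factor
```

With `k1 = k2 = 0` the mapping is the identity; real firmware would also handle vignetting and chromatic aberration with analogous per-lens lookup tables.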

<p>OK, that's all for now. Comments on the feasibility and desirability of these, as well as additional semi-scifi features, would be welcome.</p>


The first item on your list would be easy to include in cameras today, since the software already exists; it should also eventually be part of any RAW conversion software, not just DxO Optics.

<p>I like the second item on the list, and it's something I've thought of before too. Multi-sampling while the shutter is open would require a lot of computing power and buffer size, but that shouldn't be a problem with today's available technology. Coupled with noise-reduction algorithms, it could also eliminate shadow noise as the different samplings are combined. With proper tone-mapping and curves, this would make it possible to get as many stops as needed, with no clipping whatsoever if desired, giving the smooth transitions into light and dark that are one of the main differences between film and most of today's digital captures.</p>

<p>Changing focus after the shot is something I read about a while ago. All it would require is a layered stack of sensors (which starts to sound a bit expensive), covering at least a certain range of focal planes, making it possible to fine-tune focus in post-processing within the limits dictated by the focal length, aperture and depth of field. It could also include algorithms for merging the different samplings to get more of the image in focus, in addition to tweaking the exact focal plane.</p>

All very interesting stuff, and none of it sounds like science fiction, as some of the technologies already exist and just need to be implemented in future cameras. As with other high technology, new things take a few years to show up in production items, and I am sure that laboratories around the world are working on marvellous things we may yet be unaware of that will subtly alter the future of digital photography and facilitate both technical image quality and creative possibilities.


Topic one could be done ... but this data would be better stored in-lens rather than in-camera (for obvious reasons).

 

Topic two ... you misunderstand the design of CCD/CMOS devices ... you cannot do what you ask for.

 

Topic three ... can be done ... the question is whether you're OK with a sub-one-megapixel image as the result of such a focus-me-later image from a more-than-10-megapixel camera.

 

Topic four ... I believe physics stands against this ... if it isn't in focus, it simply isn't in focus.


DPP 3.2 does (1), but it's part of the conversion from RAW, so unless you are relying on in-camera conversion to JPEG, it would be of limited value. However, it's perfectly possible with current technology.

 

I'm not sure if (2) is technically possible, but if it is possible to read an image off whilst still accumulating it, then that does have possibilities.

 

I don't think (3) and (4) are real.


Maybe a bit limited in vision. The <a href="http://www.patentstorm.us/patents/7248928.html">artificial eye</a> would be the ultimate ECF, utilising <a href="http://www.opticsinfobase.org/abstract.cfm?id=142413">dynamic adaptive optics</a> to adjust the focal length of the <a href="http://www.photonics.com/content/news/2006/September/20/84453.aspx">artificial cornea</a> on the <a href="http://news.zdnet.co.uk/hardware/0,1000000091,2135096,00.htm">wearable camera</a>. The <a href="http://www.msnbc.msn.com/id/6089652/">stem cell</a>-grown bionic camera, attached to an <a href="http://citeseer.ist.psu.edu/217428.html">eye control neural network</a> implemented on a <a href="http://www.theregister.co.uk/2000/08/15/boffins_unveil_worlds_most_powerful/">quantum computer</a>, would adaptively provide a wearable adaptive-lens camera covering all focal lengths. Available in black, white or classic grey, the Canon would also solve the <a href="http://www.telegraph.co.uk/earth/main.jhtml?CMP=ILC-mostviewedbox&xml=/earth/2007/11/14/scisurf114.xml">ultimate theory of everything</a> between shots.

<p>Re (1), having the data in-lens would be better except for all existing lenses - if it could be stored in-camera it would work with old lenses, too. The camera might even have a means of learning the behaviour of lenses it has no prior data for. Also, having the data available in camera at the time of exposure, rather than just in post-processing, would allow various optimizations in focusing, exposure &c.</p>

<p>Re (2), I understand it cannot be done with current sensor technologies with any interesting speed, but there's nothing fundamentally impossible about it. Taking the idea a bit further, even the moment of exposure could be changed in post-processing - imagine a video camera with sufficient quality and speed that you can create good still images from any point in time.</p>

<p>Robin: (3) has already been demonstrated in prototypes, although with rather limited resolution as Rainer says - but maybe there's a new use for the megapixel race? </p>

<p>As for (4), it should be doable with the same basic idea as (3): a multi-plane sensor. The image is, after all, just the result of a computation, so if you have enough data to compute where the light rays would have hit a plane at a certain distance, you can determine what the image at that plane would have been.</p>
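The computation behind (3) and (4) can be illustrated with a rough shift-and-add refocusing sketch, assuming the camera has recorded several sub-aperture views of the scene (e.g. from a microlens array or twin sensors) with known baselines; `np.roll` stands in here for a proper sub-pixel shift:

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Shift-and-add refocusing of a light field captured as several
    sub-aperture views. `offsets` gives each view's (dy, dx) baseline
    from the centre of the aperture; `alpha` selects the virtual focal
    plane: objects whose parallax matches alpha * offset come out
    sharp, everything else blurs."""
    acc = np.zeros_like(views[0], dtype=float)
    for view, (dy, dx) in zip(views, offsets):
        # shift each view in proportion to its aperture position,
        # then average the shifted views
        acc += np.roll(view, (round(alpha * dy), round(alpha * dx)),
                       axis=(0, 1))
    return acc / len(views)
```

Sweeping `alpha` re-renders the same capture at different focal planes after the fact, which is exactly the "focus-me-later" idea, and also why the per-plane resolution drops as Rainer notes: the pixels are shared between the views.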

<p>Glen: I did say "<b>semi</b>-scifi"... :-)</p>

