Lytro camera--focus AFTER you take the pic!


Recommended Posts

<p>This is weird... cool... unsettling? Sounds like it will actually get produced, too.</p>

<blockquote>

<p>“You just concentrate on the image and composition, but there’s no need to worry about focus anymore,” Mr. Hernandez said. “That’s something you do later.”</p>

</blockquote>

<p><a href="http://www.nytimes.com/2011/06/22/technology/22camera.html?">http://www.nytimes.com/2011/06/22/technology/22camera.html?</a></p>


<p>From the article:</p>

 

<blockquote>

<p>[H]e said it was the size of a standard point-and-shoot camera. The picture resolution, he added, was indistinguishable from that of his other point-and-shoots...</p>

</blockquote>

<p>So for now, at least, it's not competing against DSLRs for image quality. The trick of using microlenses to capture the same scene with a variety of focus points is cute; it sounds like the camera is, in essence, taking several shots in parallel and letting you pick through them later. If resolution (in MP terms) is equal to current P&S cameras, say 12 MP or so, then the storage requirements of all that parallel shooting must be pretty high.</p>


<p>The main concept: it's not just recording which pixels are at what value for tone/color ... it's recording the direction from which a given ray has hit that pixel. Hence their reference to recording information about the light <em>field</em>, rather than the 2D image that has landed on the 2D sensor. It's sort of the opposite of the computer rendering that's done from 3D models and ray tracing. It's quite cool, actually. Not sure how it's all going to apply in real life, but I'm sure they'll sell some of 'em.</p>

<p>RE: Craig - I have read through his doctoral thesis and they are not shooting multiple images. It is one image that can be recalculated through software. They did, in fact, use a p&s camera at first, but then moved to a Contax 645 with a digital back to solve packaging issues and increase resolution. It's not so cute when you see the formulae they are using to generate the images.</p>

<p>Josh - The thesis will show you how it operates. Much of it is way over our heads, but in the introduction he mentions which chapters would be of interest to photographers. I found the interesting parts for me just by skimming through it. </p>

<p>See all 177 pages of the thesis here: <a href="http://www.lytro.com/renng-thesis.pdf">http://www.lytro.com/renng-thesis.pdf</a></p>


<p>I wonder whether the technology can be used to generate something similar to focus stacking in the macro range. With focus stacking you need to take many exposures and merge them. This is time consuming and will not work well if the subject moves during the time it takes to make each exposure. The perspective is also changed from what you would get by shooting from the same point and using a small aperture. A single exposure with multiple microlenses would solve the problem of shooting a moving subject. You could use flash or just turn up the ISO if your sensor is good enough. The software might then enable you to use the correct focus points for each distance and merge or connect them. </p>

<blockquote>

<p>Jeff Adler - I wonder whether the technology can be used to generate something similar to focus stacking in the macro range. With focus stacking you need to take many exposures and merge them.</p>

</blockquote>

<p>No. The plenoptic technique can only reduce DOF to that of the decimated lens aperture. For example, Ren Ng's first camera over at Stamford had f4 microlenses at 13.5x the width of the sensor pixels (a 125 micron microlens array over a 9.25 micron sensor). So he could achieve any DOF between f54 and f4. But computational constraints limit you at the deep DOF end, so he got results more like f28. So it actually takes away a stop of DOF; you're limited to one stop less DOF than you get at a lens's diffraction limit.</p>
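<p>A quick sanity check of the numbers above (my own back-of-envelope sketch, not code from the thesis; the variable names are mine):</p>

```python
# The deep end of the refocusable DOF range is roughly the lens
# f-number scaled by the linear decimation factor (microlens pitch
# divided by sensor pixel pitch), per the figures quoted above.

microlens_pitch_um = 125.0   # microlens pitch in Ren Ng's prototype
pixel_pitch_um = 9.25        # sensor pixel pitch

linear_decimation = microlens_pitch_um / pixel_pitch_um   # ~13.5x
lens_f_number = 4.0
deep_end = lens_f_number * linear_decimation              # ~f/54

print(f"decimation: {linear_decimation:.1f}x, "
      f"refocus range: f/{lens_f_number:.0f} to f/{deep_end:.0f}")
```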

<p>The focus stacking technique increases DOF to a theoretical f(effective)*N(shots)/2. So, shooting at a non-diffraction limited effective f8, with a 100 shot stack, you'd have the DOF of...</p>

<ul>

<li>f8 * 100 / 2 = f400</li>

</ul>

<p>But you still get to keep the resolution of f8.</p>
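<p>The stacking arithmetic above can be written as a one-liner (my own illustration of the rule of thumb quoted, not an exact optics formula):</p>

```python
def stacked_dof_f_number(f_effective: float, n_shots: int) -> float:
    """Rule of thumb from the post: the effective DOF aperture of a
    focus stack is roughly f_effective * N_shots / 2."""
    return f_effective * n_shots / 2

# The f8, 100-shot example from the post:
print(stacked_dof_f_number(8, 100))  # -> 400.0, i.e. the DOF of f400
```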

<p>Information theory is a b****!</p>



<blockquote>

<p>Looks very promising I must admit.</p>

</blockquote>

<p>Why must you admit that? It's not true, and no one is coercing you to lie, I hope.</p>

<p>To understand why the plenoptic camera is a vessel of that which promoteth growth and is very powerful, see <a href="http://forums.dpreview.com/forums/read.asp?forum=1000&message=38726623">here</a>.</p>


<p>Joseph, thanks for your explanations on dpreview. (I read your other comments in that thread too.) Very helpful. I'd say the resolution loss and computational requirements pretty much doom this, at least for now.</p>

<p>As a technology professional I find it interesting, but as a photographer I think it's a solution in search of a problem. I'm not looking for cameras that require less skill to use. I don't want to dumb down my shooting experience and make fundamental decisions (like how much DOF) after the fact. Even if Lytro had a working camera that produced images of reasonable quality, I wouldn't want it.</p>

<p>One thing, though, that I find a bit strange about your comments is that although you obviously know what you're talking about, you seem not to know that there's an N in the word "Stanford". It's not exactly an obscure school that nobody's ever heard of, is it?</p>


<p>Craig, thanks. Glad you enjoyed. I've used that same phrase to describe it, "a solution in search of a problem".</p>

<p>I do tend to do that, swapping Stamford (in the UK) with Stanford (in California). It's especially bad, considering that I've been to both. Including a trip to the university right after the big quake that trashed so much of the campus.</p>


<blockquote>

<p>I'm not looking for cameras that require less skill to use.</p>

</blockquote>

<p>Craig, you are in the minority, as are most DSLR users. A great majority of cameras in use are P&S, and most of those users would be happy with a camera that offers the possibility of better photos with little or no extra effort on their part.</p>


<blockquote>

<p>I also think that Ken Rockwell is dead-on with his opinion on this.</p>

</blockquote>

<p>I think you are right, especially this comment:</p>

 

<blockquote>

<p>I see these things all the time. We can forget about it until someone actually has a product to sell, and then we can evaluate it on its own merit.</p>

</blockquote>


<p>Bill, this is a pretty cool crowd.</p>

<p>But I'm curious, why do you think that Lytro would be good for surveillance? It's very low resolution, and it doesn't let you see anything you wouldn't already see on a conventional "deep DOF" image. It limits the low light sensitivity, and it requires insane amounts of processing power to operate in real time. I had a look at the calculations in the paper. I'm thinking 300 billion calculations per 16mp sensor capture (that only delivers 0.09mp of output resolution). That's 9 trillion calculations/sec for realtime video.</p>
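<p>The realtime figure follows directly from the per-frame estimate (my own arithmetic, assuming a 30 fps video rate):</p>

```python
# Back-of-envelope check of the processing load quoted above.
calcs_per_frame = 300e9      # ~300 billion calculations per 16mp capture
frames_per_second = 30       # assumed realtime video rate

calcs_per_second = calcs_per_frame * frames_per_second

print(f"{calcs_per_second:.0e} calculations/sec")  # prints "9e+12 calculations/sec"
```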

<p>Every new sensor guy says "it will be great for surveillance". The SMaL people said that about their (alleged) high dynamic range sensor, until the market figured out that surveillance cameras shoot through filthy lenses and protective windows (cleaned every six months by an unskilled janitor, if you're lucky) and you can't actually see high dynamic range. The wavefront coding people said it, the structured light people said it, the "flutter shutter" people said it, but in the end, every last one of them simply made surveillance performance worse, without bringing advantages to the table to compensate. Although structured light did prove useful for game controllers. ;)</p>


I am only repeating what I hear from venture capitalists: not the idiot ones who pile into me-too investments, but the ones who understand technology, like backer Andreessen Horowitz (but not them).

 

All I can imagine myself is that a casino camera, which currently must be wide angle for full coverage, could be replaced with a Lytro. In reviewing a recording, security personnel could focus in on whatever looked suspicious.

 

Or in a parking garage, the focus could be changed to make a license plate visible anywhere from near to far. Lytro says low-light performance is good, but skeptics abound. Lighting is usually not good in a parking garage.

 

What I would like to know is how the images are stored. Are they as big (10x) as JW's "decimation" implies? And is decimation the correct number, or is it dodecimation (12x) or larger? I suspect a lot of the processing is already done and the surveillance video will be smaller than the sum of its parts.


<p>I remember back in the late 1970s a company called Pressley came out with a black-box hydrogen converter that was going to eliminate US dependency on foreign oil. It is now 2011 and we are still dependent on foreign oil. As someone already pointed out, a number of years ago the Foveon sensor was going to make all other camera sensors obsolete. Now a company is publicizing a "morning after" multifocus sensor, but there is no camera. Until I and others see a working camera, as far as I am concerned it does not exist and may never exist.</p>

<p>I surmise that the company is using the promotion to raise venture money to finance further research to see if their ideas can translate into a marketable product. Good luck to them.</p>


<p>Hey Bill...</p>

<p>It's "decimation" as in getting rid of a lot of stuff, not by any particular base value. The original Ren Ng camera at Stanford had a linear decimation of 1/13.5, or an area decimation of 1/182. Which meant that, for every 0.090mp output frame you got, you had to store a raw 16mp sensor frame.</p>
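<p>The resolution arithmetic is easy to verify (my own sketch of the figures in this post, not code from anywhere official):</p>

```python
# A 16mp raw sensor frame, decimated 13.5x in each linear dimension,
# leaves roughly a 300x300 final image.
sensor_pixels = 16e6         # raw sensor frame
linear_decimation = 13.5     # microlens pitch / pixel pitch

side = sensor_pixels ** 0.5 / linear_decimation   # ~296 px per side
output_pixels = side ** 2                         # ~88,000 px, i.e. ~0.090mp

print(f"output: {side:.0f} x {side:.0f} = {output_pixels:.0f} pixels")
```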

<p>Your casino scenario is a perfect example of why the Lytro is useless in a security situation. If you had that same camera, churning out 16mp raw frames, you'd only get 90,000 pixel final images, 300x300. From a wide angle shot, at that resolution, you wouldn't even be able to make out individual faces.</p>

<p>Same thing with the parking garage.</p>

<blockquote>

<p>the focus could be changed to make a license plate visible anywhere from near to far.</p>

</blockquote>

<p>The focus can only be changed within the area covered by the decimated lens DOF. If you took the original f4 Lytro, you can refocus anywhere you want within an f28 DOF, but you've got f45 low light ability and diffraction, and a very low resolution. Why not use a real camera, with an f28 lens, and an 8mp sensor (about 10x the linear resolution of a Lytro), and get both better low light ability and much more detail?</p>
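<p>That "about 10x the linear resolution" claim also checks out (again my own arithmetic, comparing an 8mp camera against the ~0.09mp plenoptic output):</p>

```python
# Linear resolution ratio between a conventional 8mp camera and a
# Lytro-style ~0.09mp final image.
conventional_pixels = 8e6         # ordinary 8mp security camera
plenoptic_output_pixels = 0.09e6  # decimated plenoptic output

linear_ratio = (conventional_pixels / plenoptic_output_pixels) ** 0.5
print(f"linear resolution advantage: ~{linear_ratio:.1f}x")  # ~9.4x
```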

<p>For security, as far as I can tell, Lytro literally brings nothing to the table.</p>


Howard, "morning after multifocus sensor" is the funniest thing I've read this week.

 

I don't see this becoming popular with photographers because (1) autofocus is a solved problem, and (2) most people do not enjoy the digital darkroom. As to other uses, in 3 years I'll give JW a big pat on the back if he's correct.


<p>My take on this is all practical: they are targeting the wrong market. Normal consumers do not want to deal with anything like a RAW file, which is what the camera must produce. There is also, of course, no in-camera JPG, since that would mean selecting one or more focal points up front, which the initial camera avoids entirely (the initial claims are of great shooting speed since no AF is done).</p>

<p>Also, even for those willing to shoot the camera's native format, what programs will support the camera besides the software that ships with it? Adobe has done an amazing job of supporting many cameras, but support for this camera isn't something you can add like any other: it would require some kind of control mechanism for selecting focal points, which is totally unlike any other RAW conversion today.</p>

<p>I think they could actually find a niche to sell this in if they put it out as a mainly video camera, where you could have the focus drifting artistically about.</p>

<p>It will be very interesting to see how much it costs on launch as well.</p>

<p>One last thought on the camera is that it further goes to show how a truly universal format to hold camera RAW data is not possible, as not even DNG in linear mode will be able to hold anything like the totality of this file.</p>

 

