Experimental Camera at Stanford

Discussion in 'Casual Photo Conversations' started by jonk, Sep 16, 2010.

  1. Here is an article on an experimental camera at Stanford which may be what we see in 3-5 years:
    experimental camera
  2. They can experiment all they want, but as long as the Japanese dominate the camera market, things will still be done the Japanese way. Unless you think Kodak, Leica, or Foveon can take over the camera market in 3-5 years.
  3. They can experiment all they want, but as long as the Japanese dominate the camera market, things will still be done the Japanese way. Unless you think Kodak, Leica, or Foveon can take over the camera market in 3-5 years.
    ...and the point is?
  4. Cool! This was really just a matter of time. HDR methods are gaining in popularity, for better or worse. Stitching software has been around for a while, and there is software that allows one to augment depth of field with bracketed focus in the same way that HDR images are assembled from bracketed exposure. It's even possible to eliminate moving objects (e.g. cars and people) from images, leaving only the stationary objects (e.g. buildings). Given that all this is happening, it's only a matter of time before it's incorporated into our cameras.
    John, Nokia, Google, and Apple are already onboard. Looking into my foggy crystal ball, Madam Sarah predicts that the cell phone camera will never be allowed to surpass the high-end dSLR in its capabilities. I'm sure Canonikon et al. will manage to keep up with the crappy cell phone, and then some.
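The focus-bracketing trick mentioned above (extending depth of field the way HDR extends dynamic range) can be sketched in a few lines. This is only an illustration, not how any shipping camera or stacking package actually works: it assumes grayscale frames as NumPy arrays and simply keeps, per pixel, whichever frame is locally sharpest.

```python
import numpy as np

def local_sharpness(img, radius=2):
    """Per-pixel sharpness: absolute discrete Laplacian, then a small
    box blur so the measure is stable over a neighborhood."""
    p = np.pad(img, 1, mode="edge")
    lap = np.abs(4 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]
                 - p[1:-1, :-2] - p[1:-1, 2:])
    k = 2 * radius + 1
    padded = np.pad(lap, radius, mode="edge")
    blurred = np.zeros_like(lap)
    for dy in range(k):          # simple box blur via shifted sums
        for dx in range(k):
            blurred += padded[dy:dy + lap.shape[0], dx:dx + lap.shape[1]]
    return blurred / (k * k)

def focus_stack(frames):
    """Per pixel, keep the value from whichever frame is locally sharpest."""
    stack = np.stack(frames)                                # (n, H, W)
    sharp = np.stack([local_sharpness(f) for f in frames])  # (n, H, W)
    best = np.argmax(sharp, axis=0)                         # (H, W)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Real stackers also align the frames and handle the parallax and magnification changes that come with refocusing; none of that is shown here.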
  5. Okay, so it's an over-powered digital camera running Linux, with a bunch of automatic post-processing built in for HDR, shake correction, etc., and the ability to modify the software or add your own. This is actually less revolutionary than it seems. There are already third-party firmware add-ons for some digital cameras (Magic Lantern, for example), and there are already cameras with built-in HDR. The real point of this thing has more to do with programming than photography, as the article itself indicates:
    "I actually think it's a sneaky way of introducing the concept of programming to people who might never have considered it," says Hanspeter Pfister, a professor of the practice of computer science at Harvard. As photographers who worked with film often became chemists in order to modify their art, so too photographers who work with digital cameras could become programmers, he says.​
  6. The SOP seems to be to spend millions on this technology, then hand it off to a company overseas who will make money on it.
  7. Michael: You can trust Stanford's intellectual-property managers to make as much money out of it as possible. The profit these days is in the intellectual property, not the manufacturing. Nobody just hands off these things unless they have no commercial value.
  8. The further into the article I read, the more I felt that the good professor maybe wasn't very current on what the most recent digital cameras already do.
    The examples look considerably less capable than a lot of the HDR pictures we see in many portfolios here. Even with a single exposure, you could get a long way there with just highlight and shadow controls.
    If you plug your digital camera directly into a portable computer and carry the whole thing with you, you'd do better than this Stanford "one-shot" as they used to call them in the early color days.
    Is it totally irrelevant that Leland Stanford Junior University was one of the key players in that marvel of computer science, the Strategic Defense Initiative? The Stanford Research Institute has also backed the Shroud of Turin, the Face on Mars, remote viewing, and other cutting-edge projects. %}
  9. If you plug your digital camera directly into a portable computer and carry the whole thing with you, you'd do better than this Stanford "one-shot" as they used to call them in the early color days.
    How would carrying your laptop allow you to do three-exposure blending in camera with your current camera?
  10. Whether the computer is IN the camera or in an attached portable seems to me a distinction without a difference.
  11. The difference between having just a camera and a camera with laptop connected to it is huge.
  12. Will it fix the dreaded telephone-pole-out-of-her-head syndrome automatically?
  13. SHADOW AND LIGHT: The [Stanford] camera takes several frames at different exposures, then merges them into a single image that captures the full range of intensity within a scene.
    Automatic, in-camera HDR along with focus stacking.
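As a rough sketch of that "take several frames at different exposures, then merge them" step (nothing here is the Stanford camera's actual pipeline), one could merge bracketed exposures into a single floating-point radiance map, assuming linear sensor values in [0, 1] and known exposure times:

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Merge bracketed exposures (linear values in [0, 1]) into one
    floating-point radiance map.  Each pixel is an average of
    (value / exposure_time), weighted to trust mid-tones and distrust
    clipped shadows and highlights."""
    frames = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    times = np.asarray(exposure_times, dtype=np.float64).reshape(-1, 1, 1)
    # Triangular "hat" weight: zero at black and white, peak at mid-gray.
    weight = 0.5 - np.abs(frames - 0.5)
    weight = np.maximum(weight, 1e-4)   # keep pixels clipped in every frame finite
    return np.sum(weight * frames / times, axis=0) / np.sum(weight, axis=0)
```

The result is an unbounded float image that captures the full range of intensity; it still has to be compressed somehow before it can be displayed, which is where the arguments below about tone mapping begin.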
  14. If they want to really impress people and make some real money, they should develop a way to make an HDR look natural instead of as though it were taken on another planet. The HDR concept is great, except the results tend to look like the pages of a graphic novel or a Dick Tracy comic strip.
  15. I forget where I saw it, but I want to say that about six months ago or so I saw an article on an "open source" digital camera at MIT. The camera body in that article looked a lot like the one in this article. Could it be that smart people across the country are all working on the same or same type of project? It seems plausible to me.
    I think it's the same project.
    I think it'll be ten years, but when we get control of the software more, and won't have to rely as much on proprietary methods, then we'll see what's up. By the time we get it, of course, every cent of profit will have been milked out of the thing.
  16. Dan, HDR as currently implemented (with tone mapping to cram the dynamic range into a lower dynamic range format displayable on a monitor or printable on paper) will never, ever look natural. Never. That said, an HDR (and non-tone-mapped) image file, with many, many stops of dynamic range, would be fantastic. It could be fractured into multiple layers of different contrasts and judiciously cut and blended. I'd love a camera that could do that for me! OTOH, a sensor with deeper wells (maybe a higher voltage sensor?) might do the same thing.
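Brett's "fractured into multiple layers of different contrasts" idea could look something like this. It is only a sketch, assuming a linear floating-point radiance map as input; the stop offsets and the gamma value are arbitrary illustrative choices, not anyone's actual workflow:

```python
import numpy as np

def virtual_exposures(hdr, stops=(-2, 0, 2), gamma=2.2):
    """Render a linear HDR radiance map into several displayable layers,
    each pushed by a different number of stops.  The layers could then be
    masked and blended by hand instead of being tone-mapped."""
    layers = {}
    for s in stops:
        pushed = hdr * (2.0 ** s)             # exposure shift in stops
        clipped = np.clip(pushed, 0.0, 1.0)   # clip to display range
        layers[s] = clipped ** (1.0 / gamma)  # simple display gamma
    return layers
```

Each layer is just what a different bracketed exposure of the scene would have looked like, so cutting and blending them judiciously stays entirely in familiar darkroom territory.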
  17. The difference between having just a camera and a camera with laptop connected to it is huge.
    Depending on the laptop, maybe in some dimension like total mass, but is this really a technological breakthrough?
    Another breakthrough: let's make everybody learn Linux to use their camera ;)
    What an improvement over the current situation. LOL
  18. Seems like the industry jumped out way ahead of the academics by about six years. I think the next step would be a stereoscopic digicam. Two sensors would work better for in-camera HDR and 3-D type applications. And if you had them firing alternately, you could get twice the fps for slow-motion stuff. Throw in a third lens for IR and a Lomo filter and voila!
  19. I don't get the idea that HDR can't look natural. Obviously there are a million over-the-top examples here on PN, which I personally dislike rather strongly, but plenty of people are doing natural-looking exposure blending. I do it all the time with blown highlights, by reprocessing the same raw file or by using a bracketed exposure, then selecting highlight areas and copying and pasting from one photo to another. The results are very natural looking.

    1. Reprocess raw with exposure reduction and output, or use bracketed exp.
    2. Select>color range>highlights
    3. Expand or contract selection as needed ... feather
    4. Copy>paste into photo with blown highlights>align

    My goal is to make something natural that matches the scene, not to create a frankenphoto. It's not hard to do and it's effective. It would not be hard to put the same process into the camera.
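For what it's worth, those four steps translate almost directly into code. This is a rough single-channel sketch, assuming the darker bracketed frame is already aligned, with a plain box blur standing in for Photoshop's feathering:

```python
import numpy as np

def blend_blown_highlights(base, darker, threshold=0.95, feather=2):
    """Select the blown highlights of `base`, feather the selection,
    and paste the (already aligned) darker exposure through that mask."""
    mask = (base >= threshold).astype(np.float64)   # step 2: select highlights
    # Step 3: feather the selection with a box blur of radius `feather`.
    k = 2 * feather + 1
    padded = np.pad(mask, feather, mode="edge")
    soft = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            soft += padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    soft /= k * k
    # Step 4: composite the darker frame into the blown areas.
    return soft * darker + (1.0 - soft) * base
```

The threshold and feather radius are just illustrative defaults; in practice you would tune them per image, exactly as you would with a real selection.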
  20. Brett and Sarah,

    Good comments from both of you. Thanks. I was being a bit facetious, perhaps because I had a certain expectation for HDR, i.e. that it would look natural if you didn't process it too far. Unfortunately, the opposite is true. You would have to work very hard to make it look even remotely natural, but what pops out of the software automatically looks really strange. When someone comes up with an HDR tool that leans more toward the natural look, I think it will be a big hit and I would definitely give it a try. But I have given up on the whole idea for the time being.
  21. Brett, that's a good approach to taming blown highlights. However, that's hardly the conventional HDR approach. What makes HDR work look unnatural is the tone mapping step, which you and I don't do. I think your approach might be replicated by starting with a non-tone-mapped HDR file and applying a sigmoidal contrast curve that would compress the highlights (and also the shadows). As I said, I have nothing against HDR, per se, but it's the tone mapping that makes me ill. It's the tone mapping that has become conflated with the term "HDR" and yields that "HDR look."
    Dan, I admit there's subtle HDR/tone mapping, but it only yields subtle improvement in dynamic range, and it still looks like "HDR" to my eyes. My biggest gripe about tone mapping is what happens around a high contrast border. The already-high contrast of that border is accentuated. The highlight details on the bright side of the border become washed out (in the halo), and the shadow details on the dark side of the border are crushed. Using my general approach of combining layers, I often make a hard (but feathered) "cut" along a high contrast border, e.g. land and sky, and blend different contrast layers on each side of the "cut." I don't think an automated algorithm will ever do that, and it's not something that can be achieved by sliding sliders.
    BTW, maybe you have it all wrong, Dan. I think a "comic book" mode would be a big seller! Just look at all these "cartoon yourself" ads we see on PN! ;-)
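The sigmoidal contrast curve suggested above, applied globally to a non-tone-mapped HDR file, could be sketched like this. The mid-gray anchor of 0.18 and the contrast constant are illustrative assumptions, not anyone's published algorithm:

```python
import numpy as np

def sigmoidal_compress(hdr, midpoint=0.18, contrast=1.0):
    """Logistic curve over log exposure: mid-gray maps to 0.5, and both
    highlights and shadows roll off smoothly into [0, 1]."""
    stops = np.log2(np.maximum(hdr, 1e-8) / midpoint)  # stops above/below mid-gray
    return 1.0 / (1.0 + np.exp(-contrast * stops))
```

Because the same curve is applied to every pixel, there is no local adaptation and therefore none of the halos around high-contrast borders that give tone-mapped HDR its telltale look; the trade-off is that a global curve cannot rescue extreme scenes without flattening them.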
