Human eye and Camera, the basic differences:

Discussion in 'Casual Photo Conversations' started by dilip_kumar_singha, Oct 25, 2008.

  1. We view through our eyes (the best optical device available) and process / visualize the image
    through our brain (the best processor available), and our senses are tuned to this combination throughout
    our existence. We also view through a lens, take snaps, and process them further on a computer. What are the
    basic differences between the two, and which of those differences makes a snap striking and appealing?
  2. While the brain, I agree, /is/ the best available processor, it's that fact alone that makes human vision so good.
    The eye itself is /not/ the best available optical device, by a very long way – it only appears so because the
    processor works miracles of synthesis to clean up the information.

    For example: the human eye resolves, at best, about 8 lines per millimetre – and over most of its field of vision
    much less than that – and its performance declines from about age 20 onwards. A 35mm film camera, by comparison,
    resolves around 80-100 lines per millimetre in average hand-held circumstances. The rich and detailed vision we
    enjoy relies on the eye always being on the move, capturing partial, low-grade images which the brain combines
    to produce an enhanced final view of the world.

    I'm not just saying that for the sake of disagreeing – it's relevant to your main question.

    What makes the difference in a "snap"? Precisely that, in my opinion: it is a snap, a single moment fixed, which
    the eye alone cannot achieve. We see, in a real sense, in digitally synthesised video; photography cuts across
    that by presenting us with the impossibility (in nature) of high-fidelity stillness.
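A quick back-of-envelope sketch of the resolution figures quoted above (~8 lp/mm for the eye at best, ~80 lp/mm for 35mm film), converted into "megapixel" terms over a 36 x 24 mm frame. The two-pixels-per-line-pair convention and the frame size are assumptions for illustration, not claims from the thread:

```python
# Rough pixel-count comparison over a 36 x 24 mm frame,
# using the resolution figures quoted above (estimates, not measurements).
FRAME_W_MM, FRAME_H_MM = 36, 24
PX_PER_LP = 2  # Nyquist convention: two pixels per line pair

def frame_megapixels(lp_per_mm):
    """Megapixels a given lp/mm resolution implies over a 36 x 24 mm frame."""
    px_per_mm = lp_per_mm * PX_PER_LP
    return (FRAME_W_MM * px_per_mm) * (FRAME_H_MM * px_per_mm) / 1e6

print(f"eye  (~8 lp/mm):  {frame_megapixels(8):.1f} MP")   # ~0.2 MP
print(f"film (~80 lp/mm): {frame_megapixels(80):.1f} MP")  # ~22.1 MP
```

The two-orders-of-magnitude gap is the point of the post: the eye's raw optics are weak, and the brain's synthesis makes up the difference.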
  3. A camera usually gives a rectangular field of view of about 20-80 degrees, while the eye covers a roughly elliptical 180 degrees.

    A camera is, broadly speaking, evenly sharp over the whole frame, while the eye has a small sharp central area surrounded by a wide, less sharp area.

    A camera can usually adjust its focal length, while the eye cannot.

    Camera media can cope with an exposure range of maybe 7 stops if you are lucky, while the eye can usually cope with a wider range at any one time, and usually a wider absolute range of brightness.

    A camera is usually two-dimensional, while eye + brain process the image into three dimensions.

    A camera gives a still image (well, a still camera does!) while eye + brain create a moving image.
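The "7 stops" figure above is a ratio of light intensities: each stop doubles the light, so N stops span a contrast range of 2^N : 1. A minimal sketch (the 14-stop figure for the eye at one moment is a rough assumption for comparison, not a number from the post):

```python
# Each photographic stop doubles the light, so a medium holding
# N stops spans a contrast ratio of 2**N : 1.
def stops_to_ratio(stops):
    """Contrast ratio implied by a given number of stops."""
    return 2 ** stops

print(stops_to_ratio(7))   # 128 -> ~7 stops of film is roughly 128:1
print(stops_to_ratio(14))  # 16384 -> an assumed figure for the eye at one glance
```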

  4. Mine used to be among the 'best optical devices in the world', but they are now less than optimal, even discounting the color-blindness I've always had. Now I need +1.5 to +2.00 diopters to read newspaper print. Glass doesn't, as far as I know, suffer this decline after five decades or so.

    We have two eyes and see via stereoscopic vision; the camera looks at the world through its (usually) single 'eye'. We scan with our eyes; the camera takes a fixed view. Much of what we perceive is a montage of images over time, spanning just a few seconds, which is why the landscape I saw as I drove along the road cannot be captured with a camera. Perhaps there is more to Winogrand's apparently flippant answer to the question, "Why do you take photographs?": "To see what things look like when they're photographed."
  5. Interesting question. It's really interesting how we process with our brains, not just through our eyes.

    Good peripheral vision would be maybe 180 degrees, though it is a bit less for most people. That is what our eyes see. However, when you view a photo on your computer, for example, the brain kicks in and focuses on the image, not the surrounding stuff.

    For those purists who believe photos should reflect 'reality,' I guess that means everything should be photographed with a lens wide enough to pick up the entire 180 degrees. Hmmm, never thought of it that way before.
  6. Digital images can be sharper (if sharpened in camera or on a PC) than your eyes see naturally!
  7. If we tried to emulate the human eye with a sensor for a super camera, today it would be impossible. I guess the nearest things to it are a large-format camera in analogue, or one of those 21.1 MP Nikons. I have read that to match the human eye, a digital sensor would need about 576 megapixels. So I guess it will remain impossible for many years yet.
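One widely circulated back-of-envelope calculation behind the "about 576 megapixels" figure (often attributed to Roger Clark) assumes a roughly 120 x 120 degree visual field and about 0.3 arcminute per "pixel". Both numbers are that estimate's assumptions, not measurements from this thread:

```python
# Reconstruction of the commonly quoted 576 MP estimate for the eye.
FIELD_DEG = 120        # assumed field of view per axis, in degrees
ACUITY_ARCMIN = 0.3    # assumed angular resolution, in arcminutes

# degrees -> arcminutes -> "pixels" per side of the field
px_per_side = FIELD_DEG * 60 / ACUITY_ARCMIN
megapixels = px_per_side ** 2 / 1e6
print(f"{megapixels:.0f} MP")  # 576 MP
```

Note this treats the whole field as if it were as sharp as the fovea, which (as later posts point out) it is not; the honest "effective" figure is far lower.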
  8. Real-world folks rarely get 50 line pairs per mm on 35mm film, even on their good negatives. That's because of camera shake and missed focus; they don't use fine-grain B&W films; they don't always shoot at f/8-ish; and they almost never shoot 1:1000-contrast portraits, sunsets, group photos, etc. A less make-believe number is about 35 as a typical figure, 50 best case, and 100 a one-in-100,000 case with the camera on a granite block, with timed lights and a 1:1000-contrast object.
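Taking the poster's "less make-believe" numbers, here is roughly what they imply in megapixel terms for a 36 x 24 mm frame. The two-pixels-per-line-pair convention is an assumption for illustration:

```python
# Effective megapixels of a 36 x 24 mm frame at the lp/mm figures
# quoted in the post (two pixels per line pair).
def mp_at(lp_per_mm, w_mm=36, h_mm=24):
    """Megapixels implied by a given lp/mm over a w_mm x h_mm frame."""
    px_per_mm = 2 * lp_per_mm
    return w_mm * px_per_mm * h_mm * px_per_mm / 1e6

for label, lp in [("typical", 35), ("best case", 50), ("granite block", 100)]:
    print(f"{label:13s} {lp:3d} lp/mm -> {mp_at(lp):5.1f} MP")
# typical 35 lp/mm comes out around 4 MP; best case 50 around 9 MP
```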
  9. The resolution of the human eye is only sharp over a very small angle; the eye has to rotate to see fine detail, i.e. it is very, very poor off-axis. Thus the "megapixel" level of the eye is really very low. It's more like two devices: a super-wide-angle of, say, 180 degrees that resolves very little (less than a Barbie Cam), and a very narrow, moderately high-res camera that is sharp over only a few degrees.
  10. The eye is more like a planetary rover's camera, where a low-res wide angle gets the gist of what's out there, i.e. where to point the high-res camera. With your eye, you rotate it to see the fine details of an object.
  11. Varied responses have covered the technical part of the differences. Could someone now throw light on the philosophical part of the question, i.e. what makes snaps striking and appealing compared with views through our eyes?
  12. In his most excellent response, Felix Grant gave a clear explanation of the technical aspects. However he also put his finger squarely on the reason for the aesthetic appeal of a snapshot. It freezes and synthesizes in a single image what the eye cannot see in real time.
  13. A huge difference is that the eye is always moving, adjusting, and processing, while a camera takes a still image. The eye is like a hyperactive video camera that is constantly refocusing and adjusting for exposure, and it has a patch of dead pixels that the brain automatically photoshops away to fool the viewer.
  14. The human eye isn't color corrected, and suffers from terrible chromatic aberration, both lateral and longitudinal. Another example of where the brain's amazing processing power comes into play.
  15. Film and digital sensors are flat; the human retina is bent quite out of flat.
  16. The human eye is not making a snap. The human eye is constantly roaming, adjusting, learning, processing, and re-adjusting
    with the brain. It is a higher order of motion picture rather than a snap. If our eyes took only a snap of a single moment
    in time, we would have just died; that single moment would be our last image on earth. That is why the camera is so useful
    in helping us capture a moment in time with a snap. The camera is two-dimensional; our eyes are four-dimensional. You have
    to have the continuum of time for our eyes to work.
  17. From what I can remember of science class in high school, this is what I would guess the image the eye sees looks like.
  18. I think where the human eye excels is in its ability to see detail across a wide range of contrast and brightness in the same scene.
  19. What about the brain's processing and selection of images seen with the eyes? Our eyes may focus on this or that, but
    look how often the brain focuses on the portion of the visible image that is of momentary importance. Ignoring the
    appearance of the room while watching TV, for instance.

    With the camera, sometimes it seems as though we can only approximate this portal of focus (I guess that's what you'd
    call it) by selecting different viewing angles with different focal lengths.
  20. I feel that's right, what John has said just before this. Our eyes are controlled by our brain and guided by our emotions. In a movie, the movement and control of the camera is guided by pre-set objectives. In the case of our eyes, the objective may be fixed, but in most cases it varies. When we are walking down a street, our view through our eyes may well be a rapidly changing sequence of frames, perhaps with different focus and different subjects. (Here the eyes are much superior to the camera, since such quick changes of focus with varying depth are not possible with even the most sophisticated automatic camera.) Suppose at a particular instant I am watching a beautiful lady; then I am probably not watching what her dog is doing, or vice versa.

    But in a photograph, everything within the depth of field is recorded. While viewing this photograph, I can take time to examine minutely every detail inside the frame; at the same time, if something completely different is happening a little way off, we are ignorant of it. Hence the interesting point: in a photograph (say a street photograph) the photographer decides and dictates the subject confined within a frame, whereas when we view a street with our own eyes, the subject varies and is decided by many different factors.

    In a nutshell, in a photograph the photographer calls our attention to a subject confined within a frame (isolating the surroundings), where we can imagine and sense the happening in much detail, tuned by our taste, knowledge and experience. That is what makes a photograph 'striking and appealing' in our minds. More opinions from others would be interesting, to get a divergent / complete view.
