Non-photorealistic Camera: Depth Edge Detection and Rendering using Multi-Flash Imaging
by Kyle Nicholls
The multi-flash camera captures real-life images and renders them as non-photorealistic line drawings. Ramesh Raskar describes this process as computational photography. Raskar modestly insists that while "it's something you can't do with film, it certainly is not a replacement for photography, or Photoshop for that matter". Non-photorealistic imaging is an effective way to convey ideas and has myriad applications in technical and medical imaging.
[Figure: source image | depth edge map | overlay]
what does it do?
The multi-flash camera's non-photorealistic images look like line drawings, but they have an advantage over hand-made line drawings because they can depict real-world scenes with a precision and, most importantly, a speed impossible for the human eye and hand. One may ask, why not use good ol' Photoshop filters to achieve these types of images? Multi-flash can detect variations in shape, while Photoshop filters can only translate variations in color intensity. It does not produce intelligent stenciling like Photoshop's posterization, but the results are similar and more faithful to the scene.
Additionally, the multi-flash camera can detect shapes that an ordinary camera could not capture. For instance, a white shape on a white background would appear as a flat white plane to a camera, or even to the human eye. Multi-flash detects depth discontinuities through its four flash positions, producing a line image of the white shape.
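To see why the white-on-white case defeats intensity-based methods, consider a toy sketch (an illustrative setup of my own, not part of the camera's pipeline): an image in which the object and background share the same intensity has zero gradient everywhere, so a color-based filter has literally nothing to detect.

```python
import numpy as np

# A white object on a white background: every pixel has the same
# intensity, so the image carries no intensity edges at all.
scene = np.ones((6, 6))

gy, gx = np.gradient(scene)  # intensity gradients of the image
print(np.abs(gx).max(), np.abs(gy).max())  # both 0.0: no edges to find
```

Only the shadows cast by the offset flashes introduce measurable intensity changes at the object's depth boundary, which is exactly what the multi-flash method exploits.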
how does it work?
The multi-flash camera uses four strategically placed flashes that cast shadows along the depth discontinuities of a scene. The flashbulbs illuminate the scene during image capture creating thin slivers of shadow along the depth discontinuities. The position of the shadows is of course determined by the position of the camera and flashbulb: when the flashbulb is on the right, the shadows are created on the left, and so on. In the example to the right, we see how the shadows on the subject move in each of the four positions.
The shadows in each image are detected by first computing a shadow-free image, approximated by the MAX composite image. The MAX composite is assembled by taking, at each pixel, the maximum intensity value across the image set. The shadow-free image is then compared with each individual shadowed image to identify the shadow regions. The correspondence between the position of the light and the boundaries of the shadow regions produces the depth edges.
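The steps above can be sketched in code. This is a simplified illustration of the idea, not the authors' implementation: the function and parameter names (`depth_edges`, `threshold`) are my own, and the traversal along each flash direction is reduced to detecting a one-pixel drop in the ratio image (border wrap-around is ignored in this sketch).

```python
import numpy as np

def depth_edges(images, threshold=0.5):
    """Toy multi-flash depth-edge sketch.

    images: dict mapping flash position ('left', 'right', 'top',
    'bottom') to a 2-D float array of the scene lit from that side.
    """
    # 1. Shadow-free approximation: per-pixel MAX composite.
    stack = np.stack(list(images.values()))
    i_max = np.maximum(stack.max(axis=0), 1e-6)  # avoid divide-by-zero

    edges = np.zeros(i_max.shape, dtype=bool)
    # Step to traverse *away* from each flash: the shadow falls on the
    # side opposite the flash, so a sharp drop in the ratio image along
    # this direction marks a depth edge.
    steps = {'left': (0, 1), 'right': (0, -1),
             'top': (1, 0), 'bottom': (-1, 0)}

    for side, img in images.items():
        # 2. Ratio image: ~1 where lit, ~0 inside a cast shadow.
        ratio = img / i_max
        dy, dx = steps[side]
        shifted = np.roll(ratio, shift=(-dy, -dx), axis=(0, 1))
        # 3. A lit pixel followed by a shadow pixel along the traversal
        # direction is a negative transition: mark it as a depth edge.
        edges |= (ratio - shifted) > threshold
    return edges
```

A quick synthetic test: four uniformly lit images, with a thin shadow sliver one column to the right of an object edge in the left-flash image and one column to its left in the right-flash image, yield a depth edge exactly at that column.

```python
imgs = {k: np.ones((4, 8)) for k in ('left', 'right', 'top', 'bottom')}
imgs['left'][:, 5] = 0.1   # shadow cast to the right of the edge
imgs['right'][:, 3] = 0.1  # shadow cast to the left of the edge
e = depth_edges(imgs)      # edge pixels recovered in column 4
```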
Multi-flash imaging promises to simplify the rendering of complicated mechanical objects, plants, and internal anatomical parts. Because it detects depth discontinuities, it can render shapes that would otherwise be difficult to perceive. For instance, a car engine could easily be captured as a non-photorealistic image and then superimposed over an actual photograph of the engine, resulting in a superior manual illustration (see example below). Alternatively, a skeleton with a complex network of white bones could be efficiently reproduced for instructional medical visualization. An endoscopic camera enhanced with multi-flash technology could likewise improve internal anatomical visualization for researchers and medical doctors.
In the example below we see the source photo of a car engine, the multi-flash output, and an excellent example of a technical illustration (a combination of the two images). In the second row we see other techniques such as histogram equalization, brightening and Canny edge detection -- none of these is nearly as efficient or as representative of the source, nor could they be used successfully in a technical illustration!
[Figure: depth edge map of source photo | brightened source photo | Canny edge detector]