Removing Bayer Mask


Has anyone ever removed a Bayer mask? Please correct me if I am wrong, but it should be possible to increase resolution by at least a factor of 2. My probably errant reasoning is that there would then be twice as many B&W pixels as green pixels. Are these masks usually directly attached, requiring chemical removal, or is it possible to just peel them off? Of course, for CMOS imagers with a layer of microlenses, I assume those would be removed along with the mask.



I don't understand the reason for Bob's curt (to the point of rudeness) answer.

 

I am currently trying to get hold of old/discarded/obsolete digicams so I can try to remove the Bayer filters. I figure the first few tries will probably result in destroyed cameras.

One problem with this is that "disposable" point-and-shoots rarely offer RAW output, so really interesting experiments won't be possible. But if removing the filter turns out to be straightforward, maybe I'll try it later on a higher-end camera.

 

Ray, if you attempt any experiments of your own, please post about them.


You may not like Bob's answer but he's right.

 

First, you won't gain any increase in resolution. The Bayer color matrix does not degrade the luminance channel, it only affects the chrominance channels. Since the final image is formed using the brightness values in every pixel, the full spatial resolution of the sensor is maintained, only the color resolution is interpolated.
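
The sampling pattern both sides of this argument are describing can be sketched in a few lines. This is a toy illustration only, assuming an RGGB tile layout (other Bayer variants exist); it just shows which photosites carry green samples before any demosaicing happens.

```python
import numpy as np

# Toy sketch of a Bayer mosaic, assuming an RGGB layout: green filters
# sit over half the photosites, and demosaicing must interpolate the
# missing colors at every pixel.
h, w = 4, 4
scene = np.arange(h * w, dtype=float).reshape(h, w)  # gray test scene

g_mask = np.zeros((h, w), dtype=bool)
g_mask[0::2, 1::2] = True  # G sites on even rows, odd columns
g_mask[1::2, 0::2] = True  # G sites on odd rows, even columns

green_samples = np.where(g_mask, scene, np.nan)  # green plane before interpolation

print(g_mask.mean())  # -> 0.5, the fraction of photosites sampling green
```

Whether that half-sampling "maintains full spatial resolution" is exactly what the rest of this thread argues about.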

 

The Bayer color matrix is sometimes formed on a glass substrate that's epoxy-glued onto the sensor, or it can be deposited directly on the surface of the sensor. In either case, it is almost impossible to remove it without damaging the chip, whether mechanically (scratching the active surface, or cracking the fragile silicon die), or electrically (the sensors are very susceptible to ESD damage, and the tiny wirebonds that connect the die to the package can be easily broken).

 

So, in the end you may have to thank Bob for his direct answer...


I don't think it's possible either.

 

But IF you could, you could then increase resolution somewhat by yanking the antialiasing filter too; monochrome aliasing doesn't look nearly as bad as color aliasing. This probably wouldn't increase maximum resolving power much (if at all), but it would give you better MTF at frequencies approaching the resolution limit.

Then there's the issue of speed. I'm not sure exactly how much speed the Bayer filter costs, but I've got to believe it's at least a stop. Removal would help even more if you're shooting underexposed in very red lighting (for me, shooting a band on stage): the blue channel generates very little image and a lot of noise, because it's getting very little light. White balance is an after-the-fact adjustment and can't really fix color-balance-induced underexposed channels.


I'm really not trying to discourage you here... but even if you manage to strip off the color filter array without killing the chip, the next step will be even more complex: completely rewriting the image reconstruction algorithm in the camera firmware. Without this, the image will be full of odd artifacts since the values of the luminance channel are determined from an expected response from the -- now missing -- Bayer CFA (and there are different versions too).

Berg Na:

 

That's why I think that ultimately, the best camera for this type of experiment is one that can produce RAW output. If I got that far, I was planning to try to enlist some help from the fellow who develops dcraw. From what people are saying, it sounds as if I'll be quite lucky to get even close to that stage, especially given my rather limited knowledge of electronics.


Ray,

 

Removing Bayer masks is not really possible; it's just the way the sensor is made. HOWEVER, as an unrelated point, if you remove the IR filter from a consumer digicam with RAW (such as the G2), use a program such as DCRAW, and disable the Bayer interpolation (I will post when I remember the DCRAW option), you do get an IR image with more resolution than the traditional color one.


I meant in the post from 7 pm. I knew about the 760M from a Mike Johnston essay.

 

I just reread Pete Myers's review of that camera. Sigh. It's going to be a joyous day when the next B&W digital SLR arrives. (Not that I won't have a ton of fun shooting with what I have until then!)


Thanks for all the encouragement! Actually, it looks like Berg Na may have provided the necessary go-ahead with "glass substrate that's epoxy-glued onto the sensor". I had previously tried unsuccessfully to remove what appears to be a glass cover with a small screwdriver; I might be able to cut it off with an X-Acto knife. Bob probably knew I'd be back with my $10 hacked extended-dynamic-range SMaL cameras.

Google is my friend also, and I had already been to the site Bob gave. The article does support higher resolution when the light source is blue, but it did not explain Berg's glass substrate and epoxy. I have a couple of fried cameras that won't cost anything to experiment on (sometimes the firmware mod fails to properly update flash memory).

I am thinking some B&W raws from $9.99 CVS drugstore digitals, using a custom decompressor, may best demonstrate SMaL's claimed 120 dB (20-bit) range. While a non-embedded decompressor (SMaL uses a patent-pending lossless compression routine) has yet to be completed, one of the hackers is close to a solution.

I don't know why some of you folks don't just go back to B&W film. All this time and effort expended to limit the advantages of the digital camera. I don't get it. A strictly monochrome digital, like the Kodak. Wonderful, just what we need. It's bad enough the manufacturers squander their development dollars on anachronistic DSLR designs instead of taking the digicam concept to a professional image-quality level. Let's continue to take this incredible technology and use it to recreate the first half of the last century. Finally the photographer has all this control over color... and they clamor for a B&W monochrome digital. You all probably really get off on the Epson R-D1's film advance lever. Maybe that'd be a good camera to experiment with removing the Bayer mask.

Don't knock it. There are guys in their basement working on perpetual motion machines, everlasting batteries, gasoline engines that get 1000mpg and contacting aliens.

 

Who knows, maybe one day one of them will come up with something. It's their time to waste.

 

Everyone needs a quest. The fact that it's 99.9% (or in some cases 100%) likely to be unobtainable just makes the journey a little longer. Some people like to do things the hard way.

 

Question: Why shoot film when you can buy a camera for $9.99 from the local drugstore, hack into the electronics, take a belt sander to the sensor and (if you're very, very lucky) end up with images which are almost as good as those shot with a Kodak disk camera? Answer: because it's "fun" I guess.

 

An interesting project might be to try to build a monochrome digital back for a film camera. You can buy monochrome imaging sensors. Kodak will sell you a KAC-9638 1288 x 1032 resolution high dynamic range monochrome imaging chip with peripheral circuitry on a PC board for $27. Or isn't that enough of a challenge?


The dynamic range is 57 dB. Under the 20·log10 convention normally used for sensors, that's 10^(57/20), or roughly 700:1, about 9.5 bits. It has on-chip image processing which presumably applies non-linear conversion (gamma maps). The output is 10-bit, but that's after processing. Your JPEGs are only 8-bit, and everything you've ever printed is 8-bit.
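
These dB figures are easy to sanity-check. The sketch below assumes the 20·log10 convention normally used for sensor dynamic range (the same convention under which a 20-bit range corresponds to about 120 dB, matching the SMaL claim earlier in the thread):

```python
import math

def db_to_ratio(db: float) -> float:
    """Convert a dynamic range in dB to a linear max/min ratio (20*log10 convention)."""
    return 10 ** (db / 20)

def ratio_to_bits(ratio: float) -> float:
    """Bits needed to span a given linear ratio."""
    return math.log2(ratio)

print(round(db_to_ratio(57)))                      # -> 708, i.e. about 700:1
print(round(ratio_to_bits(db_to_ratio(57)), 1))    # -> 9.5 bits, consistent with 10-bit output
print(round(ratio_to_bits(db_to_ratio(120)), 1))   # -> 19.9 bits, the "20-bit" claim
```

Note that 57 dB comes out near 10 bits only under this convention; reading it as a power ratio (10·log10) would give a very different number, which is a common source of confusion.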

 

Seems like enough for most applications to me, but what do I know. I'm just a dumb ol' Ph.D. engineer, not a photographer.

 

If I were a photographer and I wanted high-resolution, high-dynamic-range monochrome images, I'd read about the Zone System and use film.


Ben, the two mask removal techniques I've played with are UV fading and dissolving the mask with solvents. Chris Johnson is a bit braver, and has unsealed chips and removed the applique style filters that Berg mentioned.

 

As Ryan points out, DCRAW is an easy way to get at the basic sensor data.

 

Reconstruction is relatively easy after that. Most raw formats still scale the red, green, and blue channels to fit within the dynamic range of the format, so you typically need to apply scaling factors.
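
The scaling step just described can be sketched as follows. This is a hypothetical toy example (made-up numbers, assumed RGGB layout and white-balance multipliers, not any particular camera's values): with the filters gone, the per-channel gains a raw converter would normally apply have to be undone or simply skipped, so every photosite is treated identically.

```python
import numpy as np

# Toy 2x4 RGGB-style mosaic of a uniform gray scene, with assumed
# white-balance gains already baked into the red and blue sites.
raw = np.array([[210.0, 100.0, 210.0, 100.0],   # R G R G (red sites gained up)
                [100.0, 180.0, 100.0, 180.0]])  # G B G B (blue sites gained up)

gains = {"r": 2.1, "g": 1.0, "b": 1.8}  # assumed per-channel multipliers

flat = raw.copy()
flat[0, 0::2] /= gains["r"]  # undo the gain at red sites
flat[1, 1::2] /= gains["b"]  # undo the gain at blue sites

print(flat)  # every site back near 100: one uniform monochrome plane
```

On a real filterless sensor the same idea applies per 2×2 tile across the whole frame; the point is only that the channels must end up on a common scale.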

 

Ryan's way, removing the IR filter, is also a good way to get your feet wet. This works because the red, green, and blue filters are all transparent to IR from about 800nm to 1100nm. You'll need to remove the camera's IR cut filter anyway, as you tear down the camera to gain access to the sensor. Use a relatively long wavelength IR filter (87, 87A, 87B, 87C) instead of an 89B. An 89B (Hoya R72, Cokin 007, Heliopan RG715) will pass too much visible red, and the red pixels will have a substantially different signal from the green and blue (not just scaled differently, but different data entirely).

 

Monochrome conversion, either through actual removal of the filters or using IR to "bypass" the filters, is a resolution-increasing technique. Berg's statement about resolution and luminance is very far off: "The Bayer color matrix does not degrade the luminance channel, it only affects the chrominance channels. Since the final image is formed using the brightness values in every pixel, the full spatial resolution of the sensor is maintained, only the color resolution is interpolated". Bryce Bayer's work is based on using the green channel (1/2 the sensor's pixels) for the luminance of a "typical" scene. Less "typical" scenes may only produce useful information in the red or blue channel, and this limits resolution to 1/4 the pixels. Furthermore, in front of the sensor is a spatial low-pass (anti-aliasing) filter that further reduces resolution, because chrominance aliasing (color moiré) is visually annoying in the extreme.
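
The worst-case scene described above is easy to model. A toy check, again assuming an RGGB layout: under pure red light the green and blue filters pass essentially nothing, so only one photosite per 2×2 tile records any signal.

```python
import numpy as np

# Toy worst case for a Bayer sensor (assumed RGGB layout): a scene lit
# only by red light. Only the red-filtered sites respond, so the mosaic
# samples the scene at a quarter of the pixel count.
h, w = 4, 4
r_mask = np.zeros((h, w), dtype=bool)
r_mask[0::2, 0::2] = True  # red sites: one per 2x2 RGGB tile

red_scene = np.full((h, w), 200.0)          # uniform pure-red illumination
mosaic = np.where(r_mask, red_scene, 0.0)   # green/blue filters block it

print((mosaic > 0).mean())  # -> 0.25, the fraction of pixels carrying signal
```

Real filters leak a little outside their passbands, so the practical floor is not quite this stark, but the sampling argument is the same.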

 

Technically, while a monochrome camera should still include an AA filter, the effects of luminance aliasing are not as visually annoying as chrominance aliasing, and can often be ignored.

 

And there is a "mass production monochrome DSLR" currently on the market, the Sigma SD10, although it's only 3.4 MP. Its stacked photodiodes let you add up the color channels in the raw file to get the total output of the photodiode stack. And, at the same time, it's quite useful as a color camera. Its IR capabilities are also prodigious.

 

Bob and Berg, you're way off base this time. Sorry.


>> First, you won't gain any increase in resolution. The Bayer color matrix does not degrade the luminance channel, it only affects the chrominance channels. Since the final image is formed using the brightness values in every pixel, the full spatial resolution of the sensor is maintained, only the color resolution is interpolated. <<

 

The color interpolation process itself degrades resolution somewhat. Not a lot but measurably so.

 

I have to laugh when I read people complaining about the anachronistic nature of monochrome images and the luddite attitude of people who'd like a digital camera dedicated to creating them. Isn't photography about individual expression, creativity, vision? Shouldn't people who "see" best in monochrome desire the best possible tools? Well, maybe not. Maybe photography is really about slavish adherence to whatever technology the powers that be present us with. Maybe it's really about conformity rather than expression.

 

-Dave-


Joseph - There are many different types of Bayer architectures that are very different from the one proposed by Bryce E. Bayer in 1976, when he worked at Eastman Kodak. There's an <a href="http://www.fillfactory.com/htm/technology/htm/rgbfaq.htm" target="right">article about the different types here</a>. In any case, every pixel in the sensor is photosensitive and does contribute to its spatial resolution. There's absolutely no situation where the resolution of the sensor is limited to only a quarter of its pixels.
