
A question for programmers about debayering algorithms



I know very little about this stuff but 'debayered' is not the correct term. From what I read here, a Bayer filter overlays a digital camera sensor and converts light wavelengths into color data. RAW files are therefore the result of Bayer filtering (for color info) in addition to recording the light intensity.

 

If you mean converting/compressing RAW data files into some other format and then converting that back into RAW, there are two useful terms: lossy and lossless.

In lossy compression/conversion (for example to JPEG) RAW data is irrecoverably lost. In lossless compression/conversion (for example to TIFF) no RAW data is lost. In principle, a RAW file could - I think - be recreated from a TIFF file.

 

The main question is why someone would ever want to do this. Any transformation/import would - I think - work equally well on a TIFF file as on a RAW file.

 

I have a question for more advanced programmers who know something about how to read and debayer RAW files: can you re-bayer an image once it has been debayered so that it can be properly debayered again?

I know very little about this stuff but 'debayered' is not the correct term. From what I read here, a Bayer filter overlays a digital camera sensor and converts light wavelengths into color data. RAW files are therefore the result of Bayer filtering (for color info) in addition to recording the light intensity.

 

If you mean converting/compressing RAW data files into some other format and then converting that back into RAW, there are two useful terms: lossy and lossless.

In lossy compression/conversion (for example to JPEG) RAW data is irrecoverably lost. In lossless compression/conversion (for example to TIFF) no RAW data is lost. In principle, a RAW file could - I think - be recreated from a TIFF file.

 

The main question is why someone would ever want to do this. Any transformation/import would - I think - work equally well on a TIFF file as on a RAW file.

 

 

I have very little knowledge about this stuff myself. All I can tell you is that in cameras and other machinery there is Software, which can be manipulated by programmers, and Machine Language, which is designed by Engineers and is pretty much fixed. Software mainly works on files, or data that is stored in a computer's memory.

Machine Language is built into the hardware itself by Engineers and is controlled by chips called Processors. I'm not sure, but I think camera Algorithms fall into that category. Machine language uses binary code, which is a series of ON-OFF switches (00000001 - 00000101), etc. Even if you could manipulate the Machine language code, I'm not sure what the purpose would be, since the data a camera uses to produce a RAW file is actually Light hitting a sensor and then being converted into numbers by the Processor(s).

In the case of a Color image, Light passes through a Bayer Filter (red, blue, green) on the sensor and is then converted into numbers. So if you remove the Bayer process, my guess is that you would wind up with a B&W image, or no image at all?


Hi @hjoseph7, I agree (mostly),

 

Light intensity and wavelength are not - at their source - digital values. These values are initially registered by (analog) sensors and converted to digital values: the RAW data.

 

FWIW, I started out my work career (back in the dinosaur days) in mobile weapon automation: a very small branch of IT that - in those days - used very small processors. All very low-level nerdy stuff. Most of our data was gathered by analog sensors and converted to digital values by an A/D converter. Of course, all the scales were different, so we had to 'normalize' the values without losing resolution.

 

We needed to be able to read and interpret binary numbers and manually correct any punch tape 'mispunches'. We would sequence through programs using binary lights and switches. All 'live test data' from sensors went through the 'processor' and all software was changeable. Years later, I worked as a quality manager at a chip factory and saw how programs were 'baked in' to chips.

 

So you're right. There is 'software' in cameras that you can read, update and (with the right equipment and knowledge) even modify. A well-known 'add-on' is Magic Lantern. But there's also proprietary firmware that users or third parties can't change, because it's physically 'baked in' to the chips. I'm not 100% sure, but given the fast response to light wavelengths required, I strongly suspect that the 'bayer filter' is baked in to most cameras.

 

Again, the main question is: why on earth would anyone want to change this? Every modern camera allows users to adjust their preferences for color, contrast, vibrance, sharpness, etc. So why would anyone want to reverse the initial color conversion?

 

I have very little knowledge about this stuff myself. All I can tell you is that in cameras and other machinery there is Software, which can be manipulated by programmers, and Machine Language, which is designed by Engineers and is pretty much fixed. Software mainly works on files, or data that is stored in a computer's memory.

Machine Language is built into the hardware itself by Engineers and is controlled by chips called Processors. I'm not sure, but I think camera Algorithms fall into that category. Machine language uses binary code, which is a series of ON-OFF switches (00000001 - 00000101), etc. Even if you could manipulate the Machine language code, I'm not sure what the purpose would be, since the data a camera uses to produce a RAW file is actually Light hitting a sensor and then being converted into numbers by the Processor(s).

In the case of a Color image, Light passes through a Bayer Filter (red, blue, green) on the sensor and is then converted into numbers. So if you remove the Bayer process, my guess is that you would wind up with a B&W image, or no image at all?


No great knowledge here, but if you look at the old converter Rawhide, it has many Bayer options. Most of them look pretty similar to me. I suspect you can't go backwards, but I don't know for sure. I use Qimage for many things and know its raw converter results look different on a fine scale than Nikon Capture NX-D's. I suspect the factory knows some things that others don't. With that exception, I think my Z6 can do a better job of converting to jpg than most other converters. Something I only learned recently is that you can apparently take a raw image back into the camera and reprocess it to a jpg with whatever settings you like.

"I'm not 100% sure, but given the fast response to light wavelengths required, I strongly suspect that the 'bayer filter' is baked in to most cameras."

You are probably right. You hear about high-level programming languages such as Java, COBOL and C++, but the bottom line is that all these programming languages have to be converted, or compiled, into something the Machine can understand - that is, binary (00000001 - 00000101), etc.

The higher the level of a computer programming language, the more processing it's going to take to translate that code into Machine Language, and this takes time. In computing, this time is often called overhead.

It might not mean much if time is not important - let's say you are only updating a couple of employee records - but if you are updating 1 million, or 2 million, employee records, that overhead becomes a huge factor!

This overhead also impacts real-time, time-sensitive applications like ATMs, digital photography and industrial machinery.

Currently the closest computer language to Machine Language is called "Assembler language", followed by 'C', but even those languages are far from real machine language, which is (00000001 - 00000101) ON/OFF switches. Because (00000001 - 00000101) is so complicated for most people to understand, Assembler, 'C' and other computer languages were developed. I think Assembler language is what is converted and baked into some chips, since it's the closest thing to actual machine language, but I could be wrong?

 

Morse Code would be a pretty good example of how the whole process works. As you know, Morse Code is nothing but a set of electrical signals (codes) that are sent by an operator on one side of the line, then translated on the receiving end by another operator on the other side of the line. Not many people can understand Morse Code, so the sender and receiver have to be highly trained. To simplify things, Teletype Machines were developed, which translate Morse Code into letters of the alphabet, or numbers on a scale. This way the sender and the receiver can use regular alphabetical characters to translate a message. The only problem is that the Morse Code signals have to be translated, which takes time.

 

I'm not sure, but once RAW data is captured by the sensor, it has to be converted to JPEG, RGB, TIFF or whatever by some other process (Software). This is why I also think these algorithms are baked into the chips themselves. This whole process can take a few milli-microseconds before an image appears on the back LCD of your camera - pretty close to the speed of Light...

Edited by hjoseph7

I have a question for more advanced programmers who know something about how to read and debayer RAW files: can you re-bayer an image once it has been debayered so that it can be properly debayered again?

It might help if you explained why you would want to do this. The demosaicking process uses and then discards the original values of the individual photo sites, so any reverse process would have to make a lot of guesses, and I would expect it would cause a lot of artifacts/errors.


I have a question for more advanced programmers who know something about how to read and debayer RAW files: can you re-bayer an image once it has been debayered so that it can be properly debayered again?

 

This is a deep question and the ultimate answer depends on the camera/sensor manufacturer, but almost surely the answer is 'no'. Each camera manufacturer has its own proprietary raw file process; that's why, when new camera models are released, end-user digital processing software, e.g. Adobe PS and LR (to name two), often cannot initially process the RAW captures from these cameras. Adobe must await plug-ins from the manufacturers and make such updates available for download. This RAW decoding includes the de-bayering (demosaicing) but other steps as well.

The raw image captured by the camera sensor has blocks of 2x2 pixels (so 4 total), with each pixel in the block responsible for only one color. In most sensors each 2x2 block has two green, one red and one blue (GRGB) sensitive pixel. De-bayering is the process of converting the single color value in each pixel into the higher-level assignment of three values to each pixel, one each for R, G and B. This is done through a process called interpolation; the exact method of interpolation varies and is most likely proprietary to each manufacturer. Also, most likely it is not an invertible process, and that is your question: given a TIFF or JPEG color image, with each pixel having 3 values (R, G and B) obtained by this interpolation process, reverse it to get the original G, R, G and B values in the individual pixels that led to those end values. In addition, the interpolation process is further complicated in that it is not restricted to working within each 2x2 grid in isolation; data from the surrounding 2x2 grids is also employed.
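
To make the interpolation idea concrete, here is a very rough sketch of the simplest (bilinear) flavour of it for an RGGB layout, written in Python/NumPy just for brevity. It is purely illustrative - no camera maker's actual method - and the function name and array conventions are my own assumptions:

    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_bilinear(mosaic):
        # Crude bilinear demosaic of an RGGB Bayer mosaic.  'mosaic' is a
        # 2-D float array with one value per photosite; returns (H, W, 3).
        h, w = mosaic.shape
        r = np.zeros((h, w), dtype=bool); r[0::2, 0::2] = True
        b = np.zeros((h, w), dtype=bool); b[1::2, 1::2] = True
        g = ~(r | b)
        kernel = np.array([[1.0, 2.0, 1.0],
                           [2.0, 4.0, 2.0],
                           [1.0, 2.0, 1.0]])

        def fill(mask):
            # Keep the measured samples of one color; estimate the missing
            # ones as a weighted average of nearby measured samples.
            known = np.where(mask, mosaic, 0.0)
            wsum = convolve2d(mask.astype(float), kernel, mode='same')
            vsum = convolve2d(known, kernel, mode='same')
            return np.where(mask, mosaic, vsum / np.maximum(wsum, 1e-12))

        return np.dstack([fill(r), fill(g), fill(b)])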

 

Note that there are other added complications as well, since de-bayering is not done in isolation. Much more is done in raw conversion that is also non-invertible. Check out this link for an excellent, elementary overview of everything that is done in the raw conversion process, including a nice description of the demosaic process (it is step 5). You will note that it begins by stating it is specific to one camera model, as the details would vary for other camera models and perhaps significantly for other camera manufacturers.

 

Finally, if you really want to get a raw version of a TIFF or JPEG or whatever processed image, there is one easy way that will work regardless of the camera model/manufacturer. Step 1: print out the image. Step 2: point your digital camera at the printed image and set it to RAW capture mode. Step 3: press the shutter release. Step 4: download the raw image onto your computer. Voila, your TIFF or JPEG or whatever is back in a re-mosaic'd (raw) form. Of course, for a byte-to-byte comparison with the original raw file (if that's how the original image was captured), it will bear little resemblance to the original.

 

This silly example, however, raises a more interesting question. As I said, you most surely cannot go back to the original RAW file by inverting the process, but that is too restrictive. Just as a camera pointed at a printed image will generate a raw (re-bayered) file whose image is the same (yes, of course it will be degraded), it should be possible to take a TIFF or JPEG file directly and bypass the "pointing a camera at the printed image" step, instead applying mathematical algorithms in software to take the R, G and B values within each pixel and assign G, R, G and B values to each 2x2 block so that, when 're-rendered', a reasonable facsimile of the original input is returned. This was fun; in thinking about this and writing it I've done a complete about-face and end up concluding that 'yes', one can surely re-bayer an image. In fact it would be an interesting project and might even have some value.
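
A minimal sketch of that 're-bayer' idea (again Python/NumPy, again assuming a plain RGGB layout and a linear image; the function name is made up, and no manufacturer does it this way): at each photosite, keep only the channel the sensor would have measured there and throw the other two interpolated values away.

    import numpy as np

    def remosaic_rggb(rgb):
        # Toy "re-bayer": at each photosite keep only the channel an RGGB
        # sensor would actually have measured there, discarding the two
        # values that demosaicing interpolated.  'rgb' is (H, W, 3).
        h, w, _ = rgb.shape
        mosaic = np.empty((h, w), dtype=rgb.dtype)
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R sites
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G sites on R rows
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G sites on B rows
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B sites
        return mosaic

Feed that mosaic back through a demosaic step and you should get a reasonable facsimile of the input image, though the numbers will not match the camera's original raw values - white balance, the color matrix, tone curve and gamma would all have to be undone first.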


Over 25 years ago my work was doing an equipment inventory and some piece of lab equipment in my name showed up as missing. It was in my Lab, and I did the inventory on it - someone did not check it off properly, and I got some nasty-gram about it.

I went to my Lab, used my Polaroid SLR690 to photograph it with that day's newspaper with date clearly showing. I brought that up to the "Big Boss" who looked at it and stated "I know you, how do I know you didn't fake this". And I held out the camera, and he said "well, I guess even you couldn't fake a Polaroid". It was of course, a high compliment.

Edited by Brian

For the original question: reconstructing a de-Bayered/demosaiced image from a TIFF would not be difficult. Matching that to the original camera would be difficult; much of the information, as it is read out of the analog portions of the sensor, is lost to the processing that is done on-chip and in-camera. Probably why I stick with my CCD-based cameras - I like having the DNG file as close to the original capture as possible.

 

Developing an algorithm to show that one image was derived from another would probably be easier - things like lighting, exact angle, etc. That is not to say that an image cannot be generated from analyzing many images of a scene, such as something that generates the scene as it would look from a different angle or under different lighting. But then it would not be an exact duplicate of a photograph; it would be computer-generated imagery.

 

https://cameraderie.org/threads/experiments-in-leica-m8-and-m9-conversions.38737/.


Information is lost when a RAW image is rendered. In the RAW form, each pixel represents a single color - red, blue, or green - depending on its position under the physical Bayer filter above the sensor. Each pixel is assigned an RGB value based on adjacent pixels. Once the image is converted to an editable format, usually TIFF or JPEG, the original value of an individual pixel is a composite of several pixels. The process retains most of the luminance information in each cell, which constitutes most of the spatial resolution. Compromises are made when interpreting the color, leading to color aliasing in fine, repetitive details.

 

The question of whether a RAW image can be reconstructed from a rendering is largely moot, because the original RAW image is unchanged. You are not obliged to use Adobe Camera Raw for the conversion, and there are several alternatives; in fact many people prefer to use Phase One software. There is no reason, other than existing patents and copyrights, that you couldn't write your own program.

 

A Bayer pattern consists of a repetitive 2x2 pattern of four pixels, RGBG. There is nothing magical about the X-Trans pattern, which consists of four Bayer blocks in which alternate blocks are rotated 90 degrees.


The first Digital Sensors I worked with were way back in the early 1980s, when I wrote the software to convert raw sensor values to radiometrically calibrated data. "Watts/Steradian" and such units were used for pixels. Knowing the sensor response, point source response, etc. of the sensor was required for this work. Using this information, we could generate scenes as they would be acquired with sensors under design. Each frame of the sensor had references to "light and dark" for a two-point calibration. The Raw frame of my early-90s Kodak DCS200 has reference blocks in each frame for light and dark, recorded in the ".KC2" file. Most modern cameras do not record the reference pixels in the stored file.

 

The Kodak DCS200c is the first camera that used Dr Bayer's mosaic filter. The DCS100 used a Mosaic filter, but not a Bayer pattern. My Kodak DCS200ir is the first digital IR camera manufactured in the DCS series. I had them make it.


The first Digital Sensors I worked with were way back in the early 1980s, when I wrote the software to convert raw sensor values to radiometrically calibrated data. "Watts/Steradian" and such units were used for pixels. Knowing the sensor response, point source response, etc. of the sensor was required for this work. Using this information, we could generate scenes as they would be acquired with sensors under design. Each frame of the sensor had references to "light and dark" for a two-point calibration. The Raw frame of my early-90s Kodak DCS200 has reference blocks in each frame for light and dark, recorded in the ".KC2" file. Most modern cameras do not record the reference pixels in the stored file.

 

The Kodak DCS200c is the first camera that used Dr Bayer's mosaic filter. The DCS100 used a Mosaic filter, but not a Bayer pattern. My Kodak DCS200ir is the first digital IR camera manufactured in the DCS series. I had them make it.

 

What computer language did you use to convert raw sensor values to radiometrically calibrated data?


Fortran, of course. Fortran-77 for the Vax11/780 and Fortran-66 for the FPS-120b attached array processor. I also used Fortran for the data acquisition system, image processing, and image display.

 

I learned a valuable lesson early on. 100 lines of Fortran, a good day. 300 lines, a great day. 500+ lines - take the rest of the day off, because your brain is going to melt.


> I think Assembler language is what is converted and baked into some chips, since it's the closest thing to actual machine language, but I could be wrong?

 

Typically assembler is a 1:1 translation into machine code. Even with embedded microcontrollers, the move has been to C and Basic. Most of the compilers for microcontrollers are "cross-compilers", running mostly under Windows or Linux.

 

I still write a lot of code in assembler. Back in the 90s, I could beat the C/C++ compiler for embedded RISC processors by 5:1 for speed. The most advanced compiler I used was the TI NX FORTRAN-66 compiler back in the late 70s; it ran on a first-generation supercomputer. It could optimize code across multiple arithmetic units, fork and join code to run in parallel, and collapse three-deep loops working on 3D arrays into one assembly language vector instruction. It was perfect for image processing - the assembly language "vector" instructions worked on 2- and 3-dimensional images.


Fortran, of course. Fortran-77 for the Vax11/780 and Fortran-66 for the FPS-120b attached array processor. I also used Fortran for the data acquisition system, image processing, and image display.

 

I learned a valuable lesson early on. 100 lines of Fortran, a good day. 300 lines, a great day. 500+ lines - take the rest of the day off, because your brain is going to melt.

 

Fortran is definitely a language used for scientific applications, but I never knew it could be used on machinery. That was one of the first computer languages I learned. I used to love Fortran (77). The only time I got to use it in the real world was when I was working for the Census Bureau, but that only lasted 6 months.

Edited by hjoseph7

I do embedded realtime applications using Fortran-77 and assembly to this day, and have used it for embedded systems for almost 40 years. Of late I switched to the Watcom v2.0 Fortran-77 compiler released in 2017. I rewrote my code that adds the Gamma curve to the M Monochrom so it would run under Windows.

 

https://cameraderie.org/threads/adding-a-gamma-curve-to-a-digital-image-thinking-out-loud-and-experiments.38778/

 

For realtime applications: Fortran produces deterministic code - it will take the same amount of runtime every time. No "garbage collection" or other background process is inserted into your code that can steal cycles. I can count on it generating a waveform and being "rock-solid". Every time my Leica M Monochrom or M9 hesitates to advance the shutter while the buffer is being stored, I think they should have used Fortran and assembly.


The school I attended had 2 programs back in the '80s. One was Computer Science, the other was Computer Information Systems (business). I couldn't get into the Computer Science program because my math scores were not too good, so I opted for the CIS program. Oh, so that's what you guys do! I always wondered about that. I thought CS students worked in some scientific lab wearing white robes or something... We had to take one semester of Assembler Language and two semesters of Data Structures using Fortran. Assembly language was not one of my strong points unfortunately, but after I took that class, I understood how computers work like I never did before. Actually I still have my Assembler and Fortran 77 books.

I gave the white Lab coat away to a visitor that had wanted one. Never wore one myself.

 

For the original topic - it's a complicated problem that is going to get more important as time goes on. You can shoot out ideas on protecting intellectual property, but at some point you want to publish an image, and it will be digital. Photographs are near-instant 2-D captures; lots of information is lost when capturing a 4-D scene. I'm back to orientation of the camera, lens, lighting, shutter speed, and depth of field for comparing two images. This is where feature extraction and comparison would be interesting.

 

Efficient segmentation of cellular images using gradient-based methods and simple morphological filters - IEEE Conference Publication

 

My wife put that on my Honey-Do list back in the 90s.

 

A comparison of wavelet and Fourier descriptors for a neural network chromosome classifier - IEEE Conference Publication

 

You know chicks and how they are always doing wavelet transforms and passing them through neural net classifiers - which might also be of interest here. She was doing this 25 years ago.

Edited by Brian

Thirty-five years ago... the group that I worked with developed algorithms to describe images. I remember coding up this one and adding it into the scene metric code, which generated tables to describe images. If I thought someone were "ripping off my image", I would probably normalize the two images and generate scene metrics on them. The chances that two images generated with different cameras, different times, and different lenses would yield the same scene metrics are "most likely" very low.

I keep as much of my old code as possible. This one was written for the Vax, converted to my PC when I wrote the DNG processor.

 

(the forum software deletes leading blanks- damnit)

 

      SUBROUTINE CVSGMG( SCENE, SIGMAG, ACROSS, DOWN)
C     COMPUTE THE SPATIAL GRADIENT OF A SCENE.
      IMPLICIT NONE
      INTEGER*2 ACROSS, DOWN
      REAL*4 SCENE( ACROSS, DOWN), SIGMAG( ACROSS, DOWN)
C     LOCAL STUFF.
C     GET LIMITS OF THE TWO-DIMENSIONAL CONVOLUTION.
      INTEGER*2 BOXSIZ, WNDOSZ, BEGIN, END, ENDLIN, POINTS
      PARAMETER ( BOXSIZ= 5, WNDOSZ= BOXSIZ/ 2)
      INTEGER*2 POINT, LINE, M, N
      REAL*4 SUM, POPLTN, MXSGMG, MNSGMG
C     EXTERNALS.
      REAL*4 SQRT
C
      BEGIN= BOXSIZ/ 2+ 1
      POINTS= BOXSIZ*( BOXSIZ- 1)
      END= ACROSS- BEGIN
      ENDLIN= DOWN- BEGIN
C     INIT MAX AND MIN TO RIDICULOUS NUMBERS.
      MNSGMG= 1.0E30
      MXSGMG= -1.0E30
      POPLTN= 1.0/ FLOAT( POINTS)
C     ZERO OUT OUTPUT ARRAY.
      DO 1 LINE= 1, DOWN
      DO 1 POINT= 1, ACROSS
      SIGMAG( POINT, LINE)= 0.0
    1 CONTINUE
C
      DO 5 LINE= BEGIN, ENDLIN
      DO 10 POINT= BEGIN, END
C     COMPUTE THE SPATIAL GRADIENT FOR THIS POINT:
C
C        SIGMAGX(I,J)**2 = 1/20 * SUM( N= -2, 2) SUM( M= -2, 1)
C                          ( X(I+M, J+N)- X(I+M+1, J+N))**2
C        SIGMAGY(I,J)**2 = 1/20 * SUM( N= -2, 1) SUM( M= -2, 2)
C                          ( X(I+M, J+N)- X(I+M, J+N+1))**2
C        SIGMAG(I,J)     = SQRT( SIGMAGX(I,J)**2+ SIGMAGY(I,J)**2)
C     FOR I= 1, ACROSS AND J= 1, DOWN.
C
      SUM= 0.0
C     COMPUTE SIGMA G IN THE X DIRECTION.
      DO 15 N= - WNDOSZ, WNDOSZ
      DO 20 M= - WNDOSZ, WNDOSZ- 1
      SUM= SUM+
     1 ( SCENE( POINT+ M, LINE+ N)- SCENE( POINT+ M+ 1, LINE+ N))** 2
   20 CONTINUE
   15 CONTINUE
      SIGMAG( POINT, LINE)= SUM* POPLTN
C     COMPUTE SIGMA G IN THE Y DIRECTION.
      SUM= 0.0
      DO 25 N= - WNDOSZ, WNDOSZ- 1
      DO 30 M= - WNDOSZ, WNDOSZ
      SUM= SUM+
     1 ( SCENE( POINT+ M, LINE+ N)- SCENE( POINT+ M, LINE+ N+ 1))** 2
   30 CONTINUE
   25 CONTINUE
      SIGMAG( POINT, LINE)= SQRT( SIGMAG( POINT, LINE)+ SUM* POPLTN)
      IF( SIGMAG( POINT, LINE) .GT. MXSGMG) MXSGMG= SIGMAG( POINT, LINE)
      IF( SIGMAG( POINT, LINE) .LT. MNSGMG) MNSGMG= SIGMAG( POINT, LINE)
   10 CONTINUE
    5 CONTINUE
      RETURN
      END

Edited by Brian

So, firstly, thanks to Brian for being so obliging! Sorry man but you are at a way higher level than I am, so some of that stuff you talked about is, as a Texas sheriff might say, outside my remit.

 

However... let me try and answer the first part of my question so that I might get an answer to the second part.

 

To reconstruct a RAW file from a TIFF isn't that tricky after all. I don't mean a TIFF in a DNG wrapper, which is only slightly more useful than a wet sock. I mean an actual RAW file. Here's an example of a 3-byte word which describes one pixel in an 8bpc TIFF, in binary:

 

10111100 10000011 11010110

 

The order is R, G, B. So what you can do is very simple: the first byte becomes the first pixel in the reconstructed RAW file. The second byte is the second pixel. The third byte, representing blue, is discarded, as this is the beginning of an RGRG row. If you want to turn 8 bits into 12 bits, that's a pretty simple mathematical operation (multiply the pixel value by 4096 / 256, i.e. by 16). You won't get 12-bit precision though.
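
A tiny sketch of that byte-picking plus the 8-to-12-bit stretch (Python here just to keep the arithmetic readable; the helper names and row-based layout are my own assumptions, not a real file format):

    def to_12_bit(v8):
        # 0..255 -> 0..4095 by multiplying by 4096/256 = 16, i.e. a left
        # shift of 4 bits.  The low 4 bits are always zero, which is why
        # the result has 12-bit range but not true 12-bit precision.
        return v8 << 4

    def rebuild_two_rows(rgb_row_even, rgb_row_odd):
        # rgb_row_even / rgb_row_odd: lists of (r, g, b) 8-bit tuples for
        # two adjacent TIFF rows.  Keep R then G across the first row
        # (RGRG...), G then B across the second (GBGB...), discarding the
        # other bytes.
        rg = [to_12_bit(px[0] if i % 2 == 0 else px[1])
              for i, px in enumerate(rgb_row_even)]
        gb = [to_12_bit(px[1] if i % 2 == 0 else px[2])
              for i, px in enumerate(rgb_row_odd)]
        return rg, gb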

 

So now you have a 'reconstructed' RAW file, although it's virtually impossible to get back to the original pixel values. So, when you debayer the reconstructed RAW file, what do you get? Assuming that your reconstructed RAW file was formatted in a way that a RAW converter could read, you could easily check to see the result. But making the reconstructed RAW file is something that I can't do at this point.

 

If Brian or anyone wants to give me some tips (or corrections), please do. I'm comfortable enough with Javascript although I loathe the pointless C-like syntax.


Thirty-Five years ago... the group that I worked with developed algorithms to describe images.

 

Cool. Just out of curiosity, was a guy named Michael Kriss involved with your group? I once took his "short course" on digital camera design at an IS&T symposium; I knew that he had been involved in that sort of thing at Kodak for a lot of years.


The reason I suspect one might want to do this is that they didn't like the way it was done.

 

It seems to me that if you have enough bits, you could get the original back within roundoff error. But you would need the exact algorithm used in the first place.

Each green is between two reds and two blues. One could interpolate the reds and blues between those two. Each red and blue is between four greens. Again one could easily interpolate.

 

But there are also more complicated ones that may or may not be better.

 

For general deconvolution, I have (for many years) known of the book "Deconvolution of Images and Spectra":

Deconvolution of Images and Spectra

There are some very interesting problems that can be done if you have enough bits and know the point spread function well enough.

-- glen

