Binning

Discussion in 'Nikon' started by elyone, Jun 17, 2013.

  1. Is there any option of true binning in the super high MP cameras?
    I'll explain. In my work (microscopy) I use low-resolution, extremely high-sensitivity cameras. They have a pixel size of about 6x6u (microns) or even 13x13u, so they can literally detect tens of photons and still get a signal.
    Now, to gain even higher sensitivity at the cost of resolution, we can "bin": combine adjacent pixels so they act as a single pixel.
    For example, if my native resolution is 600x400 and I bin at 2x2, I have an actual resolution of 300x200, but now I have 4 pixels gathering light and feeding into a single point.

    I was wondering if any Nikon cameras can do that. As far as I understand, increasing ISO does not, of course, increase the size of a pixel (unlike old film, where you got a physically larger grain); rather, higher ISO increases the voltage to multiply the signal from fewer photons, hence noise, because you are using the electronic conversion process itself to create a stronger signal.
    However, in a 24MP camera I would like to be able to set it at, say, 12MP, so that although I get lower resolution, I now have much larger pixels and hence better light sensitivity. Then I would not need a higher ISO, a larger aperture, or a slower shutter speed to gain better sensitivity/exposure.
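For concreteness, the 2x2 bin described above can be sketched in a few lines of NumPy (a software emulation, not on-chip binning; the function name is mine):

```python
import numpy as np

def bin_2x2(frame):
    """Sum each 2x2 block of pixels into one "super-pixel".

    Summing (rather than averaging) mimics hardware binning: the
    combined site collects the photons of all four pixels.
    """
    h, w = frame.shape
    frame = frame[: h - h % 2, : w - w % 2]  # trim odd edges
    return frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# The 600x400 example above bins down to 300x200
frame = np.random.poisson(5.0, size=(400, 600))  # rows x cols
binned = bin_2x2(frame)
print(binned.shape)  # (200, 300)
```

No photons are lost: the total count in the binned frame equals the total in the original, it is just gathered into a quarter as many pixels.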
     
  2. Current Nikon DSLRs don't have the option to do binning (there were a couple of point-and-shoot models with that ability).
    Most of the hardware binning applications I know use CCD sensors, although it is not impossible to do with CMOS sensors.
    Keep in mind that due to the Bayer pattern (RGBG), you can only bin like-colored pixels (red with red, green with green, and so on), so with a 24 MP camera you would get a 6 MP binned image.
    There are some medium format digital backs that perform binning (like Phase One with Sensor+), also using CCD sensors.
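A sketch of that like-colored binning in NumPy (my own names; an RGGB layout is assumed, and real backs do this in hardware before readout). Same-colored sites sit on a stride-2 grid, so a 2x2 bin of each color consumes a 4x4 patch of the mosaic, which is why 24 MP comes out as 6 MP:

```python
import numpy as np

def bin_bayer_2x2(mosaic):
    """Bin an RGGB mosaic 2x2 per colour plane, yielding a new,
    half-size RGGB mosaic (quarter the pixel count)."""
    h, w = mosaic.shape
    mosaic = mosaic[: h - h % 4, : w - w % 4]  # trim to whole 4x4 patches
    out = np.empty((mosaic.shape[0] // 2, mosaic.shape[1] // 2),
                   dtype=mosaic.dtype)
    for dy in (0, 1):          # row offset within the RGGB quad
        for dx in (0, 1):      # column offset within the RGGB quad
            plane = mosaic[dy::2, dx::2]  # one colour plane
            ph, pw = plane.shape
            binned = plane.reshape(ph // 2, 2, pw // 2, 2).sum(axis=(1, 3))
            out[dy::2, dx::2] = binned
    return out

mosaic = np.arange(8 * 8).reshape(8, 8)
print(bin_bayer_2x2(mosaic).shape)  # (4, 4)
```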
     
  3. Avi, to the best of my knowledge, when you select a smaller-than-maximum JPEG size, the Nikons will perform a filtered downsampling of the result, thereby performing (a kind of) binning. What they can't do - and several owners, especially of the D800, might have liked - is write out a downsized raw file (retaining bit depth and downsampling to get better noise characteristics, but with a smaller file), in the style of Canon's sRAW. Although that implementation is hardly perfect either, so I'm not suggesting an exact match. Simple binning of pixels is a bit complicated if you bin pixels of the same colour, because adjacent binned samples will then overlap (as Francisco may be referring to); if you don't sample one channel at a time, the result doesn't look like a Bayer-format raw file. This may be one reason to generate a full-resolution RGB result, then bin (filtered downsample) that. But you'd do it on the computer, because the reconstructed image with RGB at each pixel could be as large as a simple raw file.

    It's easier if you have a slightly odd sub-pixel layout (like Fuji's EXR), of course. With current Nikons, I think your best bet is to capture the full resolution, apply some strong noise reduction in post-processing (which will reduce resolution), then downsize the result. You've got more control over the process on a computer anyway.
     
  4. I wonder why DSLRs don't support pixel binning.
    Olympus once used Kodak's KAI-10100 sensor in its E-400 cameras sold in Europe. The sensor has native support for pixel binning, but I don't know if the camera utilized it.
    http://www.truesenseimaging.com/component/sobipro/41-kai10100?catid=&Itemid=114
    This sensor is used in a few high-grade Peltier-cooled astronomy cameras supporting 2x2 and 4x4 binning, but they are quite expensive. I imagine all high-grade microscopy imaging cameras are also cooled to minimize thermal noise, in addition to pixel binning.
     
  5. Michael: In a Bayer pattern, binning reduces resolution further than is obvious: normally the adjacent colour samples are spatially distinct, so they can be used to approximate luma. Binning quads (for example) of colour samples to give a new Bayer pattern allows the existing raw decoder to work, but because there is overlap in the samples contributing to the constructed values, there would be reduced resolution compared with a native sensor of the same pixel count, and the uneven spacing may confuse some reconstruction algorithms.

    Simply averaging (box sampling) a small number of pixels is probably not the best thing to do anyway - you can get lower noise while retaining some resolution and limiting aliasing by using a more complex filter. This is easier to do if you don't have to reconstruct a usable Bayer result, so downsampled JPEGs are common, and image-processing software is universally capable of resizing an image with a (usually) decent filter - but resizing a raw file, not so much.

    That said, I don't know how Phase One have implemented binning, and one would hope they'd do a decent job of it...
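To illustrate the box-versus-better-filter point, here is a minimal NumPy sketch (names are mine) of a plain 2x2 box average next to a 2x downsample through a slightly wider [1, 2, 1] tent filter. Real resizers use larger windowed-sinc kernels (Lanczos and friends), but the principle is the same:

```python
import numpy as np

def box_downsample(img):
    """Plain 2x2 box average: the 'binning' baseline."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def filtered_downsample(img):
    """2x downsample through a separable [1, 2, 1]/4 low-pass filter.

    The wider kernel suppresses aliasing better than the box while
    still averaging away noise.
    """
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    pad = np.pad(img, 1, mode="reflect")  # reflect at the edges
    # Separable convolution: columns first, then rows
    cols = pad[:, :-2] * k[0] + pad[:, 1:-1] * k[1] + pad[:, 2:] * k[2]
    blurred = cols[:-2] * k[0] + cols[1:-1] * k[1] + cols[2:] * k[2]
    return blurred[::2, ::2]  # then decimate

flat = np.full((8, 8), 5.0)
print(filtered_downsample(flat).shape)  # (4, 4)
```

Since the kernel weights sum to 1, a flat image passes through unchanged; the two methods differ only in how they treat detail near the new Nyquist limit.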
     
  6. Most of the hardware binning applications I know use CCD sensors, although it is not impossible to do with CMOS sensors.​
    CMOS sensors are actually better for binning than CCDs are. The problem these days is the lack of cameras using large monochrome sensors. As others have pointed out, you have to bin insanely on Bayers, and well-implemented (I do stress the words "well implemented") low-pass filtered downsampling yields comparable results.
    The Foveon CMOS sensor does binning spectacularly - the only color sensor (Bayer or not) to bin that well. There are two problems, though: a lack of a colorimetrically accurate response and a general lack of sensitivity.
     
  7. I wonder if there would be any interesting mileage in a D800 mono with binning?
    There are some companies that can remove the Bayer filter for a large fee... at least there's no AA filter (as such) on the D800E!
     
  8. Double Posting
     
  9. higher ISO increases the voltage to multiply the signal from fewer photons, hence noise, because you are using the electronic conversion process itself to create a stronger signal.
    There are advantages to the higher pixel count in an inherently low-noise architecture such as the Exmor sensor (used in the D800). Noise is actually summed in quadrature, making the noise across four pixels less than the noise in a single pixel of equivalent size. There is no inherent noise advantage to having a larger pixel.
    Where one might gain a slight advantage is in using something like PhaseOne's "Sensor+" architecture, which reorients the Bayer matrix by rotating some of the Bayer elements to bring them more into alignment, producing some capability to bin before demosaicing.
    A bigger concern with the D800 for critical applications is thermal noise.
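The quadrature claim is easy to check numerically. In this illustrative sketch (the numbers are mine, not measured from any sensor), summing four pixels multiplies the signal by 4 but the independent read noise only by sqrt(4) = 2, so the SNR of the sum doubles:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 3.0     # read noise per pixel, in electrons (assumed)
signal = 100.0  # photoelectrons per pixel (assumed)

# Four small pixels: signal adds linearly, independent noise adds
# in quadrature (variances add, standard deviations do not).
samples = signal + rng.normal(0.0, sigma, size=(200_000, 4))
summed = samples.sum(axis=1)
print(summed.mean())  # ~400: signal x4
print(summed.std())   # ~6:   noise x sqrt(4) = x2, not x4
```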
     
  10. That said, I don't know how Phase One have implemented binning, and one would hope they'd do a decent job of it...​
    Here's a video explaining how they (Phase One) implemented it
     
  11. If you are photographing a static subject with a truly rigidly held camera, multiple exposures can be combined to effectively reduce noise in an application like Photoshop by varying the opacity of each layer. The bottom layer is left at 100%, and then each layer laid on top gets an opacity equal to 1/n, where n is the number of the layer. So the opacity, starting from the bottom and going up, would be 1/1, 1/2, 1/3, 1/4...

    I did a little experiment to illustrate this some time ago and it actually works.
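That 1/n recipe can be verified outside Photoshop (NumPy sketch, my own names): compositing layer n at 1/n strength is exactly an incremental running mean, so the finished stack equals the plain average of all the frames.

```python
import numpy as np

def stack_layers(frames):
    """Composite frames bottom-up, layer n at strength 1/n.

    Each step computes out = out * (1 - 1/n) + frame * (1/n),
    which is a running mean: after layer n, `out` holds the
    average of the first n frames.
    """
    out = frames[0].astype(float)
    for n, frame in enumerate(frames[1:], start=2):
        a = 1.0 / n
        out = out * (1.0 - a) + frame * a
    return out

frames = [np.full((2, 2), v, dtype=float) for v in (1.0, 2.0, 3.0, 4.0)]
print(stack_layers(frames))  # every pixel = 2.5, the mean of 1..4
```

Averaging N frames of uncorrelated noise reduces the noise by a factor of sqrt(N), which is why the experiment works.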
     
  12. Thanks, Francisco. It does look like a sensible approach. And life would be so much simpler if we didn't have to have the sensor sites in straight lines so the samples can be read out easily. (I wrote an article on this a long time ago, though I'm not going to claim that I was any inspiration for Fuji...)

    Joel: Is this different from just averaging multiple exposures? (Though I can see some point in using an inter-frame median filter.) Either way, I'd like to think that there's more that can be done than just averaging a pixel quad on-sensor.
     
  13. I believe that the D800 automatically bins pixels at the JPEG conversion stage at high ISO speeds. The character of noise seen in RAW files is radically different from that seen in the JPEG version.
    Incidentally, the very reason for tolerating the green redundancy of a Bayer pattern is that it can be "stepped" by one sensor site in any direction and still have a quad of adjacent RGGB filters. Therefore it's incorrect to say that binning must occur over a 4-pixel pitch. It could easily be done with a single step in either the horizontal or vertical direction. Whether this would be visually more acceptable than binning in both directions, I wouldn't like to guess. The reduction in noise of 1/√2 might not be sufficient compared to the halving of noise with a bin of 4.
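Those two reduction factors are easy to confirm with a quick simulation (illustrative numbers, assumed noise level): averaging 2 pixels divides uncorrelated noise by sqrt(2), while averaging 4 halves it.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 3.0  # per-pixel noise, in arbitrary units (assumed)
pix = rng.normal(0.0, sigma, size=(200_000, 4))

# Averaging n pixels divides the noise by sqrt(n):
print(pix[:, :2].mean(axis=1).std() / sigma)  # ~0.707 (bin of 2)
print(pix.mean(axis=1).std() / sigma)         # ~0.5   (bin of 4)
```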
     
  14. +1 to your comment about being wedded to straight lines, Andrew. Below is a little illustration of one possible layout based on 'triads' of RGB filters instead of quads. You can work out many more variations of stepping structure while still keeping 3 adjacent RGB sites than are possible with the (plainly limited and wasteful) Bayer pattern. Having 3 sites to a pixel like this also gives automatic orthogonal antialiasing.
    I like the idea of thinking of combinations of RGB filtered photosites as having a 'centre of gravity' for each pixel. It really doesn't matter what shape is being sampled, only the area of it and its perimeter limit.
    00bkgZ-540859884.jpg
     
  15. Apologies for the delay; I was trying to do a little more research, since I last looked at this a few years back...

    I agree that a delta-nabla pattern of hexagonal photosites has a lot going for it, especially if you can run the wiring without interfering with it (only the microlenses need to have a hexagonal profile). The reason people aren't doing more of it, I suspect, is the need to generate images with pixels on a rectangular grid for final consumption, which requires pixels to be generated unaligned to the samples; if computer monitors had less focus on perfect rectangular window edges and - like most of the lower-resolution LCDs on cameras - had a delta-nabla LCD arrangement themselves, maybe we'd have a more photo-friendly approach. It's a definite improvement over the standard Bayer in terms of aliasing, though. Regarding binning, it may make 3x downsampling easier than 2x, but I still maintain that - read performance aside - you're probably better off reading the full-resolution image and doing a higher-order reconstruction filter anyway. It probably depends how much the digitizer noise is contributing to the pixel noise, though.

    Any repeating pattern will be subject to moire to some extent (although not necessarily a great extent - both a delta-nabla grid and the Fuji X-Trans are much less subject to it than a simple Bayer). And, of course, even a monochrome or Foveon sensor can have moire, just not false colouring. The eye gets away with a blue noise distribution to try to avoid this.

    I've heard it suggested that Penrose tiling can be used for dithering to try to approximate blue noise; on that basis, I was looking into three-colourings of Penrose tiles (which turns out to have been proven possible only relatively recently). Unfortunately, all the ways of subdividing Penrose tiles (I just did the triangular decomposition below; the rhomboidal decomposition has less-acute angles and would probably be easier to make, though harder to program) involve differently-sized subregions, which makes the colour reconstruction a little complicated. Conway's pinwheel tiling has all the regions the same size, but they're very acute triangles that probably aren't suited to microlenses; I've not found any discussion on whether they're three-colourable, though they might be. I spent some time looking at sphinx tiling - convex shapes aren't really convenient, and a fixed colour decomposition into equilateral triangles results in adjacent regions of identical colour, but there may be something that can be done there (without turning into a regular hexagonal pattern).

    Sadly, I ran out of geek and free time at that point, but I can understand why people just used a low-pass filter for so long. Especially allowing for algorithms needed to reconstruct an image from any of these. Aren't eyes clever?
     
  16. No PNG support? I probably should have known that. I'll try that again with a JPEG. Sorry.
    00bl2w-540892984.jpg
     
