
I don't understand megapixels, please explain to an idiot.



It's a 6.17 megapixel camera.<P>


The sensor elements are laid out in a honeycomb pattern, and firmware in the S2 <B><U>interpolates</U></B> them to 12 MP. But that 12 megapixels is 6 megapixels of actual information plus 6 megapixels of magic fairy dust (i.e., wishful thinking, organic fertilizer, politicians' promises, whatever). They interpolate diagonally, but the bottom line is still that there are 6 megapixels of actual data, and there's no way to create new information out of thin air just by tilting your algorithm 45 degrees.<P>


One side effect of this trick is that it doubles your file sizes without increasing the information content, sort of the opposite of what we normally seek in file sizes.<P>


BTW, some geek is bound to point out that because it's interpolating along a diagonal, it expands your resolution by 1.4X. This is not true, because it also means that the center points of your samples are 1.4X farther apart. What matters for information content is the <B>NUMBER</B> of <B>DISTINCT</B> samples.



<p>No magic here. The camera has 6.17 MP effective CCD resolution. The software inside the S2 interpolates it to 12.1 MP.

<p>The various accounts that I have read (please also see dpreview.com) indicate that the resolution is definitely better than that of the 6 MP D100, but not really close to 12 MP either, which is no surprise since the CCD does not capture 12 MP. However, the <i>effective</i> resolution of the S2 is about 8-9 MP, as quoted by various sources.


Todd, interpolation is a term normally used in a field of electrical engineering called digital signal processing. It essentially means you are "creating" or synthesizing "extra" data values that do not really exist. They are created from underlying data points that do exist.


Here's a simple example. Let's say you have measured the outdoor temperature from 9AM to 2PM, every hour on the hour, and have recorded the results. Your list might look like:<P>




Time: 9AM Temp: 60<P>
Time: 10AM Temp: 70<P>
Time: 11AM Temp: 75<P>
Time: Noon Temp: 80<P>
Time: 1PM Temp: 84<P>
Time: 2PM Temp: 88<P>



Later on, you realized you really wanted to measure the temperature every half hour. Since you didn't, you're screwed.<P>

However, you can make an estimate by averaging the neighboring values. You'd end up with:<P>





Time: 9:00AM Temp: 60<P>
<B>Time: 9:30AM Temp: 65</B><P>
Time: 10:00AM Temp: 70<P>
<B>Time: 10:30AM Temp: 72.5</B><P>
Time: 11:00AM Temp: 75<P>
<B>Time: 11:30AM Temp: 77.5</B><P>
Time: Noon Temp: 80<P>
<B>Time: 12:30PM Temp: 82</B><P>
Time: 1:00PM Temp: 84<P>
<B>Time: 1:30PM Temp: 86</B><P>
Time: 2:00PM Temp: 88<P>



Are the bolded points at the half hour real? No, they're fake, or estimates. The above is a very simple interpolator. There are more complicated schemes with different requirements.<P>


You started with 6 data points that you measured and "created" an extra 5 to give you 11 points. While those extra 5 points are not real (cuz you never made the measurement on the half-hour), they may indeed be very good estimates. There are a bunch of qualifiers on this, but that's beyond the scope of this post.
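The half-hour averaging above is easy to sketch in a few lines of Python (the numbers are the ones from the table):

```python
# Linear interpolation of the hourly temperatures: each "fake"
# half-hour value is the average of its two real neighbors.
hours = [9.0, 10.0, 11.0, 12.0, 13.0, 14.0]   # 9AM through 2PM
temps = [60, 70, 75, 80, 84, 88]

samples = []
for i in range(len(temps) - 1):
    samples.append((hours[i], temps[i]))                             # real
    samples.append((hours[i] + 0.5, (temps[i] + temps[i + 1]) / 2))  # interpolated
samples.append((hours[-1], temps[-1]))

for hour, temp in samples:
    print(hour, temp)   # 6 real points plus 5 interpolated ones: 11 in all
```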



So, using the above analogy, you can see how interpolation can work with pixels and digital cameras. One can start with a 3 megapixel camera and create 6 megapixels. The fake pixels are created in between the real ones. This is done by looking at the neighboring pixels (in two dimensions) to determine the value of the fake pixel. The more neighboring pixels you use in your calculation, the better the estimate of the fake, or interpolated, pixel.<P>
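The two-dimensional version can be sketched like this; the tiny 2x2 "image" and the simple neighbor-averaging scheme are just for illustration, not any camera's actual algorithm:

```python
# Upscale a tiny grayscale "image" by inserting fake pixels between the
# real ones; each fake pixel is the average of its real neighbors.
real = [
    [10, 20],
    [30, 40],
]

h, w = len(real), len(real[0])
big = [[0.0] * (2 * w - 1) for _ in range(2 * h - 1)]

# copy real pixels onto the even coordinates
for y in range(h):
    for x in range(w):
        big[2 * y][2 * x] = real[y][x]

# fake pixels between horizontal neighbors
for y in range(h):
    for x in range(w - 1):
        big[2 * y][2 * x + 1] = (real[y][x] + real[y][x + 1]) / 2

# fake pixels between vertical neighbors
for y in range(h - 1):
    for x in range(w):
        big[2 * y + 1][2 * x] = (real[y][x] + real[y + 1][x]) / 2

# fake pixels in the middle of four real neighbors
for y in range(h - 1):
    for x in range(w - 1):
        big[2 * y + 1][2 * x + 1] = (real[y][x] + real[y][x + 1] +
                                     real[y + 1][x] + real[y + 1][x + 1]) / 4
```

Four real pixels become nine, but the five new ones carry no new information.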


Bottom line is real pixels are much better than fake ones...


Ignore the Foveon chip for now. Isn't it true that all pictures produced through the Bayer pattern are interpolated? Even if you shoot the raw format, we still need an algorithm to interpolate that information into JPEG, TIFF, etc. So by interpolating the raw information a little differently, Fuji is able to get 12 MP of good data. Granted, Fuji produces an additional 6 MP which do not physically exist. With the Bayer pattern, we still produce 6 MP of data which also do not exist; they just happen to have the same coordinates as the physical pixel locations.
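A rough sketch of this point, assuming a simple RGGB Bayer layout and made-up sensor values (real demosaicing algorithms are far more sophisticated): each site measures only one color, so the missing colors at every pixel location are interpolated from neighbors.

```python
# A 4x4 tile of an RGGB Bayer mosaic: each site records only one color.
bayer = [
    ['R', 'G', 'R', 'G'],
    ['G', 'B', 'G', 'B'],
    ['R', 'G', 'R', 'G'],
    ['G', 'B', 'G', 'B'],
]
raw = [  # the single value each site actually measured (illustrative numbers)
    [10, 20, 12, 22],
    [30, 40, 32, 42],
    [14, 24, 16, 26],
    [34, 44, 36, 46],
]

def green_at(y, x):
    """Estimate green at a non-green site by averaging its green neighbors."""
    assert bayer[y][x] != 'G'
    n = len(bayer)
    neighbors = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    vals = [raw[j][i] for j, i in neighbors
            if 0 <= j < n and 0 <= i < n and bayer[j][i] == 'G']
    return sum(vals) / len(vals)

# Green at the red site (2, 2) never existed; it is interpolated.
print(green_at(2, 2))
```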


<p>Having studied maths and/or physics helps a bit...


<p>Interpolation is essentially estimating unknown values based on known values (see temperature example above). This can be done by averaging, eg. if the temp of a developer is 75 F in the beginning of development and 65 F at the end, we can interpolate that mid-way through development the temperature was 70 F. <b>NOTE:</b> the key point is that there are different algorithms for interpolation; this is important for many imaging applications, eg. digicams.


<p>A digital camera senses the three primary colors at separate sensor sites (the Foveon is the exception), which means that to produce a full-color image for us humans, the values from the sensors must be <b>interpolated</b>. The interpolation then results in the images we use, ie. a full-color image on the computer.


<p>Since the CCD sensors can be arranged differently and the values can be interpolated in different ways, manufacturers can advertise slightly different megapixel counts (eg. the Fuji S2). What counts is that the number of sensors places an upper limit on the achievable resolution. One camera can produce 3 megapixels and another 5 megapixels, yet their actual resolution could be the same; the pixel count alone does not measure resolution. (It's very roughly like "not all 100-speed films are the same in terms of resolution.")


"A 6.17 megapixel camera which produces 12.1 megapixel images." Huh?


After all the technical explanations, you finally arrive at the image output from the camera. On an S2 you can set up for an excellent 6 MP image, or you can set up for an excellent 12 MP image.


As one uses the camera and gains experience, one tends to set up for 12 MP and enjoy the results, which under certain circumstances, for certain subjects, delivers particularly nice prints... whatever fairy dust was in play. Fuji knew what they were doing when they added this feature; it's nice!



<p>Here is the lowdown on why I said 8-9 MP. The majority opinion is that the 12 MP output of the S2 is definitely much better in quality than the 6 MP of, say, the Nikon D100 or the Canon 10D.

<p>However, it is not really as good as an <b>actual</b> 12 MP camera would produce.

<p>Hence people roughly equate it to 8-9 MP, i.e., the output is as good as that of an <i>imaginary</i> camera that would have 8-9 MP resolution without any software interpolation.

<p>I hope this clears the fairy dust.




There are 6.17 megapixels in the camera (or about 6.2 million pixels in plain English). That is, there are 6.2 million dots that sense light.


The built-in computer in the camera takes those 6.2 million 'real' dots of light that comprise the picture and fills in the gaps between them by creating and adding 6.2 million more 'artificial' dots of light, using the average light and color values of the adjacent 'real' dots.


This is known as interpolation.


Then it writes all 12.1 million dots to a file on the CompactFlash card.


<i>Is there any reason to shoot at the higher resolution given that interpolation can be controlled and varied in the computer?</i>


<p>Yeah, that's the real question... there could be an advantage to using the in-camera interpolation, since the programmers of the camera knew how the hardware works and could have optimized the software for it. But I don't think the difference can be great.


I guess that since the camera has distinct cells for R, G, and B, adjacent to each other for each pixel, the precise position of each photosensitive cell is taken into account when interpolating to the final file size... and this might be a good thing. The R in a pixel is not measured at the same position as the B from the same pixel in the 6 MP image which is normally generated, so it makes sense to generate a larger file by interpolating the data to avoid this type of slight shift. I guess Nikon also thinks it makes sense, since they interpolate the data in the D1X.

An interesting aspect of the S2 is that RAW files are only available in 12MP format.


6 MP images are only available as TIFFs or JPEGs. I read once that the S2 performs the 12 MP interpolation on ALL shots and downsizes from there. I wasn't sure how to confirm this, but the RAW-only-in-12MP situation would suggest this is the case.


There's a very high probability that the 6 MP files are created simply by stripping off the interpolated data... it would only make sense. But the interpolation seems to be the standard default mode, and everything else is derived from it.


"An interesting aspect of the S2 is that RAW files are only available in 12MP format"


If I remember previous discussions of how Fuji's SuperCCD technology works, the interpolation that doubles the file size relative to 'normal' sensors is not just simple upsizing (as you could do in Photoshop) but rather is required as part of the de-mosaicing of their unique sensor layout. No doubt a mathematician could explain it!


I believe that the smaller sizes are achieved by downsampling the 12MP files.
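Downsampling is the reverse trick; here is a minimal sketch of 2x reduction by averaging each 2x2 block of pixels into one. (How Fuji's firmware actually derives the smaller sizes is not public, so this is purely illustrative.)

```python
# Downsample a tiny "image" 2x in each direction: each 2x2 block of
# input pixels is averaged into a single output pixel.
img = [
    [10, 20, 30, 40],
    [50, 60, 70, 80],
]

small = [
    [(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
      img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4
     for x in range(len(img[0]) // 2)]
    for y in range(len(img) // 2)
]
print(small)
```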

