Uneven JPG compression...?


thorkild


<p>I am wondering about the different sizes of JPGs I get from my D200. Shooting JPG "Fine" at the "L" size setting, my files vary between 4 and 7.2 MB (8-bit). I get the same result shooting my Leica D-Lux. Can I set the camera(s) to produce equal-sized files? And maybe bigger files? The resolution is L (3872 x 2592). I use Nikon Transfer and ViewNX. When I shoot RAW the files are naturally much bigger, but from time to time I find that the JPGs are OK. So it is actually mostly a theoretical question.</p>

<blockquote>

<p>Can I set the camera(s) to produce equal sized files?</p>

 

</blockquote>

<p>Not unless you shoot the exact same image time after time.</p>

<p>The digital sensor captures data. Scenes vary in the amount of data captured.</p>

<p>Try a little experiment. Shoot a white wall and look at the file size.<br>

Very little data is required to reproduce white compared to a scene with many colors and/or detail.</p>


<p>You should notice a correlation between file size and complexity of subject. If the photo is of a colorful detailed subject with many differing degrees of dark and light, it will be a bigger file than if it is of an evenly-lit monochrome surface. It's not "uneven compression," it's different amounts of "visual variations" in the original scene that need to be translated into digital information.</p>

 


<blockquote>

<p>The digital sensor captures data. Scenes vary in the amount of data captured.</p>

</blockquote>

<p>Not quite. The amount of data is the same every time: one value per sensor element. It's the level of <em>information</em> contained in the data (or extractable from it) that affects how well the image compresses. But you're right that a plain white wall will generally produce a smaller file than a complex scene. This can be affected by texture and lighting, though. The best way to produce a low-information image would simply be to shoot with the lens cap on at base ISO.</p>


<p>I once tried taking two JPEG pictures of the exact same scene, with the camera on a tripod. The first capture was in focus; for the second image, I deliberately made it totally out of focus.</p>

<p>It turned out that the second JPEG was about half the file size of the first, approximately 5 MB vs. 10 MB. The second image was so far out of focus that there was no fine detail in it at all, so JPEG managed to compress it a lot more without losing detail, since there wasn't much detail to begin with.</p>


Compression algorithms work based on redundancy in the file that's being compressed. A basic algorithm might look something like this...

 

Count the number of zeros or ones in an uninterrupted series (a run). If it is a series of zeros, make the count a negative number; if it is a series of ones, leave it a positive number. Place a vertical bar between the numbers representing the successive series of zeros and ones.

 

Here's a series of zeros and ones...

 

00000000111101000010000000100101111111010001001111111110011111

 

Using the algorithm above, it would convert to...

 

-8|4|-1|1|-4|1|-7|1|-2|1|-1|7|-1|1|-3|1|-2|9|-2|5

 

The original series of zeros and ones is 62 characters long. The converted string is 49 characters long. We saved 13 characters of space compressing the 62 characters into 49.
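
For anyone who wants to play with this, here's a minimal Python sketch of the toy scheme above (the function name and the "|" separator are just illustrative choices, nothing JPEG actually does):

```python
def rle_encode(bits):
    """Toy run-length encoder: runs of '0' become negative counts,
    runs of '1' positive counts, joined with '|'."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1                      # extend the current run
        length = j - i
        runs.append(-length if bits[i] == "0" else length)
        i = j
    return "|".join(str(r) for r in runs)

bits = "00000000111101000010000000100101111111010001001111111110011111"
encoded = rle_encode(bits)
print(encoded)                   # -8|4|-1|1|-4|1|-7|1|-2|1|-1|7|-1|1|-3|1|-2|9|-2|5
print(len(bits), len(encoded))   # 62 49
```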

 

This would be a lossless algorithm: when you reconstitute the data, you get back exactly what you started with. JPEG does not use a lossless algorithm. It makes some approximations so it can generate longer runs of redundancy in the data, which yields a higher compression ratio.

 

We can change our sample algorithm to say that if you have a series of zeros or ones and you encounter a new series that is only 1 in length, you assume the new series is the same bit as the old one and fold it into that series' count.

 

If we apply the new algorithm to the same data, it yields the following string, which is 28 characters long.

 

-8|6|-5|-8|-4|9|-4|-2|9|-2|5

 

That's a significant difference in size from the original string, and even from the lossless version.
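
Continuing the sketch above, here is one way the lossy variant just described could look; any run of length 1 is assumed to match the run before it, so its count is absorbed into that run.

```python
def rle_encode_lossy(bits):
    """Like rle_encode above, but a run of length 1 is treated as if it
    were the same bit as the previous run, so its count is absorbed."""
    runs = []
    i = 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        length = j - i
        if length == 1 and runs:
            # lossy step: absorb the stray single bit into the previous run
            runs[-1] += 1 if runs[-1] > 0 else -1
        else:
            runs.append(-length if bits[i] == "0" else length)
        i = j
    return "|".join(str(r) for r in runs)

bits = "00000000111101000010000000100101111111010001001111111110011111"
print(rle_encode_lossy(bits))    # -8|6|-5|-8|-4|9|-4|-2|9|-2|5  (28 characters)
```

Note that this output can no longer be decoded back to the exact original bits; that loss of fidelity is what buys the extra compression.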

 

Back to the original question. JPEG file size is dramatically affected by two things.

 

1. It is not a lossless algorithm, so it approximates the data, creating more redundancy in the file.

2. The amount of contrast and similar colors in the original photo.

 

If you take a photo with your lens cap on, it will basically be all black. This will yield a lot of similar pixels, so the algorithm will be able to "shrink" the original a lot more than something that has a lot of color variation. The color itself doesn't really matter; it's the amount of variation in the photo. With my D300, an all-white JPEG L is 298 KB and an all-black one is the same. If I take a "normal" photo, the size ranges from 2.3 MB (e.g., a lot of white or blue sky, or a body of water) to 7.1 MB (a picture of a crowd of people all wearing colorful and diverse clothes).

 

Anyway, there are a lot of sites that explain the details; this is just a very high-level example. If you like math, this stuff will be right up your alley.

 

--Wade


<p>Craig, you are right, and Kelvin was actually saying the same thing. Things are made more complicated than necessary by unfortunate wording: "<em>some images contain more information than others</em>". Arguably, information and data are the same. Information really isn't the best word here, since a photo of a white wall with one trace of blood on it can actually contain a wealth of information (to the CSI team), and yet compress extremely well with the JPEG algorithm.</p>

<p>But, yes, the thing that matters is how well the data can be compressed. A lot of the same colour compresses easily. Horizontal gradients generally compress moderately well (with a risk of artifacts). Vertical gradients compress badly. Large colour differences compress badly (or show extreme artifacts). Random colour noise hardly compresses at all.<br />It's very easy to simulate with a bit of Photoshop or similar: create an image of any resolution, fill it with one colour, and save it as JPEG. Add a horizontal gradient, save as JPEG; a vertical or diagonal gradient, save as JPEG; then apply the noise filter (very heavy-handed) and save as JPEG. You'll see the differences in file sizes.</p>
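
If you'd rather script the experiment than click through Photoshop, here's a rough sketch in Python using Pillow and NumPy (my choice of libraries, image size, and quality setting; none of this comes from the thread) that writes a flat-colour image, a gradient, and pure noise, then prints the resulting JPEG file sizes:

```python
import os
import numpy as np
from PIL import Image

W, H = 1024, 768  # arbitrary test resolution

def save_jpeg(name, arr):
    """Save an (H, W, 3) uint8 array as JPEG and report its file size."""
    Image.fromarray(arr).save(name, "JPEG", quality=90)
    print(f"{name}: {os.path.getsize(name)} bytes")

# 1. One flat colour -- compresses extremely well
solid = np.zeros((H, W, 3), dtype=np.uint8)
solid[:] = (80, 120, 200)
save_jpeg("solid.jpg", solid)

# 2. Horizontal gradient -- still fairly smooth
ramp = np.tile(np.linspace(0, 255, W, dtype=np.uint8), (H, 1))
save_jpeg("gradient.jpg", np.stack([ramp, ramp, ramp], axis=-1))

# 3. Random noise -- hardly compresses at all
noise = np.random.randint(0, 256, size=(H, W, 3), dtype=np.uint8)
save_jpeg("noise.jpg", noise)
```

The exact sizes will depend on resolution and quality setting, but the ordering (flat colour smallest, noise largest) should be easy to see.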


<p>The D200 has an option in the shooting menu called JPEG Compression, with two settings: "Optimal Quality" and "Size Priority".</p>

<p>With "Optimal Quality", the camera will use the selected compression ratio (GOOD, BETTER, BEST aka Basic, Normal, Fine) to generate the jpeg file irregardless of the resulting file size. As described above, more complex scenes will result in larger files. I <em>ALWAYS</em> shoot "Optimal Quality" if I'm not shooting raw.</p>

<p>With "Size Priority", the camera will vary the jpeg compression to keep all the files approximately the same size. This means that more complex scenes will generally lose more detail than a simpler scene.</p>

<p>I'm not sure in what circumstances "Size Priority" is useful. Maybe for journalists who need the final file size to be within certain limits for technical reasons. For example, when using a wireless transmitter, it might be helpful to keep the file sizes approximately constant so that the transmission time doesn't vary much per image.</p>

<p>Even with Size Priority, the files will not be <em>EXACTLY</em> the same size, but they will be close.</p>

