Posts posted by gordonr

  1. In general, rotation of Jpeg images can be done in one of two ways:

    1. Placing an orientation tag in the file header, so that the image editor displays the image the right way up every time the file is opened.

    2. Permanently modifying the compressed Jpeg using lossless rotation. This is a special technique that works without decompressing the image data (and so avoids any loss of quality). However, to work reversibly it requires that the image dimensions match the 8X8 pixel Jpeg grid; if they are not an exact multiple of the block size, some rows and columns of pixels will be truncated, which may reduce the file size slightly.
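
    A minimal sketch of option 2, assuming the jpegtran utility from the libjpeg distribution is installed (the filenames are hypothetical). The -perfect flag makes the command fail, rather than silently trim edge blocks, when the image dimensions are not a multiple of the Jpeg block size:

        # Lossless rotation by calling jpegtran: the compressed data is
        # transformed directly, so the image is never decoded and re-encoded.
        import subprocess

        def lossless_rotate(src, dst, degrees=90):
            subprocess.run(
                ["jpegtran", "-rotate", str(degrees), "-perfect",
                 "-copy", "all", "-outfile", dst, src],
                check=True,  # raises CalledProcessError if a perfect rotation is impossible
            )

        lossless_rotate("portrait.jpg", "portrait-rotated.jpg", 90)

    Dropping -perfect (or using -trim) allows non-aligned images to be rotated, at the cost of the truncation described above.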

  2. I haven't been active on this forum for a long time, but it seems you are asking many questions that were answered 10 years ago. In 2003 I wrote an article on Jpeg compression specifically for photo.net, which covers many of the issues you raise: http://www.photo.net/learn/jpeg/

    The only points I would add, in response to your specific question, are:

    1. Jpeg has no 'history', so you can't go backwards (only forwards), and you can't improve the quality by changing the settings.

    2. It is impossible to determine what the original image quality was (for forensic proof purposes), though you can certainly work out whether the version you have is adequate and free of artifacts.

  3. Bill is of course correct about the 8X8 blocks (he helped me write the above article). When 2X2 chroma subsampling is used, the blocks are 16X16 pixels in size (or 16X8 for 2X1 subsampling).

    Note that even at the highest quality settings there is a small loss of quality due to the YCbCr subsampling - it is certainly not visible in normal viewing, but can be revealed by sharpening edges, etc.
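
    As a small arithmetic illustration, the minimum coded unit is 8 pixels multiplied by the horizontal and vertical luma sampling factors:

        # Block (MCU) size in pixels for a given chroma subsampling mode.
        def mcu_size(h_sampling, v_sampling):
            return 8 * h_sampling, 8 * v_sampling

        print(mcu_size(1, 1))  # (8, 8)   - no subsampling (4:4:4)
        print(mcu_size(2, 1))  # (16, 8)  - 2X1 subsampling (4:2:2)
        print(mcu_size(2, 2))  # (16, 16) - 2X2 subsampling (4:2:0)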

  4. Jpeg 2000: born 1999(?), much trumpeted, took ill immediately, patented 2001(?) and died soon after. Lamented mostly by those who had no idea of its defects and disadvantages. (Apologies to my mother, d. 11 Apr 2005.)

    In an era of open standards, proprietary software will seldom get more than 10% of the market share. See my article Jpeg Compression (http://www.photo.net/learn/jpeg/) for more comments.

  5. That's like asking how many legs you need to walk - two is sufficient (and more won't really help). All modern Pentium-class CPUs are limited by RAM, then bus speed, then hard drive, etc.

     

    Doubling your CPU clock speed will at best halve the time a task takes (often the gain is much less). If waiting 1 second (or 90 seconds) seems too long, then upgrade. The main benefit will be reduced frustration, rather than much extra work in a given time period.

     

    This might sound obvious, but it is a bit like rushing to get somewhere purely to be 30 seconds ahead of your neighbour. Often re-planning the task is better than raw CPU power: using fewer undo levels, a lower bit-depth, and other workflow changes can all make a difference.
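
    As a back-of-the-envelope illustration (the 60/40 split is purely hypothetical): if only part of a task is actually CPU-bound, doubling the clock speed shrinks only that part.

        # Amdahl's-law style estimate of task time after a CPU upgrade.
        def new_task_time(total_seconds, cpu_bound_fraction, cpu_speedup):
            cpu_part = total_seconds * cpu_bound_fraction
            other_part = total_seconds - cpu_part  # RAM, bus, disk, etc.
            return other_part + cpu_part / cpu_speedup

        # A 90-second job that is 60% CPU-bound, on a CPU twice as fast:
        print(new_task_time(90, 0.6, 2.0))  # 63.0 seconds, not 45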

  6. Gaussian blur would be appropriate if a large-radius unsharp mask had been applied, but I find it hard to believe that an old camera would have had this function. Since simple sharpening was probably applied, a soften filter is more appropriate. If sharpen-more was applied, then soften-more might be appropriate.

     

    A small-radius Gaussian blur gives much the same effect as soften (with somewhat easier control). I find that a user-defined filter that softens very slightly helps undo some of the damage. Don't try to undo all of the original sharpening (this is counter-productive).
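
    As a minimal sketch using the Pillow library (any editor with a Gaussian blur will do; the 0.5 pixel radius and the filenames are only hypothetical starting points):

        # Apply a very mild Gaussian blur to take the edge off in-camera sharpening.
        from PIL import Image, ImageFilter

        img = Image.open("oversharpened.jpg")
        softened = img.filter(ImageFilter.GaussianBlur(radius=0.5))
        softened.save("softened.jpg", quality=95)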

  7. The horizontal axis is linear, but the values are gamma-encoded - a form of semi-logarithmic coding used on all computers for displaying images, and a source of endless puzzlement to those not "in-the-know".

    See Color Depth and Monitor Gamma (http://www.photo.net/equipment/nikon/scanner/ls-1000/gamma) and the Gamma FAQ (http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html).
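
    As a short numeric illustration, assuming the usual PC gamma of 2.2 (see the links above): a linear intensity in the range 0-1 is raised to the power 1/2.2 before being stored as an 8-bit value, which devotes more codes to the shadows - hence "semi-logarithmic".

        GAMMA = 2.2

        def encode(linear):   # linear light -> stored 8-bit value
            return round(255 * linear ** (1 / GAMMA))

        def decode(value):    # stored 8-bit value -> linear light
            return (value / 255) ** GAMMA

        print(encode(0.5))    # 186, not 128: mid grey is stored well above the numeric midpoint
        print(decode(128))    # ~0.22: code 128 represents about 22% of full intensity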

  8. Roger N. Clark has done some work on this subject, as part of a wider investigation into astronomical observation of faint objects (Optimum Magnified Visual Angle). This diagram on his site shows a number of low-contrast circles on a grey background:

    http://www.clarkvision.com/visastro/omva1/low-contrast-spots-1-c.gif

    The smaller low-contrast circles are much harder to distinguish (small circles are easy to distinguish when contrast is high), showing that the ability to detect many levels of brightness depends on the size, in pixels, of the area.

  9. Two related factors are spatial density and chroma sensitivity:

    1. It may be possible to distinguish more than 256 gradations in brightness, but this applies to extended areas, not immediately adjacent pixels (the human eye is not that sensitive at normal viewing distances). Adjusting the RGB values slightly for adjacent pixels (plus or minus 1) could give three times the sensitivity for mostly monochrome images (see below).

    2. The human eye is more sensitive to sharp differences in brightness (edges) than to abrupt colour transitions (which seldom occur in the real world). Jpeg compression takes advantage of this (amongst other things), and a full range of 16.7 million colours (24-bit) for every pixel is overkill (IMO).

    The GIF standard, which uses a palette of only 256 colours, can produce moderate-quality images by dithering adjacent pixels, without exactly reproducing every detail. (For monochrome images GIF reproduces the full range of detail, though with generally larger file sizes than Jpeg.)

    When Jpeg was developed, the standards documents indicate that there was pressure from specialists (such as X-ray radiographers) to include a 12-bit version; although this is available, it is seldom used (AFAIK).

    For most real-world images the old high-colour display mode (16-bit, or about 65,000 colours) was surprisingly good (with proper dithering in viewers such as IE4). You can test this for yourself by decreasing the colour depth of an image (with error diffusion).
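
    A quick way to try that experiment, sketched with the Pillow library (the enum names assume a reasonably recent Pillow version; the filenames are hypothetical). It reduces an image to a GIF-style 256-colour palette, with and without Floyd-Steinberg error diffusion:

        from PIL import Image

        img = Image.open("test.jpg").convert("RGB")

        # 256 colours with error diffusion - usually surprisingly close to the original
        dithered = img.quantize(colors=256, dither=Image.Dither.FLOYDSTEINBERG)
        dithered.save("test-dithered.gif")

        # The same reduction without dithering shows obvious banding
        banded = img.quantize(colors=256, dither=Image.Dither.NONE)
        banded.save("test-banded.gif")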

  10. Important reading is Bob Atkins' article on RAW, JPEG and TIFF (http://www.photo.net/learn/raw/), which covers issues very relevant to this thread (16-bit, dynamic range, etc.).

    I can't overemphasize the point that most 8-bit images are gamma encoded (2.2 for PC, 1.8 for Mac) - this extends the dynamic range considerably, almost to the point where it equals that of 12-bit linear, though of course the quality is not the same across the range.

    This 'simple' question would justify a full-length article to give a complete answer, but basically 8-bit (gamma encoded) is 'good enough' for almost all purposes.
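
    A rough numerical sketch of that point, assuming gamma 2.2: map 12-bit linear codes onto 8-bit gamma-encoded codes and compare the shadows with the highlights.

        GAMMA = 2.2

        def to_8bit_gamma(linear_code, linear_max=4095):
            return round(255 * (linear_code / linear_max) ** (1 / GAMMA))

        # Near black, the 8-bit gamma codes track individual 12-bit linear steps:
        print(to_8bit_gamma(1), to_8bit_gamma(2), to_8bit_gamma(3))           # 6 8 10
        # Near white, dozens of 12-bit codes collapse into one 8-bit code, but
        # the eye is far less sensitive to equal linear steps in the highlights:
        print(to_8bit_gamma(4000), to_8bit_gamma(4050), to_8bit_gamma(4095))  # 252 254 255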

  11. It is extremely unlikely that any display device you own can show all 500 gradations *simultaneously*. The human eye can quickly adapt to changes in brightness.

    Jpeg is designed to emphasize the most important parts of any given scene. Jpeg is also gamma encoded (semi-logarithmic). See my article Jpeg Compression (http://www.photo.net/learn/jpeg/).

  12. Looking at the Digital Image Submission Criteria website (http://www.disc-info.org/) indicates that the question is much more complex than just determining the compression level of a given Jpeg. The site is worth reading for anyone interested in digital image submissions.

    Using a specific level of compression as a criterion is not helpful IMO - using bits-per-pixel works better over a wider range of images. The acceptable level of compression varies tremendously, from 0.5 bits/pixel to 4 bits/pixel, which is a huge range in file sizes.
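
    As a trivial sketch (the filename is hypothetical), bits-per-pixel is just the file size in bits divided by the number of pixels:

        import os
        from PIL import Image

        def bits_per_pixel(path):
            width, height = Image.open(path).size
            return os.path.getsize(path) * 8 / (width * height)

        print(round(bits_per_pixel("upload.jpg"), 2))  # roughly 0.5 (heavy compression) to 4 (light)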


    Bill: Chroma subsampling is mentioned in my article (I forgot to repeat it in my response above). I did contact the poster privately, but the subject is a rather complex one.

  13. Photo.net has a user-upload gallery (as do other photo forums), and has gone through a lengthy period of debate about what size images should be allowed. Recent changes have satisfied most people, but no rule can work for every image, since Jpeg file sizes vary enormously with image content (detail) and dimensions.

     

    I think I understand the broad aim, but it clashes with the reality that Jpeg is a complex subject, and one not very amenable to 'top-down' rules. Photoshop is not the only image editor on the market, and it is arguable whether it provides the best tradeoff between size and quality. (For the record - I am not a Photoshop user.)

     

    This subject is a real hot-potato IMO...

  14. To summarise a very complex topic:

     

    1. Photoshop's quantization tables are very different from the standard (IJG) tables, and jpegdump will never be able to give an exact match for a Jpeg compressed with Photoshop, since jpegdump can only estimate quality in terms of the IJG tables.

     

    2. It is well known that older versions of Photoshop vary the chroma subsampling as the quality setting is reduced - this happens at some arbitrary point on the scale (e.g. between 7 and 6), and the user has no direct control over the chroma setting.

     

    3. What are you trying to achieve with this process? Is it to improve your online images, or are you just trying to understand this for recreation?

     

    4. If you really want to know more, contact me directly. I don't think photo.net members would benefit greatly from this particular discussion.
