Perhaps I am starting to understand why I am here questioning the use of gamma, given today's state-of-the-art technology. I am coming at this question in the year 2010, but gamma has been around for more than a century. First there was film density. Then there were CRT electron guns. Then, twenty years ago, came JPEG, soon followed by the first consumer digital cameras: memory was expensive, computing power was limited, and people post-processed in 8 bits. For all that time gamma was needed for one reason or another to store and process images. So gamma was ever present; people always assumed it was needed and never questioned it.

Fast forward to 2010. Cheap storage, cheap processing power, large linear files and 16-bit color are widely available. If you are a digital photographer with modern color-managed equipment who rolls his own (as I do), is gamma an asset or a liability in your post-processing working color space?

I am more than happy to send pictures to my overseas friends via email as perceptually efficient 8-bit JPEGs, but my whole workflow revolves around 12/14/15/16-bit data, from capture to inkjet print. Every time Photoshop has to perform complex color operations on my aRGB (or ProPhoto RGB, or whatever) data, it has to de-gamma it to get back to linear and then re-gamma it when finished, not to mention when the image needs to be converted to a different color space, or printed. Each round trip to linear and back adds noise in the shadows and quantization in the highlights that may become apparent in complex post-processing jobs.

But apparent or not, why not stick to a linear color space (gamma = 1) in the first place, from raw converter to output? What's the case for gamma today, if you are like me?
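For the curious, here is a rough way to see the round-trip loss I am describing. It is only a sketch: it assumes a pure 2.2 power law in place of the real Adobe RGB tone curve, and it models the linear intermediate as an integer buffer at the same bit depth, which is cruder than whatever Photoshop actually does internally. Still, it shows how a single de-gamma/re-gamma trip merges code values, especially in the shadows:

```python
import numpy as np

GAMMA = 2.2  # stand-in for the Adobe RGB tone curve (an assumption)

def degamma(codes, bits):
    # gamma-encoded integer codes -> linear light, requantized to integers
    scale = 2 ** bits - 1
    return np.round((codes / scale) ** GAMMA * scale).astype(np.int64)

def regamma(codes, bits):
    # linear integer codes -> gamma-encoded integer codes
    scale = 2 ** bits - 1
    return np.round((codes / scale) ** (1.0 / GAMMA) * scale).astype(np.int64)

for bits in (8, 16):
    codes = np.arange(2 ** bits)              # every representable code value
    back = regamma(degamma(codes, bits), bits)
    lost = codes.size - np.unique(back).size  # codes merged by the trip
    drift = np.abs(back - codes).max()        # worst-case error, in code values
    print(f"{bits}-bit: {lost} codes lost, max drift {drift} code value(s)")
```

The worst-case drift lands in the deep shadows, where many gamma codes collapse onto the same linear integer. Keep the intermediate in floating point instead and the trip is nearly lossless, which is presumably why high-bit editors linearize into floats; but every extra rounding step an integer pipeline sneaks in does exactly the kind of damage shown above.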