
Have we gone too far?



<p>Matt: takes the position that the only way to achieve resolution is through the mechanics of the camera and lens, and ignores the other potential components of the system. Matt is also of the opinion that information that is not contained in the original capture cannot be correctly and accurately surmised in the digital world. Actual practice counters both of Matt's assertions. <br>

The digital image environment by its very nature includes all of the components -- the lens, the camera, the CPU and the software. To limit any of them is to cripple the system. The primary difference is that the camera and lens have actual physical and practical fiscal limitations. The limits of the CPU and software greatly transcend those of the physical components. The physical components will never reach the depth and breadth the digital ones will. Trying to do so with lenses and cameras is simply a bridge too far.</p>


  • Replies 75

<blockquote>

<p>The physical components will never reach the depth and breadth the digital ones will.</p>

</blockquote>

<p>John: You are confusing breadth and depth of an image with breadth and depth of a photograph. Adding new pixels to an image is not adding resolution to the originally collected information (the photograph). It's adding <em>fake</em> information to the original information. This may or may not matter for some uses, but you are not fixing poorly collected information through software, you are <em>making up information</em>. I notice that you have carefully avoided ever specifically addressing any of the practical scenarios and questions that have been put to you, to challenge your assertion that details are irrelevant.<br /><br />Please explain, in plain language, how post production software, handed an image that contains a three-pixel flying bird, can improve the resolution of <em>that bird's image</em>, delivering an image of <em>that</em> bird, instead of a sharper three-pixel blob.<br /><br />Be specific. No hand waving about creativity. No "fiscal limitation" red herrings. I have multiple cameras and lenses right in front of me, right now. Some combinations of them will leave me with a degree of fundamental, inescapable ambiguity in an image element (like the three-pixel blob where the goose used to be), while the next combination will clearly resolve that element into a useful piece of information: a recognizable goose.<br /><br />I do not own a Hubble Telescope - these are the differences between one combination of off-the-shelf, consumer-grade equipment and another. So, skip the consternation about why we must now turn to fake CGI image elements because we've reached some fiscal or laws-of-physics limit, and explain how your approach makes reaching for the better lens I already have (or that I could stop in a shop today and buy) "irrelevant." 
Specifically, for the scenario described, explain the means by which three pixels are understood by your choice of software to be a goose, and how it can reconstruct <em>that goose's</em> wing position and coloring.<br /><br />I'll save you the trouble: it can't be done. This is basic information theory.<br /><br /><strong>This is a sentence.</strong><br /><br /><strong>This this is is a a sentence sentence.</strong><br /><br />More information is not being conveyed in that second sentence. Want to make the second sentence "feel" more like the real one? What rules will you use to decide what extra, but non-repeated, words go in there? Here's a better example:<br /><br /><strong>This is a full resolution sentence about a large gray goose.</strong><br /><br /><strong>This is a lower resolution sentence about a goose.</strong><br /><br />Now, describe the algorithm that, handed the second of those two sentences, would produce the first one. <br /><br />You don't want to let go of the word "resolution," but you're using it in a completely incorrect context. More pixels is not more resolution if those pixels don't <em>contain</em> information. The pixels you want to add to the image after the fact do not contain information unless you create that information. That is <em>not</em> a higher-resolution photograph. Please provide a different name for what it actually is. But you can do that after you describe the details of the magic trick that finds the original goose in those three pixels. I can't wait to hear how that works.</p>
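The doubled-sentence point can even be checked mechanically. Here is a small illustrative Python sketch (the repeat count is an arbitrary choice, not anything from the thread): a compressor strips out redundancy, so the word-doubled sentence, though nearly twice as long in raw bytes, barely grows once compressed, because the repeated words carry no new information.

```python
import zlib

# The two "sentences" from the argument, repeated so the compressor
# has enough data to work with (the repeat count of 50 is arbitrary).
original = b"This is a sentence. " * 50
doubled = b"This This is is a a sentence sentence. " * 50

c_original = len(zlib.compress(original))
c_doubled = len(zlib.compress(doubled))

# The doubled text is almost twice as long in raw bytes...
print(len(original), len(doubled))
# ...but its compressed size barely grows: the extra words are pure
# redundancy and contribute essentially no new information.
print(c_original, c_doubled)
```

This is only an analogy, of course, but it is the same principle: byte count (or pixel count) is not information content.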


<p>I'm not worried about polite. I'm genuinely interested in understanding how your solution for resolution makes real details irrelevant. Here's a simpler one, based on current events:<br /><br />Say I'm standing at the bottom of the Washington Monument, hoping to photograph some of the new cracks that showed up during that recent earthquake. My walk-around 18-200 superzoom can't quite resolve the cracks, but my 70-200/2.8 <em>can</em> do so in the available light. You are asserting that I can leave the better lens at home, that carrying it (and thus buying it, and fretting about such things) is going too far. <br /><br />Please just explain how software can bring out the details that the lens didn't bring in in the first place. Or consider that you're not really saying that after all, and that your scolding on this whole topic was misdirected.<br /><br />You began by saying that "it is my view that computing power can trump most lens input beyond what the human eye can conceive." I'm trying to point out that <em>no</em> amount of computing power can trump what the lens couldn't resolve in the first place. You can't enhance, recover, or fix what's <em>not there and was never there</em>. I want to understand what you really mean, since you can't mean what you actually said. <br /><br />Why am I persistent, here? Because it's assertions like yours - which are fundamentally incorrect as phrased - that take on an internet life of their own, and are seen on sites like this by thousands of people, for years after they're written. We owe it to those readers to explain what all of that computer power is actually going to do to render better lenses irrelevant.<br /><br />It may be easier to "explain quantum physics to a pig," but you haven't actually tried to explain anything at all, in real terms. 
Pick any one of the scenarios mentioned above (the Washington Monument cracks seem easy, right?), and just step through what the software does to fix the absence of real details that the cheaper lens missed. That's all. Just walk through it in plain language. Or if nothing else, just plainly answer the one fundamental question: how does the software know what the missing details were? Wouldn't directly addressing that question be more constructive than telling someone to shut up?</p>
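The "not there and was never there" point can be made concrete with a toy model. In this sketch (the six pixel values are entirely hypothetical, invented for illustration), two very different six-pixel scenes produce the identical three-pixel capture once the "lens" averages neighboring pairs; given only the capture, no amount of software can decide which scene was actually in front of the camera.

```python
def downsample(pixels):
    """Average adjacent pairs: a crude model of a low-resolution capture."""
    return [(pixels[i] + pixels[i + 1]) / 2 for i in range(0, len(pixels), 2)]

# Two different hypothetical "scenes": a detailed pattern and a flat smudge.
scene_detailed = [10, 90, 40, 60, 80, 20]
scene_smudge = [50, 50, 50, 50, 50, 50]

# Both scenes collapse to the SAME three-pixel capture, so the mapping
# is many-to-one and cannot be inverted from the capture alone.
print(downsample(scene_detailed))
print(downsample(scene_smudge))
```

Averaging here stands in for whatever blur and sampling the real optical chain applies; the many-to-one character of the mapping is the point, not the particular arithmetic.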

<p>Matt: Don’t read this.<br>

For the rest of you, here’s how this works, and works in many different settings. If you are a musician and you hear the first few notes of a familiar piece of music, you can recall the entire piece of music – even a very complicated piece of music. Think of the opening notes of Beethoven’s Fifth Symphony. If you are the conductor you will be able to produce the music for the entire orchestra. For those of you with iPhones, you may have the app Shazam, which does much the same thing, but will recall the specific recording after “listening” to only a bit of it over your phone.<br>

The same sort of information efficiency is at work in the visual images that your visual cortex processes from what is captured by your retina. Your retina is reporting to your brain about 24 times a second. However, your brain can’t process anything close to the amount of information your retina sends. The actual amount processed is about one millionth of what is sent. Your brain does some storing of image information from the previous images sent by your retina. A filter determines what has changed from the previous image, and that is what gets processed – which is why, when there is movement in our visual field, we are attracted to it. You can demonstrate some of this yourself. Stare at a pattern for a bit and then close your eyes: the pattern remains in your visual field, but as something more like a memory. Change everything in the visual image at once and the brain shuts the visual cortex down and reboots. This is why a bright light flashed in your eye blinds you. It hasn’t shut your retina down – it has overloaded the brain’s ability to process information.<br>

Think about how biometrics are used – fingerprints, facial recognition, retina scans, iris – or the more exotic ones, like gait. The whole image of the fingerprint, etc., is not compared. Only a tiny percentage of the information, called minutiae points, is used for searching and matching.<br>

Once a match is found, much more information may be released, such as biographic history, medical information, employment information, etc.; literally gigabytes of information. In the world we live in, a little information can trigger the release of lots more, even predictive information. Your phone may be able to forecast where you are going to be next, and merchants who know your purchasing habits can push information to your phone to suggest that your favorite coffee chain has an outlet just ahead on your left and you should stop in for a latte. In some parts of the world a small bit of information can release a live visual image of you and where you are right now – those of you in the UK know what I’m talking about.<br>

There is an emerging field of predictive analytics that, with a tiny bit of information and access to immense lookup tables, can predict future behavior: a risk of suicide, the commission of certain crimes, travel, etc.<br>

My point with these analogies is that from tiny bits of information – Matt’s birds, for instance – a very rich and very real picture can be constructed. Our bodies do this with an information-efficient process, and we mimic that information efficiency in the digital world. It is not just confined to photography. The problem that Matt has proposed as challenging is not only not challenging, it is almost trivial.</p>
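The Shazam and biometrics analogies amount to keyed retrieval: a tiny pattern unlocks a much richer stored record. A minimal Python sketch (the minutiae tuples and records below are invented purely for illustration) shows both the power of the idea and its precondition – retrieval only works when the pattern already exists in the table:

```python
# A toy "lookup table" in the spirit of the Shazam / biometrics analogy:
# a tiny fingerprint (a few minutiae values) keys a much richer record.
# All keys and records here are hypothetical, invented for illustration.
lookup = {
    (12, 47, 88): {"species": "Canada goose", "wingspan_cm": 170},
    (31, 52, 90): {"species": "mallard duck", "wingspan_cm": 89},
}

def identify(minutiae):
    """A tiny key releases a rich record, but only for stored patterns."""
    return lookup.get(tuple(minutiae))

print(identify([12, 47, 88]))  # rich record released by a tiny key
print(identify([99, 1, 2]))    # None: a novel pattern retrieves nothing
```

The second call is the crux of the disagreement in this thread: retrieval presupposes that the detail was captured and stored somewhere beforehand.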

 


<p><< There is an emerging field of predictive analytics that with a tiny bit of information and access to immense look up tables. >><br /> ... is predicated upon known info which has been captured in the "immense look up tables", right? Now, what if the data are not in the tables to begin with - such as the details of that specific "crack" (in Matt's example) that were not captured? How would the software know all the if-then-else conditions to arrive at the correct conclusion when the capture is not complete and the details are unique to that particular situation?</p>

John: your analogy is fundamentally flawed. But if you want to use musical notes as an analogy, fine. Let's say I am attending a musical performance. It's original stuff, with half a dozen musicians. One of them is playing a mandolin. I want to be sure that I am recording that never-before-heard piece of music, and do so in a way that can resolve even the quiet, higher-frequency notes of that one instrument. I would reach for a good microphone and a recording system that has the directionality, dynamic range, etc., to do so.

But you're telling me that I don't need to record those never-before-heard notes or the nuances of that one musician's style and the tonality of that particular instrument, and can instead just record the scene with the audio features on my cell phone, even though it will fail to actually resolve those details. Why? Because you're saying that software can produce all of the nuances of that musician's instrument, style, and the actual unrecorded notes of an otherwise unknown piece of music. Even though none of that information exists (since the lower-resolution capture device couldn't collect it).

Can you elaborate on how we will hear a recording of that unique performance, if you are asking a computer to guess what note he played next? This is exactly like the cracks in the Washington Monument, which you are so carefully avoiding. You likewise didn't explain how the goose will be recovered from three pixels. How does the software know those three pixels are a goose? You still aren't touching that fundamental problem with a ten-foot pole.


<p>Matt: I hear your argument. Photographers are simply mechanics, with no creative skills they bring to the table. You are absolutely right and we all agree. As to the crack in the Washington Monument -- given the knowledge that the earthquake produced the crack, rendering it is nothing. I saw a very realistic Chicago destroyed by aliens in a movie not so long ago. All details are unique. Nothing is more unique!<br>

Matt: I was absolutely right about explaining things to pigs.</p>


<p>This is an interesting thread. Unfortunately, the semester just started and I don't have time to get into this, but perhaps I can offer an opinion which may bridge the gap between Matt and John.</p>

<p>Here's my prediction: In the very near future (if it's not already partially here ;-) ), software will become available that will recognize (ie, detect and classify) common features of scenes, e.g., grass, sky, clouds, faces, skin, hair, eyes, buildings, construction materials, etc. It will also be able to estimate important parameters for that element of the scene such as brightness, contrast at different length scales, some of the important aspects of color, etc.</p>

<p>If this hypothetical SW sees something that it can partially, but not completely, ID, it may ask for clarification from the user. So, if it sees three closely spaced, partially merged small dark blobs in the sky, the algorithms will surmise that these are most likely birds, but could possibly be three fighter planes flying in formation or even dust specks. My hypothetical software will then stop, highlight the area, and ask the user a series of questions based on the responses that the user gives: </p>

<ul>

<li>Computer: Birds, planes or dust specks?</li>

<li>User: Birds</li>

<li>Computer: Are there three of them?</li>

<li>User: Yes</li>

<li>Computer: Ducks, geese, SBBs (small brown birds)? (It will know that hawks, vultures and many other species don't fly in tight formation, so it won't even offer them as choices)</li>

<li>User: Ducks</li>

<li>Computer: Please type species. Hit enter for generic ducks.</li>

<li>User: Mallards</li>

<li>Computer: Which way do you want them to fly?</li>

<li>User: SW, descending slowly.</li>

<li>etc. etc. </li>

</ul>
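<p>That question-and-answer flow could be sketched in a few lines of Python (purely hypothetically; every question, choice, and field name below is invented, just like the dialog above):</p>

```python
# A toy sketch of the hypothetical software's clarification dialog.
# All questions and choices are invented for illustration; a real system
# would drive these from an image classifier's confidence scores.
def clarify(ambiguous_blob, ask):
    """Narrow down an ambiguous image element by querying the user."""
    kind = ask("Birds, planes or dust specks?", ["birds", "planes", "dust"])
    if kind != "birds":
        return {"kind": kind}
    species = ask("Ducks, geese, or SBBs?", ["ducks", "geese", "sbb"])
    return {"kind": "birds", "count": ambiguous_blob["count"], "species": species}

# Scripted answers stand in for an interactive user session.
answers = iter(["birds", "ducks"])
result = clarify({"count": 3}, lambda question, choices: next(answers))
print(result)
```

<p>The sketch makes the trade-off explicit: everything the dialog "adds" to the image comes from the user's answers and the software's priors, not from the capture itself.</p>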

<p>The smart algorithms in this hypothetical SW will do the same for many other common elements of natural scenes, classifying and estimating material and optical parameters (eg, type of grass, density, color, variability in color, etc.) as best it can, asking for user input when it can't.</p>

<p>In this way, my hypothetical SW will "fill in" pixels with reasonable information most of the time, hopefully only occasionally making a really bad guess. Of course the new pixels won't represent what was actually there, but they will be a close enough approximation for many purposes. Think of it as a giant upgrade to the content-aware fill function that appeared in CS5 ... an automated melding of classic photography and CGI. Some people will use it very conservatively, ie, filling in things like grass, wood grains, pores, etc. where needed. Others will use it more extensively to generate the scenery for games and simulators, whereas others will use it to generate every scene in Avatar 2025.</p>

<p>Of course, such algorithms and SW must be classified as CGI, but potentially so close to what could plausibly be in the scene that it will be acceptable for many purposes. I hope this scenario partially bridges the gap in this discussion. If nothing else, it answers Matt's question, <em>"... how does the software know that those three pixels are a goose..."</em>. ;-)</p>

<p>Tom M</p>


<blockquote>

<p>Think of the opening notes of Beethoven’s Fifth Symphony. If you are the conductor you will be able to produce the music for the entire orchestra.</p>

</blockquote>

<p><br /> Well, no, a conductor won't be able to do that. The composer himself probably wouldn't be able to do that for any score of more than trivial complexity!<br /> There may have been one or two conductors in the history of the world who might have been able to do something close to that for a few of the scores that they conducted often, and conductors who work without a score in front of them have the broadest aspects of the composition memorized -- where the piece changes tempo, where the crescendos / decrescendos fall, etc. But the greatest conductor in the world couldn't tell you how often the bass trombone plays a C# in a full symphonic score.<br /> <br /> As to John's question "... But does any of it matter to the buyer or viewer?", that depends on both the subject and the expectations of the person viewing the picture. My clients (architects) care enough to pay a substantial premium for what might be considered 'excessive' resolution in portraiture or landscape photography.</p>


<p>Re music, especially classical: No two renditions, even by the same artist, can be an exact duplicate, even by someone as precise as Maestro von Karajan. However, electronic music (those "0"s and "1"s) can do that. But that would not be the same. That would be "robotic music".</p>

<p>Greg: My suggestion about the conductor was not meant to be that they would recall all the members' music from memory, but that after hearing a few bars, with their lookup table (the score for the whole orchestra) available and their knowledge of the piece, they could immediately retrieve any particular measure for any instrument without much effort. We all rely on memory assists. I don't remember many numbers in my speed-dial directory -- even numbers I once dialed from memory -- but I can retrieve them instantly. And in large part, that is the point of the discussion. With the vast amount of information instantly available we don't need to recreate it every time we want it.</p>


 

<p><em>every photograph is a fiction to a certain extent. No one lives in 2 dimensions locked in time. Every photograph is a fiction because, at best, it is a representation of the event captured on film or CCD. It is only a question of degree. Photography has the freedom of that fiction.</em></p>

<p>A photograph is a two-dimensional projection of 3D space created by light on a photosensitive surface (or a copy thereof). Just about everyone understands this. Now, it is this that makes it interesting - it tells of the real world. (You know, most people are at least a little bit interested in the real world and actual events.) Of course a photograph is not a literal copy of the real world but a projection. This doesn't make it fiction in any way. Fiction is when you add an eagle to the sky or fractals to compensate for lack of detail in the photograph. In the written word, fiction is when the words tell of an imagined world or of imagined "facts". Non-fiction, such as an encyclopedia, deals with matters of the real world, as known at the time of writing. Now, if the reader doesn't understand the language the text is written in, then he or she might be indifferent as to whether the story is true or not, but if they do understand it then the credibility of the text is among the first questions that are asked. Similarly with photographs - it is very much of interest to know if the photo was fabricated or if it tells of the real world. And people who are familiar with the nature of the photograph will know that it was created by light projected by a lens on a 2D surface.<br>

<em> </em><br>

<em>are you saying there is no creativity in photography, that it is the mere capture of an event with no participation from the photographer? Are you saying the photographer is only a mechanic and not an artists?</em></p>

<p>That isn't it. Photographers bring creative input to the table by 1) selecting the point where the camera is, direction where it is pointed, and angle of view / framing, 2) choosing the moment of capture, 3) controlling in many cases either explicitly or by selection of the moment the lighting in the photograph, 4) selecting how to post-process the images, and 5) choosing which images of the ones that they have made to present and the editor or AD or whoever will choose how to combine these images on the pages of the publication. All of these parts require creative decisions and no two photographers will make the same decisions or produce similar content (if they're good).</p>

<p>Making up stuff that isn't there isn't really photography ("drawing with light"). It's digital graphic art, an art form of its own. This is all fine for illustrations, but not when they are photographs in a documentary context (i.e. nature photography, photojournalism etc.) You can naturally add your own trickery if you're presenting your work as visual art instead of photography. But do not be surprised if it is not taken seriously - if you do not have the skill to draw or paint your subject, or photograph it, then your art of making up stuff and trying to hide the forgery by taking the macroscopic structure and look from a photograph isn't going to make it interesting. Learn to draw it from scratch and then see if it creates interest. At least then it will have a consistent look. The merger of photography and graphic arts is often unnatural simply because the viewer expects to see a projection of something from the real world, yet the manipulated image often shows that the physics were not correctly modelled by the creator. Light didn’t come from the same direction, the reflections are of the wrong colour, a shadow is missing or inconsistent with the other parts of the image. The lack of skill and understanding of light is often obvious in the practitioners of this "art". The birds in the sky reflect light from around them, and the direction and colour of light has to be consistent with the location, weather etc., and you cannot recreate these things accurately by computer, simply because there is not enough information in the 2D image to recreate a model of the 3D world so that you could simulate the appearance of objects that were not there. You might not see the forgery / discrepancies yourself due to your ego, but something is missing from the image: the liveliness created by the chaos of the real world and the bouncing light. Many viewers will see it. 
In any case, even if it did look realistic, it lacks the interest that photographs of real-world objects and events have. It is a very human trait to be interested and inquisitive of things in the real world and that’s why there is so much interest in the details of photographs.<br>

<em> </em><br>

<em>after hearing a few bars and their (LUT) of having the score for the whole orchestra available<br />and their knowledge of the piece could immediately retrieve any particular measure for any instrument without much effort.</em></p>

<p>I believe all the musicians in the orchestra bring significant input to the whole, and there's no way one person could reproduce it as it is played by the orchestra. You can hear it in your mind, but it's always a variation / interpretation you hear, not exactly the same as when actually played. If one person could actually reproduce it and reproduce the sounds digitally, then what would be the point of the live orchestra? It's different precisely because all the members of the orchestra add their own creative variations to it.<br>

<em> </em><br>

<em>information that is not contained in the original capture cannot be correctly and accurately<br />surmised in the digital world.</em></p>

<p>This is of course true and not an opinion.</p>

<p><em>It is easier explaining quantum physics to a pig.</em></p>

<p>That's not a very civil comment. I wonder why you represent yourself as some kind of authority in this discussion. I also have my doubts that you understand quantum mechanics, since you don't seem to have the respect for measurement and obtaining real data that an (experimental) physicist would have.</p>

 


<p>Ilkka, it wasn't meant to be a civil comment, but I do appreciate your kind words for physicists. My work and patents are in artificial intelligence, predictive analytics, and contextual analysis. On a purely philosophical issue - no photograph can capture reality. All are mere manifestations and representations -- but never the reality of the event.</p>

<blockquote>

<p>no photograph can capture reality</p>

</blockquote>

<p>But a photograph can convey information about reality. Such as, say, the precise location of a crack in a piece of architecture. Your contention is that we don't need that actual information, and thus don't need a lens/camera that can resolve that detail. You suggest that a person studying a photograph in order to see where those cracks <em>actually</em> are should be satisfied instead with an artist's conception of cracks based on information about other cracks.<br /><br />And the photographer who has been asked to record that image should - instead of simply recording the image and handing it to the person who wants it - spend time being interviewed by software (software that cannot know that the cracks are even there) so that she can collaborate with that software in order to depict, from her recollection of the cracks, a realistic <em>seeming</em> fantasy representation of cracks. Which joint is <em>actually</em> cracked, or how wide it is - who needs to know that, right? It's enough to know that there are cracks, and to show an image of some sort of cracks, somewhere on the structure, synthesized by software. Right?<br /><br />It is not uncivil to ask you about this. It continues to be exactly at the heart of the matter, and is where your initial assertion simply stops dead in its tracks. How can a low resolution image of a building, showing no cracks, be fixed after the fact to show the actual cracks?<br /><br />That you work in contextual analysis is interesting. How do you conduct an analysis in the complete absence of data, and without any context? Say I show you a low resolution image of what (due to the low resolution) appears to be a smooth cement wall. You have no other clue that it has just cracked in an earthquake. 
What form of analysis, what store of data, what statistical model, what possible method do you propose will conjure up accurate information about those cracks in the same way as (or, as you insist, better than) a lens/camera that records their actual position, size, etc.?<br /><br />We're both asked to hand an image to a structural engineer for some initial thoughts. I can choose to re-shoot the scene with a lens/camera that records the details the engineer needs to see. You will need to explain to him - as you have to us here - that the actual details of the damage are irrelevant, but that you've whipped up a nice artist's conception of some cracks on a wall, based on some other cracks, having nothing to do with this particular load-bearing wall, about which you have some data. These aren't two different techniques on the resolution spectrum. They are two fundamentally different things: the recording and presentation of actual information vs. the fictionalizing, synthesizing, or artistic rendering of information. Do you really imagine that the person who needs that information would find the real details to be irrelevant? Why?</p>


<blockquote>

<p>Matt is also of the opinion that information that is not contained in the original capture cannot be correctly and accurately surmised in the digital world. Actual practice counters both of Matt's assertions.</p>

</blockquote>

<p>No, Matt is aware of the fact that if you don't know what's in an image, you must therefore fabricate it. Not only have you <em>not</em> identified a single "actual practice" that can draw real details out of the thin air of a low-resolution capture (though you mention what amounts to how one can paint fake details in), but you've also deliberately avoided addressing any and every provided scenario that would give you an opportunity to describe how that might work in those situations. You provide links to the cartoonish output (some very good) of people who use complex artist's tools and lots of time to <em>create</em> images. You mention software packages that can help an artist execute her vision for a scene - but you haven't provided a single example of <em>absent information</em> being walked back from the one-way street of low resolution recording. And of course you can't, because that information is not present once resolution is too low. I understand that, given your career, you have a vested interest in asking people not to look behind that curtain at the show. But you're not talking to entertainment consumers, you're talking to people who personally use lenses and cameras to record things. Or are they irrelevant, too?<br /><br />I thought of you today. I was talking to a woman who trains herding dogs, and I was photographing the proceedings. She was thrilled that I had the serious lenses out, because she isn't equipped with similar gear. She had a number of white sheep out in a large field. Several of them are essentially indistinguishable from one another, though they have ear markers that allow her to review the behavior of the flock and her dogs when she sees photos or video of the scene after the fact. At certain distances, or when using equipment that's too low-resolution, the ear markers cannot be individually resolved. 
She cares about the details, and has been working with those animals for decades - and she knows that only the markers can tell the true tale.<br /><br />A better photograph (using higher-resolution equipment) works for her to that end. A poor photograph that doesn't contain the markers or enough of the other details on a particular animal would not serve her in the same way. Would you have synthesized those tags and tattoos, based on data about <em>other</em> sheep? Or would you have just told her that what she wants to know is irrelevant? It may be to you, but it's not to her.</p>


<blockquote>

<p>I never suggested that there wasn't a place for high resolution and I'm glad you have found that tiny niche.</p>

</blockquote>

<p>"Tiny niche"?<br /> Being something of a 'pixel princess' myself (I use a 56 megapixel Leaf Aptus back for my architectural work) that gets my attention.<br /> This "tiny niche" would include aerial, architectural, scientific and forensic photography, most of the photography done for engineering, and a lot of artsy stuff (like mine) in which the very fine detail that reveals the texture of the subject is critical to the impact of the photograph.<br /> But, given that it doesn't include weddings or little league team pictures, I guess that it is a comparatively small niche.</p>



<blockquote>

<p>working within your own limitations</p>

</blockquote>

<p>The laws of physics limit <em>all</em> of us, John, not just me. What matters is what you do about it, and (in the context of your original comment) whether it makes financial or logistical sense. You wrestle with <em>your</em> equipment's limitations by not minding false image elements in your results, and you consider other approaches to now be "irrelevant." Other people use the tool appropriate to the job, and recognize that sometimes (not just a "tiny" number of times) a better image capture rig and actual details from the scene being photographed are the only option, or the better/faster/easier one.<br /><br />You're assuming that I have no interest in enjoying both the creation and consumption of enhanced or fictionalized images - not true at all. But being interested and involved in both accurately recorded images and artfully created ones doesn't render me incapable of distinguishing between the two - in practical, semantic, philosophical, logical, economic, cultural, historical, and technological terms.<br /><br />I suspect that the "tiny niche" into which you're pushing annoying observations about those differences is a <em>lot</em> bigger than you think it is, John. You're seeing it from the point of view of someone who professionally "surmises" things, so perhaps most problems involving details look, to you, like something that should be solved that way. But if you ask most people whether they'd like their camera to record their kid's actual freckles, or whether they'd rather click the "add typical surmised freckles" button in a post-production session, I think you know what the answer would be.<br /><br />And you certainly are going to have to make room in my <em>tiny niche</em> for documentarians, photojournalists, event shooters, sports photographers, scientists, technologists, engineers, architects, product photographers, doctors, art reproducers/publishers, criminologists, archeologists, and all the rest who can't survive gaining a reputation for synthesizing their images in order to save a little money (once) on an equipment purchase. Almost everyone else, on a strictly practical note, is not like you and me - people who sit in front of computers all day long, putting them to the task of crunching lots of data.<br /><br />There certainly are lots of kids coming up, now, who <em>do</em> perceive Photoshop and a library of plug-ins and supporting software to be the solution for not choosing to shoot in good-looking light, not getting their subject's eyes in focus, and other mistakes. Many of them will be in complete agreement with you that they'd rather sit in front of the computer creating images loosely inspired by something out of their cell phones than fuss about the actual details.<br /><br />Those folks are - at that stage of their development - more than happy to "fix" things after the fact, even if it means copy/pasting open eyes from one shot over blinked eyes from another, or allowing software to do that for them automatically. I've transplanted human heads, dog feet, blades of grass, dog hair textures, flying birds, fingers, beer glasses, signs - you name it - all to produce a pleasing final image under certain conditions, for certain audiences. The better, faster, and easier the tools get, the happier I am (because I like good-looking results, and I value my time).<br /><br />But that has nothing to do with your original assertion that real details are now irrelevant. Because when they're not, they're simply not. And for the guy who's trying to decide between a cheap kit lens and a modestly more expensive one that's optically better, proposing the CGI route as a way to save him that $100 - across all of the thousands of images he will shoot during that lens's working life - and recommending instead that he deploy his time, software, and computing power on countless images thereafter: that's not lined up with reality.<br /><br />You asked if we've gone too far in discussing which lens delivers the goods better than the next. Do you not see that you'd simply replace that conversation with endless arguments over which suite of software, running on which platform, can better guess what's missing in a given photographer's information-deprived photo? And that, of course, only matters to people who do indeed consider actual details to be irrelevant.</p>
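What that "guessing" software actually does can be sketched in a few lines of plain Python (the pixel values and the 4:1 factor are invented for illustration): downsample a detailed row by averaging, then stretch it back to full length with linear interpolation. The result has the right number of pixels, and it's smooth and plausible - but it is computed from the two surviving averages, not recovered from the scene.

```python
# A detailed 8-pixel row, downsampled 4:1 by averaging, then
# upscaled back to 8 pixels by linear interpolation.
original = [10, 200, 30, 180, 250, 20, 240, 40]

def downsample(row, factor=4):
    """Average each group of `factor` pixels into one."""
    return [sum(row[i:i + factor]) // factor
            for i in range(0, len(row), factor)]

def upscale_linear(row, factor=4):
    """Stretch a row back out by linear interpolation -
    plausible in-between values, not recovered ones."""
    out = []
    for i in range(len(row) - 1):
        for step in range(factor):
            t = step / factor
            out.append(round(row[i] * (1 - t) + row[i + 1] * t))
    out.extend([row[-1]] * factor)  # pad the tail to full length
    return out

low = downsample(original)        # [105, 137]
restored = upscale_linear(low)    # a smooth ramp, invented from the averages
print(restored)
print(restored == original)       # False: the real detail is gone for good
```

Fancier interpolators (bicubic, AI upscalers) produce prettier ramps, but they are still functions of the surviving low-resolution data plus assumptions - which is exactly the distinction between recording a detail and surmising one.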

<blockquote>

<p>With the vast amount of information instantly available we don't need to recreate it every time we want it.</p>

</blockquote>

<p>There's the conceptual problem driving all of this. Photographing something is not "creating" (or "re-creating") <em>anything</em>. It's recording something. Because you personally make no distinction between recording something and fabricating something, it follows, I suppose, that you don't see captured details as particularly meaningful. I assure you that many more than a tiny minority of people do.</p>

<blockquote>

<p>I never suggested that there wasn't a place for high resolution</p>

</blockquote>

<p>We're not talking about "high resolution," we're talking about the differences - for the average photographer - between one lens and the next, or one camera body and the next. This isn't about the differences between point-and-shoot cameras vs. drum-scanning gargantuan negatives, or the difference between a cell phone and something fancy from Hasselblad. The majority of the discussions you seem to be lamenting are about things like one maker's macro lens vs. another's when used for photographing flowers. Or the difference between details captured by an 85/1.4 and a somewhat less expensive 85/1.8 when used wide open while photographing babies for decent-sized prints. The people trying to make those decisions haven't "gone too far," but they would certainly think that giving up on details - because they should sit down and paint in faux details from now on - <em>is</em> going too far.<br /><br />The tiny niche would be the people for whom the latter is acceptable or desirable (and this makes all sorts of presumptions about the "instant" availability of the massive amounts of data and computing power you propose they dip into while fixing the eyelashes on that shot of their baby). Those are some huge presumptions, even if we ignore the whole issue of whether it's the resolution panacea you say it is.</p>

