
Where do you draw the line with photo editing?



Thank you for all the input, everyone! One of the things I suppose I'm neglecting to remember is that for decades before Lightroom and Photoshop came along, heavy darkroom editing was done by many a skilled photographer, and indeed many if not most of the tools in today's modern digital packages owe their names to the processes they used in analogue. Things like burning, dodging, heavy cropping, contrast adjustments, unsharp mask and much more were once all darkroom techniques. I look forward to seeing what I will be able to capture and produce in the future as my skill and eye grow, with the help of my fellow photographers. :) Indeed, I'm looking to join a local photographic society in the next week or so, and I'm in the process of trying to pick out four images to submit. In this case I will have to read over the rules carefully to make sure I know what they are and aren't looking for. I'm not expecting to win or anything, but it's exciting (and more than a bit nerve-racking) just to meet other amateur photographers in person, to have a chance to have my work critiqued by judges, and more than anything to get tips and techniques from others!

Interesting, lonleylight. I started out in a darkroom, and it was good for me that software developers adopted terms from the darkroom, which made the transition much easier. I wonder if it made things more difficult for those who came straight into digital. I imagine it was probably all new terminology for new photographers anyway, so it wouldn't matter that much, but I wonder if they puzzled over where some of the weird terms came from. Dodging and burning? Whatever is that? But nothing compares with something like the digital film-effects program Maya. One time I played around on a friend's computer and I thought my head would explode with the mass of unfamiliar terminology relating to functions in film and effects, 3D, shape generation, motion scripting, etc. I was going to hurt myself if I tried to do that on my own.

 

However, another thing in terms of my photo trip: I notice that I still generally think about a file as a print. Though I'm working digitally for processing now, regardless of whether I shot on film or digital, I find I generally think in darkroom terms and use the programs to do darkroom-like things to my photos. That's limiting in one way, but it's basically where I'm at with photography at the moment.


There IS a line when it comes to photo editing, especially when it comes to photojournalism. Even in fine art you might run into a wall trying to submit a heavily edited image for competition; that depends on the competition, of course. Unfortunately, most cameras don't see exactly as the eye sees. There is the problem of dynamic range and, to a lesser degree, depth, sharpness, contrast and color (hue in B&W). Lucky for us, editing software allows us to manipulate these things to a degree where a dull picture can come to life, or a flat picture becomes more true to life.
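Under the hood, bringing a dull picture to life is mostly a matter of remapping tonal values. As a toy illustration (my own sketch in NumPy, not anything from a particular editor; the function name and parameters are invented for the example), an S-shaped tone curve, the digital cousin of a harder darkroom contrast grade, spreads out flat midtones while leaving black and white points anchored:

```python
import numpy as np

def s_curve(pixels: np.ndarray, strength: float = 5.0) -> np.ndarray:
    """Apply a sigmoid 'S' tone curve to normalized [0, 1] pixel values.

    Values below the midpoint are darkened and values above are
    brightened, increasing midtone contrast much like a curves layer.
    """
    # Logistic curve centred at 0.5, rescaled so 0 -> 0 and 1 -> 1.
    raw = 1.0 / (1.0 + np.exp(-strength * (pixels - 0.5)))
    lo = 1.0 / (1.0 + np.exp(strength * 0.5))
    hi = 1.0 / (1.0 + np.exp(-strength * 0.5))
    return (raw - lo) / (hi - lo)

# A flat, low-contrast ramp: values huddled between 0.4 and 0.6.
flat = np.linspace(0.4, 0.6, 5)
punchy = s_curve(flat)
print(punchy)  # the spread is now noticeably wider than the original 0.2
```

A stronger `strength` value steepens the curve, just as a higher-contrast paper grade would in the darkroom.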

 

When it comes to my personal editing style, I try my best to get my images to resemble true life, or what my eyes saw at a particular moment (or what my mind's eye has seen and felt). I might stray if I'm in a creative mood, but most of the time, if I can't get the final image to look as close as possible to true to life, out it goes! Whether this is good or bad I'm not sure. I have seen some fantastic composite images on this site and others that show what you can do with photo editing these days. Maybe in the future I might give that sort of thing a try, but for now I do just as I did back in the film days: I have my "keepers" and I have my "throwaways".


When I first learned darkroom photography, I learned about dodging and burning in, but never really got interested in doing it. I am more of an engineer, and less of an artist, so I tend not to want to make even obvious changes. I mostly never got interested in the digital equivalent, either.

Nothing against editing, or those who do it, but it just isn't me.

In reality, all of our images are edited, regardless of the medium in which they're made. Far more editing is done before the image is captured than after, and the results are most definitely obvious. The simple choice of Kodachrome over Ektachrome is as much an act of editing as is dodging & burning (and with greater effect on every image you create). The choice of digital camera determines more about your images than a subtle tweak of hue or saturation, e.g. Nikon files differ visibly from Canon files, and each of us has his or her preferences about the IQ from a specific brand or model. Shooting RAW doesn't avoid this either - it mandates editing to produce a finished image. Worse, a simple "raw" as-captured image differs from film to film and sensor to sensor.

 

Your choice of printing paper and the color-management algorithm used for printing are yet more ubiquitous editing choices that we all make but don't consider formal editing. Lens choice is another editing act with profound effect on many aspects of an image, including color. Do all 50 mm f/1.8 lenses yield identical images? Obviously they don't - we deal with color casts, chromatic aberration, etc. all the time. Some prefer Zeiss, some Canon, and some Nikkor - yet none puts exactly what goes into it onto the sensing medium. Then there's lighting, and focal length, and exposure, and white balance, and.....

 

So dodging, burning, and tweaking exposure parameters seem to me like part and parcel of the ordinary chain of events connecting exposure to viewing.
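For anyone curious what the digital version of those darkroom moves looks like underneath, a dodge or burn is essentially adding a signed, masked offset to local exposure. This is a deliberately minimal sketch (names and values are my own, not any editor's actual implementation):

```python
import numpy as np

def dodge_burn(image: np.ndarray, mask: np.ndarray, amount: float = 0.25) -> np.ndarray:
    """Local exposure adjustment on a [0, 1] luminance image.

    Positive mask values lighten (dodge), negative values darken (burn),
    much like waving a card or cutout under the enlarger lamp.
    """
    return np.clip(image + amount * mask, 0.0, 1.0)

# A mid-grey "print" where we open up one corner and hold back another.
image = np.full((4, 4), 0.5)
mask = np.zeros((4, 4))
mask[0, 0] = 1.0    # dodge the top-left corner
mask[3, 3] = -1.0   # burn the bottom-right corner
out = dodge_burn(image, mask)
print(out[0, 0], out[3, 3], out[1, 1])  # -> 0.75 0.25 0.5
```

In a real editor the mask would be a soft-edged brush stroke rather than single pixels, but the arithmetic is the same idea.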

 

Even extremes have a place. I've even seen a few HDR pics that I really like. But overuse of any method or approach gets annoying fast, and I'm seriously concerned that humans will lose at least some color and contrast sensitivity & discrimination over the next few generations as more and more of what we see is artificially enhanced beyond anything nature ever produced.
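HDR processing ultimately has to squeeze a scene's enormous brightness range into what a print or screen can show. One common textbook approach is Reinhard's global tone-mapping operator, L/(1+L); the sketch below (my own toy example, not any editor's pipeline) shows how it leaves shadows nearly linear while compressing highlights:

```python
import numpy as np

def reinhard_tonemap(luminance: np.ndarray) -> np.ndarray:
    """Reinhard's global operator L / (1 + L).

    Maps an unbounded HDR luminance range into [0, 1): small values pass
    through almost unchanged, while very bright values are compressed.
    """
    return luminance / (1.0 + luminance)

# Synthetic scene: deep shadow, a midtone, and a highlight 100x brighter.
hdr = np.array([0.05, 1.0, 100.0])
ldr = reinhard_tonemap(hdr)
print(ldr)  # the shadow barely moves; the highlight is squeezed below 1.0
```

The "HDR look" people object to usually comes from far more aggressive local operators than this, which boost contrast differently in each region of the frame.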

 

One of the definitions of engineering is "the action of working artfully to bring something about". Applied to photography, this would seem to me to mean thoughtful, knowledgeable integration of the chain of events that starts with the choice of subject and setting and ends with viewing of the finished image. It's detailed attention to everything from knowledgeable choice and use of equipment to birthing the finished image to choosing its presentation. Ignored steps can be missed opportunities.


In reality, all of our images are edited, regardless of the medium in which they're made. Far more editing is done before the image is captured than after, and the results are most definitely obvious. The simple choice of Kodachrome over Ektachrome is as much an act of editing as is dodging & burning (and with greater effect on every image you create).

 

 

Yes.

 

Especially framing the shot in the first place, and any cropping on printing (or scanning).

 

I always liked slightly more wide-angle lenses than many people, which seems to represent my ideas on framing.

 

Not long after my son was born, I learned about VPS, being a little less contrasty than Kodacolor, and mostly used that for my color negatives for many years. Well, also after I learned that I could keep it in the camera, above 55F, for reasonable lengths of time.

 

(snip)

 

 

So dodging, burning, and tweaking exposure parameters seem to me like part and parcel of the ordinary chain of events connecting exposure to viewing.

 

Even extremes have a place. I've even seen a few HDR pics that I really like. But overuse of any method or approach gets annoying fast, and I'm seriously concerned that humans will lose at least some color and contrast sensitivity & discrimination over the next few generations as more and more of what we see is artificially enhanced beyond anything nature ever produced.

 

 

Hmm. In the days before color film, did people have different sensitivity to contrast and such than they do now?

 

The different gray levels would have had more importance to them.

 

Reminds me, from a biochemistry talk some years ago, on the development of color vision. It seems that the brain learns and adapts to color vision when we start seeing color after we are born. Not that anyone would do the experiment, but if you raised a baby in a completely black-and-white world, the theory is that it wouldn't develop color vision.

-- glen


In the days before color film, did people have different sensitivity to contrast and such than they do now?

They did not. The evolution of color vision in humans and our primitive ancestral mammals (a fascinating story, BTW) spans 90 million years and occurred in response to interesting changes, none of which was in the color spectrum to which they were exposed. Per work published by a group of Emory scientists about 5 years ago in the Public Library of Science Genetics (along with other works, all of which are summarized nicely in THIS paper), our early predecessors were nocturnal and had visual sensitivity only to UV and red. Over the next 60 million years, we evolved through 7 genetic mutations in the production of opsins (the universal photoreceptor proteins in all species), losing UV sensitivity but gaining sensitivity to green and blue as we became diurnal or cathemeral. But it was all based on light intensity in a fairly static environmental palette of color. Our vision was apparently adapting to our emergence into the sunshine from the darkness.

Reminds me, from a biochemistry talk some years ago, on the development of color vision. It seems that the brain learns and adapts to color vision when we start seeing color after we are born. Not that anyone would do the experiment, but if you raise a baby in a completely black and white world, the theory is that it wouldn't develop color vision.

We don't have to develop color vision - we're born with it. This article from Scientific American may help clarify what happens. Infants do see colors, but their visual cortices aren't well developed for the first few months and their brains process color perception differently. The general term for this kind of discrimination is categorical perception, and evidence suggests strongly that perceptual processing of all kinds is influenced by prior knowledge and experience, which is what babies gain over the first few months of life that (along with brain and neural-network development) enables them to respond to colors (and language and all sorts of other categorical variables). See this work by Casey & Sowden if interested in more detail.

 

My specific concern over the proliferation of artificially high contrast & high intensity coloration is that our visual systems (like smell and touch but unlike hearing) can change their sensitivity to stimuli in response to intensity. Our retinal sensitivity changes in response to brightness, both by chemical change (degradation of a photosensitive protein called rhodopsin) and by change in the retinal neurons (& most probably in our cerebral visual pathways). So as we accommodate to an HDR world and exhibit preference for high intensity, high contrast colors in the things we use, wear, etc, we may well "forget" how the world used to look as we amplify its colors. We're talking about many generations, though - I wouldn't get darker sunglasses just yet :)


So as we accommodate to an HDR world and exhibit preference for high intensity, high contrast colors in the things we use, wear, etc, we may well "forget" how the world used to look as we amplify its colors.

We've adapted to paved roads, airline travel, movie screen colors, television colors. We'll probably continue to adapt to monitors and mobile devices. What degree of change in our vision will happen because of HDR is anyone's guess. We don't forget so easily. The world of photography used to be only black and white. We haven't forgotten that and still use it today. Music streaming didn't make us forget about vinyl, which is having quite the resurgence. But, yes, the world evolves. We don't remember or forget absolutely.

There’s always something new under the sun.

We've adapted to paved roads, airline travel, movie screen colors, television colors. We'll probably continue to adapt to monitors and mobile devices. What degree of change in our vision will happen because of HDR is anyone's guess. We don't forget so easily. The world of photography used to be only black and white. We haven't forgotten that and still use it today. Music streaming didn't make us forget about vinyl, which is having quite the resurgence. But, yes, the world evolves. We don't remember or forget absolutely.

Ed is correct - I'm talking about biological adaptation, i.e. physical change in our structure and function. Our predecessors adapted to living on land by developing lungs that could extract oxygen from the atmosphere instead of the ocean. They adapted to use of clothing and climate-controlled shelter by losing hide and hair. We're adapting to cooking our meat and using utensils by losing our teeth - alligators have about 80 and we have 32. And we'll probably adapt to the depletion of the ozone layer by genetic structural change in our skin, since melanoma is increasing at an alarming rate.

 

If we start making everything brighter and more colorful because we like the way HDR images look (which appears to be the case), we'll adapt by mutation of the genes that produce opsins. Then future generations will be less sensitive to color & contrast because they won't need more sensitivity to either one - this is how we lost our ability to see UV light. Life is tough - adapt or die!

Edited by otislynch

My point wasn’t a semantic one about the proper use of “adapt”. You mentioned a concern about HDR. I’m no more concerned about the long-term effects of HDR on vision than I am about the long-term effects of movies or TV on vision.
There’s always something new under the sun.

Otis, I’m not denying there may be physical changes. I’m saying I’m not concerned about them. The world is changing in a myriad of ways and physical changes are bound to be part of that. I’m more concerned, for instance, about physical changes caused by the overuse of antibiotics which make them less effective against infection in humans than I am about adapting to HDR. Unless we destroy it, we will hopefully always have nature as the counterbalance to HDR photos and other media.
There’s always something new under the sun.

 

(snip on babies and color vision)

 

We don't have to develop color vision - we're born with it. This article from Scientific American may help clarify what happens. Infants do see colors, but their visual cortices aren't well developed for the first few months and their brains process color perception differently.

 

I was remembering from about 25 years ago, so older than the SA article. And I believe that the one I remember doesn't disagree with the article.

 

The talk had to do with how the brain knows which neuron in the optic nerve comes from what part of the visual field.

 

From the part I don't remember as well: the eye generates an image of a line slowly moving across the visual field, in randomly chosen directions. This is done before birth, and it allows development of the visual system. I believe that I sometimes see this in dark enough rooms, when I am close to asleep.

 

The next question relates to color vision, and how the brain knows which neuron gives which color signal, after it knows which one is where in the visual field. This one, I believe, happens after birth. The baby would still have the sensation of color, but wouldn't at the beginning have a properly colored image.

 

It was after the talk that I asked the speaker about raising a baby in a monochrome world, which is an experiment that no one would want to do.

 

Also, this reminds me of some thoughts I had when I was young, on whether different people have the same psychological sensation for the same colors. I had wondered at the time whether everyone had the same favorite psychological color, even if it matched a different physical color. There might be a random component in the initial assignment of neurons to colors.

-- glen


My point wasn’t a semantic one about the proper use of “adapt”. You mentioned a concern about HDR. I’m no more concerned about the long-term effects of HDR on vision than I am about the long-term effects of movies or TV on vision.

I'm not one to beat a dead horse, Gary. Ordinarily, I wouldn't respond further here. But we suffer as a species by underestimating the importance of environment in our own evolution. As a physician, I feel the need to try to improve this if I can. There's actually evidence that watching television is associated with a decrease in visual acuity. For example, a 2014 study (Mushtaq et al. Effect of TV watching on vision in school children. International Journal of Research in Medical Sciences) found that "...children watching TV for less than 1 hour [per day] had visual acuity of 6/6...whereas children watching TV for 1-2 hours, 2-3 hours and >3 hours had lower visual acuity". I don't know of any long term studies yet, but we're starting to see visual problems associated with occupational computer monitor use too (e.g. this report). This kind of change will affect our abilities as photographers and influence design of future cameras. And to remain OT, it would certainly change our approach to editing.

 

I wouldn't be at all surprised to see demand rise for some kind of HDR-like "enhancement" of eye level viewfinders in the future. That would be the beginning - and it's not an unreasonable concern.


  • 2 weeks later...

I wouldn't be at all surprised to see demand rise for some kind of HDR-like "enhancement" of eye level viewfinders in the future. That would be the beginning - and it's not an unreasonable concern.

 

I think it is here with mirrorless to some extent now. Look through the viewfinder of a DSLR, then look through the viewfinder of a Sony A7RIII, as an example. Go into a room with low light and compare the two. The mirrorless shows the image as it will be captured with the camera settings; you can see in the darkness with pushed ISO and slowed shutter speed. The DSLR shows only what the eye sees through the lens. No doubt the enhancements will continue to evolve as camera companies try to outdo each other. It can be a dramatic change, like having some cybernetic vision enhancement. For me it took a bit of getting used to, and once I did, going back to the DSLR I notice I can't see in the dark and I don't have focus peaking, where the parts of the image that are in focus shimmer with a designated color.
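Focus peaking itself is conceptually simple: find where the image has strong local gradients (sharp edges are exactly where the lens is focused) and paint those pixels in a bright color. A rough sketch of the idea in NumPy follows; real cameras use carefully tuned edge filters and thresholds, so everything here, names and numbers alike, is purely illustrative:

```python
import numpy as np

def focus_peaking_mask(image: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Return a boolean mask of pixels the EVF would highlight as 'in focus'.

    In-focus regions have strong local gradients, so a simple
    gradient-magnitude threshold approximates the shimmering overlay.
    """
    gy, gx = np.gradient(image.astype(float))  # per-axis finite differences
    magnitude = np.hypot(gx, gy)               # edge strength at each pixel
    return magnitude > threshold

# A frame that is flat (defocused) on the left, with a hard edge at right.
frame = np.zeros((5, 8))
frame[:, 5:] = 1.0  # sharp step: strong gradient along the boundary
mask = focus_peaking_mask(frame)
print(mask.sum(), "pixels would be highlighted")
```

A real implementation would then composite the mask over the live view in a designated color (red, yellow, etc.) at every refresh of the EVF.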

 

I still shoot a lot with the DSLR, switching back and forth between the two based on each camera's capabilities and what I want to accomplish. It's a tool.

Edited by Mark Keefer
Cheers, Mark
