Discussion in 'Nikon' started by Mary Doo, Jan 28, 2021.
I guess she left her mono or tripod in the car. That looks uncomfortable.
FWIW, there is a photography tour leader in Australia who takes customers out for wildlife photography. Many of the customers just use breathing techniques, take rest breaks, and handhold the lens. It was probably an f/4 lens, but hey ... The leader spoke at our camera club one evening, and some of us asked whether they used a monopod or a beanbag, but overwhelmingly most people just found it easier to handhold and take rest breaks.
Does the wildlife respect such timings?
It is quite widely practiced. The idea is that the cows would come home before one finished fussing about with a tripod or monopod for fast-moving subjects such as birds in flight. This is not to say that the tripod does not work, just that it is not as flexible. So the method is: at the start of each shooting session, acclimate yourself by choosing a target and repeating the shot ten or more times to practice accuracy. Rest the lens, with hood, on the ground and lift it only when you are ready to shoot.
I tried to look for your images to find a practical application but could not find any. I don't believe Topaz Gigapixel has ever promised pixel-by-pixel, DNA-accurate reconstruction of teeny-tiny files to make them wall-size. People who use it are those interested in enlarging their already-fit-for-print images of reasonable size.
Seems correct - depending on the model there might not be much weight difference to a 600/4 though.
Found this: https://photographylife.com/wildlife-photography-tutorial/8
I see color moiré in downsized displays if it covers a large enough area.
As I understand it, the phenomenon has to do with the Bayer filter array (BFA): overlapping detail is recorded by different colors, and that confuses the demosaicing. It will always show up on repeating patterns if the lens can resolve more than the sensor can record. Being out of focus, or any vibration (camera shake, shutter or mirror shock), can negate the need for an AA filter; and even AA filters aren't an absolute solution, because there's a balancing act between eliminating all color moiré and not losing too much resolution.
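The Bayer mechanism described above can be sketched in a toy 1-D model: each color channel samples the scene on an even sparser grid than the full pixel grid, so a fine repeating pattern aliases differently per channel and the demosaiced result invents false color. The numbers below are purely illustrative, not any particular camera.

```python
import numpy as np

# Toy sketch of why a Bayer mosaic produces *color* moire. A neutral gray
# fine pattern is sampled by alternating "red" and "green" photosites.
x = np.arange(64)
scene = np.sin(2 * np.pi * 0.48 * x)   # gray detail near the pixel Nyquist

red = scene[0::2]     # red photosites: every other pixel
green = scene[1::2]   # green photosites: the remaining pixels

# Naive demosaic: stretch each sparse channel back to full resolution.
red_full = np.repeat(red, 2)
green_full = np.repeat(green, 2)

# In a neutral gray scene both channels should agree. The large residual
# shows the channels disagree after reconstruction, i.e. false color.
print(float(np.abs(red_full - green_full).max()))
```

Because the detail frequency sits near the full-grid Nyquist limit, the two half-rate channels reconstruct nearly opposite values, which is exactly the colored banding seen in practice.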
The solution is to make the lens the AA filter. That is, if the sensor resolves more than the lens, and not the other way around, you won't get any color moiré. The main reason you don't see it with your Sony cameras probably isn't the lenses so much as vibration, which is why you want a solid tripod with the mirror up and the shutter open before the exposure begins.
There is another factor at play. Many lenses are unable to resolve at the pixel level with high-resolution sensors. Besides the intrinsic properties of the optics, resolution is degraded, as you suggest, by focusing accuracy and camera motion. Interference between fine repetitive patterns and the pixel spacing is sufficient to produce moiré patterns; the Bayer array and color interpolation probably contribute. Phenomena like "purple fringing" (unrelated to chromatic aberration) are caused by parallax between the sensor, micro-lens array, and Bayer filter.
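The interference between a fine pattern and the pixel spacing is classic aliasing, and the arithmetic is easy to check. Below is a minimal sketch with hypothetical numbers (not any specific sensor): a pattern finer than the sensor's Nyquist limit folds back and is recorded as a much coarser, spurious pattern.

```python
# Toy 1-D illustration of moire as aliasing. All numbers are assumed,
# illustrative values, not taken from any real camera.
pattern_freq = 0.45   # subject detail, cycles per micron
pixel_pitch = 4.0     # microns between photosites (assumed)

sample_freq = 1.0 / pixel_pitch    # 0.25 samples per micron
nyquist = sample_freq / 2.0        # 0.125 cycles per micron

# Frequency the sensor actually records: the detail folds back
# to the nearest multiple of the sampling frequency.
alias = abs(pattern_freq - round(pattern_freq / sample_freq) * sample_freq)

print(f"Nyquist limit: {nyquist} cycles/um")
print(f"Recorded (aliased) frequency: {round(alias, 3)} cycles/um")
```

Here 0.45 cycles/µm detail, well above the 0.125 cycles/µm limit, is recorded as a false 0.05 cycles/µm pattern, which is why the moiré bands look much coarser than the detail that causes them.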
For what it's worth, I still do quite a lot of printing; nowadays I make prints for boxes of A3+'s and binders of A4 prints. I enjoy the process of printing and seeing the details and the whole image effortlessly. I also show them to friends and colleagues, though less often than in the past (2020 doesn't count, as there was much less in-person contact).

I have noticed that if I photograph an event, for example, and only deliver digital files to the person or people who organized it or are the main participants, the photos are often forgotten and not shared much. If I make some kind of printed outcome, for example a set of documentary photos printed on large paper and displayed on the department wall (say, if it is a work-related event), there is quite a bit of discussion and interaction around the photos.

I believe printed images have their own advantages. One is their semi-permanence due to their physical nature. Another is that it's possible to see multiple images in detail at one glance, and the images can form a larger whole together. On a computer screen or mobile device, one typically views one image at a time, and often the details are not visible to the extent they are in prints. Of course, screen resolutions are increasing, but I still don't find it as convenient to view multiple images on a computer as on a desk, for example. I especially like trying to make sets of images that work together, figuring out the minimum set that tells the story; the images should work both individually and as a group.
I print a fair bit too. Mostly for myself, some to hang on the wall; those are mostly 24 x 36. I think when I die, all my pics will be forgotten anyway. It's not like the grandkids are going to poke through digital archives (which are probably locked with a password they don't have), so the only record will be the prints. Like I did the other year, digging through bankers' boxes full of old photos. That tradition mostly won't be there for the next generation.
You never know. I think someone somewhere will do exactly that, though there are no guarantees in specific cases. I think the issue is being saturated with too many images these days, that no one image stands out as special and warranting a longer look. But we can always try to achieve that.
OK. Perhaps this is more 'photographic' and to your liking.
This is the framing I would have liked to have got of this Oystercatcher. Unfortunately it's only about 1/4 of the frame I was able to shoot - standing in a boat 30 ft offshore from the bird.
Here's a comparison of a Gigapixel upres against what I could get from PhotoShop.
Agreed, that Gigapixel worked much better on a 'natural' subject.
What I found peculiar was that allowing Capture One to do some 'reconstructive surgery' at the pixel level, before feeding the image to Gigapixel, gave a noticeably different result.
This time the results are much closer. Except the PS upscale and sharpening took almost no time at all, while Gigapixel took well over 3 minutes at the job.
I'll have to print the results out for a better comparison, but I'm still not seeing any amazing day/night improvement from Gigapixel at 'normal' viewing distances.
There's also a lot of reconstruction done in an inkjet printer driver, which might practically eliminate small differences.
And the above as it appears in an inkjet print.
Some transformation IMO!
A repeat try on the camera picture gave no perceptible advantage to Gigapixel at all.
So it seems as if Gigapixel has been trained to emulate feathers and beaks, but not metal surfaces. Which makes it a bit too unpredictable and picky in my view.
Then there's This.
It's not really sharp to begin with; no image manipulation will save that pic.
Rodeo, the original purpose of this thread is not about which program is the better extrapolator. I mentioned Topaz Gigapixel AI only because it is probably the best program for enlarging an image, and that such an excellent tool can be used to make a bigger print in the event that, say, 24MP is not enough - on the rather rare occasion that one needs to do so.
Note: Although such a program can enlarge a tiny file, the best candidate for using such a program is an image that is already sufficiently large. Yes, the program can enlarge a small file, but no one should expect miracles. In fact, such capability is probably not desirable because then anyone would be able to capture any low-res image from the internet and do whatever one wants to do with it.
You compared Photoshop vs. Gigapixel and indicated that Photoshop is better. I, on the other hand, like to defer to expert opinions, and I mentioned one of the many positive reviews. You mentioned the face that showed up in an article you read. That appeared to be an inexplicable option, one I had not noticed, because normally I just want the program to do what I ask of it, so that information seemed irrelevant.
That said, you left me with no choice but to spend time examining your allegation. I enlarged a 1280 x 431 file by 600% to 7680 x 2586. As mentioned, this source file is not optimal for good practical results, but it is small enough to be displayed here. My test results indicate that Gigapixel is better.
I hope you will not make me relitigate things again because, although I am retired, I am quite busy.
6x or 600%.
I selected "600%" in Photoshop and "6x" in Topaz Gigapixel, and both produced "7680 x 2586" files.
The original file was "1280 x 431" (the one displayed in my post). The two small pieces shown below it are snippets of the "7680 x 2586" files displayed at 100%.
But your images are labeled 600x which would be 60000%. 600% would be 6x, which is believable. 600x is not believable, even for Gigapixel.
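The factor-versus-percentage confusion in the last few posts is simple arithmetic, sketched below using the 1280 x 431 dimensions quoted earlier in the thread (a 6x linear upscale of 431 gives exactly 2586, so the width must come out to 7680):

```python
# Quick check of linear scale factor vs. percentage, using the
# dimensions quoted in the posts above.
width, height = 1280, 431

def upscale(w, h, factor):
    """Multiply both dimensions by the linear scale factor."""
    return w * factor, h * factor

print(upscale(width, height, 6))              # 600% of original = 6x
print(f"600% as a factor: {600 / 100}x")      # 6.0x
print(f"600x as a percentage: {600 * 100}%")  # 60000%
```

So "600%" and "6x" are the same operation, while a label of "600x" would indeed mean 60000%, far beyond what any upscaler is asked to do here.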