PhotoZone retest of 70-200/2.8 IS


awindsor


For what it is worth, the PhotoZone retest of the 70-200/2.8 IS is now <A HREF="http://www.photozone.de/8Reviews/lenses/canon_70200_28is/index.htm">online</A>. Better, but still not stellar. Even with L lenses there seems to be considerable sample variation (the 300/4 test had a similar outcome, and the second sample tested was sent to Canon for servicing twice before achieving its excellent results).


I don't think there's too much wrong with this second sample. At 200mm it is slightly less sharp than the non-IS version and the 200/2.8L II, which is exactly what one would expect.

 

Fairly close to the non-IS at 70mm, a bit better in fact. Slightly worse at f/2.8 than the 85/1.8 and about the same stopped down a bit more. Again, what one would expect.

 

This sample variation is why I test my lenses when they come in. At least I'll spot any that are really bad.


Three samples, no stunning result, and some QC problems along the way. Do you think that review would ring an alarm bell at Canon head office? Enough to prompt a revision of that lens?

 

Or are people expecting too much of that lens because it is f/2.8 and very expensive?


When are they going to do multiple retests of all their other lenses? Why just pick on one? Maybe others aren't representative either.

 

I'm not sure what the point is of numerical tests giving precise scores to lenses if the scores can vary all over the map depending on the sample tested.

 

Sending one back to the manufacturer 3 times just casts doubt on the whole system and testing methodology.


I don't know about, or care about, MTF charts. But after two years I do know that the 70-200/2.8 IS is, hands down, the best zoom lens I've ever owned, and it never fails to impress me even now. You probably wouldn't have to look very hard to find a lot of people who will agree.

 

I've never relied on PhotoZone-type tests to decide whether or not a lens is right for me. I prefer hands-on tests and the opinions of seasoned pros, and I'll read user reviews of a lens after it's been out for a while. Good luck.


I tend to agree... something stinks.

 

First, we have PhotoZone doing all these fancy tests on a cropped camera...

 

and now they are retesting a lens because it seems bad? Is it their methods? Are they data shopping? Or does Canon really have QC problems?

 

Why not retest the 17-55/IS because the results seemed "too good to be true?"


Guys, lighten up a bit. I've the greatest respect for Bob's views, since I know he comes from a background of professional involvement in optical testing, but I'm your in-house statistician, and I am here to tell you that what's going on here is pre-statistical - hypothesis generation rather than hypothesis testing - and that's not a waste of time.

It would be really interesting to have someone gather a sample of, say, ten copies of the same lens from around the place and put them through a properly calibrated (that is, checked for repeatability) test procedure for both FF and 1.6-factor, but nobody seems to have the resources on offer to make that happen. At the moment the PZ tests are as good as anything published that I've come across, and what we should be doing is encouraging the tester rather than carping about his efforts.

 

Of course you can't use test results in isolation, and user experience is valuable, but we all know how much myth is generated on the forums even when there is sometimes a grain of truth underlying it.

If you think PZ only re-tests problem lenses, take a look at the reports on the 135/2 and 200/2.8. Neither of these is a lens where users have reported sample variability, and the PZ tests are close but not identical, and confirm the user consensus about the quality of the lenses. Although that's a very small sample, it does point towards reasonable repeatability of the procedure, and suggests that when a lens shows up badly there may be something worth investigating. For example, what the tests on the 300/4 IS show is that the lens is very good at its best, but getting there may be a struggle - a mis-match between design robustness and the quality control needed to handle it, perhaps.


I agree with what Robin is saying here. Talk about shoot the messenger!

 

The new results are just what one would expect, and compare well with other online data such as

http://www.the-digital-picture.com/Reviews/ISO-12233-Sample-Crops.aspx?Lens=103&Camera=9&FLI=4&API=0&LensComp=245&CameraComp=9&FLIComp=0&APIComp=0

and

http://www.the-digital-picture.com/Reviews/ISO-12233-Sample-Crops.aspx?Lens=103&Camera=9&FLI=0&API=0&LensComp=106&CameraComp=9&FLIComp=0&APIComp=2

 

I can see no reason to slag off PhotoZone's methods just because they happened to find a QC outlier on your favourite lens; these things are to be expected with mass-market consumer goods, even expensive ones.

 

I know Bob has concerns about the repeatability of these sorts of tests. I am sure that testing using an optical bench is tricky and requires great care to get good repeatability.

 

I don't personally feel that is the case with this ISO-type method.

 

I myself use a similar method to PhotoZone's, but with my own software, and I must say with a hell of a lot less care about the set-up, and only in domestic surroundings.

 

Between retests of the same lens, including a complete take-down and set-up, I get fairly good repeatability: typically within 5-10% of the reported MTF50 value, and often a lot less. The statistical variation due to AF can be more than this.
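For anyone who wants to quantify that sort of run-to-run spread themselves, here is a minimal sketch; the MTF50 readings are invented purely to show the arithmetic, not real measurements from any lens.

```python
# Hypothetical repeated MTF50 readings (lp/ph) for one lens at one focal
# length and aperture, with a full take-down and set-up between runs.
# The numbers are invented purely to illustrate the arithmetic.
readings = [1480, 1530, 1455, 1510, 1495]

mean = sum(readings) / len(readings)

# Each run's deviation from the mean, as a percentage of the mean.
spreads = [abs(r - mean) / mean * 100 for r in readings]

print(f"mean MTF50: {mean:.0f} lp/ph")
print(f"worst-case run-to-run spread: {max(spreads):.1f}%")  # about 2.6% here
```

A worst-case spread comfortably under 5% would be consistent with the repeatability figure quoted above.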

 

I am sure that PhotoZone do a lot better on repeatability than me, considering the care they operate with and their better facilities.


I guess it is only fair for Bob to question their testing methodology. After all, I objected to his <A HREF="http://www.photo.net/equipment/canon/is_lenses/">testing of lenses</A> by photographing a deer (a doe, a deer, a female deer?) standing in front of dense brush.

 

Bob even knew how to ensure perfect focus

<blockquote>

Without strapping a tilted scale to the deer's back it's a bit hard to tell for certain!

</blockquote>

but could not be bothered to do so.

 

Moreover, not only did Bob use only one sample of each lens, but only one sample of rabbit and deer.

 

The comments that follow the review are a great example of reasoned discussion ;) This thread still has a long way down to go.

 

For those who are interested in pointless, unreliable single-sample tests on full frame (since pointless, unreliable single-sample tests are less useful on a crop-factor camera), SLRGear now does some testing on a 5D.


Yes, I remember that article from when I was researching my move from Canon FD to digital. I thought it gave clear results showing what IS/VR can do.

 

In my own IS tests I use objective sharpness measurements. Generally these show good trend lines, although any single data point shows statistical variance due to variations in AF accuracy and the photographer's ability to hold the lens still.

 

Of course, the statistical variance can be reduced by taking many samples at a given point and averaging them. This is fine if you have the time and energy for it.
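The gain from averaging follows the usual 1/sqrt(n) rule for the standard error of the mean. A quick simulation makes the point; the "true" sharpness value and the noise level are both made up for illustration.

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is repeatable

TRUE_SHARPNESS = 100.0  # notional true sharpness score for the test point
NOISE_SD = 5.0          # shot-to-shot scatter from AF and shake (invented)

def measure(n):
    """Average n simulated readings of the same test point."""
    return sum(random.gauss(TRUE_SHARPNESS, NOISE_SD) for _ in range(n)) / n

# Scatter of single shots versus 10-shot averages, over 500 trials each.
singles = [measure(1) for _ in range(500)]
tens = [measure(10) for _ in range(500)]

print(statistics.stdev(singles))  # roughly 5
print(statistics.stdev(tens))     # roughly 5 / sqrt(10), about 1.6
```

Ten-shot averages cut the random scatter by a factor of about three, which is exactly why averaging is worth the time when you can spare it.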

 

As a side note, this is a typical area for systematic errors when people test whether filters affect lens sharpness. They fail to account for the variation in AF, or in some cases to refocus after fitting the filter. The method I used was to make 10 AF operations each with and without the filter; you then find there is no difference from fitting a filter.
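A rough sketch of that with-and-without comparison; every reading below is invented, standing in for real measurements, just to show how the filter effect gets compared against the AF scatter.

```python
import statistics

# Hypothetical sharpness readings, 10 AF operations each with and
# without the filter fitted; all values are invented for illustration.
without_filter = [1502, 1488, 1510, 1495, 1480, 1505, 1492, 1500, 1486, 1498]
with_filter    = [1498, 1491, 1505, 1489, 1483, 1502, 1490, 1496, 1484, 1500]

diff = statistics.mean(with_filter) - statistics.mean(without_filter)
af_scatter = statistics.stdev(without_filter)  # spread due to AF alone

print(f"mean difference: {diff:+.1f}")
print(f"AF-to-AF scatter: {af_scatter:.1f}")
# A mean difference well inside the AF scatter is not evidence that
# the filter changed anything.
```

With these invented numbers the mean difference is a small fraction of the AF scatter, which is the pattern you would read as "no measurable effect".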

 

Whilst I agree it would be desirable to test 10 lenses of each type so that the mean and +/-2 sigma variance limits of their performance could be established, it is unlikely this will ever be done, even on a small scale. I also doubt that many people would really understand the results anyway.
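The statistics themselves would be trivial once the ten copies were tested; a sketch with invented MTF50 scores for ten hypothetical copies of one lens model:

```python
import statistics

# Invented MTF50 scores for ten hypothetical copies of one lens model.
samples = [1450, 1510, 1480, 1395, 1525, 1470, 1500, 1440, 1490, 1460]

mu = statistics.mean(samples)
sigma = statistics.stdev(samples)  # sample standard deviation

print(f"mean: {mu:.0f}")
print(f"+/-2 sigma band: {mu - 2 * sigma:.0f} to {mu + 2 * sigma:.0f}")
```

A single tested copy falling outside that band would be the QC outlier case; one inside it tells you little more than "typical", which is the communication problem with publishing single-sample scores.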

 

So while single-lens tests are problematic and make any one-for-one comparison suspect, I don't think you can call them pointless, as they do give an indication of typical performance in general. [Not all the lenses can be outliers :)]

 

It would be more useful if sites such as PhotoZone and the-digital-picture showed results for both sides of the optical axis; neither makes it clear what they are doing in this respect. That would at least make it possible to see whether a lens has significant decentring, which would be a good clue to the build accuracy and to whether the results were a QC outlier.

 

I do agree that some overview document should try to explain to readers about part-to-part variation and measurement uncertainty; however, I suspect there is little solid understanding of either.

 

The best one can do is look at the results from several sites to try to get an understanding of the relative merits of a lens, although I do feel that there is too much emphasis on sharpness in photography these days. But I don't agree that ignorance is bliss.

