I've decided that I'll get a nice scope this year, so I've been looking at reviews in back issues of BW magazine. Anyway, what surprises me most is the inconsistency. Here are some of the optics scores:
In 2003 survey:
Leica APO 62 optics = 9.5
Swaro APO 66 optics = 9.5
Zeiss diascope 65 optics = 9.0
Opticron HR 66 ED optics = 8.0
Kowa 663 ED optics = 7.0
In 2002 survey:
Leica APO 62 optics = 9.5
Swaro APO 66 optics = 9.5
Zeiss diascope 65 optics = 8.5
Opticron HR 66 ED optics = 9.0
Kowa 663 ED optics = 9.5
What worries me is that in 2002 the Kowa and Opticron optics were rated very highly, whereas in 2003 they both scored substantially less, with the Kowa rated as mediocre. Another example is the Nikon 78mm scope with zoom eyepiece, which in 2002 was rated by BW mag as a dog with milky optics, poor build and poor design, yet several respected reviewer sites, e.g. Alula, consider the scope to be first rate. I wonder if this reflects a lack of consistency in the BW mag testing methodology, a lack of ability on the part of the testers, or genuine sample variation. Or could it be that using a single number to rate optics is a non-starter, since it depends too much on what weight is given to each aspect, e.g. brightness, contrast, centre sharpness, edge sharpness, FOV, etc.?
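To show what I mean about weighting, here's a minimal sketch. The per-aspect scores and the two weighting schemes are invented for illustration (they are not BW mag data): the same scope lands on a very different single number depending on which aspects the reviewer emphasises.

```python
# Hypothetical per-aspect scores for one scope (invented numbers,
# not taken from any review).
aspects = {
    "brightness": 9.0,
    "contrast": 9.5,
    "centre_sharpness": 9.5,
    "edge_sharpness": 7.0,
    "fov": 8.0,
}

def overall(scores, weights):
    """Weighted average of the per-aspect scores."""
    return sum(scores[a] * weights[a] for a in scores) / sum(weights.values())

# Reviewer A weights every aspect equally.
equal = {a: 1.0 for a in aspects}

# Reviewer B cares mostly about contrast and centre sharpness.
centre_biased = {
    "brightness": 1.0,
    "contrast": 3.0,
    "centre_sharpness": 3.0,
    "edge_sharpness": 0.5,
    "fov": 0.5,
}

print(f"equal weights:  {overall(aspects, equal):.1f}")          # 8.6
print(f"centre-biased:  {overall(aspects, centre_biased):.1f}")  # 9.2
```

A swing from 8.6 to 9.2 for identical glass is about the size of the year-to-year swings in the scores above, so a change of weighting (or of tester) alone could account for a fair bit of the inconsistency.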
Interestingly, the well-known Alula scope survey noted that several of the top-end scopes they tested were probably lemons. So it looks as if there is quite a bit of sample variation.
So, how does Joe Public make sure that (s)he gets a good 'un and not a lemon, and does buying a Leica/Swaro reduce the probability of a lemon?
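One back-of-the-envelope way to think about it (the lemon rates below are pure guesses, not measured figures for any brand): even a modest per-unit lemon rate shrinks quickly if you can compare a few units side by side before buying.

```python
# Purely illustrative: the lemon rates are assumptions, not real data.
def p_stuck_with_lemon(lemon_rate: float, units_compared: int) -> float:
    """Chance that every unit you compare is a lemon (assuming
    independent units), i.e. the chance you still end up with a
    lemon after picking the best of the batch."""
    return lemon_rate ** units_compared

for rate in (0.02, 0.10):  # guessed: 2% for a premium brand, 10% otherwise
    print(f"assumed lemon rate {rate:.0%}: "
          f"first box {p_stuck_with_lemon(rate, 1):.1%}, "
          f"best of three {p_stuck_with_lemon(rate, 3):.4%}")
```

On those invented numbers the brand premium helps, but being able to compare a handful of samples at the dealer helps far more, which would suggest the badge matters less than the chance to test before you buy.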