Mega Review of the best 8x42...

They must have gotten a bunch of lemons, then? You seem to suggest that all these bins should be equal in terms of sharpness. They should be darn close, of course (that's an impressive selection), but differences do exist and the results aren't really surprising, are they? Well, maybe the Leica, but apparently they had multiple samples of that one. I have to think there was some serious splitting of hairs involved as well. Overall, though, the results aren't too surprising.

I don't think Henry is suggesting that all binoculars should be equal in terms of sharpness. But the question is whether any differences are caused by differences in the design of these binoculars or simply by some of the binoculars being lemons (and others cherries). That's pretty difficult to prove unless you've got a large number of binoculars of each type to evaluate so you get to know how large the differences between individual binoculars of a given type are.

Hermann
 
But sorry to insist on the Canon. Even setting the IS aside, it is too much of an outlier for comparison.
For a start: it is a porro, its prism is different, and the ergonomics, weight, maintenance and durability all differ. So many things are different... in my opinion, the fact that it is a porro alone already makes the comparison awkward.

When you check the optical performance of binoculars, the type of prism used doesn't matter. Or are you seriously suggesting that porros should be excluded from a test?

In fact, my main complaint about the test is that it didn't include some high quality porros like the Nikon SE and the Swarovski Habicht. They would have given quite a few of the roofs a run for their money.

The exact same thing happens with their scopes. I don't know how Nikon manages to sell those... the Prostaff, the 60 and 82 Fieldscopes and their EDG series!! They all look so overpriced that I wonder if they mix some diamonds into their prisms.

Well, over here the Nikon EDIII and the ED82 are far cheaper than any of the other alphas at the moment. You pay less than half of what you pay for the Zeiss, the Leica, the Swarovski and the big Kowa.

Hermann
 
Give me some documented cases where you have tested a Swarovski binocular and found optical defects, and tell me what they were. I would be very interested in hearing what kind of optical defects you have observed on them, because Swarovski prides itself on quality control and 100% testing of all binoculars being shipped to ensure no problems. ...

Hi Dennis,

I don't actually look through bins much except the ones that I buy, but I have experience with all the major brands, and in the course of making purchases I've found a surprisingly high percentage of flawed instruments at all price levels. I've bought most of my bins mail-order from reputable places (e.g. Eagle Optics, Cabelas, B&H, Adorama). I routinely find flaws in alignment or assembly of non-alphas, flaws that are so bad that I have to return them for exchange. As for alphas, I've less often found optical issues, but they do exist, and other issues, such as with focus/diopters, or bits of stuff (fibers) falling onto the inside of the ocular and imposing themselves on the view are not uncommon in my experience.

As for alphas (including Swarovski) and optics related issues, here are some examples:

Swarovski 8x32 EL: Was slightly out of alignment the day I received it, and went way out of alignment within a few days of use (They were fixed promptly and under warranty. According to the repair note, the ocular was not properly tightened after adjustment following assembly).

Swarovski 8.5x42 EL: First unit I ordered had a defect in the armor and the right side focus became uncoupled at close distances (focused down to about 10 feet while the left side continued focusing down to 7)--returned to vendor. Second unit I ordered had a problem with the synchronization between the left and right side focus at distances around 20 feet--returned to vendor. Third pair I ordered was perfect and remains one of my favorites, though I've had to return it once to Swarovski to fix the focus drive which became extremely stiff.

Leica 8x42 Ultravid: My unit had a coating flaw on one of the objectives--when I breathed on the lens it revealed a human palm print recorded as a flawed application of the outermost coatings (Leica replaced the objective under warranty).

Zeiss 7x42 Classic: First one I ordered had a subtle flaw in the assembly of the ocular yoke which caused the ocular to be slightly canted. It didn't affect left/right alignment, but the focus didn't ever seem right across the field on either side. Returned to vendor. Second unit has had issues with focus drive and diopter equivalent shifts in left/right focus at high temps (maybe now fixed) but it is still one of my favorite bins of all time.

Bausch&Lomb 8x42 Elite (waterproof version): The optical performance of my unit seemed very poor overall (color, contrast, size of sharp sweet spot). Sent to B&L to fix some other problems (with hinge tension and crud in the view), which they fixed, but they said the optical performance was within spec. Immediately took it back to B&L (personally, since I lived near the corporate headquarters) and talked to the repair lab director. The bino was put through some kind of optical testing and deemed on the low end of the acceptable range. At the B&L lab, I was given the choice of my repaired unit, a new replacement unit of the same model, or the (then newly released) 8x50 Elite model. I tried the new 8x42 unit they offered me and it was optically outstanding (the difference between it and my unit of the same model was like night and day with respect to contrast, sharpness, and color accuracy)! The sweet spot on the new unit was huge compared to my original unit. I kept that one. To this day, I'm still amazed at the different views of those two units--it was hard to believe they were the same binocular model. Also interesting is that the optical performance of the left and right sides of each unit was very similar (it wasn't that one side of the original unit had a gross flaw). I've since had a similar experience with two units of a non-alpha--the Eagle Optics 8x32 Ranger Platinum.

Final comment: If you spend any time in the scopes forum here at Birdforum, you'll find plenty of examples of majorly optically flawed units from Zeiss, Leica, and Pentax. Swarovski, Kowa and Nikon seem to do better, but they release for sale the occasional optical lemon as well.

--AP
 
When you check the optical performance of binoculars, the type of prism used doesn't matter. Or are you seriously suggesting that porros should be excluded from a test?

It does matter. Prisms have inherent characteristics... contrast, for example. Comparing a porro to a roof and giving the porro a better result because it shows a bit better contrast... is no real data.
Same thing as giving the roof a better result... because it is less bulky.
It is like comparing apples and oranges... and giving the oranges a better result because they are more citric.

I think one should compare roof to roof... porro to porro... and IS to IS. (And in this test we saw roofs vs. a Canon porro IS.)

Otherwise the inherent characteristics of the systems will get in the way of the fine distinctions needed to say "A shows a better result than B".

(By the way... the 60 and 82 are not Nikon's alphas and are not on the same level as the alphas. Their alpha line is the EDG series... incredibly expensive. Their EDG 65 costs more than the Swaro 80 and the ZV 85.)
 
In fact, my main complaint about the test is that it didn't include some high quality porros like the Nikon SE and the Swarovski Habicht. They would have given quite a few of the roofs a run for their money.

Hermann

Hi Hermann...

Mr. Salimbeni (Binomania) at the Cloudy Nights forum gave us a hint that they had tried something regarding a review of porros... I hope it will be a "Porro Mega-review"! :t:
 
It does matter. Prisms have inherent characteristics... contrast, for example. Comparing a porro to a roof and giving the porro a better result because it shows a bit better contrast... is no real data.
Same thing as giving the roof a better result... because it is less bulky.
It is like comparing apples and oranges... and giving the oranges a better result because they are more citric.

I don't get it. Maybe I'm a bit thick, but if I'm looking for a binocular (or a scope, for that matter), I'm interested in how it performs. If a test shows that porros are optically better than roofs, I've got to decide whether the smaller package, weatherproofing and so on are so important to me that I'm prepared to settle for the lesser optical quality of a roof.

Excluding porros because they use a different type of prism is ludicrous. What's next, excluding roofs with Abbe-König prisms because they're different from those using Schmidt-Pechan prisms?

Hermann
 
(By the way... the 60 and 82 are not Nikon's alphas and are not on the same level as the alphas. Their alpha line is the EDG series... incredibly expensive)

By the way, the Fieldscopes may not be Nikon's top line anymore, but they're still right up there with the alphas when it comes to optical performance.

And your point is?

Hermann
 
I don't get it. Maybe I'm a bit thick, but if I'm looking for a binocular (or a scope, for that matter), I'm interested in how it performs. If a test shows that porros are optically better than roofs, I've got to decide whether the smaller package, weatherproofing and so on are so important to me that I'm prepared to settle for the lesser optical quality of a roof.
Excluding porros because they use a different type of prism is ludicrous. What's next, excluding roofs with Abbe-König prisms because they're different from those using Schmidt-Pechan prisms?
Hermann
That was not what I meant.
You can compare optical quality even between a bino and a scope. Still, the more alike the two instruments and systems are... the more precisely and finely you will be able to distinguish the glass and its fine performance, without the inherent characteristics of the prisms getting in the way.

That is why, in my opinion, one should compare... roof to roof... and porro to porro. (Especially considering a porro IS... with a hanging prism inside. It is a completely different device.)

And your point is?

About the Nikons?
My point is that I happen to think that most of their scopes (except for the ED50) are overpriced.

Their 60ED for $1.5k... their 82ED for almost $2k...

Their EDG 65 costs more than the Swaro 80 and the Zeiss 85 (and I consider those last two a luxury already).
I honestly have never tested that EDG... but please... it is overpriced.
 
About the Nikons?
My point is that I happen to think that most of their scopes (except for the ED50) are overpriced.
Their 60ED for $1.5k... their 82ED for almost $2k...

The EDIII (60mm) is about $800 over here, the ED 82 about $1100. Can't beat those prices. I agree on the EDG though; they're too expensive and far too heavy.

Hermann
 
...
My main question is - is there any properly conducted research or documentation of any kind that gives guidance on the number of human testers needed to produce a statistically supportable result once the final (non-standard) optical element is added - our eyes (possibly plus glasses) and the interpretation supplied by our brains?

Without this I have no confidence (statistical or otherwise) that a single human observer will (or won't) arrive at a conclusion that is meaningful for me.
If, as I suspect, there is no such research, then multiple-observer testing - as in this case (thank you, Piergiovanni, and all others involved) - is a worthwhile and valuable addition to listening to everyone's views on this forum and personally trying the things.

As an ageing industrial mathematician I am interested to know if there is an answer to my question!
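
For a rough sense of the arithmetic behind "how many testers", here is a back-of-envelope sketch in Python (statsmodels) for a paired design in which every observer rates both binoculars. The effect sizes are invented purely for illustration - this is a sketch of the standard power calculation, not an answer to the formal-research question:

from statsmodels.stats.power import TTestPower

# Paired design: each observer rates both binoculars, so we solve for the
# number of observers needed to detect a mean rating difference of a given
# standardized size (Cohen's d) at alpha = 0.05 with 80% power.
analysis = TTestPower()
for effect in (0.8, 0.5, 0.2):  # large, medium, small effects (illustrative)
    n = analysis.solve_power(effect_size=effect, alpha=0.05, power=0.8)
    print(f"d = {effect}: about {n:.0f} observers needed")

# Roughly: a "large" difference needs on the order of 15 observers, a
# "small" one around 200 - which is why a single observer's verdict on a
# subtle difference carries little statistical weight.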

Hi,

I've been monitoring this thread and would not wish your important question to get lost. The simple answer is "yes," there are technical guidelines available on how to conduct multi-observer research and how to analyze the results. Right now I don't have the time to get into the details, but let me say that to do it correctly, steps must be taken at the front end to ensure that the individual observers do not cross-contaminate their ratings by sharing their views or opinions. I don't know if that was done here judiciously, but I suspect it wasn't. At the back end (i.e., the analysis) one must first assume that the observers were all different, and only merge their evaluations if there is no evidence that they actually differ. In this case, a trained analyst would probably have established individual binocular ratings for each observer and then used non-parametric procedures to test whether the observers could be assumed to have come from the same population. Only if that test were passed would it be valid or meaningful to merge the ratings.

Anyway, I don't have time right now. Those interested might read up on Likert scales, of which this is a seven-point variant. http://en.wikipedia.org/wiki/Likert_scale
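
To make the merging test concrete, here is a minimal sketch in Python of the kind of non-parametric check I mean. All the numbers are made up: five observers, ten binoculars, each rating being an observer's summed seven-point Likert scores for one binocular:

import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

# Hypothetical ratings: rows are observers, columns are binoculars.
ratings = rng.integers(low=10, high=36, size=(5, 10))

# Blocks are the binoculars, treatments are the observers: the test asks
# whether some observers rate systematically higher or lower than others.
stat, p = friedmanchisquare(*ratings)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")

# p >= 0.05: no evidence the observers differ, so merging (averaging) their
# ratings is at least defensible.
# p < 0.05: publish per-observer rankings instead of one pooled score.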

This is not intended as bashing, but some very misleading conclusions can in fact be drawn from improperly collected or analyzed surveys. It's happened before.

Incidentally, it's perfectly valid to assume that specimens represent the central tendency of the production run, so long as the results are understood in that context. Cherries and Lemons represent the extremes of the production distribution and are not "representative" in either case. Hence, there is a philosophical problem in using them.

Later,
Ed
 
I've left a long trail of posts here describing aberrations and sample defects in alpha binoculars and scopes. You're welcome to examine it, but I don't think you'll find what you're looking for because I've tried not to single out any particular manufacturer for condemnation or praise. That would require a very large test sample. It wouldn't be based on the self-congratulations of marketing material.

Returning to the Binomania tests, there are numerous examples of binoculars judged to be more or less sharp than others. What causes that, design or sample defect? A defective specimen is the first possibility that needs to be eliminated. Sample defects in binoculars are usually visually subtle because the magnification is so low, but they can cause a loss of sharpness and contrast. I know it's impractical to test many samples, but it's not so tough to bench test single specimens before they are submitted to the group.

I also agree with Ivan that the Canon doesn't belong in this test group.

"I've left a long trail of posts here describing aberrations and sample defects in alpha binoculars and scopes. You're welcome to examine it, but I don't think you'll find what you're looking for because I've tried not to single out any particular manufacturer for condemnation or praise. That would require a very large test sample. It wouldn't be based on the self-congratulations of marketing material."

Henry
Give me a documented case of a Swarovski binocular you tested and found an optical aberration so subtle that it would not have been identified by these reviewers before testing it.


"Returning to the Binomania tests, there are numerous examples of binoculars judged to be more or less sharp than others. What causes that, design or sample defect? A defective specimen is the first possibility that needs to be eliminated. Sample defects in binoculars are usually visually subtle because the magnification is so low, but they can cause a loss of sharpness and contrast. I know it's impractical to test many samples, but it's not so tough to bench test single specimens before they are submitted to the group."

I think you are really overly paranoid about sample defects. These guys said they are experienced enough with binoculars to detect when there is something obviously wrong with one. I think you are overplaying its importance, and I don't think it would have changed the outcome of this review at all. In fact, I have looked through just about all of these binoculars myself over the years and I would rank them about the same, so I don't think they had too many "sample defects". And the reviewers were experienced with the binoculars they were testing, so they would have been more sensitive to any optical defects in the binoculars under test.
 
I remember getting a brand-new Swarovski EL32 that had collimation problems and needed to be sent back to Austria. They didn't repair it, though; they just sent me a fresh one. So yes, I believe any alpha can come from the factory with a defect.

Anyway, that's history. I have my 12x50 SV now and I am going out to get my eyeballs high on the majestic views this baby gives. 8x magnification? Hehehe, not for me anymore 3:)

I looked through the SV 10x50 and 12x50 and found them amazing for high-magnification binoculars. So you really prefer the 12x over the 8.5x even with the smaller FOV? A 12x view is amazing, especially when the edges are tack sharp.
 
Hi Dennis,

I don't actually look through bins much except the ones that I buy, but I have experience with all the major brands, and in the course of making purchases I've found a surprisingly high percentage of flawed instruments at all price levels. ...

As for alphas (including Swarovski) and optics related issues, here are some examples:

Swarovski 8x32 EL: Was slightly out of alignment the day I received it, and went way out of alignment within a few days of use (They were fixed promptly and under warranty. According to the repair note, the ocular was not properly tightened after adjustment following assembly).

...

--AP

Don't you think defects like these would be noticed right away by these experienced observers, and the sample would be eliminated and another sample procured to replace it?
 
Sure they would notice major problems.

But it is not as if either there is a major problem or the binocular is perfect.
It is a gradient from major and obvious problems... to mid ones... to small ones... to tiny ones... even to imperceptible ones.

Cherries, lemons and an entire gradient between the two.

That is why it is important, when possible, to have more samples.

In my opinion more samples are even more important than more observers.

In this example, reducing the number of brands and observers to increase the number of samples (3 guys testing 3 Swaros vs. 3 Leicas vs. 3 Zeiss) seems to me a better approach from a methodological point of view.
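
A toy simulation in Python makes the point. Every number here - the true gap between models, the unit-to-unit "cherry/lemon" spread, the observer noise - is invented purely for illustration:

import numpy as np

rng = np.random.default_rng(1)
TRUE_GAP = 0.3     # assumed true quality edge of model A over model B
UNIT_SD = 0.5      # assumed unit-to-unit spread within each model
OBSERVER_SD = 0.2  # assumed per-observer measurement noise
TRIALS = 100_000

def wrong_verdict(n_samples, n_observers):
    """Fraction of trials in which model B (truly worse) outscores model A."""
    # Draw individual units of each model, then let every observer rate
    # every unit, and average all ratings per model.
    a = TRUE_GAP + rng.normal(0, UNIT_SD, (TRIALS, n_samples, 1))
    b = rng.normal(0, UNIT_SD, (TRIALS, n_samples, 1))
    a = (a + rng.normal(0, OBSERVER_SD, (TRIALS, n_samples, n_observers))).mean(axis=(1, 2))
    b = (b + rng.normal(0, OBSERVER_SD, (TRIALS, n_samples, n_observers))).mean(axis=(1, 2))
    return (b > a).mean()

# When the unit spread is larger than the observer noise, extra samples
# cut the error rate more than extra observers do for the same effort.
print("1 sample,  9 observers:", wrong_verdict(1, 9))
print("3 samples, 3 observers:", wrong_verdict(3, 3))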
 
Hi Ed -
Thanks for responding - I was aware of the statistical tools available, so don't worry about having to explain them.
The question was more about whether anyone knows of such formal research being carried out with binoculars - possibly by manufacturers. It could be useful for improving market penetration from their point of view, and from our point of view for confirming the validity of testing such as the one we are looking at.
I actually have additional interests in such research but will now look elsewhere - once again, thanks.
 
Sure they would notice major problems.
...
In my opinion more samples are even more important than more observers.

In this example, reducing the number of brands and observers to increase the number of samples (3 guys testing 3 Swaros vs. 3 Leicas vs. 3 Zeiss) seems to me a better approach from a methodological point of view.

It would probably be difficult to get three samples, and I am still not sure they would vary enough in quality to change the outcome. Possibly, as Henry says, it would be easier to bench-test the samples that were tested to make sure they were all up to snuff.
 
I just received the Spring 2011 edition of the Peregrine Observer published by the Cape May Bird Observatory. They did a Mega Review of their own in July of 2010 and just published the results. Pete Dunne, Louise Zemaitis, Don Freiday and Brian Moscatello were the testers. I searched their website but could not find the results online so I'll give you their rankings below.
1. Leica Ultravid HD 8x32
2. Leica Ultravid HD 7x42
3. Swarovski EL 8x32
4. Swarovski EL 10x32
5. Zeiss Victory 8x42
6. Swarovski EL Swarovision 8.5x42
7. Zeiss Victory 7x42
8. Leica Ultravid HD 8x42
9. Swarovski New SLC 8x42
10. Zeiss Victory 8x32
11. Leica Ultravid HD 10x42
12. Swarovski EL Swarovision 10x42
13. Nikon EDG 7x42
14. Nikon EDG 8x32
15. Nikon EDG 8x42
16. Zeiss Victory 10x42
17. Nikon Premier LXL 8x42
18. Steiner Peregrine XP 8x44
19. Minox APOHG 8x43
20. Leica BN 8x42
21. Zeiss Conquest 8x30
22. Vanguard EndeavorED 8.5x45
23. Kowa DCF 8x42
24. Nikon EDG 10x32
25. Zeiss Conquest 10x30
26. Kowa BD 8x32
27. Nikon Monarch 8x42
28. Nikon MonarchX 8.5x45
29. Zeiss Conquest 8x40
30. Bushnell Elite 8x43
31. Vanguard Spirit Plus 8x36
32. Kowa Genesis 8x33
33. Alpen Apex 8x42
34. Leupold Olympic 8x42
35. Nikon EDG 10x42
36. Kowa DCF 10x42
37. Nikon Premier LXL 8x32
38. Vortex Fury 8x32
39. Leupold Yosemite 6x30
40. Minox HG 8x33
41. Kowa Genesis 8.5x44
42. Minox HG 8x43
43. Vortex Razor 8x42
44. Leica BN 10x42
45. Nikon Prostaff 8x25
46. Leupold Yosemite 8x30
47. Alpen Wings 8x42
48. Leupold Katmai 6x32
49. Minox BL 8x33
50. Nikon Monarch 10x42
51. Minox BV 8x42
52. Leupold Katmai 8x32
53. Steiner Peregrine 8x44
54. Steiner Merlin 8x32
55. Nikon Prostaff 9x25
56. Minox BL 8x44
57. Steiner Merlin 8x42
58. Vortex Spitfire 8.5x32
59. Nikon Action ATB 7x35
60. Alpen Apex 8x32
61. Bushnell Elite e2 8x42
62. Alpen Shasta Ridge 8x42

Taken from the Peregrine Observer Vol. 33, Spring 2011
 
I just received the Spring 2011 edition of the Peregrine Observer published by the Cape May Bird Observatory. They did a Mega Review of their own in July of 2010 and just published the results. Pete Dunne, Louise Zemaitis, Don Freiday and Brian Moscatello were the testers. I searched their website but could not find the results online so I'll give you their rankings below.
1. Leica Ultravid HD 8x32
2. Leica Ultravid HD 7x42
3. Swarovski EL 8x32
4. Swarovski EL 10x32
5. Zeiss Victory 8x42
...

58. Vortex Spitfire 8.5x32
59. Nikon Action ATB 7x35
60. Alpen Apex 8x32
61. Bushnell Elite e2 8x42
62. Alpen Shasta Ridge 8x42

Taken from the Peregrine Observer Vol. 33, Spring 2011

Now THAT strikes me as a weird review: four people, 62 binoculars and mags from 7x to 10x??? Good luck!

PS: and the SV 10x42 is 12th while the EDG 10x42 is 35th? Compare that with Allbinos, which put the Nikon first.

Geez, I think I'll just go use these things and make my own decisions--the way I usually do.

Mark
 
Continuing a little further, the primary motivation for having multiple observers is to examine the degree to which their ratings differ. For example, are binoculars rated differently by males vs. females, birders vs. hunters, old vs. young, Swarovski collectors vs. Zeiss collectors, experts vs. novices, etc.? Or do individuals differ just because they are different individuals?

In this case, the cohort was apparently not chosen to make such user contrasts, but rather to average together a uniform group of experts. Still, we know only some use eyeglasses, at least one is an avid Swaro collector, and a few have past experience with the specimens under evaluation. Does it make sense to average them together before knowing whether their ratings are statistically different?

Of course, that question can't be answered while the data are combined across observers. Hence, my suggestion is for the authors to publish an additional table showing the rank orderings of the binoculars for each observer. I assume the score (or sub-score) for each binocular is simply the sum of relevant Likert estimates made by the observer. With this table, we could at least understand the range of rankings for each binocular within the expert cohort.
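
As a sketch of what that additional table could look like, here is some Python with invented scores (I assume, as above, that each score is an observer's summed Likert estimates), plus Kendall's W as a single agreement figure:

import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(2)
scores = rng.integers(10, 36, size=(5, 8))  # 5 observers x 8 binoculars

# Rank within each observer; rank 1 = that observer's best binocular
# (ties broken by position for simplicity).
ranks = np.vstack([rankdata(-row, method="ordinal") for row in scores])
for i, row in enumerate(ranks, start=1):
    print(f"observer {i}: {row}")

# Kendall's coefficient of concordance: W = 1 means identical rankings
# across observers, W near 0 means no agreement at all.
m, n = ranks.shape
rank_sums = ranks.sum(axis=0)
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
W = 12 * S / (m**2 * (n**3 - n))
print(f"Kendall's W = {W:.2f}")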

What I've discussed in this and my previous post comes under the heading of research design, and if a survey is not well designed with such contrasts in mind, they rarely can be extracted post facto. In fact, it can become quite dubious what averaged results really mean.

Finally, I would have to repeat, if the observers were not prevented from exchanging their views and opinions during the observation period, the ratings, whether individual or combined, are permanently contaminated.

Ed

PS. Iveljay, just saw your post. Yes, the methods would be quite appropriate for market research and more informative than most published surveys. :t:
 
It is a gradient from major and obvious problems... to mid ones... to small ones... to tiny ones... even to imperceptible ones.
You are right, but let me point out this (my opinions):
- even with 3 samples of each bino, the outcome would be about the same;
- we received 1 sample of each bino from optical shops, BUT we also had our personal binos. We had 3 Leicas, 3 Swarovisions, 3 Zeiss;
- we knew the binos before the challenge, so we knew what to expect from them;
- as humans, we use eyes + brain, so where is the limit between an objective problem of the bino and our vision? If our vision has a tolerance bigger than the small/tiny problems of the bino, then you can have tons of samples and nothing changes. Let me say that mid and big problems should be manageable for us; we did not look into a bino for the first time :)
- this is an amateur review, take it as it is... Even magazines, when they test binos, use 1 single sample ;)


I would like to read some comments about the outcome - whether you agree with the scores and, if not, why.

greets,
Ivan
 