There are several things to consider about this review. PLEASE note the plate glass window they are looking through in the photos. Other than that, this seems to be a pretty well-done review.
Note also that the comparison is biased by magnification, as there is no mention of a correction for the 7x magnification. The USAF chart gives a resolution advantage to higher magnification, particularly at the closer ranges at which the chart is useful. Close-range resolution may not be the same as distance resolution; the correlation is close, but I don't think it is absolute.
Another thing to note is that this is a mid-size comparison, and the ZEN ED2 and Hawke x36 models are not really compact models; nor are they really full-sized, so it is hard to place them in a fit-and-feel category with smaller binoculars. Adjust for the magnification difference in resolution and eliminate the fit-and-feel score, which is biased toward smaller glass, and the scores of both the Hawke and the ZEN-RAY glass go up quite a bit. Even at 7x, the ZEN ED2's resolution score beats half of the binoculars ranked above it. Given the extra 1x of magnification, the same-design Hawke 8x36 beats all but four of the glass ranked above it.
I don't want to seem to pick on this review, because it is well thought out (except for the plate glass thing). But if they are going to do this, then they should post the reviews for everything. It also points out that in a rather inclusive review with a large number of binoculars, it is really hard to evaluate a lot of glass.
Given that the 7x36 ED2 was stacked up against more expensive 8x models, four of which cost close to the $2k mark, the fact that the ED2's overall score was in the 4+ category with the "big boyz" speaks very well of this bin.
Stephen Ingraham had a method by which he compensated for lower or higher power bins by moving the target so many feet closer or farther, and then based his N.E.E.D.S. score for resolution on that correction for magnification. If you don't do that, you are comparing apples and oranges.
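Ingraham's correction amounts to simple proportional scaling: apparent image scale goes as magnification divided by distance, so the target distance is scaled by the ratio of magnifications. A minimal sketch of that arithmetic (the distances and magnifications here are made-up examples, not figures from any review):

```python
def equivalent_distance(base_distance_ft, base_mag, test_mag):
    """Distance at which a test_mag binocular presents the same
    apparent image scale as a base_mag binocular viewing the target
    at base_distance_ft. Image scale ~ magnification / distance,
    so the distance scales by the ratio of magnifications."""
    return base_distance_ft * (test_mag / base_mag)

# Hypothetical setup: the USAF chart sits at 70 ft for an 8x reference bin.
# To give a 7x bin the same apparent image scale, move the chart closer:
print(equivalent_distance(70, 8, 7))   # 61.25 ft

# Conversely, a 10x bin should read the chart from farther away:
print(equivalent_distance(70, 8, 10))  # 87.5 ft
```

Without a correction like this, a resolution score read straight off the chart rewards the higher-power glass regardless of optical quality.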
The other problem with these total score number type comparisons is that one or two subjective factors can throw off the final score as we saw on another thread with the reviewers at allbinos.com giving the lowest distortion bins the most points when, in practice, some distortion is better than none or too little in bins for daytime use to prevent or minimize the "rolling ball" effect.
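How fragile such totals are is easy to demonstrate: with the subjective categories carrying the same weight as the one objective category, dropping or re-weighting a single category can flip the ranking. A sketch with entirely hypothetical scores (the bin names and numbers are invented for illustration, not taken from any review):

```python
# Hypothetical scores: four equal-weight categories, of which only
# "resolution" is objectively measured; the rest reflect reviewer taste.
scores = {
    "Bin A": {"focus": 5, "diopter": 5, "fit_feel": 5, "resolution": 3},
    "Bin B": {"focus": 4, "diopter": 5, "fit_feel": 3, "resolution": 5},
}

def total(s, skip=()):
    """Sum the category scores, optionally omitting some categories."""
    return sum(v for k, v in s.items() if k not in skip)

# With all four categories, Bin A wins on the strength of subjective scores:
print(sorted(scores, key=lambda b: total(scores[b]), reverse=True))

# Drop the size-biased "fit & feel" category and the order flips,
# with the sharper Bin B coming out on top:
print(sorted(scores, key=lambda b: total(scores[b], skip={"fit_feel"}),
             reverse=True))
```

The point is not that either weighting is right, but that a single subjective category decides the winner, which is exactly why a bare total score can mislead.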
In the case of the birdwatching.com reviews, only four parameters are given values:
1. Focus knob score
2. Diopter score
3. Fit & feel score
4. Resolution score
Out of those four categories only "Resolution" has an objective measurement, the others are all based on the preferences of the reviewers.
For example, if I were ranking diopters (I assume they mean "diopter adjustment," not how many diopters the bins can compensate for, since they didn't list that data), I would rank top models such as the EL and EDG lower because I don't like pop-out, on-the-focus-wheel diopter control; I find it cumbersome. The exception would be the pre-HD SLCs, which have on-the-focus-wheel diopter adjustment but with a push-in rather than pop-out mechanism; it is the easiest diopter adjustment I've ever used. I'm sorry to see Swaro drop this convenient design on its new HD-SLC.
So unless I explained my criteria, the score for "Diopter" would be meaningless to anyone reading the review. In the legend below the birdwatching.com rankings, they break down diopters as either having a lock or not, and as having a numbered(?) diopter scale or not, so I would have to assume those are the criteria they used to rank the diopters.
How about if the diopter wheel was hard to turn? That would knock points off in my rankings. How about if it was hard to locate, like the smooth brass ring on the EDG? Or if it had spiky tines that dig into your fingers, like the diopter wheel on the Leupold porro Cascades? Obviously, there are other criteria of importance for diopter adjustment, including how many diopters you can adjust for with the bins.
It would be very useful if reviews adopted some international standards for evaluating various bin criteria, but realistically I don't see that happening. Every reviewer has his/her own methods.
As imperfect as number rankings are, I don't have a problem with them as long as the reader knows exactly which characteristics in each category are being evaluated. And given the subjective nature of some of the categories being ranked, there should always be a disclaimer at the end of the review, such as Holger's and Allbinos'.
Holger's disclaimer:
"The 'final score' is the sum of the individual scores and is intended to serve as an orientation only. Generally, it would be an over-simplification of the matter to just look which binocular has got the highest score, because it would obscure the individual features of the devices which differ quite a lot among each other."
Allbinos' disclaimer (at the end of the Zeiss 8x56 FL review):
"This situation only confirms what we’ve written many times – points and the test result per se are not enough. They might provide some tips but they are not an oracle. It might happen that a model with a bit worse score will prove to be better in some situations. Apart from that, so far nobody has constructed such a point scale and such test criteria which would please everyone."
Amen.
Brock