Thanks for the links, Brock.
They give me an opportunity to complain once again about tests that rank binoculars with numbering systems. It's worth taking a close look at the methods and assumptions of these tests in the "How Do We Test Binoculars?" article. They resemble the efforts to quantify and rank dissimilar characteristics in the Cornell, Kikkert and other tests. I'm all for quantifying what can be measured, but assigning arbitrary numbers to subjective impressions and then adding them up into a grand total strikes me as useless.
It's easy to find many examples of assumptions and priorities in these numbers that would be irrelevant to a particular user. For instance, a downgrade for a narrow range of IPD adjustment means nothing if your IPD falls within the range available. Perhaps the most glaring example of an assumption is the notion that less "distortion" is better and therefore worthy of a higher number. Anyone who has followed Holger Merlitz's work on this subject knows by now that low pincushion distortion causes the unpleasant panning effect of "rolling globe" in many people. Applying pincushion distortion is a design choice, not a failure to correct a distortion. The list could go on. The point is: don't take any of these numbering and ranking systems seriously.
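To make the grand-total objection concrete, here is a toy sketch (every binocular name, score and weight below is invented for illustration, not taken from any real test) showing that the same per-category scores can produce opposite overall rankings depending on how the categories are weighted:

```python
# Invented 0-10 category scores for two fictional binoculars.
scores = {
    "Bin A": {"edge_sharpness": 9, "distortion": 9, "ergonomics": 5},
    "Bin B": {"edge_sharpness": 6, "distortion": 6, "ergonomics": 10},
}

def total(bin_scores, weights):
    """Weighted sum of category scores -- the usual 'final score'."""
    return sum(bin_scores[cat] * w for cat, w in weights.items())

# Two equally defensible reviewers: one weights optics heavily,
# the other cares most about handling.
optics_first = {"edge_sharpness": 2, "distortion": 2, "ergonomics": 1}
handling_first = {"edge_sharpness": 1, "distortion": 1, "ergonomics": 3}

for label, weights in [("optics-first", optics_first),
                       ("handling-first", handling_first)]:
    winner = max(scores, key=lambda b: total(scores[b], weights))
    print(label, "winner:", winner)
# -> optics-first winner: Bin A
# -> handling-first winner: Bin B
```

The measurements never changed; only the arbitrary weights did, and the "winner" flipped. That is the sense in which a single summed ranking obscures what each instrument actually does well.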
Henry,
I agree about their number ranking system being arbitrary, particularly in the example you cited.
Ranking a bin that is used primarily for daytime activities higher because it has lower distortion is not only arbitrary but usually undesirable, since pincushion (at least in moderation) creates a more natural view while panning the landscape.
That's why I wrote them to ask about the "rolling ball" in the 8x56 Nobilem, which they ranked #1 overall. If it has a high degree of "rolling ball" like the Nikon HG and SV EL, it wouldn't be #1 on my list, and I wouldn't buy one.
Another case in point is the values given to edge performance. Take the example of the Swaro 10x40 Habicht:
"Blurring at the edge of the FOV
The blur occurs in the distance of 77% +/- 3% from the field of view centre.
5/10.0"
Only a 5 out of 10 for edge sharpness out to 77 percent? That's pretty decent in my book. What they don't include, and what would make a difference for me, is how gradually or steeply the sharpness falls off from that point to the edge.
I find bins with really blurry edges to be distracting, particularly while panning.
And I'm sure we could also make cases against other feature ratings.
But I can understand the reviewers' desire to create a number ranking system. Holger does this, and so does Kimmo.
Holger has a disclaimer at the end of his reviews: "The 'final score' is the sum of the individual scores and is intended to serve as an orientation only. Generally, it would be an over-simplification of the matter to just look which binocular has got the highest score, because it would obscure the individual features of the devices which differ quite a lot among each other."
I think the reason reviewers do rankings is that laypersons like myself lack the technical skills and, in some cases, also access to samples to evaluate - the nearest alpha optics store to me is a 120-mile round trip.
So we depend on "expert reviewers" to guide us to which bin might be the best one(s) to buy.
In my case, I'm a bit better informed than perhaps some others, so when I see a bin whose distortion is rated 9 out of 10 because "the distance of the first curved line from the field centre compared to the field of vision radius is 79.5% +/- 4%," that raises a red flag, because those binoculars might have "rolling ball."
I also agree that IPD should not receive a number, but rather just be listed in the specs; if a bin has a narrower range than most, that could be mentioned as a warning.
Having said that, I do appreciate having a site like allbinos where you can get some "hard data" for a number of features on a number of bins. I also like the way you can compare bins side by side:
http://www.allbinos.com/binoculars_compares.html
Here on BF, you have to search for that information in the archives and pull up separate pages to make your own comparisons, which is sometimes difficult to do if you don't know the correct search terms.
What I'd like to see on BF is something similar to what Edz has on Cloudy Nights - a separate section for technical reports.
However, this section would be fairly useless to the non-technically minded unless there were also a non-mathematical explanation of the terms and methods used in the reports to make them more understandable to a lay audience. Or perhaps a primer that one could read before the reports and keep as a reference.
Brock