What I find tragic about this Cornell Labs report, just like the last one done five or six years ago, is the abysmal ignorance of survey design, measurement theory, and applied statistics.
The reason is probably that if they (or anyone else) took survey design and statistics seriously, they would have to admit that such tests are simply not feasible at reasonable cost.
- How many copies of the same binocular do you need to test to exclude bias from sample variation? 10? 100?
- How many testers do you need to exclude bias from eyesight, personal preferences, too much reading of birdforum, Swarovski marketing, etc.? 100? 1000?
Just settling the question of whether the SV 32 or the FL 32 is better (= considered better by a majority of users by a statistically significant margin) would easily take a few hundred to a thousand rounds of testing. And it would still not help your personal decision.
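To put a rough number on that, here's a back-of-envelope sketch (my own illustration, not from any actual test protocol): the standard normal-approximation sample-size formula for a one-proportion test, asking how many head-to-head comparisons you'd need to distinguish a real 55% (or 60%) preference from a coin flip at the usual 5% significance level with 80% power. The preference rates are assumed for the sake of the example.

```python
from math import sqrt, ceil

def trials_needed(p1, p0=0.5, z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation sample size for a one-proportion test:
    how many comparisons to detect a true preference rate p1 against
    the null p0 (coin flip), at two-sided alpha = 0.05 with 80% power
    (those two z values are the standard normal quantiles for that)."""
    num = z_alpha * sqrt(p0 * (1 - p0)) + z_beta * sqrt(p1 * (1 - p1))
    return ceil((num / (p1 - p0)) ** 2)

# If 55% of testers genuinely prefer one glass over the other,
# you still need on the order of several hundred comparisons:
print(trials_needed(0.55))
# Even a much clearer 60/40 split needs a couple of hundred:
print(trials_needed(0.60))
```

With a 55/45 split the formula lands in the high hundreds of trials, which is exactly the "few hundred to a thousand rounds" ballpark. And that's before multiplying by the number of testers and binocular samples needed to average out the biases above.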
The whole ranking exercise is pointless, regardless of whether it's based on a couple of people looking through binoculars and saying what they think, or on measuring transmission at 550 nm in a university lab.