MagpieCorvidae
Well-known member
I've seen the Cornell Lab report and a couple of others, but they are no more than the averaged perceptions of a bunch of people looking through binoculars; which is like measuring distance by having 100 people look at a tree and averaging their guesses.
There must be a lab out there that puts these things through a battery of objective tests via instrumentation and sensors; completely bypassing the human brain and all the associated biases, brand expectations, and placebo effects.
Allbinos comes close, but in the end they selectively weight the various scores to arrive at a single value that represents a subjective mix of priorities.
Plus, they don't seem to update very often.
Is there anyone out there doing this better, or is this all we have for now?
EDIT:
Even if we go along with the idea that the human eye should be the ultimate judge ... such reviews should still be done right, and so far they aren't!
Any review panel of 10 or 20 people should be *prevented* from knowing whether they are currently holding a Zeiss or a Tasco. The evaluations should rest on optical merit alone, not be subconsciously tilted (positively or negatively) by price or brand recognition.
I would almost guarantee that bias factors other than optical merit are so strong that the rankings would come out quite different under double-blind conditions.
The importance of blind experiments cannot be overstated. Same thing with wine tasting: in blind tests, the majority of tasters do not choose the expensive wine as their favorite, nor can they tell which of the wines they tasted were the expensive ones.
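The blinding protocol I'm describing is simple enough to sketch. Here's a minimal illustration (all names and scores are hypothetical, just to show the idea): a coordinator maps each model to a neutral code, the panel scores only the codes, and the brand key is revealed only after the ranking is locked in.

```python
import random

def blind_assignment(models, seed=None):
    """Give each binocular model a neutral code ("Unit A", "Unit B", ...)
    so raters never see brand or price. Returns the coordinator's secret
    code->model key and the list of codes handed to the panel."""
    rng = random.Random(seed)
    codes = [f"Unit {chr(ord('A') + i)}" for i in range(len(models))]
    shuffled = models[:]
    rng.shuffle(shuffled)          # randomize which model gets which code
    key = dict(zip(codes, shuffled))  # only the coordinator keeps this
    return key, codes

def rank_by_mean_score(scores):
    """scores: {code: [panelist scores]} -> codes sorted best-first by mean."""
    return sorted(scores, key=lambda c: -sum(scores[c]) / len(scores[c]))

# Hypothetical session: panel scores the anonymous units 1-10.
key, codes = blind_assignment(["Zeiss 8x42", "Tasco 8x42", "Nikon 8x42"], seed=1)
panel_scores = {"Unit A": [7, 8, 6], "Unit B": [9, 9, 8], "Unit C": [5, 6, 5]}
ranking = rank_by_mean_score(panel_scores)  # ranking fixed before unblinding
```

Only after `ranking` is final does the coordinator look up `key` to see which brand landed where; that's the whole trick that keeps brand expectation out of the scores.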