• Welcome to BirdForum, the internet's largest birding community with thousands of members from all over the world. The forums are dedicated to wild birds, birding, binoculars and equipment and all that goes with it.


Q: Is there an objective binocular ranking system out there?

MagpieCorvidae

Well-known member
I've seen the Cornell Lab's report and a couple of others, but they are no more than the averaged perceptions of a bunch of people looking at binoculars, which is like measuring distance by having 100 people look at a tree and averaging their guesses.

There must be a lab out there that puts these things through a battery of objective tests via instrumentation and sensors; completely bypassing the human brain and all the associated biases, brand expectations, and placebo effects.

Allbinos comes close, but in the end they selectively weight the various scores to produce a value that represents a subjective mix of priorities. Plus, they don't seem to update very often.
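Just to make the weighting complaint concrete, here's a toy sketch (all model names and scores are invented): two equally defensible-looking weightings of the same measurements crown different "winners", which is exactly why a single aggregate number is a subjective choice, not an objective fact.

```python
# Hypothetical 0-10 scores for three binoculars on three measured criteria.
# The numbers are made up purely to illustrate the weighting problem.
scores = {
    "Bino A": {"resolution": 9, "transmission": 7, "edge_sharpness": 5},
    "Bino B": {"resolution": 7, "transmission": 9, "edge_sharpness": 8},
    "Bino C": {"resolution": 8, "transmission": 8, "edge_sharpness": 7},
}

def rank(weights):
    """Return the models ordered by weighted total, best first."""
    total = lambda s: sum(weights[k] * s[k] for k in weights)
    return sorted(scores, key=lambda m: total(scores[m]), reverse=True)

# Resolution-heavy weighting vs. edge-sharpness-heavy weighting:
# the same raw data produces a different winner each time.
print(rank({"resolution": 0.6, "transmission": 0.3, "edge_sharpness": 0.1}))
print(rank({"resolution": 0.2, "transmission": 0.3, "edge_sharpness": 0.5}))
```

Publishing the per-criterion scores (as Allbinos does) at least lets readers re-weight for themselves; it's the single headline number that bakes in the reviewer's priorities.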

Is there anyone out there doing this better, or is this all we have for now?

EDIT:
Even if we go along with the idea that the human eye should be the ultimate judge ... such reviews should still be done right, and so far they aren't!

Any review panel of 10 or 20 people should be *prevented* from knowing whether they are currently holding a Zeiss or a Tasco. Evaluations should rest on optical merit alone, not be subconsciously tilted (positively or negatively) by price, brand recognition, or bias.

I'd almost guarantee that bias factors other than optical merit are so strong that rankings would come out quite differently under double-blind conditions.

The importance of blind experiments cannot be overstated. It's the same with wine tasting: in blind tests, most people do not choose the expensive wine as their favorite, nor can they tell which of the wines they tasted were the expensive ones.
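The blinding step described above can be sketched in a few lines (this is a hypothetical protocol, not anyone's published test procedure): each instrument gets a neutral code, the brand-to-code key is generated by someone who never talks to the raters, and the key stays sealed until all scores are in.

```python
import random

def blind_labels(models, seed=None):
    """Assign each binocular a neutral code (e.g. 'Unit 3') so that the
    raters never learn the brand. The returned key maps codes back to
    brands and should stay sealed until scoring is complete."""
    rng = random.Random(seed)
    codes = [f"Unit {i + 1}" for i in range(len(models))]
    shuffled = models[:]
    rng.shuffle(shuffled)          # random brand-to-code assignment
    key = dict(zip(codes, shuffled))
    return codes, key

# Raters see only 'Unit 1'..'Unit 4'; brands are revealed afterwards.
codes, key = blind_labels(["Zeiss", "Tasco", "Swarovski", "Zen Ray"], seed=7)
```

For a true double-blind, the person handing the units to the raters would also see only the codes, never the key.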
 
I have only seen the ones you mention, plus the Optics for Birding scorecard; nothing that takes the human element out. But, only my thoughts of course: seeing as binoculars are used by humans, who will all have differing preferences and optical requirements, I just wonder what advantage would be gained by taking human perception out of the appraisal?
 
It's always going to come down to subjective factors after the tech specs (which themselves can be the subject of debate) have been dealt with. Reliable reviews are out there; see for example Piergiovanni's binomania.it and anything by Kimmo Absetz.
 
The only independent scientific comparative review that I have seen was conducted by the laboratory of the Department of Optics and Optoelectronics at the Georg-Simon-Ohm Institute in Nuremberg on three models of rangefinder binoculars and published in March 2013.

Transmission, resolution, edge sharpness, contrast and chromatic aberration were determined, and the manufacturers' published data for magnification, field of view and dioptre compensation were checked and verified. Binocular alignment and field rotation were also assessed.

The results speak for themselves.

From the consumer's point of view it should be an industry standard.

You can read the full report and analysis in English in the attached pdf.

This has been downloaded over 100 times since I posted it in the Leica thread.
 
....There must be a lab out there that puts these things through a battery of objective tests via instrumentation and sensors; completely bypassing the human brain and all the associated biases, brand expectations, and placebo effects.....

A binocular of itself, with no human element "sees" nothing ....... :cat:

Chosun :gh:
 
Without the human interaction, of what use are binoculars? There can only be perfection in how it relates to the eye, and everyone's eyes are different. So those reports by hundreds of people tell a great deal more about a binocular's characteristics than a cold, hard test.
 
I'm OK with human evaluations, as long as there are practical test conditions for each characteristic
and separate factors are tracked... that way I can interpret the results in terms of my own preferences.

0-5 for "overall image quality" is way too vague, especially for some famous top grading index.


I'd like "0-10, resolution, hand-held" and "0-10, contrast/saturation, challenge target and glare source",
"% diameter very sharp, % diameter usable at 100 ft"... things like that.

Maybe "best focus at 30 ft, near point for 6mm-high font, far point for 6mm-high font" to eliminate any fudginess about depth of field.

So I guess..... semi-objective? clear test rules, subjective grades.

Things like "absolute resolution with bright lights and a solid tripod" are not useful to me,
but some "user image enjoyment" score under unspecified weather and target is too nebulous as well.
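The "clear test rules, subjective grades" scorecard proposed above could be structured like this (a minimal sketch; the field names, conditions, and default weights are my own invention, not an existing standard):

```python
from dataclasses import dataclass

@dataclass
class Scorecard:
    """One defined test condition per characteristic, tracked separately
    so readers can re-weight for their own preferences."""
    model: str
    resolution_handheld: int   # 0-10, resolution chart, hand-held
    contrast_glare: int        # 0-10, challenge target with a glare source
    sharp_field_pct: int       # % of field diameter that is very sharp
    usable_field_pct: int      # % of field diameter usable at 100 ft

    def weighted(self, w_res=0.4, w_con=0.3, w_sharp=0.2, w_use=0.1):
        """Collapse to one number only at the end, with the reader's own
        weights; the percentage fields are rescaled to the 0-10 range."""
        return (w_res * self.resolution_handheld
                + w_con * self.contrast_glare
                + w_sharp * self.sharp_field_pct / 10
                + w_use * self.usable_field_pct / 10)

card = Scorecard("Example 8x42", resolution_handheld=8, contrast_glare=7,
                 sharp_field_pct=70, usable_field_pct=95)
```

The point is that the per-condition grades, not the final weighted number, are what the published scorecard would carry.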
 
The only independent scientific comparative review that I have seen was conducted by the laboratory of the Department of Optics and Optoelectronics at the Georg-Simon-Ohm Institute in Nuremberg on three models of rangefinder binoculars and published in March 2013.

It's a very decent article (at least the lab-testing part), although done for only three models. Thank you for sharing that!
 
A binocular of itself, with no human element "sees" nothing ....... :cat:

perterra said:
Without the human interaction, of what use are binoculars? There can only be perfection in how it relates to the eye, and everyone's eyes are different. So those reports by hundreds of people tell a great deal more about a binocular's characteristics than a cold, hard test.

The only way I would remotely accept a human evaluation is if it's done as a blind or double-blind (funny how this sounds) experiment, with all binocular bodies wrapped in padded black plastic bags exposing only the lenses and the eyepieces, with no visible clues of brand or make ... and then in phase 2 the plastic bags labeled with mismatched brands, such that a Swarovski is labeled Zen Ray or Vortex, and a Kowa or Meopta is labeled Zeiss or Celestron, all done randomly.

Otherwise this common practice of self-reporting (on optics) is useless (I had way too much psychology in college), and any test would have to show us how the most relevant of these cognitive biases are being circumvented --> http://en.wikipedia.org/wiki/List_of_cognitive_biases
 
The only way I would remotely accept a human evaluation is if it's done as a blind or double-blind (funny how this sounds) experiment, with all binocular bodies wrapped in black plastic bags exposing only the lenses and the eyepieces, with no visible clues of brand or make ... and then in phase 2 the plastic bags labeled with mismatched brands, such that a Swarovski is labeled Zen Ray or Vortex, and a Kowa or Meopta is labeled Zeiss or Celestron, all done randomly.

Otherwise this common practice of self-reporting (on optics) is useless ... I had way too much psychology in college, and any test would have to show us how ALL these biases are being circumvented --> http://en.wikipedia.org/wiki/List_of_cognitive_biases

Strange as it may sound, most birders I have met seem more interested in the birds than the brand of optics they are using. I see mostly Nikon prostaff and early Monarchs at our local Audubon centers.

What you saw in the last Cornell test was a bunch of inexpensive binoculars getting top pick or best in class over a bunch that were pricey. Not sure how wrapping them in black plastic bags would change that. If you really use binoculars a lot, in my opinion ease of use becomes more important than optical perfection. And the two do not go hand in hand.

I'm guessing you don't wear glasses, because once you do, those last few percentiles of quality become a moot point.
 
Strange as it may sound, most birders I have met seem more interested in the birds than the brand of optics they are using.

If optical quality were ultimately irrelevant to most, then we would all be using $25 Tascos with various body styles for comfort. Yet a lot of people feel compelled to upgrade, so there should be objective optical-quality criteria beyond the idea that "if it's more expensive it must be a step up".

Well, this is for those with the curiosity and the care to find objective evaluations of the performance of the available optical instruments, the validity of manufacturers' claims (aside from the hobby of birding), and an unbiased, deterministic way of working out which optics truly correspond to their price, and at which point the optical gain is so negligible to the human eye that it makes no sense to spend any more. (e.g. The human eye stops telling the difference at around 260 ppi of screen resolution, but cellphone manufacturers will sell us higher if we are foolish enough to buy into the hype.)
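For what it's worth, the ppi figure above follows from simple geometry, assuming the commonly cited ~1 arcminute acuity of a 20/20 eye (a rough rule of thumb; real acuity varies with contrast, lighting, and the individual):

```python
import math

ARCMIN = math.radians(1 / 60)  # ~1 arcminute, a common 20/20 acuity figure

def ppi_limit(viewing_distance_in):
    """Pixels per inch beyond which a ~1-arcminute eye can no longer
    resolve individual pixels at the given viewing distance (inches)."""
    return 1 / (viewing_distance_in * math.tan(ARCMIN))

limit_12in = ppi_limit(12)   # close to 290 ppi at a 12-inch viewing distance
limit_13in = ppi_limit(13.2) # roughly 260 ppi, matching the quoted figure
```

So the ~260 ppi number corresponds to holding the screen a bit beyond a foot away; hold it closer and the threshold rises.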
 
If optical quality were ultimately irrelevant to most, then we would all be using $25 Tascos with various body styles for comfort. Yet a lot of people feel compelled to upgrade, so there should be objective optical-quality criteria beyond the idea that "if it's more expensive it must be a step up".

Well, this is for those with the curiosity and the care to find objective evaluations of the performance of the available optical instruments, the validity of manufacturers' claims (aside from the hobby of birding), and an unbiased, deterministic way of working out which optics truly correspond to their price, and at which point the optical gain is so negligible to the human eye that it makes no sense to spend any more. (e.g. The human eye stops telling the difference at around 260 ppi of screen resolution, but cellphone manufacturers will sell us higher if we are foolish enough to buy into the hype.)

So how would a machine test tell someone that field flatteners were going to bother them? How would it tell you that the color tones would be unappealing? How would it let you know how much CA to expect, or the 3D effect? Everybody has different eyes; what I see you may not, and what you see I may not.

As for upgrading, most don't do it every six months or even every six years.

The best test out there to tell you what is best is you. Go look through them; only you can determine the value of what you seek. I'm just trying to point out that the world is subjective; nothing is black and white.
 
So how would a machine test tell someone that field flatteners were going to bother them? How would it tell you that the color tones would be unappealing? How would it let you know how much CA to expect, or the 3D effect? Everybody has different eyes; what I see you may not, and what you see I may not.

But that in itself is a sort of human bias: "if I cannot fathom how something could possibly be accomplished, then it is likely universally impossible." Of the things you mentioned, CA, color tones, and 3D effect are measurable. About field flattening: do you mean that some prefer barrel distortion?
 
The only way I would remotely accept a human evaluation is if it's done as a blind or double-blind (funny how this sounds) experiment, with all binocular bodies wrapped in padded black plastic bags exposing only the lenses and the eyepieces, with no visible clues of brand or make ... and then in phase 2 the plastic bags labeled with mismatched brands, such that a Swarovski is labeled Zen Ray or Vortex, and a Kowa or Meopta is labeled Zeiss or Celestron, all done randomly.

Otherwise this common practice of self-reporting (on optics) is useless (I had way too much psychology in college), and any test would have to show us how the most relevant of these cognitive biases are being circumvented --> http://en.wikipedia.org/wiki/List_of_cognitive_biases




Then don't waste your time on the user's comments here if you think human factors like cognitive biases are not important.

You aren't going to find that any of the people reporting here have done very many, if any, double blind studies. Virtually everything discussed here is based on individual experience.

Frankly, after being here for nearly 9 years, I have come to the conclusion that it is, and always has been, the human factor that is the most important one affecting the development and improvement of binoculars over the period of their existence. And if you look back 9 years you will find that we were just beginning to discuss to any great extent the necessity of phase coatings in the improvement of low-cost roof-prism binoculars. Diverse human cognitive biases were instrumental in bringing this about, although everybody saw it differently.

Bob
 
Then don't waste your time on the user's comments here if you think human factors like cognitive biases are not important.

Well, I wasn't asking for user opinions :)
The question was on whether there have been professionally done lab tests (aside from Allbinos.com) that I might have missed.
 
Well, I wasn't asking for user opinions :)
The question was on whether there have been professionally done lab tests (aside from Allbinos.com) that I might have missed.

There aren't many, and it is debatable whether Allbinos would meet your criteria, although they have the most thorough reviews of binoculars you will find. "Alula", a birding magazine published in Finland, also did rigorous comparisons among a number of binoculars over the years, but again there were, AFAIK, no double-blind studies. See a typical one here, translated into English:

http://www.lintuvaruste.fi/hinnasto/optiikkaarvostelu/optics_8_Leicaultravid_GB.shtml

Bob
 
But that in itself is a sort of human bias "if I cannot fathom how something could possibly be accomplished then it is likely universally impossible". Of the things you mentioned, CA, color tones, and 3d effect are measurable. About field flattening, you mean that some prefer barrel distortion ?

So is there a machine you can pour beer into that will tell you if it tastes good?

So why do some see CA in binoculars that I don't see?

So what color tones do I find pleasing? A machine can tell you the color tone, but it can't tell you whether you like it.

Yes, some prefer distortion. Do a search on the rolling ball effect.
 
Out of all the things that factor into the image we see in a binocular, only two come immediately to mind that are objectively quantifiable: resolution and light-transmission percentage. Those are important, to be sure, but the rest of the entire spectrum of "stuff" is completely subjective.
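Resolution is indeed one of the few bench-testable numbers: the objective's diffraction limit is commonly estimated with Dawes' empirical rule of ~116/D arcseconds for an aperture of D millimetres (a rough standard rule of thumb, not a claim about any particular binocular):

```python
def dawes_limit_arcsec(aperture_mm):
    """Dawes' empirical resolving limit, ~116/D arcseconds for an
    objective of D mm: one of the few numbers a bench test can pin
    down without a human observer."""
    return 116 / aperture_mm

# A 42 mm objective can resolve about 2.8 arcseconds at best; how close
# a given sample gets to that is what a lab resolution test checks.
limit_42 = dawes_limit_arcsec(42)
```

Of course, at 8x or 10x magnification the observer's eye, not the objective, is usually the limiting factor, which circles back to the subjective side.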

Some people don't see chromatic aberration, some do. Some like a warm color bias, some prefer a neutral one. Some value sharp edges in the outer field differently than others, and people differ on their likes or dislikes of field curvature and flatness of field. What binocular fits me in regards to size, shape, and weight, not to mention eye relief and eyecup size and shape, may not fit you. The list is nearly endless: focus-wheel tension, travel, and direction...

What you ask about cannot exist. :eek!:
 
A binocular of itself, with no human element "sees" nothing ....... :cat:

Chosun :gh:

Amen, sista! Took the words right out of my mouth (now I'm hungry). I've been preaching this for years.

Although bench data provides useful information (which Mr. Data would find sufficient) and perhaps gives the reader enough to decide whether a particular bin warrants further investigation, I much prefer field tests by experienced users whom I know will be as objective as humanly possible and not skew the results toward their favorite brands, like some do here on BF, or who just test the specs to see if the company is being honest.

I've found that trying to get around the "subjective" is counterproductive, because, as Chosun points out, most objective reviews leave out the "human factors", which could completely change how we feel about a bin.

More useful is identifying reviewers who share your preferences in optics, be that sharp edges, sharp centerfields, high contrast, a "warm" bias, smooth focusers, fast focusers, super-close focus, low flare, etc. I find that works better than "bench tests" alone.

Particularly helpful is when a reviewer actually uses the binoculars to watch birds rather than artificial stars or real stars or resolution charts and then reports what he saw. Are you going to see the same thing he did? Maybe, maybe not, but if the reviewer gives a detailed enough description, you should be able to mentally compare what he said to what you see through your binoculars.

There's no substitute for trying the bins yourself, of course, but not all of us, particularly those of us who live in the Outback, are in a position to visit a Cabela's or another big store that carries many brands of optics. So, like Blanche DuBois, we depend on the kindness of strangers to help us at least narrow down the possibilities in our price range.

Reviewers I find to be kindred spirits include Stephen Ingraham (formerly the Better View Desired reviewer), Wayne Mones (another former BVD reviewer), Laura from Optics4Birding, Piergiovanni from binomania, and last but certainly not least, our very own Frank D. I include Frank on my short list because many if not most reviewers review only the Top of the Pops, neglecting lower-priced bins that the ordinary plebe might buy.

Except for the obvious fanboys, there are a lot of experienced users on BF bin forums whose reviews I find useful and interesting to read. And, of course, the more technical reviews such as Henry's, Holger's, Surveyor's, etc., which I sometimes understand, and sometimes not (MTFs?), but still find useful as a place to start.

But then I want more. I want to read people's opinions about how these bins work in the field for them. How they handle, how the focuser turns, the level of flare, the color contrast, the eyecup comfort or discomfort, the level of CA, how they handle backlit situations, how they hang, etc.

Put the technical together with the subjective and you get a more complete picture.

Brock
 
I've seen the Cornell Lab's report and a couple of others, but they are no more than the averaged perceptions of a bunch of people looking at binoculars, which is like measuring distance by having 100 people look at a tree and averaging their guesses.

There must be a lab out there that puts these things through a battery of objective tests via instrumentation and sensors; completely bypassing the human brain and all the associated biases, brand expectations, and placebo effects.

Allbinos comes close, but in the end they selectively weight the various scores to produce a value that represents a subjective mix of priorities. Plus, they don't seem to update very often.

Is there anyone out there doing this better, or is this all we have for now?

I haven't had time to read all the subsequent posts (I hate it when that happens). So, though you have probably received many good answers, I would like to throw in my frustrated ... sour-grapes answer.

With 44 years in military and civilian optics, I have tried to get a gig with a birding mag to provide an ongoing column raising birders' knowledge about the things they need to know about binoculars, correcting notions that have been going around for decades.

Yet, each has told me about the "EXPERTS" they have in tow. Have they been inside 12,000 binos, designed and manufactured lenses for the government, provided consultation to Yerkes Observatory, done optical restoration for the Smithsonian or Seattle's Museum of History and Industry, or spent 10 years editing and publishing an international optical journal? No! But they do have opinions.

Frankly, their opinions are very good ... to a point. Yet when it comes to understanding more than a few basic buzzwords and phrases, being a master birder, serious amateur astronomer, or seasoned sailor makes a person no more an expert on binoculars and optics than being a disc jockey makes one an expert on the design, manufacture, and performance of microphones.

I was bemoaning this situation with Pete Dunn, at Cape May. His response:

“Some of the editors have heard the same things for so long, they think they have heard it all; they just don’t know what they don’t know.”

So, this old guy will probably go to his grave without being able to raise the bar on serious bino knowledge and knowledge of the industry.

As far as ratings? It's all subjective. There are too many variables, and many observers don't have the experience to quantify the data they seek.

Off my childish, arrogant, self-serving soapbox, now. I gotta go repent.

Bill :eek!:
 