Cornell U: An Unbiased Review site

elkcub said:
Frank,
[SNIP]
In the end it becomes a matter of self-discipline to discard what doesn't make sense and not be bothered by published foolishness. So, I hope you don't give in to being aggravated by last year's "best" becoming this year's "beast." It's a game they're playing.

You and I both own fantastic binoculars — and we don't need a Cornell report to tell us that.

Elkcub

Elkcub,

There seems to be a segment of consumers who have a burning desire to own the perceived "latest and greatest." I see it in the world of amateur astronomy, where an owner is perfectly happy with a telescope until something perceived as "better" comes to market. It also seems common in the automotive market, where many folks replace their car every two or three years, whether they need to or not.

I suspect this "disease" is common in most hobbies. Unfortunately, reviewers may suffer from this disease too, and one could easily argue that they are even more prone to it.

Clear skies, Alan
 
There seems to be a segment of consumers who have a burning desire to own the perceived "latest and greatest."

I would agree, and would offer that you would find that type of behavior in any activity that is heavily equipment-oriented. One could argue that birding doesn't have to be, but I would wager that just about every birder grabs a pair of binoculars when heading for an outing.

Mick Baron was kind enough to email me a copy of the 2000 Cornell review. Here are a couple of snippets of their remarks about the Venturers in comparison to the competition at the time...

"In terms of pure image quality, the original Zeiss 7x42s still reign, along with all of the full- and "oversized" Swarovskis, Zeiss Night Owls, and the new Nikon Venturers."

"For close-focus capability, nothing beat the Bausch and Lomb Elites, although the Nikon Venturers were close."

"If you’re after the absolutely highest quality 10x binoculars available, in my opinion it’s now a tossup between the Swarovski 10x50s and the Nikon Venturer 10x42s."

"I eagerly reached for the much-acclaimed Nikon Venturers as soon as they arrived–and I was not disappointed. The image they provided was virtually identical to that of the Swarovskis, and the field of view was only slightly narrower. When I focused them down to 8 1/2 feet, I was truly impressed. These binoculars weigh 25 percent less than my Swarovskis, and they felt so good in my hands, I found myself reaching for the Nikons whenever I spotted a bird."

"But I believe that Nikon has set a new standard by offering a superb image in a much friendlier package"

"Across all of the full- and oversized models, these Nikon 10x42 Venturers were clearly the top choice among our reviewers. The combination of extra-crisp image, moderate weight, wonderful feel, turn-and-lock eyecups, and excellent close-focusing capability allows these binoculars to buck the recent trend toward ridiculously heavy optics."

This last comment almost had me laughing out loud when you compare it to the comments of the most recent report. Has the competition really gotten that much lighter? Have the optics really improved that much? The questions are rhetorical, to some extent.

Sorry, just had to vent a bit. I do think Elkcub hit the nail on the head in his last post, though.
 
FrankD,

I think the bottom line is simple - there are no unbiased reviews. The fact that the reviewer knows which binocular he or she has in hand ensures this. Reviews should be considered only a starting point in deciding what binocular to buy, and all reviews should be considered highly subjective. Personal hands-on experience is the real key to picking the pair that fits your needs.

Clear skies, Alan
 
AlanFrench said:
Elkcub,

There seems to be a segment of consumers who have a burning desire to own the perceived "latest and greatest." I see it in the world of amateur astronomy, where an owner is perfectly happy with a telescope until something perceived as "better" comes to market. It also seems common in the automotive market, where many folks replace their car every two or three years, whether they need to or not.

I suspect this "disease" is common in most hobbies. Unfortunately, reviewers may suffer from this disease too, and one could easily argue that they are even more prone to it.

Clear skies, Alan

Alan,

I certainly agree and also tend to share the affliction. As you said, the reviewers probably do too. So, although it may seem that I'm chiding Dr. Rosenberg all the time, thus far I have avoided any criticism of his opinions. He's entitled to have them, and they are often worth considering. When it comes to his chronic misuse of survey technique, however, that's something else. It creates a false sense of scientific rigor and conveys misleading impressions well beyond his personal opinions. But even in this, one must exercise self-control. I think I'll spend some time in a lotus position. ;)

Thanks,
-elk
 
elkcub said:
Alan,

I certainly agree and also tend to share the affliction. As you said, the reviewers probably do too. So, although it may seem that I'm chiding Dr. Rosenberg all the time, thus far I have avoided any criticism of his opinions. He's entitled to have them, and they are often worth considering. When it comes to his chronic misuse of survey technique, however, that's something else. It creates a false sense of scientific rigor and conveys misleading impressions well beyond his personal opinions. But even in this, one must exercise self-control. I think I'll spend some time in a lotus position. ;)

Thanks,
-elk

Elk,

The Cornell review was never represented as a scientific study. Clearly, Ken made the point that personal subjectivity reigned in the findings, and there is simply no mention of statistical analysis. It is nothing more than a simple review of manufacturer-supplied binoculars by everyday users.

Based on your intense interest, why don't you conduct a truly scientific analysis that betters the "results" of the Cornell review? I'm sure we'd all be interested. As it is, the Cornell review is an excellent guide to choosing among the best binoculars available. Notable optical exceptions are the Swift 820 Audubon and the Nikon SEs. Since manufacturers chose their products, I'll assume Swift and Nikon aren't interested in promoting these lines. The general absence of porros in the review is striking.

John
 
John Traynor said:
The Cornell review was never represented as a scientific study. Clearly, Ken made the point that personal subjectivity reigned in the findings, and there is simply no mention of statistical analysis. It is nothing more than a simple review of manufacturer-supplied binoculars by everyday users.

John

Precisely. I think it's rather unsporting to expect the article to be more, and its not being more is due to neither bias nor incompetence. It is what it is: an evaluation based upon simple "like"/"don't like" opinions, which is how 98% of birders evaluate binoculars. Most people (not the denizens of this forum, but most) will pick up a bin and say, "Wow, that's sharp!" or "Hmm, what's so special about this?" or "This is too heavy!", and -- for them -- one number to express optical performance (for example) is enough. I'm reminded of the test reports of audio and photo products in Consumer Reports where, say, "sound quality" or "optical quality" is boiled down to one colored button. To an audio or photo enthusiast, it's laughable, but for most people it's more than enough.
 
Curtis Croulet said:
Precisely. I think it's rather unsporting to expect the article to be more, and its not being more is due to neither bias nor incompetence. It is what it is: an evaluation based upon simple "like"/"don't like" opinions, which is how 98% of birders evaluate binoculars. Most people (not the denizens of this forum, but most) will pick up a bin and say, "Wow, that's sharp!" or "Hmm, what's so special about this?" or "This is too heavy!", and -- for them -- one number to express optical performance (for example) is enough. I'm reminded of the test reports of audio and photo products in Consumer Reports where, say, "sound quality" or "optical quality" is boiled down to one colored button. To an audio or photo enthusiast, it's laughable, but for most people it's more than enough.

John/Curtis,

If you like this article, and the methods used in it to produce rankings (i.e., rating statistics), I certainly honor your defense of it. Personally, I would have hoped that a full professor at a major ornithology laboratory would be more professional in evaluating the tools of the trade for the average birder. Whether or not he realizes it, statistical analysis was indeed done with the data, and I don't think it's "unsporting" to point out that it had serious failings, just as we have no difficulty discussing the failings of a binocular or telescope.

Oh, I recognize a challenge when I see one guys. Alas, I have no access to a large number of new binoculars provided by manufacturers, or students/staff to make evaluations. However, for the next "review" I'd be more than pleased to provide free consulting services for both the design and analysis. How's that? |=)|

Since some folks apparently don't see the perils of composite ranking scores (i.e., overall "goodness" scores), I'm somewhat reluctant to mention that a few days ago I transcribed the "Top Gun" statistics, decomposed the Quality Index (QI) scores, and then did a multiple regression analysis. No doubt many will roll their eyes at this boring stuff, but IMO it is relevant.

The QI scores are a linear combination of subjective and objective factors. Dr. Rosenberg constructed the weighting function out of whole cloth (remember he weighted image quality by 2x?). Well, it's not evident at first but he also gave zero weight to several other objective factors important enough to be included in the table and which most people would take into consideration when evaluating binoculars. These are magnification (power), objective size, weight, and price.

The pure subjective index (SI) part of the QI can be obtained by subtracting out the objective ranking scores for FOV and close focus, leaving SI = 2x(image quality) + overall feel + eyeglass friendliness. The regression analysis addressed the simple question: "To what extent can SI be predicted from the physical properties of the binoculars, without knowing brand name or model?" The six predictors were FOV, close focus, magnification, objective size, weight, and price.

Okay, subject to sampling error and several "assumptions," the multiple linear regression R = .831 suggests* that one can account for R^2 = 69% of the variability in the subjective scores, which include the all-critical "image quality." This prediction is made without knowing the brand of the binoculars or anything physically about their image quality. The computed weights of the function:

SI = .11(Power) - .41(Objective Size) + .52(FOV) + .36(Weight) - .26(Close Focus) + .49(Price)

suggest that for these (averaged) raters, FOV and Price are positively related to the SI rating, while objective size and close focus are negatively related. Larger objectives and longer near-focus decrease the SI, while power (magnification) has the least predictive influence of all. What's the point? The Quality Index is largely determined by physical design factors, and may have very little residual relationship to "quality" as it might commonly be understood. As I see it, the metric is basically tautological.
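
For anyone who wants to play with this sort of analysis, here is a rough sketch in Python of the regression I described. To be clear, the numbers in it are random placeholders, not the actual Cornell data; with the review's table in hand you would load the real "Top Gun" rows instead, and the column labels are just my own shorthand.

[CODE]
# Sketch of the multiple regression described above.
# Placeholder data only -- NOT the actual Cornell figures.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 20  # one row per "Top Gun" binocular

predictors = ["power", "objective", "fov", "weight", "close_focus", "price"]
X = rng.normal(size=(n, len(predictors)))  # standardized physical specs
si = rng.normal(size=n)  # SI = QI - (FOV rank + close-focus rank)

model = LinearRegression().fit(X, si)
print(dict(zip(predictors, np.round(model.coef_, 2))))  # beta weights
print("R^2 =", round(model.score(X, si), 3))  # variance accounted for
[/CODE]

With standardized predictors the fitted coefficients are directly comparable to one another, which is how the weights quoted above should be read.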

It's really not worth the effort to try to unscramble Dr. Rosenberg's egg any further, but I will add this comment. If all that was intended was a composite ranking of 20 "Top Gun" binoculars, what in the world would have been the problem with simply asking the 40 evaluators to independently place them in rank order (1-20) and then analyzing the ranks for each binocular? For each binocular one could compute the mean and standard deviation of the ranks (better, the median and interquartile range). For example, if everyone rated a particular binocular as #1, its average ranking would be 1 and its standard deviation 0. Several simple non-parametric methods could also be applied to determine binocular clusters, or even how the ranking results differed between experts and novices.
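
If it helps to make that alternative concrete, here is what the rank-order scheme would look like in code. Again, the ranks are randomly generated stand-ins for the 40 evaluators; the point is only to show how little machinery the approach needs.

[CODE]
# Sketch of the rank-order alternative: 40 raters each place 20
# binoculars in rank order, then we summarize the ranks per model.
# The rank matrix is a random placeholder, not real data.
import numpy as np

rng = np.random.default_rng(7)
n_raters, n_bins = 40, 20

# Each row is one rater's permutation of the ranks 1..20.
ranks = np.array([rng.permutation(np.arange(1, n_bins + 1))
                  for _ in range(n_raters)])

median_rank = np.median(ranks, axis=0)
iqr = np.percentile(ranks, 75, axis=0) - np.percentile(ranks, 25, axis=0)
mean_rank = ranks.mean(axis=0)
sd_rank = ranks.std(axis=0, ddof=1)

# A binocular every rater placed first would show a mean rank of 1.0
# and a standard deviation of 0.0.
for i in np.argsort(median_rank)[:5]:
    print(f"binocular {i}: median {median_rank[i]:.1f}, IQR {iqr[i]:.1f}, "
          f"mean {mean_rank[i]:.2f}, SD {sd_rank[i]:.2f}")
[/CODE]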

Okay, if you still believe the Cornell method has great meaning/merit you're certainly welcome to that opinion. My purpose has not been to discredit Dr. Rosenberg, which is probably what it sounds like, so this will be my swan song on the subject (mute swan song ;)).

Elkcub
* the analysis is based on a very small sample and the data are combined across an unknown and varying number of reviewers, hence, the results are suggestive at best.
 
Bill Atwood said:
The blindness exhibited on this "optics" forum is utterly amazing.

Bill,

We took our non-evaluated, unranked, ancient-technology Nikon SE 8X32 porro prisms out the other day and only saw a few birds. Our list included: two Bald Eagles, two Osprey, many Great Blue Herons, a couple of snow-white Great Egrets, a lonely Spotted Sandpiper, one Kingfisher, flocks of Canada Geese (aren't they cool pilots when they land in formation?), a few Snow Geese, a Pheasant family bathed in late afternoon sunlight, innumerable Tree Swallows, several glorious juvenile Barn Swallows, a few Bluebirds, a solitary Harrier scanning the fields, a very distant Meadowlark, two Green Herons, eight DC Cormorants, 20+ Cedar Waxwings, American Goldfinches, crows, Catbirds, and a few others I can't recall. First on the viewing list, however, was the heart-stopping image we got of a Common Yellowthroat perched among a field of wildflowers bathed in the crimson glow of late afternoon on an August evening. A short list, to be sure, but what can you expect from a porro and a brief 2.5 hours of birding time?

I know owning a leaky porro is really silly, but the views are acceptable and we’re usually left alone in the field. However, I sometimes envy the birders who get pulled aside for mandatory, impromptu, star tests. What a thrill it must be to participate in a scientific study of field optics. Oh well, I guess I’ll have to be satisfied with looking at the birds.

John
 
John, please tell me you are joking!!! Well, at the absolute minimum, if you provide any data for biological studies (BBS or other), please, please, please let them know about your untested bins, so that they may perform the appropriate regression analysis on your data.
 
The table at the end has the specs for the bins tested, and a huge number were tried, so in that respect the article is useful. In most cases the scores are not that different from other reviews and tests, and my own far more limited experience. So as a rough guide it is okay, and the table is useful.

But I do think that using one number to grade the image quality is too crude a measure.

Also they say that "In terms of pure image quality, six models received “perfect scores” from our reviewers, indicating an absolutely flawless, bright, and crisp-from-edge-to-edge image." Unfortunately, no such binocular exists. The Zeiss FL 8x42 DOES NOT have a crisp-from-edge-to-edge image. Or at least mine doesn't, and comments on BF agree. I'm sure John Traynor will agree too. I'm not trashing the Zeiss, as it is one of my favourites, but this does make me question the carefulness of the testing, and I wonder about the value of the results.

At the risk of sounding rude, I don't think the tests are very useful. I will stick to more careful and reliable testing - for example Alula.

Leif
 
Bill Atwood said:
John, please tell me you are joking!!! Well, at the absolute minimum, if you provide any data for biological studies (BBS or other), please, please, please let them know about your untested bins, so that they may perform the appropriate regression analysis on your data.
Bill,

Pro Avibus just completed an exhaustive scientific review of the Nikon SE 8X32. You can peruse their findings at http://www.birdforum.net/showthread.php?p=400406#post400406

If it wasn't pouring rain, I wouldn't be here!

John
 

Leif said:
The table at the end has the specs for the bins tested, and a huge number were tried, so in that respect the article is useful. In most cases the scores are not that different from other reviews and tests, and my own far more limited experience. So as a rough guide it is okay, and the table is useful.

But I do think that using one number to grade the image quality is too crude a measure.

Also they say that "In terms of pure image quality, six models received “perfect scores” from our reviewers, indicating an absolutely flawless, bright, and crisp-from-edge-to-edge image." Unfortunately, no such binocular exists. The Zeiss FL 8x42 DOES NOT have a crisp-from-edge-to-edge image. Or at least mine doesn't, and comments on BF agree. I'm sure John Traynor will agree too. I'm not trashing the Zeiss, as it is one of my favourites, but this does make me question the carefulness of the testing, and I wonder about the value of the results.

At the risk of sounding rude, I don't think the tests are very useful. I will stick to more careful and reliable testing - for example Alula.

Leif

Alula agreed with the Cornell study: Zeiss 8x42 FL number one. Two studies, same result.

Dennis
 
You guys are just a bunch of sore losers! I've tried almost all of the "Top Gun" binoculars in the Cornell test and I agree with it 95%. It is an excellent test of binoculars, and I totally agree with the results. You probably have Leicas and you're mad because they came in third place. Bite the bullet, trade them in, and get the Zeiss FL.

Dennis

I have Leicas; they came in at No. 1 in my review, and I am not a sore loser. I demand an apology from you.
 
Otto, if your Leica was not tested for miscollimation and other defects, and if it was not judged by at least 40 scientists whose findings then underwent a thorough statistical analysis, then your review is utterly worthless and you need to throw those Leicas off a high cliff.

And God help you if you try to publish your review in a birding mag.

:bounce:
 
Bill Atwood said:
Otto, if your Leica was not tested for miscollimation and other defects, and if it was not judged by at least 40 scientists whose findings then underwent a thorough statistical analysis, then your review is utterly worthless and you need to throw those Leicas off a high cliff.

And God help you if you try to publish your review in a birding mag.

:bounce:

OK, I give up.
 

Leif said:
I'm not sure what you are saying. Leif

You said that Alula was a more scientific study than the Cornell study. I think it is interesting that both studies came to the same conclusion even though they perhaps used different methodologies. That's my point.

Dennis
 