Which one do you like better, the SLC or the SV?

The Allbinos website reports on a particular set after it's been in use for a relatively long time. With regard to blackouts, they could simply say, when they list the cons, that many users have reported blackout problems, etc.
Anyway, the blackout issue was just an example and not the main point of the discussion.
 
I should add that, while the method of averaging is questionable, in order to obtain a global score one has to use some form of averaging. The features whose scores are summed are indeed rather different, yet by using a different maximum score for each of them, averaging their scores boils down to weighted averaging, which to my mind is a sound approach. Of course, the way the maximum scores are assigned is the delicate part, and it can surely be improved upon. Peter.
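To make Peter's algebra concrete, here is a minimal sketch (feature names and numbers are invented, not Allbinos' actual categories) showing that summing sub-scores with different maximum scores is the same thing as a weighted average of the normalized scores, with each feature weighted by its maximum:

```python
# Summing sub-scores with different maxima == weighted average of the
# normalized (0..1) scores, weighted by each feature's maximum score.
# Feature names and numbers below are hypothetical.

features = {
    # feature: (score, max_score)
    "sharpness":  (8.5, 10.0),
    "ergonomics": (4.0, 5.0),
    "tripod":     (2.0, 3.0),
}

total_max = sum(mx for _, mx in features.values())

# Plain sum of raw sub-scores, as a review table would report it.
raw_sum = sum(s for s, _ in features.values())

# The same number rewritten as a weighted average of normalized scores:
#   sum(s_i) = total_max * sum((m_i / total_max) * (s_i / m_i))
weighted = sum((mx / total_max) * (s / mx) for s, mx in features.values())

print(raw_sum)               # 14.5
print(total_max * weighted)  # 14.5 -- identical up to float rounding
```

So choosing the maximum scores is exactly choosing the weights, which is why, as Peter says, assigning them is the delicate part.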

Hi Peter,

Yes, of course, my post was a spoof designed to illustrate the serious problems inherent in ranking by composite scores. Allpeople's and Allbinos' scores are equally meaningless, and as you immediately saw they cast suspicion on the underlying agenda of the scorekeeper. I admit to doing that deliberately to make the point.

I'm not sure that I can agree with you that a global score is needed, or that a weighted average of attributes is any more meaningful than a weighted average of mixed fruits in a basket. Could we use the composite score to rank the baskets?

Anyway, it was just an outburst. I'll try to control myself. :king:

Ed
 
Ed,

If, ipso facto, your hyperbole proves that Allbinos' scores are meaningless, and since Allbinos is the "only game in town" because no other site (at least none written in English) measures and ranks what Arek does, what kind of methodology do you propose for comparing binoculars?

Or are binoculars such a personal instrument that all such comparisons are meaningless (in which case, they might as well shut down the BF binoculars forum, because it's all rubbish, some posts more so than others). ;)

Brock
 
This thread is about comparing two different binos, so luckily a discussion on how to do that is on topic (which is a bit unusual...).
Allbinos ranks binos in two ways:
a) Averaging users' rankings, as many other websites do. I think this is a sound approach: if many users liked/disliked a set, then I am also likely to do the same.
b) Establishing a set of features that are deemed to be important, measuring/estimating them, expressing the results in numbers, and summing/averaging these numbers. This approach is open to question, but I cannot see any other way in which a single person can rank a set of... sets. Possible questions here:
- is there a better set of features?
- is there a better way of assigning numbers to features?
Both are valid questions, and contributions are welcome.

Regarding whether "to rank or not to rank": once the above questions are properly answered, I can see no harm in doing the sum and the corresponding ranking.

Peter.
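To illustrate method (b) end to end, here is a minimal sketch (model names and scores are invented) that does "the sum and the corresponding ranking" Peter describes:

```python
# Method (b) as code: sum per-feature scores for each model, then rank
# by the total. All model names and numbers are hypothetical.

scores = {
    "Model A": {"sharpness": 8.5, "ergonomics": 4.0, "tripod": 2.0},
    "Model B": {"sharpness": 9.0, "ergonomics": 3.5, "tripod": 3.0},
    "Model C": {"sharpness": 7.5, "ergonomics": 4.5, "tripod": 2.5},
}

totals = {model: sum(feats.values()) for model, feats in scores.items()}
ranking = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for rank, (model, total) in enumerate(ranking, start=1):
    print(f"{rank}. {model}: {total:.1f}")
# 1. Model B: 15.5
# 2. Model A: 14.5
# 3. Model C: 14.5  -- a tie
```

Note how easily ties and near-ties appear; that is exactly where the choice of features and of their maximum scores, Peter's two open questions, decides the ranking.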
 
Hi Ed: if your point was that there is no sound/universally accepted set of features and no proper way of assigning numerical values to them, then I am not sure I agree: after so much experience in the field with all these babies, I think in this... field we know pretty well what matters, and what's more/less important. Peter.
 
Allbinos is a beauty contest, nothing more. You cannot, based on single samples, conclude much of anything. Here's a rock, it's granite, all the rocks around here must be granite. Well, you're a fool if you take every rock for granite.

PS
They could measure effective eye relief but they don't. I wonder why.
They assign a numeric value to IPD when the only relevant value to a consumer is pass or fail.
Phrases coupled to numerical ratings, like "a bit higher than medium 7.2/10.0", "Practically zero 9.7/10.0", "Perfect 4.9/5.0" and "Tripod: You can buy an optional brand name adapter. 2.0/3.0", should reveal apparent weaknesses in their process. How do you get a 4.9/5.0 for "perfect"? :brains:

PPS
Word of mouth, personal review, and collective wisdom do a better job. Sort of like the best of BF. :t:
 
Hi Brock,

Glad you're back!! Missed you. I'm off to the dentist, but in the meantime would you take a peek at Measurement Theory?

I wouldn't use a stone mason as a dentist, even if he/she decided to take up the trade. ;)

Ed
 
...If, ipso facto, your hyperbole proves that Allbinos' scores are meaningless, and since Allbinos is the "only game in town" because no other site (at least none written in English) measures and ranks what Arek does, what kind of methodology do you propose for comparing binoculars?

Actually, the kind of stuff you do rather well (sometimes ;)). There is no need for Allbinos' numerical alchemy to make your opinions more factual, valid or influential.

This is not meant as idle flattery, but I've gotten more out of your observations than anything Allbinos ever said.

I guess, as a generality, listen to people whose opinions you've come to trust. Use them as your guide.

Ed
 
...after so much experience in the field with all these babies, I think in this... field we know pretty well what matters, and what's more/less important.

Oh, we agree. The problem is in creating a measurement system that reflects our experiential knowledge — and can also be validated. All of these systems, like Allbinos, Cornell, and so forth, essentially produce commercial hype under the guise of being scientifically rigorous.

If someone wanted to develop a system, one place to start would be a modified version of the Cooper-Harper aircraft rating scale. Back in 1967 I was lucky to be assigned to NASA's technical review board, and got quite an education. The scale isn't perfect, and it took a lot of work to develop and validate. So these kinds of things can be done. In this case there would be a need for birding task analyses, expert birders, good statisticians, and research support, i.e., mucho $$$. Don't blame me, I'm only the messenger. ;)

Ed
 
Ed:

The Allbinos website is the only one of its kind that displays this large a number of tests, and yes, some of the scores are subjective. I have read the Cornell tests, and that one is even more subjective. As mentioned, sample variation may be an issue, but how does one even begin to deal with that?

I have, or have owned, some of the models tested, and in general I find the results much to my agreement.

I think you have a beef with them because they scored the SLC-HD so poorly, and it seems it is one of your favorites. ;)

I also question the placement of that one. It stands out.

Jerry
 
Hi Jerry,

I believe their statement about the SLC-HD (they evaluated the 10x42, not the 8x42 that I own) was intemperate, if not outrageously out of place for fair-minded product evaluators. Clearly such autocratic, stiff-necked attitudes undermine the objectivity of everything they say. FWIW, I would speak out as strongly about any other product that was thrown under the bus; for example, capitalizing on two leaking Leica samples to condemn the whole Leica production line for not meeting waterproofing standards. Good grief.

I'm curious, what do you mean by: "...and in general I find the results much to my agreement"? Not arguing, of course, but have their ratings informed you of anything you didn't already believe? ;) Confirmation Bias.

Ed
 
...This is not meant as idle flattery, but I've gotten more out of your observations than anything Allbinos ever said.

I'm happy to hear that somebody finds my observations useful! Thank you for mentioning it. I do wonder sometimes.

The problem is, and here is an example, if I say that a certain 8x30 looks brighter to me than a certain 8x32, an expert will chime in and assert that the 8x32 has 13.78% more light-gathering power than the 8x30, and therefore, all other things being equal, the 8x32 must be brighter according to the Laws of Physics.

Well, perhaps it must be, but it doesn't appear that way to me since there's more going on with people's perceptions of bins than cold-blooded physics will allow. The reason this conflict arises is that experts often take human perception out of the "equation."
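For reference, the expert's figure is just the ratio of objective areas; a quick sketch of the arithmetic, assuming the usual rule of thumb that light-gathering power scales with objective area:

```python
# Light-gathering power scales with objective area, i.e. with the square
# of the objective diameter; exit pupil is diameter / magnification.

d_small, d_large, mag = 30.0, 32.0, 8.0

area_gain = (d_large / d_small) ** 2 - 1.0
print(f"{area_gain:.2%}")   # 13.78% more light-gathering area

print(d_small / mag)        # 3.75 mm exit pupil for the 8x30
print(d_large / mag)        # 4.0 mm exit pupil for the 8x32
```

Which is exactly the "all other things being equal" claim; whether the eye reports it that way is the point at issue here.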

Some people are trying to scientifically study human perception.

Maybe someday they will finally understand what goes on between our ears, but until they do, like you, I will listen to people whose opinions I've come to trust and use them as a guide to tell if I would or would not be interested in a particular bin.

I also have years experience looking through binoculars myself, and while I'm occasionally surprised, I pretty much know what I like and what I don't like. There are some automatic disqualifiers such as narrow FsOV, heavy weight (particularly in roofs), too little pincushion, too much pincushion, oversized eyecups, ultra fast focusers, roof prism compacts, steep fall off at the edges, etc., and some features that pique my interest -- pretty much the opposite of the disqualifiers.

This is not to say that I don't find expert tests useful, since they help to confirm or deny the manufacturer's claims ("sharp to the edge", "good for eyeglass wearers", "5 ft. close focus", etc.), but those test numbers are only the starting point for me. Reading others' opinions of the binoculars and factoring in my likes and dislikes tells me more. Only actually using the bin will tell me for sure whether or not I will like it, but I've rarely been surprised after going through this three-step process.

I'm always interested in reading people's opinions about bins unless they are merely looking to confirm that their bin is better than someone else's or in denying flaws in their bin because they own it. Unfortunately, there's too much of that going on sometimes on these forums, which makes me turn away and find something more interesting to do.

Brock
 
Amen! :t:

Even when your pupil is ≤ 3.75 mm, and the "effective" XP of an 8x32 is the same as an 8x30's, the latter can still appear brighter for several reasons. The one reason that's never mentioned is that the transmitted spectrum, weighted by the human visual sensitivity function, may yield more lumens. It depends on the coatings and the light source. Of course, it also has to do with the ambient light level and the momentary state of retinal adaptation. I'm not saying this is a major brightness factor, because I don't know how perceptible it is. To my knowledge it's never been studied empirically, but a computer model might shed some light. In the meantime all we have to go on is expert observers.

Ed
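Ed's suggested computer model is straightforward to sketch. Below is a minimal, deliberately exaggerated version: the transmission curves and the light source are invented, and the CIE photopic sensitivity V(λ) is approximated by a Gaussian peaking at 555 nm (the real curve is tabulated, not Gaussian).

```python
import numpy as np

# Sketch of the "computer model" idea: weight each spectral transmission
# curve by the eye's photopic sensitivity V(lambda) and integrate.
# Curves are hypothetical and exaggerated for clarity.

wl = np.linspace(400.0, 700.0, 301)                    # wavelength, nm
v_lambda = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)   # rough V(lambda)
source = np.ones_like(wl)                              # flat light source

# Two hypothetical coatings with the same peak transmission:
# one peaked in the green (near the eye's peak), one in the blue.
trans_green = 0.90 * np.exp(-0.5 * ((wl - 550.0) / 60.0) ** 2)
trans_blue  = 0.90 * np.exp(-0.5 * ((wl - 450.0) / 60.0) ** 2)

def luminous_throughput(trans):
    """Relative luminous flux: sum of T * V * source over the band."""
    return np.sum(trans * v_lambda * source) * (wl[1] - wl[0])

ratio = luminous_throughput(trans_green) / luminous_throughput(trans_blue)
print(f"green-peaked vs blue-peaked throughput ratio: {ratio:.2f}")
# Roughly 2.7: same peak transmission, very different lumens once the
# spectrum is weighted by the eye -- the mechanism Ed describes.
```

This is only a toy, but it shows how two instruments with similar transmission figures can deliver different lumens once the spectrum is weighted by the eye's sensitivity, exactly the factor Ed says has never been studied empirically.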
 