
Holger’s Binocular Book - in English in 2023!

So there seem to be a number of forum members who have ordered or will or may order the book.

But is that enough to improve the quality of our bino discussions?

Statistically, of those people who order a book,

  • 14% put it on their bookshelf without opening it
  • 27% put it on their bookshelf while claiming to have read it
  • 32% will superficially leaf through some chapters while listening to the news before putting the book on their bookshelf
  • 19.6% will read parts of the book but won't understand a word of what they have read
  • 7.4% will read and understand, but forget what they read within two weeks.

So which category are YOU in?
 
I am sure there are also people who will read a book and understand and remember what they read for a long time, but that category has 0% in the above statistics. Also not all categories are mutually exclusive, but the percentages add up to 100%. As Mark Twain wrote: "lies, damned lies, and statistics."
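For what it's worth, the arithmetic behind the joke does check out. A quick sketch (the category labels are my own shorthand for the five bullets above):

```python
# Quick arithmetic check on the tongue-in-cheek "statistics" above;
# the category labels are shorthand for the five bullets, not official names.
categories = {
    "shelved unopened": 14,
    "shelved, claims to have read it": 27,
    "leafed through while listening to the news": 32,
    "read parts, understood nothing": 19.6,
    "read and understood, forgot within two weeks": 7.4,
}

total = sum(categories.values())
print(round(total, 6))  # the five made-up percentages do sum to 100
```

Which is, of course, exactly how suspiciously tidy made-up statistics tend to be.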
 

Canip is having some fun it seems! Truth is: The book is going to be easy and fun to read and a vast majority of our formidable forum is going to understand it :)

Cheers,
Holger
 
Yes I do ;) - and nobody can prove my "statistics" wrong :LOL:
(and you know that I wish your book all the success it deserves)
I dare to question that claim: "your statistics" will be proven wrong when members of BF recite from memory facts from Holger's book that they read (and understood) more than two weeks earlier. At least I hope so....
 
Probably in the 7.4%, but you ought to have provided confidence intervals on your numbers…. Precision is easy, accuracy less so!

Peter
 
Today I’ve received an offer from Springer for a 50% discount on many books and ebooks, and Holger’s book is eligible for this offer! This puts the hardcover book at only 23.73 euros, including shipping cost. Of course I ordered the book immediately. 🙂

Then I discovered that Neil’s book, Choosing and Using Binoculars, is also available on this site, and also eligible for the discount price, so it costs only 15.81 euros. I ordered it too.

I don’t know if this offer is also valid for newcomers. For those who are interested, you can try to register on the Springer site and type HOL50 as the discount code. This special offer is valid until December 31.

Jean-Charles
 

Attachments

  • Order.png (38.2 KB)
My copy has just arrived from Heidelberg, so great job and thank you to Springer and UPS for such a speedy despatch and delivery, I now have Christmas reading for while 'the olds' have their afternoon snoozes on Christmas Day and Boxing Day. 😉
 
While waiting, 11 wisdoms:

All very interesting.
Take a look at Wisdom no. 6 and Wisdom no. 9. It seems we can forget a lot of posts/debates about high-end binoculars.
I believe most individuals will concur with Holger's comments, albeit the range of binoculars obviously does not cover every current modern model e.g. GPO HDs, Opticron Aurora, Svbony SV202s, Adventure T WP porros, to name a few across the price ranges. (EII is now £700!)

The aspect I would "challenge" is Wisdom No 8: Plotting price vs 'Relative Optical Performance' is certainly a good idea and worthy of consideration, as we all look at binoculars in this way.
However, the measure 'Relative Optical Performance' might (?) carry a level of subjectivity and personal preference.

If (big IF, and I haven't looked!) this 'ROP' is a list/matrix of weighted or unweighted criteria, then each bin has been assigned a value for each of the chosen criteria. Maybe some are based on specs or measured features, e.g. size/weight. This methodology is totally fine; but for each of us it could be heavily influenced by personal preferences and experiences. Examples - I like a flat field, am not sensitive to CA, dislike excessive FC, and prefer not wearing glasses with my -7D myopia. Image stabilisation would have a very high weighting.

A similar evaluation methodology is used by engineering designers when they have multiple designs and the organisation needs to down-select the product options at 'design freeze', so that tooling and manufacture can then ramp up.
The number and list of criteria, with their relative weightings, is critical for the product to be a commercial & engineering success. Often it is picking and weighting 'apples vs oranges'.
In the binocular domain it could be FoV vs mass, centre sharpness vs edge sharpness, light transmission at 500nm vs 600nm, etc. .... What is the list of criteria, is it extensive enough, and what is the weighting of each criterion in the 'ROP' matrix?
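To make the weighted-criteria idea concrete, here is a minimal sketch of such a matrix. Everything in it - the criteria names, the weights, the two model names, and all the scores - is invented for illustration; these are NOT Holger's actual 'ROP' inputs:

```python
# Toy weighted-criteria matrix for comparing binoculars.
# All criteria, weights, model names, and scores are invented for
# illustration; they are NOT Holger's actual 'ROP' inputs.

# Per-user weights (summing to 1.0) encode personal preferences.
weights = {"centre_sharpness": 0.30, "edge_sharpness": 0.15,
           "fov": 0.25, "ca_control": 0.10, "ergonomics": 0.20}

# Hypothetical 0-10 scores for two made-up models.
scores = {
    "Model A": {"centre_sharpness": 9, "edge_sharpness": 6,
                "fov": 8, "ca_control": 7, "ergonomics": 8},
    "Model B": {"centre_sharpness": 8, "edge_sharpness": 9,
                "fov": 6, "ca_control": 9, "ergonomics": 7},
}

def rop(model_scores, weights):
    """Weighted sum of criterion scores -> one 'relative performance' number."""
    return sum(weights[c] * s for c, s in model_scores.items())

for model, s in scores.items():
    print(f"{model}: {rop(s, weights):.2f}")
```

With these weights Model A edges out Model B, but shift the weights (say, a glasses-wearer pushing ergonomics up and edge sharpness down) and the ranking can flip - which is exactly the 'apples vs oranges' problem described above.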

Maybe Holger has the matrix and weighting for each criterion in his book (or deeper on his website?) .... my copy of his book is in the post and I can check when it arrives.

So, before I get lots of negative comments about the above ....... I am not criticising Holger in any way; in fact I concur with his methodology, having used similar methods to select engineering designs. What I believe is that the 'ROP' criteria matrix could be somewhat personal to Holger and might not fit each of our own criteria & relative weightings.
This 'shoulder curve' reflects most products ... the sweet spot is usually the items that fall at the mid-point .... good performance for the price.
That was the EII at one time, but with price changes and new entrants, this graphic will change over time.
 
I am not criticising Holger in any way, in fact I concur with his methodology, having used similar methods to select engineering designs, but what I believe is that the 'ROP' criteria matrix could be somewhat personal to Holger and might not fit each of our own criteria & relative weightings.

My thoughts:

If I have little experience with optical systems, I can trust Holger's technical and general experience and accept his 'Relative Optical Performance', even without knowing the details, because I do not have a clear 'ROP' criteria matrix of my own, and I know it.

If I am experienced in this domain, I would like to know the details of 'ROP' - not to criticize, but to compare. But not in a public space.

If I am interested in resale value, I can ignore all criteria (Holger's included) and buy from the brand(s) with the best resale value. This costs a lot of money, and intrinsically the quality is excellent/super/the best of the best/the only one to be considered/etc.

A lot of other 'if's can be added.

I think that Wisdom #8 applies to a majority of binocular users, birders or not. Hence its value.
 
I think that Wisdom #8 applies to a majority of binoculars users, birders or not. Hence the value of it.
I agree .... Wisdom #8 applies in principle to the majority of products; it is choosing the 'ROP' criteria that places each of the respective products at its applicable point on the curve.

An example ..... I am interested in a GPO 8x42 HD vs an Opticron Aurora; where is each on the curve, relative to an EII?

Interestingly - here is the Cornell graphic showing the same type of evaluation (Wisdom #8 graphic) :

 
This 'shoulder curve' reflects most products ... the sweet-spot is usually the items that fall in the mid point .... good performance for the price.
That was the EII at one time, but with price changes and new entrants, this graphic will change over time.

I agree that such a plot is of a very qualitative character - it is just supposed to show the general trend behind the rule of diminishing returns. No overall performance can reliably be cast into a single number, and there will always be individual products which offer outstanding performance in particular tasks.

Cheers,
Holger
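The diminishing-returns trend Holger describes can be sketched qualitatively. The saturating exponential shape and the 500-euro scale constant below are purely illustrative assumptions, not a fit to any real binocular data or anything from the book:

```python
import math

# Illustrative diminishing-returns ("shoulder") curve: relative optical
# performance rises quickly at low prices, then flattens toward a ceiling.
# The saturating-exponential form and the 500-euro scale constant are
# assumptions for illustration only, not fitted to real binocular data.

def relative_performance(price_eur, scale=500.0):
    """Map price to a 0-1 'performance' figure with diminishing returns."""
    return 1.0 - math.exp(-price_eur / scale)

for price in (100, 300, 700, 1500, 3000):
    print(f"{price:>5} EUR -> {relative_performance(price):.2f}")
```

Doubling the price from 1500 to 3000 euros buys far less extra 'performance' on this curve than moving from 300 to 700 euros, which is the shoulder-shaped sweet spot the thread keeps coming back to.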
 
Any model that consistently gets mentioned in these sorts of wide-scale rankings by organisations that know what they're doing and don't have a vested interest in the result should give you models to consider. There may be some features of each that you specifically care about… glare, eye relief, field of view, mechanics… that would help identify which you might prefer. It could be fun to plot bino performance in a multi-dimensional space (how many parameters do you want to consider??), but as Holger notes it wouldn't really add much.

Peter
 
What I find interesting is that every time there is a new thread asking "please recommend a bin....", subjective suggestions are made, often recommending underperforming bins, or bins overpriced for their performance. This is very obvious on BF, as many 'like-minded' individuals frequent the forum .... with perhaps some 'groupthink'.

Of course the opinion of the poster is valid to them, but it is commonly based on hands-on experience of only one or a few bins, or bins in a different price category, or little more than their perception, often not reality.

The value of wide-scale rankings by Cornell, Neil English, Bestbinoculars, Allbinos and others is that they bring a more level and grounded assessment, using a range of bins across many price categories. It's easy to pick the £3000 bin and claim it is the best and everything else is garbage.

What is worse in my view is brand bashing ..... writing off individual bins based on the make ..... yet if one reads BF it is easy to find repeated major flaws in top-priced models, e.g. armour failing, bad focusers, many samples needed to get a keeper, glare, prism spiking, etc.

IMHO there are definitely sleeper bins that perform better than their price points ..... which is exactly what the typical shoulder graphic that Holger and Cornell have used helps to pull out.

However, getting agreement on the assessment categories and their relative weighting is the problem that needs to be overcome in order to reach some level of collective conclusion on what the standout bins are. With every new bin, the assessment would need to be updated.

I for one have taken a conscious decision not to pay £1000+ extra for criteria that are minor in my opinion, when there are much harder-hitting features, such as IS, which elevate low/mid-priced (£425) bins way above the most expensive models (£3000).
 
