Forums > Binoculars & Spotting Scopes > Binoculars > Nikon > Nikon 10x42 SE & Swift Audubon 820 8.5x44
1. No, they would not. If they were of different quality, then when looking through both at 45x one might show 4″ and the other 6″.

2. I agree. But we are not about to attempt to match ourselves all in a lab; that is not practical. Here you can take two approaches to resolving the issue you raised. On one hand, you can say that every difference we see between two testers is due to a difference in acuity between them. If you choose that route, the only valid comparisons are those conducted entirely by the same individual: any number of products tested by one person can be compared with each other, but never with products tested by someone else. On the other hand, you can say that the differences we see among the brands tested by these observers really do reflect differences in quality. While there may always be some differences due to acuity, this is the more reasonable approach. Why? Because we have some history with various testers and some means of benchmarking that can be relied upon. We can establish patterns of acuity between observers. A well-versed tester can report his own acuity, and we can take it from there.

I'm not going to touch #3, because it complicates matters and is beyond the scope of this discussion. You could write a paper on that topic alone.

Here are some other scenarios. (We could probably make up an endless number of scenarios. Many are instructive and highlight how missing information undermines a testing method.)

Say a tester takes a group of binoculars, a variety of nominal 7x50s, 8x40s and 10x50s, and performs resolution tests. The raw arcsecond values are then posted and used to rank the binoculars by their resolution readings. What could go wrong?

First, refer back to what I described yesterday as a fourth scenario, repeated here:

[ONE observer uses two binoculars of different brands, both the same size, each used at numerous powers. For the sake of discussion they both do very well: both reach 3″. But the first shows 3″ at both 60x and 65x, with no further improvement, while the second does not show 3″ until you reach 70x; beyond 70x there is no further improvement and the image is fuzzy. This is not a test of acuity. The first binocular reaches its resolution limit at a lower power, has a lower apparent resolution, and is better.]

Now the data has been collected, but the observer does not report the power at which each binocular reached its maximum resolution of 3 arcsec. Do you have enough information to judge the quality of these binoculars? Can you rank them by their raw resolution of 3″? The answer to both is no. One binocular reached a good value at a lower power, a clear indication of fine beam transfer, potentially lower aberration, and fine contrast. The other may get down to the same low resolution value, but it takes the brute force of additional magnification to get there. These are not two equal binoculars.

Magnification can make an image large enough to see regardless of how good that image really is. Without knowing what power it took to see the result, information is missing from this resolution test. Both appear to be 3″ binoculars, yet one achieves the result far more easily than the other, an indication of finer quality. The power factor shows the difference.

Here is yet another scenario. As before, a tester takes a group of binoculars, this time just a variety of nominal 8x40s, and performs resolution tests. The raw arcsecond values are posted and used to rank the binoculars by their resolution readings. What could go wrong?

We know that binoculars do not all measure exactly their stated nominal power. I have 12 different 8x binoculars; at (near) infinity focus they range in power from 7.5x to 8.4x. (I had one 10x42 roof that actually measured 10.8x.) Not only do these binoculars all have different actual powers, but as each is refocused toward close focus, the power in most of them increases. (I have one extreme close-focus binocular that goes from 6.5x at distance to 8.0x at its close focus of 0.5 m.) A typical example: a binocular that is 8x when focused at infinity is actually 8.3x when focused at 15 feet.

I also have several small auxiliary scopes. My little 6x scope ranges from about 5.8x to about 6.5x depending on the distance at which it is focused. Now, we might see published results stating that all tests were performed at 48x. However, given these actual-power data, with the variation in the binoculars and the test scope (and focus), what we really might have are test results reflecting boosted magnifications anywhere from 43 power up to 55 power. And if any tests were conducted at close focus, the test powers would be higher still.

So what effect might this have? I've already explained how the brute force of magnification can produce what appears to be an equal result. Here lies the potential for differences in magnification to skew all the data. Conceivably, in fact likely, we have raw arcsecond readings that were obtained at different powers. Had I not first measured the power of my binoculars, I could be comparing results among my 8x models where one reading came from a 46x test and another from a 52x test.

Here again, the brute force of an additional 13% of power (or potentially 28%) could be masking the true output. If two binoculars vary by only 1/10 of one arcsecond of resolution, a seemingly tiny variance but enough to rank them, readers should know whether it took only 46x (or 43x) for one to reach that fine performance while the other needed 52x (or perhaps as much as 55x) to reach the same value. With a potential difference of more than 20% in the power used to obtain the readings, there could be considerable differences in contrast, or any variety of aberrations, that don't show in the data because magnification was not recorded to give those clues.

Yesterday I gave several examples showing that apparent resolution is not, and indeed cannot always be, a reflection of differences in acuity. Here you have examples of why missing information about the power used can lead to skewed results.

edz
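The 43x-to-55x spread described above follows directly from the measured power ranges the post reports. A minimal sketch of that arithmetic, using only the figures given in the post (nominal 8x binoculars measuring 7.5x to 8.4x, and a nominal 6x booster scope measuring 5.8x to 6.5x):

```python
# Boosted-magnification spread, using the measured ranges from the post.
# These specific figures come from the post itself; the calculation is
# just illustrative arithmetic, not a general rule.

bino_power = (7.5, 8.4)   # measured range of the "8x" binoculars at infinity focus
scope_power = (5.8, 6.5)  # measured range of the "6x" auxiliary scope

lo = bino_power[0] * scope_power[0]  # worst-case low boosted power
hi = bino_power[1] * scope_power[1]  # worst-case high boosted power

nominal = 8 * 6  # the 48x that would be reported for every test

print(f"Nominal boosted power: {nominal}x")
print(f"Actual boosted power range: {lo:.1f}x to {hi:.1f}x")
print(f"Spread: {(hi / lo - 1) * 100:.0f}% difference in magnification")
```

Two tests both reported as "48x" could thus differ by roughly a quarter in actual magnification, which is the skew the post warns about.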
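The post's two-binocular scenario (both reaching 3 arcsec, one at 60x and one only at 70x) can also be sketched with a toy model. Everything here is an illustrative assumption, not from the post: an assumed tester acuity of 180 arcsec, and an invented `quality` factor standing in for the contrast/aberration penalty that forces the weaker binocular to a higher power.

```python
# Toy model: minimum boosted power needed before a detail of a given angular
# size becomes resolvable. The eye-limit figure and the quality factor are
# hypothetical illustrations, chosen to reproduce the post's 60x/70x scenario.

EYE_LIMIT_ARCSEC = 180.0  # assumed visual acuity of the tester (hypothetical)

def min_power_to_resolve(target_arcsec: float, quality: float = 1.0) -> float:
    """Minimum magnification at which `target_arcsec` is resolvable in this
    toy model. `quality` <= 1.0 penalizes low-contrast optics, which need
    extra apparent size before the detail is cleanly split."""
    return EYE_LIMIT_ARCSEC / (target_arcsec * quality)

# Two binoculars that both ultimately show 3 arcsec, as in the scenario:
power_a = min_power_to_resolve(3.0, quality=1.0)    # clean optics
power_b = min_power_to_resolve(3.0, quality=6 / 7)  # lower-contrast optics

print(f"Binocular A resolves 3 arcsec from {power_a:.0f}x")
print(f"Binocular B resolves 3 arcsec from {power_b:.0f}x")
# Identical raw readings; only the power required exposes the quality gap.
```

The point the sketch makes is the post's own: a raw arcsecond table with the power column omitted would score these two binoculars as equal.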