
Does higher power demand fancier optics?

How much of the less sharp image in the higher magnification can be attributed to the impurities in the atmosphere also being magnified ? On a day when 8X looks acceptable, haze and mirage make the 15X Kaibab almost unusable.
Give the Kaibab some good air, and while not quite as bright and sharp as a good 8 or 10X, it is still amazing. We obsess a lot about optical properties, but the atmosphere is seldom ever perfectly clean.

You're absolutely right: I took a 20x/40x Swift spotter and the 10x50s out
to Holt Hill a month ago. The wind and atmospheric mixing made all powers
kind of pointless past 300 yards. Other days I could do much better.
The 10x50s won because the extra light made the image easier for the cortex to put together. If you're going for distance (say, past 150 yards),
the air really spoils the return on spending for quality.
Altitude is a big fixer, though:
at only 3000 ft above sea level, things are a lot sharper. That's partly the thinner
air, but also looking through air that's less roiled by surface effects.
Those filming surfers say going from 5 feet to 12 feet above the
beach helps a lot. Maybe lightweight ladders or tree stands are
good for the obsessive types.
 
Ed,

Thanks for pointing that out. Please correct me if this is wrong:

I believe the origin of acutance must lie in the pixelation of the retina. So as a blurred edge is increasingly magnified, the blur spreads over more pixels, and looks blurrier.

A naive interpretation (something I'm good at) predicts that the effect would occur whenever the magnified blur exceeded the eye's resolution of about 1 arc minute, corresponding to retinal cell spacing. A good 40mm binocular will resolve 3 arc seconds, so it might seem that one would not notice acutance blur up to 20x.

But the marginal visibility of something small, like the visibility of edge blur, occurs at smaller angles than required for the resolution of two objects. For example, the Cassini division in Saturn's rings is frequently seen at magnifications where its size is considerably less than what the eye can "resolve". Unfortunately I don't "know the numbers" on these effects, but I suspect that acutance explains why low power binoculars look sharper. I will do some homework. If the visibility threshold is as small as a third or a half of the resolution threshold, we are there.
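As a quick sanity check of that 20x figure (a sketch; the 1 arc minute eye limit and the 3 arc second instrument figure are the ones quoted above):

```python
# Magnification at which the instrument's blur, once magnified,
# reaches the eye's resolution limit (figures from the post above).
eye_limit_arcsec = 60.0        # ~1 arc minute, common eye-resolution figure
binocular_limit_arcsec = 3.0   # a good 40mm binocular

threshold_mag = eye_limit_arcsec / binocular_limit_arcsec
print(threshold_mag)  # 20.0 -- below ~20x the optic's blur stays finer than the eye can see
```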

Ron

Hi Ron,

Acutance refers to the rate of change of luminance from light to dark (or vice versa), which occurs at the edges of objects that are imaged on the retina. The retina processes this edge contrast information at the retinal ganglion layer using specialized cells devoted to edge detection.

Comparatively speaking, increased magnification reduces the steepness of the edge contrast gradient, resulting in the perception of less image "sharpness." This is consistent with several observations you reported above (post #12), as well as the loss of sharpness at close working distances. The latter effect would be exacerbated by the fact that the instrument's magnification also increases at close working distances.

So, like the depth-of-field, which diminishes inversely with image magnification, so too does image sharpness. They are part of the price one pays for a larger image size, and both derive from the workings of the eye itself. This assumes, of course, that aberrations and image movements are controlled.

Ed
 
So, like the depth-of-field (of the eye), which diminishes inversely with image magnification, so too does image sharpness.
They are part of the price one pays for a larger image size. They both derive from the eye itself.
I get it... reminds me a lot of the 2-stage Fujifilm EXR scheme. The difference cells that detect edges
also allow an extended contrast depth. At high power you don't have the cells to throw away on the
calculation, though.

I did some experiments with interference patterns today. That checks the optical performance
beyond the resolution of the eye (the eye sees a gross pattern set up by the fine phase
distortion). I got the Meopta 6.5x32 today (yay!).. it and the 1950s 6x30 Wuest had a
more detailed Moiré pattern than the 10x42s I have (a limited lower-grade set), and also
than the Selsi 10x50 porros I love.

I still see the images at 150 yds 'pop' on the 6x30s and 6.5x32s, though.

I think there is some geometric 'lesser distortion' going on regardless of the eye, plus your
'better-edge-detection' effect. The two together can be quite striking.

But I am studying the image, and it takes ~2-5 seconds to see the 'popping'.
At 10x, what detail I see I notice almost instantly. So there is that factor,
the 'speed of detail'. So, if I'm looking very far away, the 10x will still give me
more immediate results. The practical resolution for movement is better at 10x.

For either power, putting the set on a tripod adaptor improves speed and resolution.
Fewer background calculations to do. Spotting scopes for long-range snipers are often
used at 10x-15x, which seems oddly low at 1 mile, but that might speak to the
real effects we're talking about here. But note: the sniper stares at a small
spot for a fairly long time. Someone following an eagle at a third of that range
does not have the time, so 10x-12x (which would scale up to the sniper range
at 30x-36x) gives them better "speed of detail". The birdwatcher's cortex has
a different job to do.

I'm looking to incorporate the obvious preference many have for (10-12) x (40-50),
and reconcile with your edge detection. The usable resolution might be the key,
and time could be the hidden factor. "cortex laziness" too, perhaps: it's a lot
of work to see into a perfectly-resolved .1% of the field, and a lot less to see
that feather with twice the eye cells at a higher power. The higher power
might reflect the adage they gave me digging water lines at a summer job.
The foreman points to the backhoe and says "kid, get out of the pit 'til
we get close: let the machine do the work". 10x 'lets the machine do the work'
and that might outweigh all the braincells and seconds that would distract
from you enjoying the feeding action on the nest.
 
Ed,
That is a good explanation. It would be nice to quantify the effect so we could know if this is responsible for the observation that low power binoculars are sharper.

Some folks have argued, as if the eye's resolution were the same as its acutance blur, that any binocular presents an image sufficiently sharp that nobody can tell one binocular from another, as long as they are in reasonably good adjustment. Yet reports of sharpness differences persist. This may be the explanation; if so, it would in my opinion be a great step forward for binocular opto-nerdism.

All I could find slumming the web is that under ideal illumination and highest contrast, the eye can detect something 1 arc second or less in extent. If that is also the ballpark of the acutance blur, then extremely minor differences between binoculars should be visible. But it's probably on a sliding scale with contrast.

Could you give us an idea of what might be a "typical" minimum angular width of perceived edge blur?

Ron
 
Ron,

I’m a bit perplexed by the reference to “acutance blur,” blur being associated with focus/resolution and acutance with the edge contrast gradient. I would admit, though, that with regard to systematic differences that are perceived as instrument magnification is increased, both DOF (based on blur) and acutance (based on the contrast gradient) would appear to reinforce each other to reduce sharpness. It would be difficult to say with any confidence that the perceived softness one sometimes is aware of with increased magnification is due exclusively to one or the other; and it is even more difficult for me to imagine how these influences could be separated experimentally. Nonetheless, I’m willing to posit that differences in perceived sharpness are most likely related to the workings of the observer's visual system, rather than opto-mechanical aspects of the instrument.

Many thanks for the thought provoking discussion.

Ed
 
Ed,

You've been patient with my clumsy question, thank you. I think I can ask it better.

Let's say there is a white region, and an adjacent black region, and that the boundary between them is a smoothly varying grey scale. That is, the boundary is "blurred". What is the smallest angular extent of that boundary region, subtended at the eye, that can be distinguished from a perfectly abrupt white to black transition?

It seems to me that that value would be more closely related to perceived sharpness than the eye's resolving power is. For a diffraction limited optic at least, its imperfect and blurred representation of a perfectly sharp edge is understood. If this quantity was known, it would seem simple to declare, for that case, whether the eye could perceive that blur at a given magnification.

I suspect that the answer may be significantly less than the 1 arc minute commonly given as the eye's resolving power, and if so, might justify scientifically the many claims of different optics appearing to have differing sharpness, claims which appear specious under the old visual resolution based arguments. Unfortunately I have had no success with the google method of finding the answer, and invite one more round of your professional expertise, if you will be so kind. (I am guessing the answer lies in the literature with which you are familiar.)

Ron
 
Ed,

I'd like to second ronh's question! Also, how does this relate to MTF charts? Can this then be correlated to acutance quantum, or gradient? :cat:
TIA! :t:



Chosun :gh:
 
Ron, Ed,

I don't want to interfere in a very interesting discussion but perhaps adding a few numbers might help.

At their maximum packing density the receptors of the retina are 2 microns across. If the depth of the eyeball is 17mm, that subtends an angle of 24 arcseconds. Two cells are required to detect a feature, so that gives a potential retinal resolution of 48 arcseconds. The eye's maximum acuity is typically found when the pupil diameter is 2.5mm. The Dawes limit for 2.5mm is 46 arcseconds.

So we have two bits of information that suggest the theoretical optical and biological resolution limits for the eye are just under 50 arcseconds. There have been reports of 20/8 (48 arcsecond) acuity, but they are very rare; still, the potential is there for the eye to be effectively diffraction limited. There may be one or two on the forum that can manage 20/10, but the majority will be 20/15 or worse. However, at the Dawes limit and in other tests of acuity, the contrast difference is down to a few percent. I'm sure the packing matrix comes into the story as well, but acutance is a new one on me. Back to you guys.
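As a cross-check of the arithmetic above (a sketch; it assumes the same 2 micron receptor pitch, 17 mm eyeball depth, and the common empirical Dawes formula of 116/D arc seconds for aperture D in mm):

```python
import math

# Angle subtended by one 2-micron receptor at the back of a 17 mm eyeball.
receptor_m = 2e-6
eyeball_m = 17e-3
subtense_arcsec = math.degrees(receptor_m / eyeball_m) * 3600
print(round(subtense_arcsec, 1))  # ~24.3 arcsec; two cells give ~48.5

# Dawes limit for a 2.5 mm pupil.
dawes_arcsec = 116 / 2.5
print(dawes_arcsec)               # 46.4 arcsec
```

Both routes land just under 50 arcseconds, matching the conclusion above.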

David
 
Doing some ballparking...

Looks like 50 arcseconds amounts to 0.145 inches at 50 feet.
Since that's a line separation, recognizing a character would require more than one 'pixel' of sight.
Let's say...4 (4x4=16 blocks might be able to identify a character)

So...a font about .6 inches high (15 mm).
All this is to the un-aided eye, of course.

I'm looking at the "remote alarm lamp" label on a fixture plate 30 feet away.
The letters are about .5" high, but I can only see a gray blur (glasses on).
There is a font 1 inch high below that, but that's also just a blur from 30 ft.

I've always had a hard time reconciling the 'best acuity' numbers with
actual physical features like beaks and claws and letters. Something seems
to be missing. Separating two dots is a long way from recognizing an eye or a Z.

Of course, that only multiplies the coarseness of the 50 arc-second box you're setting up
to illustrate the realities.

A thought: if I magnify a view and keep the optical perfection well past the edge dimensions,
the acuity scheme will only make the edge easier to detect. Are we talking about diminishing
returns, though? That is, a sharper edge presented across just 2 cells or less doesn't look
any sharper because you've hit the limit of the eye system. The thing is, blowing up
the image should actually make life easier on your eye and push that resolution higher.
Doesn't the acuity model actually favor the higher power, rather than blunting it?

I seem to be shifting more towards 'there's something in the lens distortions'.
Watching interference fringes out the window (which test the binos more than the eyes)
keeps me stuck on the optics. Don't forget, there are at least 16 glass-air or glass-glass
surfaces in a row, and the design is some kind of approximation outside the center field.


But, per ronh:
I believe the origin of acutance must lie in the pixelation of the retina. So as a blurred edge is increasingly magnified, the blur spreads over more pixels, and looks blurrier.

A naive interpretation (something I'm good at) predicts that the effect would occur whenever the magnified blur exceeded the eye's resolution of about 1 arc minute, corresponding to retinal cell spacing. A good 40mm binocular will resolve 3 arc seconds, so it might seem that one would not notice acutance blur up to 20x.


So....the blur is pretty much there either way, and magnifying it only reveals it to you better? That sort of makes sense.
Now...if the true optical blur increased with power as well, that would cover what I see. I need to explain the fringing patterns.
The tapered line-resolution test in those charts usually reveals much more about the instrument than the eye.


I just had to revise my guess about relating resolution and font size:

http://www.axis.com/academy/identification/resolution.htm
"
Other criteria are valid for objects such as license plates, where typical recommendations are that the height of letters should be represented by 15 pixels (corresponding to about 200 pixels/m) to ensure legibility.
"

15 pixels high....wow.
(not 4 as I've been using)
So...take your acuity figure and multiply by 15 for font height. That's what the surveillance guys use.
Well, that fits my view of the signs at work and at home!
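Putting the revised rule into numbers (a sketch; it assumes the 50 arcsecond acuity figure and the 15-pixel legibility guideline quoted above):

```python
import math

def legible_font_height_inches(acuity_arcsec, distance_ft, pixels_per_char=15):
    """Smallest font height (inches) legible at a distance, treating one
    'pixel' of sight as the acuity angle and requiring pixels_per_char rows."""
    acuity_rad = math.radians(acuity_arcsec / 3600)
    one_pixel_in = math.tan(acuity_rad) * distance_ft * 12
    return one_pixel_in * pixels_per_char

# 50 arcsec acuity at 50 ft, with the 15-pixel surveillance guideline:
print(round(legible_font_height_inches(50, 50), 2))  # ~2.18 in
```

So a ~2 inch font at 50 feet, rather than the ~0.6 inch the 4-pixel guess gave, which fits the signs-at-work observation better.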
 
I am not trying to beat the Dawes limit for resolving two point sources. (I already tried that years ago and concluded that Dawes was not a half bad double star observer!) I am only pointing out that since single features as small as 1 arc second can be detected by the eye, there is more to visual acuity than the Dawes limit. And, just perhaps, that the perception of edge blur beats the Dawes limit as well.

I only have a vague hunch how this might happen. For one thing, the image of the blurred edge in question will strike different pixel pairs at different angles, not always along the greatest distance separating two pixels. So, two pixels can be stimulated, and stimulated differently, by a blur that is smaller than that estimate predicts. Also, the eye has quite an image processor behind it, which can combine images which have fallen differently on the retina, possibly increasing effective resolution and cutting out noise.

http://en.wikipedia.org/wiki/Superresolution

Ron
 
I see assertions of an arc-second and an arc-minute....now there's a difference.

Hand-held (shake portion not taken out by cortex) and in daylight,
(not two pinpricks in the pitch black sky from a tripod), I can't see doing better than the 50 arc-seconds.
I can't get anyone at work to do better than that without a gadget,
looking at the wall-plate from 30 ft. Practical science wants realistic observations.

At any rate, we are still stuck with the common observation
that some detail in an 8x30 or 6x30 view doesn't appear to
have the 'fuzzing' that the 10x42 or 12x view does.
I think that effect takes place way past 1 arc second....
 
"
since single features as small as 1 arc second can be detected by the eye,
"
1 arc second, divide by 60, divide again by 60 (degrees now)
tangent
times 50 (feet) times 12 (for inches) is: 0.0029089... (about three thousandths of an inch at 50 ft)


Something's not right here ... detecting a feature 3 thousandths of an inch with the bare eye at 50 feet??
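The conversion above, wrapped as a reusable function (a sketch; same arithmetic, just parameterized for other angles and distances):

```python
import math

def angle_to_inches(arcsec, distance_ft):
    """Linear size subtended by an angle (in arc seconds) at a distance (in feet)."""
    return math.tan(math.radians(arcsec / 3600)) * distance_ft * 12

print(round(angle_to_inches(1, 50), 4))   # ~0.0029 in -- the figure worked out above
print(round(angle_to_inches(50, 50), 3))  # ~0.145 in -- the 50 arcsec figure from earlier
```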
 
Ed,

You've been patient with my clumsy question, thank you. I think I can ask it better.

Let's say there is a white region, and an adjacent black region, and that the boundary between them is a smoothly varying grey scale. That is, the boundary is "blurred". What is the smallest angular extent of that boundary region, subtended at the eye, that can be distinguished from a perfectly abrupt white to black transition?

It seems to me that that value would be more closely related to perceived sharpness than the eye's resolving power is. For a diffraction limited optic at least, its imperfect and blurred representation of a perfectly sharp edge is understood. If this quantity was known, it would seem simple to declare, for that case, whether the eye could perceive that blur at a given magnification.

I suspect that the answer may be significantly less than the 1 arc minute commonly given as the eye's resolving power, and if so, might justify scientifically the many claims of different optics appearing to have differing sharpness, claims which appear specious under the old visual resolution based arguments. Unfortunately I have had no success with the google method of finding the answer, and invite one more round of your professional expertise, if you will be so kind. (I am guessing the answer lies in the literature with which you are familiar.)

Ron

Hi Ron, et al.

Thanks for the clarification. I intuited that's what you (and now others) were getting at, and it's a perfectly reasonable question. However, it's necessary to point out that we are now talking about the "receptive fields" of ganglion cells, which are defined as the collection of photoreceptors (rods and cones) that synapse with them. The response of each ganglion cell is tuned to some particular pattern of light stimulation that falls on its receptive field (again, many photoreceptors). This constitutes, in effect, the first stage of image processing that the brain accomplishes. For those interested, axonal output signals from the ganglion cells travel through the optic nerve to higher brain cells, whose receptive fields, in turn, are defined as the ganglion cells that synaptically connect with them.

So, even at this first level of image processing at the retina we are beyond explaining things by the front-end optics of the eye (except with respect to pattern characteristics). The next logical question, of course, might be to ask about the size of the receptive field, but here the question breaks down. The photoreceptors connected to each ganglion cell are not necessarily contained within a given surface region of the retina. Also, the receptive fields differ with regard to pattern orientation, e.g., vertical vs. horizontal, etc. Worse yet, any particular rod or cone receptor may be a member of several receptive fields. Putting it all together, angular size, per se, is not all that meaningful.

There are good Wiki articles about these things. It's a very active field of research in visual science, but probably beyond what most people want to consider relative to visual instruments. Before you know it, there wouldn't be time left to watch birds (or even binoculars ;)).

Ed

PS. A good book is Visual Coding and Adaptability, (1980), edited by Charles S. Harris. Used copies are available for as little as $7.42 on ABE Books.
 
I think there are experiments people can conduct in this area to learn about the different layers
of processing. The laser-pointer experiments I did were very informative on the operations of the
optic cortex, how much shake we don't realize is being taken out in the brain, why that causes apparent
edge-dimming, and how having both eyes magnified tames the shake better than the monocular view.

As for the first stage, at the retina, I'm wondering if test patterns could be made that uniquely probe
that layer of the trip. Most of the charts now used end up testing optical acuity more than our sensors.
You see the resolution patterns all the time in camera tests, but that drives home the point that
at the image, the line pairs have already interfered with each other, whoever or whatever looks next.
Or maybe taking digital pictures through them and doing post-processing so the image is like
'what we see' would give some clues. There may be amateur discoveries to be made.
 
...As for the first stage, at the retina, I'm wondering if test patterns could be made that uniquely probe
that layer of the trip. Most of the charts now used end up testing optical acuity more than our sensors.
You see the resolution patterns all the time in camera tests, but that drives home the point that
at the image, the line pairs have already interfered with each other, whoever or whatever looks next.
Or maybe taking digital pictures through them and doing post-processing so the image is like
'what we see' would give some clues. There may be amateur discoveries to be made.

If you're suggesting that digital image processing algorithms could be used to understand the workings of the human visual system, I would agree. Quite a few professionals have gotten a head start, though. In that light, it's probably true that the success of photo 'sharpening' algorithms is directly related to the edge processing characteristics of the eye, and were designed to emulate them.

Ed
 
Ed,

I'd like to second ronh's question! Also, how does this relate to MTF charts? Can this then be correlated to acutance quantum, or gradient? :cat:
TIA! :t:



Chosun :gh:

I believe modulation transfer functions relate to all this, but how is beyond me.

Yes, I'm also concerned about a TIA. ;)

Ed
 
Ed,

Thanks for putting your thinking hat on and sharing your expertise. I feel so lucky to have all of that going on in my eyes and head, but also kind of dizzy now that I am aware of it.

I can imagine an "obvious" experiment to answer my own question. Present an observer having normal eyesight with a blurred edge and a sharp edge, and move these objects farther away until he cannot tell the difference. In order to keep unknowns at bay, it might be nice to also include a test of the resolution of two closely spaced lines in the trial.

Unfortunately I am not a candidate for observer, as my distance of best acuity is fixed at about 1 meter. Viewing through a binocular to keep the image focused is probably not a super idea. Maybe I can figure out how to make blurred edges of varying widths. You have implied that the eye/nerves/brain are so complex that such an approach will not give a very useful result. Still, it would seem to compare the visual resolving power to blur discernment.

Ron
 
"I can't imagine why it should be any harder to adjust a large binocular than a small "

i spent a couple of days with a navy opticalman learning how to align binoculars - the 7 & 8 power binos were fairly easy to see what you had to do {using a navy mk5 collimator}
we also rec'd a badly adjusted pair of 20x80 - these were extremely difficult to adjust - the high power leaves no room for error, you had to be 3 times as precise with your adjusting as with your lower power bins - the mechanics, as holger stated, were the big problem

as an aside the navy mk5 relies on an aux scope - which itself has a 3x power, so when adjusting the bino you were looking at a magnification of 21x (for a 7 power bino) & 60x for the 20x power bino.. its just like finding a bird at 20x as opposed to 60x, the lower power is much easier & faster
 