Doing some ballparking...
Looks like 50 arcseconds amounts to 0.145 inches at 50 feet.
Since that's a line separation, recognizing a character would require more than one 'pixel' of sight.
Let's say...4 (a 4x4 = 16-block grid might be able to identify a character)
So...a font about .6 inches high (about 15 mm).
All this is to the un-aided eye, of course.
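The ballpark above is just the small-angle formula plus a guessed 4-pixel character height; a quick sketch of that arithmetic (all values assumed, back-of-envelope only):

```python
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)

def size_at_distance(arcsec, distance_in):
    """Linear size (inches) subtended by an angle at a distance (small-angle approx)."""
    return distance_in * arcsec * ARCSEC_TO_RAD

detail = size_at_distance(50, 50 * 12)   # 50 arcsec at 50 ft
font = 4 * detail                        # assume ~4 'pixels' per character height
print(round(detail, 3), round(font, 2))  # 0.145 0.58  (~0.58 in is about 15 mm)
```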
I'm looking at the "remote alarm lamp" label on a fixture plate 30 feet away.
The letters are about .5" high, but I can only see a gray blur (glasses on).
There is a font 1 inch high below that, but it's also just a blur from 30 ft.
I've always had a hard time reconciling the 'best acuity' numbers with
actual physical features like beaks and claws and letters. Something seems
to be missing. Separating two dots is a long way from recognizing an eye or a Z.
Of course, that only multiplies the coarseness of the 50 arc-second box you're using
to illustrate the realities.
A thought: if I magnify a view and keep the optical perfection well past the edge dimensions,
the acuity scheme will only make the edge easier to detect. Are we talking about diminishing
returns, though? That is, a sharper edge presented across just 2 cells or fewer doesn't look
any sharper, because you've hit the limit of the eye system. The thing is, blowing up
the image should actually make life easier on your eye and push that resolution higher.
Doesn't the acuity model actually favor the higher power, rather than blunting it?
I seem to be shifting more towards 'there's something in the lens distortions'.
Watching interference fringes out the window (which test the binos more than the eyes)
keeps me stuck on the optics. Don't forget, there are at least 16 glass-air or glass-glass
surfaces in a row, and the design is some kind of approximation outside the center field.
But, per ronh:
I believe the origin of acutance must lie in the pixelation of the retina. So as a blurred edge is increasingly magnified, the blur spreads over more pixels, and looks blurrier.
A naive interpretation (something I'm good at) predicts that the effect would occur whenever the magnified blur exceeded the eye's resolution of about 1 arc minute, corresponding to retinal cell spacing. A good 40mm binocular will resolve 3 arc seconds, so it might seem from that that one would not notice acutance blur up to 20x.
So....the blur is pretty much there either way, and magnifying it only reveals it to you better? That sort of makes sense.
Now...if the true optical blur increased with power as well, that would cover what I see. I need to explain the fringing patterns.
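ronh's 20x figure falls out of a one-line ratio; a sketch of that naive model with the quoted numbers (both assumed, per the quote above):

```python
# Blur becomes visible once the magnified instrument blur exceeds the eye's limit.
eye_limit_arcsec = 60.0       # ~1 arc minute, retinal cell spacing
instrument_blur_arcsec = 3.0  # a good 40 mm binocular

max_power_before_blur = eye_limit_arcsec / instrument_blur_arcsec
print(max_power_before_blur)  # 20.0 -> wouldn't notice acutance blur up to 20x
```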
The tapered line-resolution test in those charts usually reveals much more about the instrument than the eye.
I just had to revise my guess about relating resolution and font size:
http://www.axis.com/academy/identification/resolution.htm
"
Other criteria are valid for objects such as license plates, where typical recommendations are that the height of letters should be represented by 15 pixels (corresponding to about 200 pixels/m) to ensure legibility.
"
15 pixels high....wow.
(not 4 as I've been using)
So...take your acuity figure and multiply by 15 for font height. That's what the surveillance guys use.
Well, that fits my view of the signs at work and at home!