
AI and rare birds

Indeed, it would be a big help if AI came with some estimate of how good its prediction is. Unfortunately, companies seem too proud to do that. Perhaps they fear it would undermine trust in their product, when in reality the opposite is true.
Some image recognition software (e.g. Obsidentify, plantnet) does provide such an estimate, but "100% certain" can still be 100% wrong!
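For what it's worth, such a confidence estimate is cheap to produce but hard to make honest. Here is a minimal sketch (plain NumPy, made-up logits, not any real app's code) of why a raw softmax "confidence" can read 99.9% and still be wrong, and the standard temperature-scaling trick that softens it:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Made-up logits for three candidate species from a hypothetical classifier
logits = np.array([8.2, 1.1, 0.4])
print(softmax(logits).round(4))      # top class ~0.999: looks "100% certain"

# Temperature scaling: dividing logits by T > 1 softens overconfident outputs.
# In practice T is fitted on held-out labelled data.
T = 3.0
print(softmax(logits / T).round(4))  # same ranking, humbler probabilities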
 
Had an interesting discussion about AI recently, and some of the conclusions were pessimistic.

One idea is that AI would perform better if small companies developed small models for specialized purposes, fine-tuned them, added expert knowledge, and so on. A general-purpose tool like ChatGPT is inferior for any single purpose, just as Encyclopedia Britannica is about everything but very good at nothing.

Another thought is that AI may not actually progress much. As long as it is owned by a few big companies with limited competition, they have an interest in the status quo: introducing genuinely new AI would undercut their existing revenue. Just as Microsoft Office has been essentially the same product for 25 years, still with bugs and annoyances.

People inside big tech actually loathe the idea of adding expert knowledge or real-world checks. They think that better training of models will solve everything.
 
This is already happening with the thousands of people using software like Obsidentify and unquestioningly publishing utter nonsense records.
These can be checked by people, but the few volunteers sifting through the records simply cannot cope.
There is no AI to correct the AI...
Such software can help, sometimes by actually correctly IDing the bird/moth/beetle/whatever, although this should be checked by cross-referencing with other material or with more experienced friends/acquaintances.
More often, though, the software can put you in the right area so you can research a little more and, hopefully, come to the correct conclusion.
Blind faith in 100% accuracy of AI is naive in the extreme.
 
People inside big tech actually loathe the idea of adding expert knowledge or real-world checks. They think that better training of models will solve everything.
Have you been talking to the Obsidentify team too? ;)
Blind faith in 100% accuracy of AI is naive in the extreme.
For people who do use their brains it is a useful tool, but many people using image recognition software are complete novices and will accept anything.
The barrier to entry is so low that huge amounts of useless data are gathered, which no one can ever check.
 
It's obviously incredibly flawed: simply changing the stated location of the photo already gives you different results. Indeed, I think some of the bizarre results come from location being prioritised over photo recognition, rather than necessarily from bad photo recognition as such. If most people in a place are asking what a Red Admiral is, and the photo is unidentifiable, it will suggest Red Admiral.
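That prioritisation is easy to reproduce on paper. A toy sketch (made-up species names and numbers, not Obsidentify's actual model) of a Bayes-style combination in which a strong location prior swamps an uninformative photo:

```python
import numpy as np

species = ["Red Admiral", "Painted Lady", "Small Tortoiseshell"]

# Near-flat image scores: the photo itself is basically unidentifiable
image_prob = np.array([0.34, 0.33, 0.33])

# Strong location prior: most nearby records are Red Admiral
location_prior = np.array([0.80, 0.15, 0.05])

# Naive Bayes-style combination: multiply and renormalise
posterior = image_prob * location_prior
posterior /= posterior.sum()

for name, p in zip(species, posterior):
    print(f"{name}: {p:.2f}")
# Red Admiral wins on location alone, not on anything in the photo
```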
 
On a somewhat serious note, this is now a huge problem in physics, and I would guess it is similar in other fields of science: there is now an entire generation of people who insist on using machine learning for everything. They can't even fathom using anything else, because they have this weird idea that machine learning is "better". They can't prove it; they just feel it. So they now use it even in situations where a clear physical model can be formulated, or simply as a replacement for linear regression. They say "the algorithm will find the pattern that you don't see" - only we have no idea what the patterns are, it contributes nothing to actual understanding, and there is no guarantee that the next number it produces won't be completely random. It also actively suppresses any new discovery, because the algorithms massage any weird data until it resembles the training set. We are watching the dumbing down of science in real time, because any monkey can train a neural network and then show "results".
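To make the regression point concrete, here is a minimal sketch with made-up free-fall data: an ordinary least-squares fit of a known physical model gives you a parameter you can name and an uncertainty you can defend, which no black-box network offers.

```python
import numpy as np

# Made-up free-fall measurements: distance d (m) at time t (s)
t = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
d = np.array([0.050, 0.197, 0.443, 0.780, 1.230])

# Physical model: d = (1/2) g t^2, so d is linear in t^2
coeffs, cov = np.polyfit(t**2, d, 1, cov=True)
g = 2 * coeffs[0]                # slope of d vs t^2 is g/2
g_err = 2 * np.sqrt(cov[0, 0])   # 1-sigma uncertainty on g
print(f"g = {g:.2f} +/- {g_err:.2f} m/s^2")  # interpretable, auditable physics
```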

I sincerely hope that this will pass; then all those people will have a really tough time doing anything actually useful, and I'll be even more overpaid than I already am, for the only useful quality I have: skepticism.

That said, as a tool to make hobbies more fun (as the AI on iNat definitely does for me), this is a very fine thing.
 
On a somewhat serious note, this is now a huge problem in physics, and I would guess it is similar in other fields of science: there is now an entire generation of people who insist on using machine learning for everything. They can't even fathom using anything else, because they have this weird idea that machine learning is "better".

In medicine, the opposite is true. Doctors want models that are as simple as possible, and explainable, so it is clear where the results come from. Because a mistake can cost somebody their health or life, and somebody must bear the responsibility.

This caused one internet giant to announce a huge project to enter the medical field and then back off within a year. It makes one wonder how trustworthy its other AI creations are...
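The kind of model doctors can accept looks something like the toy sketch below: a depth-limited decision tree (made-up features and data, assuming scikit-learn, not any real clinical model) whose every decision rule can be printed and audited, unlike a neural network's weights.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy, made-up data: [age in years, systolic blood pressure] -> high-risk flag
X = [[45, 120], [62, 160], [55, 145], [38, 118], [70, 170], [50, 130]]
y = [0, 1, 1, 0, 1, 0]

# A depth-limited tree stays small enough for a clinician to read line by line
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(clf, feature_names=["age", "systolic_bp"]))
```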
 
On a somewhat serious note, this is now a huge problem in physics... because any monkey can train a neural network and then show "results".
That just strikes me as "fad"-type thinking, where people latch onto the new thing and try to use it for everything. Or you have folks who are really good at one technique, so they use it for everything, even when it would be simpler to do it the old-fashioned way.
 