On a somewhat serious note, this is now a huge problem in physics - and I'd guess it's similar in other fields of science: there is now an entire generation of people who insist on using machine learning for everything. They can't even fathom using anything else, because they have this weird idea that machine learning is "better". They can't prove it, they just feel it. So they use it even in situations where a clear physical model can be formulated, or simply as a replacement for linear regression. They say "the algorithm will find the pattern that you don't see" - only we have no idea what the patterns are, it contributes nothing to actual understanding, and there is no guarantee that the next number it spits out won't be completely random. It also actively suppresses new discovery, because the algorithms tend to massage any weird data until it resembles the training set. We are watching the literal dumbing down of science in real time, because any monkey can train a neural network and then show "results".
I sincerely hope that this will pass; then all those people will have a really tough time doing anything actually useful, and I'll be even more overpaid than I already am, for the only useful quality I have: skepticism.
That said, as a tool for making hobbies more fun (as AI on iNat definitely does for me), it's a perfectly fine thing.