Deep learning and neural networks have, until now, been something where every unit has to learn from scratch. I do not know whether this process can produce rules that are transferable to other units, similar to an operating-system update?
I do think it should be possible to make "something" that can be transferred to new units and help with bird photography -- but are we a big enough part of the business side for that? I expect there are plenty of professional photographers shooting e.g. soccer/football matches who would pay for such technology, but are there enough of us?
Niels
That problem seems to be solved.
But perhaps they use some kind of simplified neural network model.
From what I understand, Olympus is not using any AI-specific hardware*, but rather the extra image processor in the E-M1X.
That should also mean that new subject types could be added in firmware updates.
From an interview with Olympus:
"How did you train the deep learning system?
Hisashi Yoneyama: It’s not done within the camera, we used a high specification computer. We used 10,000 images per category. For example, when talking about cars, there are different shapes of cars, like Formula 1 and NASCAR. Per type, we give the system a couple thousand images to let the system run to recognize that car. This information is given on a high specification laptop, then transferred to the camera."
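The workflow Yoneyama describes -- train on a powerful computer, then ship only the learned parameters to the camera -- is exactly the kind of transfer Niels is asking about. Here is a toy sketch of that idea using a tiny logistic classifier. Everything in it (the two made-up image features, the "car" label, the JSON payload standing in for a firmware update) is a hypothetical illustration, not Olympus's actual pipeline:

```python
import json
import math

def sigmoid(z):
    """Squash a raw score into a 0..1 probability."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=200):
    """Offline training step, standing in for the 'high specification computer'.

    Returns only the learned parameters -- the transferable part.
    """
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the logistic loss w.r.t. the raw score
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return {"weights": w, "bias": b}

def recognize(model, features):
    """In-camera inference: no learning happens here, only a weighted sum."""
    z = sum(wi * xi for wi, xi in zip(model["weights"], features)) + model["bias"]
    return sigmoid(z)

# Toy training set: two hypothetical image features per sample
# (say, "wheel-likeness" and "body-shape score"); label 1 = "car".
samples = [[0.9, 0.8], [0.8, 0.9], [0.95, 0.7],
           [0.1, 0.2], [0.2, 0.1], [0.15, 0.05]]
labels = [1, 1, 1, 0, 0, 0]

# The "firmware update": serialize the learned parameters...
payload = json.dumps(train(samples, labels))

# ...and load them on a fresh "camera" that never saw the training data.
camera_model = json.loads(payload)
print(recognize(camera_model, [0.85, 0.85]))  # car-like input
print(recognize(camera_model, [0.05, 0.10]))  # non-car input
```

The point is that the expensive part (training) happens once, offline, and what travels to each camera is just a small, fixed set of numbers -- which is why new subject categories can plausibly arrive as firmware updates.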
"Do you see using more deep learning algorithms on future cameras?
Hisashi Yoneyama: Yes, we are considering applying this technology to additional cameras. But, the current challenge is that this camera has two engines. We need big power to run this algorithm, and this can’t be achieved by all the models, so we have to consider which models will receive this technology. But the answer is yes.
Akihito Murata: I’d like to add that, to fully utilize this technology, you need a very powerful engine. Without having two engines, it’s very difficult to achieve this. That’s why some brands use some of the deep learning technologies, but currently, it’s not possible to fully utilize that data. That’s why Olympus is, at the moment, the only one to utilize deep learning technology for cars, trains, and planes.
https://www.digitaltrends.com/photography/olympus-developers-e-m1x-q-and-a/
*Special hardware is used in some smartphones.
"Huawei was the first phone company to try to base the key appeal of one of its phones around AI, with the Huawei Mate 10. This used the Kirin 970 chipset, which introduced Huawei’s neural processing unit to the public. Camera app scene recognition was the clearest application of its AI. The Mate 10 could identify 13 scene types, including dog or cat pictures, sunsets, images of text, blue sky photos and snow scenes."
https://www.techradar.com/news/what-does-ai-in-a-phone-really-mean