Bird Detecting Camera....

John Cantelo

Well-known member
The new Panasonic Lumix S1 hasn't got any long lenses to go with it yet, but it has got what sounds like interesting AI technology which should help bird photographers: its "Advanced Artificial Intelligence Technology" programme for detecting humans, cats, dogs and birds. Presumably this will be able to handle leaves, twigs, etc. intruding into the view, so it should be very useful ... if it works! Either way, it seems an interesting development which may well trickle down to other Panasonic cameras and perhaps be copied by other manufacturers.
 
John Cantelo wrote: "The new Panasonic Lumix S1 hasn't got any long lenses to go with it yet, but it has got what sounds like interesting AI technology which should help bird photographers. ..."

Very interesting, thanks. AI help for birders is long overdue, and a rather different use of it than Avix's bird-repelling systems. Quote:
Conventional bird control techniques are not always effective, cost-efficient or correctly implemented. As a result many companies are now paying for the consequences of not being able to repel birds.
 
John Cantelo wrote: "The new Panasonic Lumix S1 hasn't got any long lenses to go with it yet, but it has got what sounds like interesting AI technology which should help bird photographers. ..."

Sony have AI cooking for the upcoming firmware updates to the A9. Olympus will easily be able to update the AI modes in the E-M1X with a dedicated bird mode. (The airplane AF mode already seems to work for birds to some extent.)

The question is whether AI can compensate for the lack of phase detection in the S1 cameras. If the AF tracking is not up to the competition, demand for longer sports/action lenses might be limited.

The S1 does not appear to be a sports camera, but I guess Panasonic might release a dedicated action camera in the future to compete with the Sony A9, Nikon D5, Canon 1Dx, etc.

We'll see how brave Sigma will be in this area but I doubt we'll see any long prime lenses for the L-mount this year.
 
Hi John,

The new Panasonic Lumix S1 hasn't got any long lenses to go with it yet but has got what sounds like interesting AI technology which should help bird photographers.

Sounds great! I hope the AI takes into account that birders often take pictures where the object of interest is nowhere near full-frame size, to put it mildly :)

I've been using manual focus, with single-shot autofocus occasionally, but that only works satisfactorily if the bird is stationary and it's just some stuff in the foreground that's distracting the autofocus.

Vespobuteo's tip on using the airplane AF mode is good as well; that should keep the autofocus from locking onto anything close to the camera.

So I think there's real potential for AI to handle things more intelligently. I'm looking forward to seeing that in action!

Regards,

Henning
 
John Cantelo wrote: "The new Panasonic Lumix S1 hasn't got any long lenses to go with it yet, but it has got what sounds like interesting AI technology which should help bird photographers. ..."
John,

Reviewing and testing of this new release continues, of course, but I would think any claim of "Advanced Artificial Intelligence Technology" is more a marketing term at this stage than a real ability. With the RGB sensing of DSLRs now done by the main sensor (as is AF) in mirrorless cameras, things like simple shape and combined-contrast recognition algorithms for faces and eyes are becoming more commonplace. These work at a rudimentary level and have varying degrees of difficulty with interference, subjects turning away, angled profiles, etc.
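For a sense of what that rudimentary level looks like in practice, here is a minimal sketch using OpenCV's bundled Haar cascade, a classical shape-and-contrast face detector. The frame path is hypothetical, and camera makers of course run proprietary equivalents on dedicated hardware:

```python
# Classical shape-and-contrast detection of the kind described above,
# using OpenCV's bundled Haar cascade for frontal faces.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
frame = cv2.imread("liveview_frame.jpg")  # hypothetical live-view grab
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    # Each detection is a candidate AF region, roughly what the
    # "face detect" modes in current cameras deliver.
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```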

Any claims of 'machine learning' are likely to be very basic, as the capacity to reference any camera-generated and learned 'databases' in real time is limited by sensor processing and computing. Mostly the processors are flat out reading the millions of pixel sites and processing that data for AF and recording.

With more computational photography emerging, one day we may be able to tell a Greater Booby from a Spangled Drongo, but at the moment even tracking eyes effectively at 10, 20, 30 fps through all the twists and turns of movement and interference is a great feat yet to be 100% achieved.

I think we are a long way from referencing external databases in real time and being able to differentiate and identify different immature gulls (or whatever your particular white whale happens to be)!

Chosun :gh:
 
Chosun wrote: "I think we are a long way from referencing external databases in real time and being able to differentiate and identify different immature gulls (or whatever your particular white whale happens to be)!"

I think you misunderstand what this thread is about. All we're talking about is an AI system that promises to detect and focus on an image it recognises as a bird, not one that identifies it to species level.
 
I think you misunderstand what this thread is about. All we're talking about is an AI system that promises to detect and focus on an image it recognises as a bird, not one that identifies it to species level.

But if the technology is on the verge of doing that now (or presumably can, just not yet in publicly released equipment), then it will only be a few years before it gets there. Don't forget the exponential rate of change of tech, bearing in mind that it tends to advance fastest where there is a strong financial or military incentive.

Smartphones now have the power ordinary computers had only a few years ago; AI will be advancing similarly. All it needs is a few good algorithms.
 
Chosun wrote: "With the RGB sensing of DSLRs now done by the main sensor (as is AF) in mirrorless cameras, things like simple shape and combined-contrast recognition algorithms for faces and eyes are becoming more commonplace."

Nikon's 3D matrix metering and tracking/face detection use data from a separate RGB sensor, not the image sensor.
So it's a lot less data: 180,000 points in the D5, for example.
There might be some ML in there, of course, but to me it sounds more like simple shape- and colour-recognition algorithms.
Exposure metering references a database of 30,000 scenes; not very 'intelligent', you might think.

https://imaging.nikon.com/lineup/dslr/d4/features01.htm
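As a purely illustrative sketch, "references a database of 30,000 scenes" could amount to something as simple as a nearest-neighbour lookup from a coarse metering pattern to a stored scene with a known good exposure. The 180-value pattern and all the numbers below are invented stand-ins, not Nikon's actual implementation:

```python
# Toy nearest-neighbour "scene database" lookup. The 180-value pattern
# is an invented stand-in for a coarse readout of the metering sensor.
import numpy as np

rng = np.random.default_rng(1)
scene_db = rng.random((30_000, 180))       # 30,000 stored metering patterns
exposure_db = rng.uniform(-2, 2, 30_000)   # an exposure bias per stored scene

def meter(pattern: np.ndarray) -> float:
    """Return the exposure bias of the most similar stored scene."""
    idx = np.argmin(np.sum((scene_db - pattern) ** 2, axis=1))
    return float(exposure_db[idx])

print(meter(rng.random(180)))  # exposure suggestion for a new pattern
```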

Pentax, Panasonic and Sony are instead using "deep learning" and "neural networks". Just train the neural network by feeding it 10,000 photos of birds where the eye is marked as the point to focus on, and the NN will learn to recognize any "bird shape" and to focus on the eye. The rules for this are hidden in the neural network, just like the rules for identifying an orange or an apple are hidden in our brains.
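A minimal sketch of that training idea in PyTorch, with a dummy batch standing in for the 10,000 labelled bird photos; the toy network regresses the normalised (x, y) position of the eye:

```python
# Sketch: train a small network on images labelled with the bird's eye
# position, so it learns where to put the focus point.
import torch
import torch.nn as nn

class EyeFinder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # predicted (x, y) of the eye, in [0, 1]

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

model = EyeFinder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a dummy batch; a real run would loop over
# thousands of labelled bird photos, as the camera makers describe.
images = torch.rand(8, 3, 224, 224)   # batch of bird photos
eye_xy = torch.rand(8, 2)             # normalised eye coordinates (labels)
loss = loss_fn(model(images), eye_xy)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```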

"The neural network itself is not an algorithm, but rather a framework for many different machine learning algorithms to work together and process complex data inputs.[2] Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules"

https://en.wikipedia.org/wiki/Artificial_neural_network

How fast and reliable the implementations of "AI focusing" will be remains to be seen.
For race cars it seemed to work pretty well on the E-M1X.
Whether a pro race-car shooter will trust it to be good enough is another question.
 
Sounds like a good idea. Since smartphone cameras now recognize faces by default, it should be easy to make a normal camera which automatically focuses on the eyes of a person or, for that matter, of an animal or a bird. It would save me countless unsharp pictures. ;)
 
Deep learning and neural networks have until now been something where every unit has to learn for itself. I do not know whether this process can produce rules that can be transferred to other units, similar to an operating-system update?

I do think it should be possible to do "something" that can be transferred to new units and help with bird photography, but are we a big enough part of the business side for that? I expect there are a lot of professional photographers covering, e.g., soccer/football matches who would pay for such tech; are there enough of us?

Niels
 
Niels wrote: "I do not know whether this process can produce rules that can be transferred to other units, similar to an operating-system update?"

That problem seems to be solved, though perhaps they use some kind of simplified neural-network model.
From what I understand, Olympus is not using any AI-specific hardware*, instead using the extra image processor in the E-M1X.
That should also mean that new subject types could be added in firmware updates.
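A hedged sketch of that "learn offline, ship the result" idea, with TorchScript standing in purely for whatever proprietary format camera firmware would actually consume:

```python
# Sketch of training offline and shipping the frozen result to a device.
import torch
import torch.nn as nn

# A stand-in for a network already trained on a big computer.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
model.eval()

# Freeze the trained weights into a self-contained file that could,
# in principle, be bundled into a firmware update.
scripted = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
scripted.save("bird_eye_detector.pt")

# On the device there is no learning, only inference, frame by frame.
deployed = torch.jit.load("bird_eye_detector.pt")
with torch.no_grad():
    eye_xy = deployed(torch.rand(1, 3, 224, 224))  # one live-view frame
```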

From an interview with Olympus:

"How did you train the deep learning system?

Hisashi Yoneyama: It’s not done within the camera,
we used a high specification computer. We used 10,000 images per category.

For example, when talking about cars, there are different shapes of cars, like Formula 1 and NASCAR. Per type, we give the system a couple thousand images to let the system run to recognize that car.
This information is given on a high specification laptop,
then transferred to the camera. "

"Do you see using more deep learning algorithms on future cameras?

Hisashi Yoneyama: Yes, we are considering applying this technology to additional cameras. But, the current challenge is that this camera has two engines. We need big power to run this algorithm, and this can’t be achieved by all the models, so we have to consider which models will receive this technology. But the answer is yes.

Akihito Murata: I'd like to add that, to fully utilize this technology, you need a very powerful engine. Without having two engines, it's very difficult to achieve this. That's why some brands use some of the deep learning technologies, but currently, it's not possible to fully utilize that data. That's why Olympus is, at the moment, the only one to utilize deep learning technology for cars, trains, and planes."

https://www.digitaltrends.com/photography/olympus-developers-e-m1x-q-and-a/

*Special hardware is used in some smartphones.

"Huawei was the first phone company to try to base the key appeal of one of its phones around AI, with the Huawei Mate 10. This used the Kirin 970 chipset, which introduced Huawei’s neural processing unit to the public.

Camera app scene recognition was the clearest application of its AI. The Mate 10 could identify 13 scene types, including dog or cat pictures, sunsets, images of text, blue sky photos and snow scenes."

https://www.techradar.com/news/what-does-ai-in-a-phone-really-mean
 
Training a neural network is computationally expensive. Running it on new inputs after training is relatively cheap.
Most image-recognition models downsample images to a linear resolution of at most about 500 pixels (e.g. 400x300 pixels for a 4:3 frame).
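A toy sketch of that downsample-then-infer step; the sensor and thumbnail sizes are illustrative:

```python
# The network never sees the full-resolution frame, only a small
# thumbnail. Sizes here are illustrative.
import torch
import torch.nn.functional as F

full_frame = torch.rand(1, 3, 3000, 4000)  # e.g. a 12 MP 4:3 readout
thumb = F.interpolate(full_frame, size=(300, 400),
                      mode="bilinear", align_corners=False)
print(thumb.shape)  # torch.Size([1, 3, 300, 400])
```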

All the camera needs to do for birds and wild animals is recognise where the eye or head is. That's a much simpler task than identifying a species of duck. I worked on a model which was tasked with detecting whether an animal was present in a photo taken by a CCD camera; it can be done with 95% accuracy using a standard type of deep network. The camera also needs to detect which part of the frame the animal is in, which adds more complexity, but OTOH it doesn't need to classify the species.

There was a ton of research on tracking objects in video already before deep learning got big. As cameras get more powerful, they will be able to run more and more of these algorithms. It's just the beginning.
 
Training a neural network is computationally expensive. Running it on new inputs after training is relatively cheap.

You hit the nail on the head imho.
Once trained, the system can run even on a smartphone processor. As long as the scenes are vaguely familiar, it will be able to focus on the essential elements. Consequently, automatic eye focus on birds should be readily achievable.
The issue is whether any of the firms believes this is a big enough market to justify the training expense.
 
"The issue is whether any of the firms believes this is a big enough market to justify the training expense."

The new Panasonic S1 can recognize birds, cats, dogs and humans. It's not a typical sports camera though, with only 6 fps with AF-C.

"The "deep learning" that Panasonic has quoted as part of the Venus Engine processor uses human body and animal recognition (dogs, cats and birds). The AI combines three aspects of the system: DFD, Face/Eye detection, and the deep learning of human bodies and animals. By combining the focus of the lens and sensor (which calculates at 480 frames per second) and the DFD calculations, Panasonic believes the autofocus will be excellent even without the now-typical on-sensor phase detection AF technology. A scene's information is constantly retrieved while monitoring, and spacial information is continuously updating while shooting. According to Panasonic, the camera can then determine the distance to the subject at the instant the shot is taken, and based on that information can adjust for quick and accurate autofocus."

https://www.imaging-resource.com/PRODS/panasonic-s1/panasonic-s1A.HTM
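For illustration only, a toy version of the depth-from-defocus (DFD) idea: compare how sharp the same patch looks at two known focus positions and use that to decide which way to drive the lens. Real DFD uses calibrated, per-lens blur models running at 480 fps; this sketch only captures the comparison step, on dummy data:

```python
# Toy depth-from-defocus comparison: the sharper of two patches taken
# at different focus positions tells us which way to drive the lens.
import numpy as np

def sharpness(patch: np.ndarray) -> float:
    """Mean squared gradient; higher means less defocus blur."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(gx**2 + gy**2))

def focus_direction(patch_near: np.ndarray, patch_far: np.ndarray) -> str:
    """Given patches captured at two focus positions, pick a direction."""
    if sharpness(patch_near) > sharpness(patch_far):
        return "drive toward near"
    return "drive toward far"

# Dummy data standing in for two live-view patches of the same subject;
# the second is flattened to mimic stronger defocus blur.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
print(focus_direction(scene, scene * 0.5 + 0.25))
```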
 