Artificial intelligence that recognises a wide variety of different types of birdsong is a boon for conservationists
EVERY bird sings a different tune. Crows caw, chickadees whistle and add a flute-like flourish. But can software identify each one, even against busy background noise?
Timos Papadopoulos and colleagues at the University of Oxford are developing an algorithm to do just that.
Such algorithms could be a boon to conservationists, who regularly journey out on foot to tally up the number of birds living in a given area. If, instead, audio recordings could be converted into species counts, it would be easier to track whether a particular bird is in decline or changing its migration patterns.
Bird sounds are tricky to sort through, says Dan Stowell at Queen Mary University of London. For one thing, we don’t know what the birds are saying, and recordings tend to be noisy and distant.
“There are 12 birds in a tree somewhere rather than one person talking into a microphone,” Stowell says. “That’s what makes it a really interesting challenge.”
To develop the tool, the Oxford team collected recordings of 15 different bird species found around Europe and Asia, including the common nightingale, the great tit and the song thrush. They blended recordings with different audio environments: in one case, the gentle background noise of an urban park; in another, the din of an open air market, dense with city sounds and people’s voices.
These mash-ups were used to train a machine learning algorithm to identify birdsong against distracting backgrounds. It fared less well for a few species with songs that are closer in frequency to the city noise.
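The general recipe the article describes, blending clean songs with background noise, extracting a feature, and training a classifier, can be sketched in miniature. Everything below is illustrative only: the pure tones standing in for species, the Gaussian noise standing in for city din, and the nearest-centroid classifier are assumptions, not the Oxford team's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)
SR = 8000  # sample rate in Hz (assumed)

def tone(freq, dur=0.5):
    """A pure tone standing in for one species' song."""
    t = np.arange(int(SR * dur)) / SR
    return np.sin(2 * np.pi * freq * t)

# Two hypothetical "species" with characteristic song frequencies
species_freqs = {"species_a": 1000, "species_b": 3000}

def mix_with_noise(song, noise_level):
    """Blend a clean song with background noise, echoing how the
    recordings were mashed up with park and market audio (noise
    here is synthetic Gaussian, an assumption)."""
    return song + rng.normal(0, noise_level, song.shape)

def dominant_freq(signal):
    """Single feature: the frequency bin with the most energy."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1 / SR)
    return freqs[np.argmax(spectrum)]

# "Train": average the feature per species over many noisy mixes
centroids = {}
for name, f in species_freqs.items():
    feats = [dominant_freq(mix_with_noise(tone(f), 0.5)) for _ in range(20)]
    centroids[name] = np.mean(feats)

def classify(signal):
    """Assign a clip to the species with the nearest centroid."""
    f = dominant_freq(signal)
    return min(centroids, key=lambda n: abs(centroids[n] - f))

# A noisy clip of "species_b" is still identified correctly here,
# but raising noise_level toward the song's own energy degrades
# accuracy, mirroring the failure mode the article mentions for
# songs close in frequency to city noise.
print(classify(mix_with_noise(tone(3000), 0.5)))
```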
Other groups are working on their own bird algorithms, especially those that can identify different species. Stowell and his colleagues are testing an app called Warblr that tells different songs apart and is due to launch later this year. A team at the University of Wisconsin at Madison developed a similar app called WeBIRD, designed for local birds. And the Merlin app, developed at the Cornell Lab of Ornithology, helps citizens identify a bird they have spotted by asking a few simple questions about its size, colour and location.
An algorithm like the Oxford team’s could be a valuable addition, since recordings taken in the wild can be bogged down with uninteresting sections, says Mario Lasseck of the Animal Sound Archive at the Natural History Museum in Berlin, Germany. Lasseck is an entrant in this year’s BirdCLEF competition, to be held in Toulouse, France, in September, which challenges teams to create automatic bird-classifying algorithms from audio alone.
“If you have data from a forest where the bird was just singing for five minutes then it’s very useful to find out where these five minutes are,” Lasseck says.
This article appeared in print under the headline “Ask the auto twitcher”