Author: Leigh Bernacchi, UC Merced, October 8, 2020
Bird species usually are counted twice a year by wildlife surveyors: once during the breeding season and again during the Christmas Bird Count.
New technology, however, is increasing the accuracy of bird population studies. A team of UC Merced researchers is developing a model to recognize bird calls.
Recording devices called AudioMoths have been placed across bird habitat in Sonoma County. During two-week periods, the device wakes up every 10 minutes and records one minute of sound. Department of Computer Science and Engineering professor Shawn Newsam and Electrical Engineering and Computer Science graduate student Shrishail “Shree” Baligar are using artificial intelligence (AI) to detect bird calls in the recordings.
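As a quick sanity check, the recording schedule described above implies roughly 2,000 minutes of audio per two-week deployment (assuming the device runs without gaps for the full two weeks):

```python
# AudioMoth duty cycle described in the article:
# one 1-minute recording every 10 minutes, for two weeks.
MINUTES_PER_CYCLE = 10
RECORD_MINUTES = 1
DAYS = 14

cycles_per_day = 24 * 60 // MINUTES_PER_CYCLE            # 144 wake-ups per day
minutes_per_deployment = cycles_per_day * RECORD_MINUTES * DAYS

print(cycles_per_day)          # 144
print(minutes_per_deployment)  # 2016
```

That 2,016 minutes matches the "about 2,000 minutes of data per site" figure reported later in the article.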
Their model can detect 45 different species so far, and will be used to produce maps of where, when and how many species are present.
Newsam and several colleagues teamed up to explore the idea, supported by two recently funded awards.
“I love working on multidisciplinary problems. It charges me,” Newsam said. He and Baligar are working with geography professor Matthew Clark from Sonoma State University, and Leo Salas, a quantitative ecologist at Point Blue Conservation Science, a nonprofit focused on climate-smart conservation.
“I’m excited to apply AI for the benefit of the Earth,” Newsam said. “Being able to passively detect and map species distributions using low-cost audio recording devices allows a range of downstream research by domain scientists.”
This summer, Newsam also received a $90,000, one-year “AI for Earth Innovation” grant from Global Wildlife Conservation in partnership with Microsoft. The nonprofit relies on research to work with local communities to address the root causes of threats to wildlife.
Newsam’s is one of only five projects funded out of 135 applications. The grant supports AI projects that can scale quickly. The research will benefit many other projects because it is open source.
For Newsam, the data raise many processing questions and technical challenges. The recordings mix biophony, geophony and anthrophony — sound from animals, the environment and human activity — and the bird calls are often faint. Some species have different calls for different communications: warning calls, mating calls and others. Which one should the AI focus on?
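To make the detection problem concrete, here is a minimal, illustrative baseline: flag windows of a recording whose energy in a typical songbird band stands out from the background noise. The band limits, sample rate and threshold are assumptions for illustration only; the UC Merced model is a learned deep network, not this heuristic.

```python
import numpy as np

SR = 16_000   # sample rate in Hz (assumed)
WIN = 1024    # analysis window length in samples

def band_energy(window, sr=SR, lo=2000, hi=8000):
    """Energy of one window restricted to the lo-hi Hz band
    (2-8 kHz is assumed here as a rough songbird range)."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
    freqs = np.fft.rfftfreq(len(window), d=1 / sr)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(np.sum(spectrum[mask] ** 2))

def detect_calls(audio, threshold_ratio=5.0):
    """Return indices of windows whose band energy exceeds
    threshold_ratio times the median energy (a crude noise floor)."""
    n = len(audio) // WIN
    energies = np.array(
        [band_energy(audio[i * WIN:(i + 1) * WIN]) for i in range(n)]
    )
    floor = np.median(energies) + 1e-12
    return np.flatnonzero(energies > threshold_ratio * floor)

# Synthetic demo: quiet noise with a short 4 kHz "call" in the middle.
rng = np.random.default_rng(0)
t = np.arange(SR) / SR                           # one second of audio
audio = 0.01 * rng.standard_normal(SR)           # background noise
call = 0.5 * np.sin(2 * np.pi * 4000 * t[:SR // 10])
audio[SR // 2:SR // 2 + SR // 10] += call        # inject the call
hits = detect_calls(audio)
print(len(hits) > 0)   # windows overlapping the call are flagged
```

A fixed energy threshold like this fails exactly where the article says the problem is hard — faint calls buried in geophony and anthrophony — which is why the team trains a deep model instead.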
“Birds often modify their calls by changing frequency, for example, if other birds are also calling,” Newsam said. “I am learning a lot about bird calls.”
Baligar hears the calls as something more than just bird communication.
“I like to think of birds as musical instruments,” he said. “All the violins are orange-crowned warblers, but no two violins are the same. A bird song plays different notes, and every bird likes to play a song differently every time.”
Each AudioMoth gathers about 2,000 minutes of data per site. So far, the team has more than 500,000 minute-long recordings — more than 8,000 hours of data from over 600 locations — and terabytes of data to manage.
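Taking the article's figures at face value, the totals are easy to verify. The storage estimate below assumes uncompressed 16-bit mono audio at 48 kHz — an assumption for illustration, not a setting reported by the team:

```python
# Rough totals implied by the article's figures.
recordings = 500_000               # minute-long clips collected so far
hours = recordings / 60            # ~8,333 hours, consistent with "more than 8,000"

# Uncompressed mono WAV at 48 kHz / 16-bit (assumed) is ~5.8 MB per minute:
bytes_per_minute = 48_000 * 2 * 60
total_tb = recordings * bytes_per_minute / 1e12

print(round(hours), round(total_tb, 1))  # 8333 2.9
```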
However, training the AI model requires a lot of annotated data.
“Deep learning is data hungry,” Baligar said. “The more data the better. On average, we have just 650 training clips per bird species, which is not a lot.”
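Those per-species figures imply a modest total training set — small by deep-learning standards, as Baligar notes:

```python
# Training data implied by the article's figures.
species = 45                # species the model can detect so far
clips_per_species = 650     # average annotated clips per species

total_clips = species * clips_per_species
total_hours = total_clips / 60   # clips are at most a minute long

print(total_clips)               # 29250
print(round(total_hours, 1))     # 487.5
```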