This story originally aired on July 8, 2022.
This story is part of our series, Wild Pennsylvania. Check out all of our stories here.
In the mountains east of Johnstown, Pennsylvania, Blue Knob State Park stretches across 6,000 acres. Jeff Larkin points out the different birds singing on an off-trail hike through the woods.
“You can hear that raven off in the distance,” he said.
Larkin is a conservation biologist at Indiana University of Pennsylvania and a forest bird habitat coordinator for the American Bird Conservancy. He’s trying to learn what kinds of birds are living in this forest so he can better help protect them. He carries a GPS unit to guide him to a specific tree just down the hill.
“We’re going to be navigating to point 1572. We’re currently 247 meters away from that point,” he said.
When Larkin arrives at the point on his GPS, by a tall red oak tree, a small red plastic bag is zip-tied around the trunk. Inside the translucent bag is a small circuit board, about as big as a smartphone, with a blinking green light. The recording device is known as an audio moth. There are about 400 of these audio moths in Blue Knob. Each device turns on for two hours in the morning and two hours at night.
In the past, Larkin recorded the birds he found in the forest by standing in a spot for around 20 minutes and listening for their unique sounds.
“I would take a blank datasheet. I would take my stopwatch. I would note the time of day,” he said.
Now, the audio moths allow him to capture much more data. Of course, Larkin can’t possibly listen to hundreds of hours of audio each field season, but a computer can.
Teaching a machine birdsongs
That’s where Justin Kitzes comes in. He’s an assistant professor in the Department of Biological Sciences at the University of Pittsburgh. After 35 days of recording, the audio moths are taken to Kitzes’ lab, which created a program to analyze the recordings using a neural network trained to sift through the hours of audio and pick out the sounds of various birds. It’s like the app Shazam, which identifies songs.
“These models are what are known as convolutional neural networks. I sometimes say they’re basically the same model that Google uses to tell you that there’s a cat in a photo. They are image classification models,” explained Kitzes.
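The convolution step behind these models can be sketched in plain Python with NumPy: a small filter slides over an image-like array and responds strongly wherever a target pattern appears. Everything here (the toy data, the kernel, the function name) is illustrative and not the lab’s actual model.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation in a CNN layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "spectrogram": mostly silence, plus a diagonal upsweep
# (like a rising whistle) encoded as ones
spec = np.zeros((8, 8))
for k in range(5):
    spec[k + 1, k + 1] = 1.0

# A 3x3 diagonal kernel responds most strongly where the upsweep
# aligns with it
kernel = np.eye(3)
response = convolve2d(spec, kernel)
print(response.max())
```

A real classifier stacks many such learned filters and feeds their responses into further layers, but the pattern-matching idea is the same.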
The computer looks at a digital representation of the audio called a spectrogram and compares it to known images of bird sounds. Kitzes says that technology has gotten much better in the last 10 years.
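Producing that spectrogram is a standard signal-processing step. A minimal sketch, assuming NumPy and SciPy, using a synthetic tone as a stand-in for a recorded bird call (the sample rate and tone frequencies are arbitrary choices, not the project’s settings):

```python
import numpy as np
from scipy.signal import spectrogram

sample_rate = 48_000  # Hz; an assumed recorder setting
t = np.linspace(0, 1.0, sample_rate, endpoint=False)

# Synthetic stand-in for a bird call: a steady 3 kHz tone
# with a brief 6 kHz note layered on top
audio = np.sin(2 * np.pi * 3000 * t)
audio[24_000:30_000] += np.sin(2 * np.pi * 6000 * t[24_000:30_000])

# power is a 2-D array (frequency bins x time frames) --
# effectively a grayscale image a CNN can classify
freqs, times, power = spectrogram(audio, fs=sample_rate, nperseg=512)
print(power.shape)
```

The resulting array is what gets compared against known examples of each species’ song.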
“The accuracy improvements put this approach kind of over that threshold of, ‘This is nice and we can use it in some limited circumstances,’ to, ‘Wow, this actually has potential to be competitive with other methods,’” he said.
A spectrogram representing bird songs collected from an audio moth (courtesy of Justin Kitzes / University of Pittsburgh)
Helping to save birds
This kind of monitoring can help Larkin and others restore bird habitat at Blue Knob. Here, Larkin points out that the forest floor is nearly all ferns: heavy deer grazing takes out tree saplings, allowing ferns to spread. That’s bad news for birds, like blue-headed vireos, that need trees to nest in.
“If we want to have diverse wildlife communities in our forests, then we need to make sure our forests are diverse in both structure and composition,” Larkin said.
Larkin and Justin Kitzes’ lab will return at the same time next year to continue their research, which is expected to last for 10 years.