Cog Sci Professor Researches Communication's Development in Newborns
"I've always been interested in the origin of complex human behavior," Warlaumont said, "and communication is one of the most important human behaviors."
Newborns mostly cry, burp and grunt, and soon after produce some simple, quiet, vowel-like sounds. After a couple of months, they begin to expand their pitch and volume, she said. Babies also begin incorporating consonant-like elements, which involve tongue and lip movements. By about seven months, they babble in well-formed syllables, and they can usually say their first words by their first birthdays.
Warlaumont studies these developments with the help of a 2-ounce digital language processor that can capture up to 16 hours of language every day. The device can be used in multimonth and multiyear studies, offering a fuller picture of children's development. The research could someday be used to develop speech-therapy treatments for children.
The device also allows researchers to study children in their normal environments instead of a laboratory. Because it is time-consuming for humans to listen to and code such a large amount of data, Warlaumont said, she uses computers running speech-processing and other algorithms to analyze the different sounds children make.
Warlaumont, who earned her Ph.D. in speech-language science from the University of Memphis, is the newest professor in UC Merced's cognitive science group. She is one of the 26 faculty members who joined UC Merced this academic year.
She is exploring research collaborations with the Early Childhood Education Center as well as preparing to get her research with human subjects approved through the campus's Institutional Review Board.
One finding from Warlaumont's research is that preschool-age children are more likely to get responses from adults when they create speech or speech-like sounds, such as "baba." The adult response leads children to make more speech-like sounds — essentially creating a feedback loop.
When a child has autism, there are fewer instances of this loop, in large part because the child doesn't make as many speech-like sounds, Warlaumont found. This study was based on 438 12-hour recordings from the LENA Research Foundation, which also makes the language-processing devices.
In addition to analyzing the emergence of verbal communication, Warlaumont is working on creating neural-network models of infants. A fully developed neural-network model would offer greater insight into how verbal communication develops in humans and how it responds to different parenting approaches, she said.
"It's hard for me to think we can understand these complex behaviors without having a model," she said.
UC Merced's strong and well-regarded cognitive science group drew Warlaumont to campus. She said she's excited to be in an interdisciplinary environment because her work incorporates speech-language science, developmental psychology/neuroscience and computer science.