
Learning to Hear


Thursday, October 26, 2006

The New York Academy of Sciences

Presented by the Imaging Discussion Group


Organizer: B.J. Casey, Weill Medical College of Cornell University

Speakers: Lori Holt, Carnegie Mellon University; Michael Goldstein, Cornell University; Jason Zevin, Weill Medical College of Cornell University; April Ann Benasich, Rutgers University; David S. Vicario, Rutgers University


Auditory categorization: What general learning principles can teach us about language development
Lori L. Holt, Carnegie Mellon University

Learning plays an important role in infant language acquisition and adult second-language acquisition, yet the mechanisms of auditory learning are not well understood. We are investigating how listeners organize regularity in the sound environment for speech and nonlinguistic sounds, with the aim of understanding how characteristics of general auditory learning constrain language development. I will describe a series of studies using a range of converging methodologies (second-language acquisition experiments, speech production measurements, non-speech category learning experiments, and non-human animal learning models) that attempt to uncover the general learning principles by which experience with the structure of the auditory environment, including the structure of spoken language, shapes how we hear.

Learning by vocalizing: Parallels in the vocal development of songbirds and human infants
Michael Goldstein, Cornell University

The early vocalizations of songbirds and human infants, though immature in form, are similar in function. Producing these early sounds is crucial for the later development of speech and song. The mechanism of vocal development has a strong social component: the responses of conspecifics create social feedback for early sounds that guides the young toward mature vocalizations. I will present experiments that demonstrate how the immature sounds of young birds and babies regulate, and are regulated by, interactions with conspecifics. These studies view the infant as taking an active role in its own development and introduce new paradigms for understanding the origins of communicative skills. In cowbirds, Molothrus ater, the immature vocalizations of young males elicit reactions from adult females (who do not sing), and this feedback facilitates the development of more advanced forms of song. In humans, playback experiments show that mothers use prelinguistic vocal cues to guide their responses to infants. Vocal learning studies reveal that prelinguistic infants use social feedback from caregivers to build more developmentally advanced forms of vocalization. In both taxa, vocal learning is non-imitative and is driven by perceiving regularities in social partners' reactions to immature vocalizations. Feedback from conspecifics thus provides reliable cues about the consequences of vocalizing, and these cues facilitate infants' acquisition of the basic building blocks of speech and song.

Learning and limits on adult plasticity for speech
Jason Zevin, Weill Medical College of Cornell University

Learning a language in adulthood is more difficult than learning one as a child, as anyone who has tried can attest. In particular, learning to hear and produce speech contrasts (such as the difference between the first sound in "rock" and the first sound in "lock") appears to be subject to a sensitive period. We are exploring the possibility that the acquisition of expertise in one's native language plays a role in limiting adult plasticity. Learning to categorize speech sounds is a process of tuning the perceptual system to the dimensions along which differences are meaningful in a given language, and this tuning may eventually result in diminished sensitivity to foreign speech contrasts. Using a combination of fMRI, EEG, and behavioral studies, we are examining the relationship between the age at which native Japanese speakers started learning English and their ability to perceive English speech contrasts not present in Japanese.