

Songs of Experience: Music and the Brain

Reported by
Kathleen McGowan

Posted September 30, 2008

Music has special access to the human brain. From our first weeks of life, we have a strong sense of rhythm and an acute sensitivity to melody. As adults, music is integral to both our most basic and some of our most sophisticated cognitive processes.

At certain times, clinicians have also seen hints that music has the power to cure. Melody and rhythm sometimes activate neurological abilities that have been lost to disease or damage. Such observations suggest that music may have broader applications in therapy. In addition, observing how music molds the brain's response to injury may answer more basic questions about neuroplasticity, the ability of the nervous system to reshape and reorganize itself in response to environmental changes.

This four-day conference, Neurosciences and Music III—Disorders and Plasticity, was held at McGill University on June 25–28, 2008. It brought together neurophysiologists, brain imaging researchers, rehabilitation specialists, musicologists, music educators, musicians, and psychologists, among others, to share their work.



This conference and eBriefing promoted by:

Fondazione Pierfranco e Luisa Mariani neurologia infantile

Web Sites

Pierfranco and Luisa Mariani Foundation
The Mariani Foundation was the lead organizer for the Neurosciences and Music III conference. This nonprofit organization's core mission is to promote progress in child neurology by providing services, educational events, and funding for research. Go to their site for more information about neuroscience and music conferences, and see their neuromusic news releases for information about recent papers in the field.

BRAMS (International Laboratory for Brain, Music, and Sound Research)
This research unit, affiliated with McGill University, the Université de Montréal, and the Montreal Neurological Institute, provides links to many music and brain science researchers in the Montreal region.

House Ear Institute
Information about ongoing research into the auditory system and technologies to improve hearing. Tigerspeech is a program designed to help people with cochlear implants learn to make use of their prosthetics.

Nature Magazine Web Focus: Science & Music
This recent nine-part series features essays on the interface between science and music, as well as a podcast.

This is Your Brain on Music
The promotional Web site for Daniel Levitin's book describes the role of various brain regions in music processing and includes references to the brain in popular songs.


Books

Brown S, Volgsten U, eds. 2005. Music and Manipulation: On the Social Uses and Social Control of Music. Berghahn Books, New York.

Blacking J. 1995. How Musical is Man? University of Washington Press, Seattle.

Levitin DJ. 2006. This Is Your Brain on Music: The Science of a Human Obsession. Plume, New York.

Mithen S. 2006. The Singing Neanderthals: The Origins of Music, Language, Mind and Body. Harvard University Press, Cambridge, MA.

Patel AD. 2007. Music, Language and the Brain. Oxford University Press, New York.

Peretz I, Zatorre R, eds. 2003. The Cognitive Neuroscience of Music. Oxford University Press, New York.

Sacks O. 2007. Musicophilia: Tales of Music and the Brain. Knopf, New York.

Thaut MH. 2005. Rhythm, Music, and the Brain: Scientific Foundations and Clinical Applications. Routledge, New York.

Wallin NL, Merker B, Brown S, eds. 2001. The Origins of Music. The MIT Press, Cambridge, MA.


Journal Articles

Ayotte J, Peretz I, Hyde K. 2002. Congenital amusia: A group study of adults afflicted with a music-specific disorder. Brain 125: 238-251.

Brown S, Martinez MJ. 2007. Activation of premotor vocal areas during musical discrimination. Brain Cogn. 63: 59-69.

Brown S, Ngan E, Liotti M. 2008. A larynx area in the human motor cortex. Cereb. Cortex 18: 837-845.

Chen JL, Penhune VB, Zatorre RJ. 2008. Listening to musical rhythms recruits motor regions of the brain. Cereb. Cortex. [Epub ahead of print]

Chen JL, Penhune VB, Zatorre RJ. 2008. Moving on time: brain network for auditory-motor synchronization is modulated by rhythm complexity and musical training. J. Cogn. Neurosci. 20: 226-239.

Chen JL, Zatorre RJ, Penhune VB. 2006. Interactions between auditory and dorsal premotor cortex during synchronization to musical rhythms. Neuroimage 32: 1771-1781.

Dalla Bella S. 2008. Singing out of tune: Disturbances of vocal performance in the general population. J. Acoust. Soc. Am. 123: 3379.

Grahn JA, Brett M. 2007. Rhythm and beat perception in motor areas of the brain. J. Cogn. Neurosci. 19: 893-906.

Gunji A, Ishii R, Chau W, et al. 2007. Rhythmic brain activities related to singing in humans. Neuroimage 34: 426-434.

Hyde KL, Zatorre RJ, Griffiths TD, et al. 2006. Morphometry of the amusic brain: a two-site study. Brain 129: 2562-2570.

Large EW. (in press). Resonating to musical rhythm: theory and experiment. In: Grondin S, ed. The Psychology of Time. Elsevier, New York.

Mandell J, Schulze K, Schlaug G. 2007. Congenital amusia: an auditory-motor feedback disorder? Restor. Neurol. Neurosci. 25: 323-334.

Marmel F, Tillmann B, Dowling WJ. 2008. Tonal expectations influence pitch perception. Percept. Psychophys. 70: 841-852.

Patel AD. 2003. Language, music, syntax and the brain. Nat. Neurosci. 6: 674-681.

Patel AD, Iversen JR. 2007. The linguistic benefits of musical abilities. Trends Cogn. Sci. 11: 369-372.

Patel AD, Iversen JR, Chen Y, Repp BH. 2005. The influence of metricality and modality on synchronization with a beat. Exp. Brain Res. 163: 226-238.

Peretz I, Zatorre RJ. 2005. Brain organization for music processing. Annu. Rev. Psychol. 56: 89-114.

Ruiz MH, Koelsch S, Bhattacharya J. 2008. Decrease in early right alpha band phase synchronization and late gamma band oscillations in processing syntax in music. Hum. Brain Mapp. [Epub ahead of print]

Schön D, Boyer M, Moreno S, et al. 2008. Songs as an aid for language acquisition. Cognition 106: 975-983.

Shannon RV. 2005. Speech and music have different requirements for spectral resolution. Int. Rev. Neurobiol. 70: 121-134.

Stegemöller EL, Skoe E, Nicol T, et al. 2008. Musical training and vocal production of speech and song. Music Perception 25: 419-428.

Wong PCM, Skoe E, Russo NM, et al. 2007. Musical experience shapes human brainstem encoding of linguistic pitch patterns. Nat. Neurosci. 10: 420-422.

Volkova A, Trehub SE, Schellenberg EG. 2006. Infants' memory for musical performances. Dev. Sci. 9: 583-589.

Zatorre RJ, Chen JL, Penhune VB. 2007. When the brain plays music: auditory–motor interactions in music perception and production. Nat. Rev. Neurosci. 8: 547-558.


Isabelle Peretz, PhD

Université de Montréal
International Laboratory for Brain, Music, and Sound Research (BRAMS)

Isabelle Peretz is professor of psychology at the Université de Montréal and codirector of the International Laboratory for Brain, Music, and Sound Research (BRAMS). She also holds the Canada Research Chair in Neurocognition of Music and the Casavant Chair in Neurocognition of Music, and is associate professor in the School of Hearing and Audiology at the Université de Montréal. Her research interests include the biological foundations of music, how the brain is organized for music, music-specific impairments, neural correlates of musical emotions, speech prosody, music and speech in singing, and the neural correlates of pitch-related deficits.

Robert J. Zatorre, PhD

Montreal Neurological Institute, McGill University

Robert Zatorre is a professor in the Department of Neurology and Neurosurgery at the Montreal Neurological Institute at McGill University. He heads the Auditory Processing Laboratory, which conducts basic research to understand the function of complex auditory processes, especially the processing of musical sounds and speech. He also works on auditory spatial processes and cross-modal plasticity, as well as anatomical measures of auditory cortex and its relation to hemispheric asymmetries.

Virginia Penhune, PhD

Concordia University

Virginia Penhune is professor of psychology at Concordia University in Montreal. She heads the Laboratory for Motor Learning and Neural Plasticity, which investigates changes in the human brain that occur due to motor learning and performance. Her team is working to identify brain regions that control changes in movement kinematics through learning, specific kinematic parameters that change as a skill is acquired, and how musical training affects the ability to learn.

Giuliano Avanzini, PhD

Fondazione IRCCS, Istituto Neurologico "C. Besta", Milan

Giuliano Avanzini is emeritus chairman of the Department of Neurosciences at the Istituto Neurologico "C. Besta" in Milan, Italy, and professor of neurology at the University of Ferrara, Italy. Since the 1970s he has conducted research in neurophysiology, in particular epilepsy. He is editor-in-chief of Neurological Sciences and a member of the editorial boards of the European Journal of Neurology, Epilepsia, Epilepsy Research, and Acta Neurologica Scandinavica. He is president of the International School of Neurological Sciences of Venice, director of the Summer School of Epileptology, and past president of the International League Against Epilepsy.

Luisa Lopez, MD, PhD

University of Rome "Tor Vergata"

Luisa Lopez is a neurophysiologist with a PhD in child neurology. She heads the child neurology unit at the "Eugenio Litta" Rehabilitation Center in Grottaferrata, Rome. She also consults and teaches in the Child Neurology Department at the University of Rome "Tor Vergata". Her combined interests in child neurology and music have drawn her to the Mariani Foundation, where she has consulted on its neuroscience and music project since 2000.

Maria Majno, PhD

Pierfranco and Luisa Mariani Foundation

Maria Majno is executive director of the Fondazione Pierfranco e Luisa Mariani for Child Neurology (Milan, Italy), a nonprofit organization dedicated to care and services, research, and continuous specialized education in the field of developmental disabilities. Ever since her initial involvement with the Mariani Foundation in 1987, she has been in charge of general management and programming of training and international meetings, and of related publishing activities. Her background in the humanities and music has steered her interest toward the relationship between music and the neurosciences, which in recent years has become a main focus of the Foundation's activities.

Keynote Speaker

Steven Mithen, PhD

University of Reading

Steven Mithen is an archaeologist whose work centers on four themes: late Pleistocene and early Holocene hunter-gatherers and farmers; computational archaeology; the evolution of the human mind, language, and music; and water, life, and civilization. He completed his PhD in archaeology at Cambridge University. Between 1987 and 1992 he was a research fellow at Trinity Hall and then lecturer in archaeology at Cambridge. After moving to the University of Reading, he was promoted to senior lecturer (1996), reader (1998), and then professor of early prehistory (2000). In August 2002 he was appointed the first head of the School of Human & Environmental Sciences, formed from the Departments of Archaeology, Geography, and Soil Science and the Postgraduate Institute of Sedimentology. He was elected a Fellow of the British Academy in 2004. He is the author of The Singing Neanderthals: The Origins of Music, Language, Mind, and Body.


Eckart Altenmüller, MD

University of Music and Drama, Hannover

Emmanuel Bigand, PhD

Institut Universitaire de France

Elvira Brattico, PhD

University of Helsinki

Steven Brown, PhD

Simon Fraser University

Joyce L. Chen

McGill University, BRAMS

Simone Dalla Bella, PhD

University of Finance and Management, Warsaw

Luciano Fadiga, PhD

University of Ferrara

Jessica A. Grahn, PhD

Cambridge University

Pamela Heaton, PhD

Goldsmiths, University of London

John R. Iversen, PhD

The Neurosciences Institute

Lutz Jäncke, PhD

University of Zurich

Stefan Koelsch, PhD

University of Sussex

Nina Kraus, PhD

Northwestern University

Edward W. Large, PhD

Florida Atlantic University

Daniel J. Levitin, PhD

McGill University

Mathias Oechslin

University of Zurich

Caroline Palmer, PhD

McGill University, BRAMS

Christo Pantev, PhD

University of Münster

Aniruddh D. Patel, PhD

The Neurosciences Institute

Maria Cristina Saccuman, PhD

University Vita-Salute San Raffaele

Jenny Saffran, PhD

University of Wisconsin

Séverine Samson

University of Lille 3
Villeneuve d'Ascq & La Salpêtrière Hospital, Paris

Gottfried Schlaug, MD, PhD

Beth Israel Deaconess Medical Center
Harvard Medical School

Matthew Schulkind, PhD

Amherst College

Robert V. Shannon, PhD

House Ear Institute

Joel S. Snyder, PhD

University of Nevada, Las Vegas

Karsten Steinhauer, PhD

McGill University

Mari Tervaniemi, PhD

University of Helsinki

Michael H. Thaut, PhD

Colorado State University

William Forde Thompson, PhD

Macquarie University

Laurel J. Trainor, PhD

McMaster University & Rotman Research Institute

Sandra Trehub, PhD

University of Toronto

Patrick C.M. Wong, PhD

Northwestern University

Kathleen McGowan
Kathleen McGowan is a freelance magazine writer specializing in science and medicine.


To understand cognitive function, neuroscientists often look to the exceptions: people with exceptional abilities or deficits. Tone-deafness, a specific deficit of melodic processing, may not present a major challenge to leading a normal life, but it's useful in the lab, said Aniruddh Patel of the Neurosciences Institute in San Diego: "It's exciting for cognitive neuroscience in general, because it gives us a system to trace the path between genes and complex cognition." Similarly, those rare people who possess absolute pitch—the ability to identify a tone in isolation—can help researchers identify the neural underpinnings of pitch discrimination, and the relative contributions of genetics and training.


Congenital amusia, a term coined by Isabelle Peretz, is a developmental deficit in melodic discrimination and pitch processing that probably affects between 2% and 4% of the population. Patel, exploring how this core deficit affects speech comprehension (making it difficult, for example, to distinguish between questions and statements in English), probes how amusics discriminate melodic contours, changes in the pace and direction of pitch during a sentence. He suggested that humans' unusual sensitivity to relative pitch in comparison with other animals could indicate that a natural range of genetic variations arose and was maintained in the population over the course of evolution. The most extreme variations give rise to this specific deficit. Amusia, most commonly described in terms of the failure to perceive or reproduce pitch change, he argued, may also have several subtypes, such as rhythmic deficits.

The science of bad singing is generating many provocative findings about the mechanisms that underlie musical production. Untrained singers generally believe they cannot carry a tune, although pitch and time analyses reveal that most are actually adequate singers. For those few who truly cannot sing, the cause of failure is not obvious: Is it a lapse of vocal control, or an inability to match perception with production? Simone Dalla Bella of the Warsaw University of Finance and Management, in describing the phenotypes of bad singers, has found a variety of selective errors, but in general, poor singers are more likely to sing out of tune than out of time.

Most people can reproduce the pitch and time intervals of a tune with reasonable accuracy, but bad singers who err by more than two standard deviations when reproducing pitch tend also to make mistakes in timing.
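A criterion like the one above can be sketched numerically. This is a purely illustrative calculation, not the researchers' actual method: the 50-cent population standard deviation and the error values below are made-up placeholders, and real studies analyze full pitch and timing trajectories, not a handful of notes.

```python
import math

def cents(f_produced, f_target):
    """Deviation of a produced pitch from its target, in cents
    (1/100 of a semitone); 1200 cents = one octave."""
    return 1200 * math.log2(f_produced / f_target)

def exceeds_two_sd(deviations_cents, population_sd=50.0):
    """Illustrative 2-standard-deviation criterion: flag a singer whose
    mean absolute pitch error exceeds twice an assumed population SD
    (50 cents here is a placeholder, not a figure from the study)."""
    mean_abs = sum(abs(d) for d in deviations_cents) / len(deviations_cents)
    return mean_abs > 2 * population_sd

# A singer roughly 30 cents off on average falls within the normal range...
ok = exceeds_two_sd([25, -35, 30, -28])
# ...while one averaging ~150 cents (1.5 semitones) is flagged.
bad = exceeds_two_sd([140, -160, 155, -145])
print(ok, bad)  # False True
```

The cents scale matters here because pitch perception is logarithmic: a fixed error in hertz is far more audible on a low note than a high one, so deviation is measured as a frequency ratio.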

In related research, Steven Brown of Simon Fraser University investigates singing as an example of imitative or mirroring behavior. He posits that the basic problem of poor-pitch singing is a sensorimotor mistranslation: a failure to convert auditory information into appropriate motor signaling. "Vocal pitch imitation is perhaps the consummate mirror behavior," he said. "We can measure the accuracy of pitch imitation much better than things like imitation of facial expression, hand gestures, and the like." The larynx and the region of motor cortex that controls it could be key sites for this sensorimotor integration.

Cochlear implants, prosthetics for both congenitally deaf children and adults who lose their hearing, transmit a relatively crude signal from the outer ear to the auditory nerve, which the brain interprets as sound. The implants work very well for speech perception, in part because we are so overtrained in recognizing speech, said Robert Shannon of the House Ear Institute, but not as well for music recognition or for interpreting the emotional content of speech. Experiments with implant users show that linguistic and musical processing call upon some of the same mechanisms, an insight that could help improve both music listening and speech perception. "We need to systematically understand the training regimes that would enable the shared resources to be capitalized upon," said Nina Kraus of Northwestern University. Interestingly, children who receive cochlear implants early are much more engaged in musical activities than one might expect, and actually enjoy both listening to and making music. "Their performance in singing may be poor by our standards," said Sandra Trehub of the University of Toronto at Mississauga, "but surely they can appreciate music."

The other extreme of human musical performance is absolute pitch. Much research has suggested that this remarkable ability is strongly determined by genetics, but training is also key: two-thirds of adults with absolute pitch began studying music before age 6. Mathias Oechslin of the University of Zurich has found that as many as 80% of young children undergoing Suzuki training show a capacity for tone discrimination and distinctive neural responses that presage the capacity to develop absolute pitch. The finding is controversial, but points the way toward our brains' powerful ability to be fine-tuned by environmental stimuli.

Hearing and recognizing music come naturally to most of us, but are the result of complex cognition, requiring sensitive attention to pitch variations, rhythm processing, and the ability to distinguish timbre. Neuroscientists who study music and the brain are now mapping the basic neurocircuitry of these responses—which parts of the brain respond, in what order, and through what relationships.

Sensory and motor regions of the brain have traditionally been considered two separate systems, but evidence from the science of music suggests this distinction may be overstated. Humans, unlike most other animals, have a powerful tendency to perceive meter and to synchronize, or "entrain," with it, moving our bodies in rhythm to a regular beat. We often feel unconsciously compelled, for example, to tap our toes or bob our heads in time with music. New findings suggest this has a basis in our neurology. Music automatically activates regions of the brain involved in initiating the movements necessary for singing and dancing. This coupling between auditory and motor regions is strengthened in trained musicians, suggesting that these interconnections are flexible. Joyce Chen of McGill University has used fMRI to identify the activity of various brain regions as the brain processes rhythms of varying complexity. Her research suggests that the superior temporal gyrus and the ventral and dorsal premotor cortices are all engaged by action-related sounds.


Mapping the circuitry of music is also leading to a more sophisticated understanding of intention and action-planning. We typically imagine that when we perform an action, higher cortical "planning" regions call the shots, and direct the activity of motor regions that carry out instructions. This schematic is being replaced by a new appreciation of the role of motor and premotor regions in higher-level processing.

Researchers have shown, for example, that when we listen silently to music, the neurons that control the larynx become more active. Others have documented that when piano players passively listen to piano music, the premotor regions for their fingers are activated. Imaging studies have identified a distributed network of brain regions, including Broca's area, the insula, and the ventrolateral premotor cortex, involved in this audiomotor translation. Research by conference co-organizer Isabelle Peretz (Université de Montréal) and colleagues shows that the supplementary motor area (SMA) is also activated when subjects listen silently to familiar melodies—they are, in a sense, mentally singing along with the tune. "These regions are really important in premotor planning," said Peretz.

Similarly, cognitive interpretations of the sounds we hear influence our basic sensory perception. Our brains respond differently when we hear the downbeat on the first rather than the second of two tones, even if the stimulus is objectively identical. "Not only does sound enable us to move rhythmically, but the way we hear the beat causes us to hear differently," said John Iversen of the Neurosciences Institute.


When we listen to music, we perceive a steady beat as intrinsic to the music itself, but that stable periodicity is a percept—something we construct in our minds. It is not clear how we create this sense of rhythm or where our timing abilities arise. Functional MRI studies show activity in regions including the premotor cortex, basal ganglia, cerebellum, prefrontal cortex, and putamen. Edward Large of Florida Atlantic University proposed that the human capacity to maintain rhythms in the absence of external input may be due to "neural oscillators." He posited that these could be an emergent property of groups of neurons that create periodic processes in the brain through alternating bursts of activity. Such a mechanism could underlie our ability to perceive rhythms in very ambiguous stimuli, such as highly syncopated music.
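The flavor of such entrainment can be caricatured with a toy forced phase oscillator. This is a purely illustrative sketch, not Large's actual model (which involves networks of nonlinear neural oscillators); the frequencies and coupling strength here are arbitrary placeholders.

```python
import math

def entrain(natural_hz, beat_hz, coupling=0.8, dt=0.001, seconds=10.0):
    """Toy forced phase oscillator: an internal rhythm with its own natural
    frequency is nudged each step toward a periodic external beat. Returns
    the oscillator's effective frequency averaged over the run."""
    phase = 0.0        # oscillator phase, in cycles
    beat_phase = 0.0   # stimulus phase, in cycles
    for _ in range(int(seconds / dt)):
        # Advance at the natural rate, pulled toward the beat's phase.
        error = math.sin(2 * math.pi * (beat_phase - phase))
        phase += dt * (natural_hz + coupling * error)
        beat_phase += dt * beat_hz
    return phase / seconds

# With coupling, a 2.2 Hz internal rhythm locks onto a 2.0 Hz beat;
# without coupling, it keeps drifting at its own natural frequency.
locked = entrain(natural_hz=2.2, beat_hz=2.0)
free = entrain(natural_hz=2.2, beat_hz=2.0, coupling=0.0)
```

The qualitative point matches the text: once coupling is strong enough relative to the frequency mismatch, the internal oscillation settles at a fixed phase lag and keeps time with the beat, even though nothing in the oscillator stores the beat explicitly.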

Other findings suggest that language and music processing share many neuronal networks and functions, and may have similar organizational structures in the brain. Along these lines, Luciano Fadiga of the University of Ferrara proposed a new description of Broca's area, the left inferior frontal region identified more than a century ago as essential to speech. This region is not limited to speech: it responds to both music and complex sentences in humans, and its homolog in monkeys is activated while executing or perceiving precision grips. Fadiga suggested it might function at the level of representation of actions or "potential motor acts." Humans with damage to Broca's area have nonfluent aphasias, but they also have difficulty ordering photographs that show a clear chain of events, he pointed out. "The premotor nature of Broca's region cannot be neglected," he said.

As anyone who can belt out the lyrics of a Barry Manilow hit will tell you, music seems to have a unique place in memory. For this reason, musical behavior provides a good way to study learning and neural change.

Although it may seem obvious that "earworms" stick in your head, this basic phenomenon has yet to be proven, said Matthew Schulkind of Amherst College. "If these claims could be verified empirically, it would suggest there's something special about musical memory," he said. "It would suggest we can retain information for a very long time without rehearsal," which would challenge current models of how memory works.

In general, memory research has focused on visual and verbal stimuli, and memory for music is poorly understood. Some research indicates that music can have a potentiating effect on memory in dementia patients, perhaps by increasing attention and arousal. And there is no doubt we are very good at recognizing familiar tunes: findings presented by Emmanuel Bigand of the Université de Bourgogne show that fragmentary passages of music as short as 50 milliseconds can reliably generate a feeling of recognition in trained musicians, even when they cannot identify the melody.

Humans' skills in music recognition begin developing very early. Young babies exposed to music rapidly learn to distinguish consonant from dissonant tones, variations in timbre, and other subtle musical attributes. An fMRI study of newborns just a few days old exposed to dissonant phrases and excerpts from Scarlatti and Schubert indicates that specialized systems dedicated to musical stimuli are present even at birth, suggesting a neural predisposition for musical processing. "The newborn brain appears to be able to extract regularities from musical stimuli," said Maria Cristina Saccuman of the University Vita-Salute San Raffaele, Milan.

By the age of six months, babies are highly attuned to "infant-directed speech," speech with exaggerated prosody. "Why do they like it so much?" queried Jenny Saffran of the University of Wisconsin, Madison. Its musical quality seems to make language acquisition easier, her research suggests, as infants learn more quickly from sung speech than from spoken speech. Richer information may actually speed learning; infants also learn melodies with lyrics more quickly than those without words. "We may underestimate how infants learn," said Saffran. "The additional complexity of natural speech and music may be particularly beneficial."

Infants attended more closely to a novel list of numbers when it was sung rather than spoken, and this richer information may facilitate learning.

A variety of evidence suggests that musical training induces lasting changes in the brain. In trained musicians, the auditory cortex delivers stronger and earlier responses. In violinists, the regions of sensory cortex that represent the most heavily used fingers are enlarged. Conversely, piano training reduces activation in the cerebellum. Correlational studies have also linked musical training to improvements in language, mathematical ability, and spatial reasoning.

Debate persists as to whether training actually boosts these higher cortical functions, or whether a pre-existing and perhaps genetic predisposition accounts for both musical and other skills, but the specificity of the effects of musical training suggests that experience is essential, said Laurel Trainor of McMaster University and the Rotman Research Institute. Her research has linked music training to increased gamma-band activity, which has been connected to attention, anticipation, and feature binding of stimuli. "It looks like this response might tell us about how the auditory cortex changes through musical practice in ways that affect other areas of the brain," she said.

Multisensory information integration might be a key. Christo Pantev of the University of Münster showed that volunteers who learned melodies by playing them performed better than volunteers who learned the melodies by listening only, and MEG imaging revealed distinct electrophysiological changes associated with learning. Motor input potentiated plastic change in the auditory cortex.

Northwestern University's Patrick Wong is studying how both formal musical training and informal exposure to different types of music relate to cognitive and neural function. His experiments suggest, for example, that although it is not essential, formal musical training can make it easier for individuals to distinguish tones in foreign languages. MRI studies also suggest that extensive training on a specific instrument can produce a cortical network of expertise specifically attuned to that instrument. Similar effects may occur in untrained, everyday music listeners, whose cultural experience can frame their responses to unfamiliar music of other cultures. A study comparing Westerners and emigrant Indians with people living in rural India who have had little exposure to Western music suggests that cultural circumstances may shape some people to be "bimusical": the brains of these subjects respond similarly to musical stimuli from both cultures.

Wong argued that such studies will eventually have educational and clinical implications, as researchers gain a more complete understanding of how music can contribute to learning. He also provided some early evidence that it may be important to account for differences in auditory behaviors between cultures.

Music has a direct line to the emotions, and can even evoke contradictory emotions simultaneously. "It can induce such sad feelings, but also induce joy," said Elvira Brattico of the University of Helsinki. She explores the brain processes behind emotional and aesthetic responses to music, scanning volunteers' brains while they listen to both beloved and loathed music. "Favorite music has powerful effects on the deep structures of the brain," she said.

Temple Grandin, an author with autism, has said that she doesn't love music but merely finds it pretty. But many people with autism spectrum disorders, in which emotional processing and socialization are impaired, are very interested in and engaged with music, and many with even severe disabilities develop impressive musical skills. "The idea that [people on the autism spectrum] are interested in music and have emotional deficits can be seen as a paradox," said Daniel Levitin of McGill University, one that he is exploring in his research. When presented with short excerpts that most people would have no trouble describing as happy, sad, frightening, or peaceful, people with autism and related disorders are less skilled at making these associations. They are also unable to distinguish between different levels of expressivity in music.

Pamela Heaton, by contrast, found generally good performance, although a high level of variability, in her studies of high-functioning autistics asked to associate musical excerpts with emotions. In general, people with autism might be responding to nonemotional aspects of music, as Levitin suggested, or they may find in music an alternative to interpersonal emotional experiences. Autistic people "find interpersonal relations extremely stressful," said Heaton, and this might be why they are drawn to song: "Music is a living, emotional stimulus that we don't have to interact with."

Music sometimes has apparently miraculous effects on patients with nervous system damage. For decades, the clinical literature has documented examples of aphasia patients who, suffering from a lesion in the left frontal or superior temporal lobe, can sing fluently but have almost entirely lost the ability to speak. Other isolated reports have described people with advanced dementia who have forgotten even their own families, but may still be able to remember tunes and lyrics. The fact that these abilities can be readily engaged by music suggests that these functions are not entirely lost, and could potentially be restored. Understanding how music naturally brings these capacities back online in the brain could lead to new therapeutic strategies for neurodegeneration and brain damage.

Music simultaneously activates sensory and motor systems in ways that may facilitate plasticity.

Beyond these anecdotal reports, there are other reasons to believe that music could be a uniquely powerful tool in rehabilitation. Learning and listening to music simultaneously activates sensory and motor systems in ways that can facilitate the plasticity necessary to regain lost functions. Increased attention and arousal can boost recovery in a more generalized way. Music helps people re-engage socially, and, because it is a lot more fun than most rehab programs, can both improve mood and make rehabilitation more appealing.

The effects of music on aphasia suggest that language might be supported by duplicate networks in the brain. After significant left hemispheric lesions, patients can begin to recover some function through two distinct routes, said Gottfried Schlaug of Beth Israel Deaconess Medical Center and Harvard Medical School. They may learn to recruit what is left of the damaged left hemisphere. Alternatively, the right hemisphere may take over some of the specialized language ability that was limited to the left side of the brain, and music may aid in this redevelopment and retraining process.

Schlaug used a program, Melodic Intonation Therapy, in which patients begin by humming phrases while tapping a hand along to a steady beat; humming is replaced by singing and then, over eight weeks, gradually by speaking with exaggerated musical prosody. "The idea is that melodic intonation leads to more right hemispheric activation," said Schlaug. That was the hypothesis when the technique was first described; now it can be tested directly. Schlaug showed several impressive videos of stroke patients making progress with this technique. DTI scans indicate growth in the arcuate fasciculus, which connects temporal and premotor regions. The results are now being tested in a randomized clinical trial.

Here, diffusion tensor imaging reflects changes in the arcuate fasciculus of patients with left hemispheric damage, after training with Melodic Intonation Therapy has led to improvements in speech.

Stroke or neurodegenerative diseases might also be effectively treated with music. Music could act "prosthetically" for Parkinson's disease patients, suggested Jessica Grahn, providing sensory stimulation that makes it easier to engage motor mechanisms. In stroke patients, Eckart Altenmüller of the University for Music and Theater, Hannover, found that three weeks of training on the piano led to strong improvements in upper limb fine motor control. Increases in event-related desynchronization in EEG directly before movement onset provided additional evidence that this technique was changing neuronal responses. Its success is "probably based on the optimal shaping of task demands," said Altenmüller. "Music is also great because it's self-rewarding, can line up very closely with actual abilities, and provides immediate auditory feedback"—all characteristics that can assist with rehabilitation.

Rhythmic stimulation provided by a metronome as part of a training program also helped stroke and traumatic brain injury patients regain the ability to walk, inducing improvements in velocity, stride length, and cadence superior to those following conventional training methods. "There's no magic to why these people suddenly get better when they hear music," said Michael Thaut of Colorado State University. The damage has made it difficult to control positional changes at the proper velocity, and rhythmic stimulation provides a more explicit timing cue.

The accumulating evidence that music processing connects into so many fundamental networks and functions in the brain raises a basic question: Why? Why would something that seems so incidental to survival be so deeply rooted in the brain, and so intimately intertwined with essential functions such as speech, emotional processing, learning, and even walking? "There are not many other activities to which people are so compulsively drawn, and those that exist usually have obvious survival value," said keynote speaker Steven Mithen, an anthropologist at the School of Human and Environmental Sciences at the University of Reading.

Mithen believes music is part of our evolutionary legacy. Our hominin ancestors, he proposed, communicated via variations in rhythm and pitch. As they became increasingly dependent on their social network to hunt and survive, understanding one another's emotions and mental states would have become more and more important. Musicality, said Mithen, would have served both as a communication method and as a social glue. By the time of the Neanderthals, hominins may have developed a musical proto-language, using body and voice together—a system Mithen calls "Hmmm": "It would have been singing and dancing, their musicality critical to their social lives."

Some 200,000 to 70,000 years ago, that ancestral form diverged into two forms, what we now call speech and music. Speech became the primary method of transmitting information, whereas music remained a method of emotional communication and a means to facilitate social bonding. "Music was essential to the survival of our stone-age ancestors, and we have inherited the compulsion to engage in music," he said.

Mithen's persuasive argument is that music tells the story of where we came from. But the diverse research presented at this conference also suggests that music may point the way toward what we could be. Studying how music works in the brain might permit us to develop the treatments and therapies that could heal the sick—and to finally answer the question of who we truly are.

Why does musical training improve linguistic, mathematical, and spatial reasoning performance?

Does musical training improve attention and memory, and if so, by what cortical mechanisms?

What brain areas are activated during rhythm perception?

What neural computations give rise to rhythm perception?

What is a "beat," cognitively speaking? Does it have to do with expectancy, or attention?

Is bad singing a failure of vocal control, or an inability to match perception with production?

What are the neural mechanisms of out-of-tune singing?

Is a neural predisposition for processing music present at birth?

Is memory for music different from memory for other types of information?

Can music potentiate memory?

Why is infant-directed speech so compelling to babies, and does it promote language acquisition?

How and where is multisensory information processed in the brain?

Is absolute pitch a rare, genetically endowed trait, or the reproducible product of early training?

Are there cultural differences in how we process music, based on exposure?

Can we localize the musical lexicon in the brain?

Why are very young babies so good at distinguishing music, when their auditory cortex is immature?

What is the purpose of "mirror" mechanisms in the brain?

Why are people with autism deficient in emotional processing yet intensely interested in and engaged with music?

How can people with cochlear implants perceive speech readily, even though they get only crude auditory information?

Why do humans love music?

Music has special access to the human brain. From our first weeks of life, we have a strong sense of rhythm and an acute sensitivity to melody. As adults, music is integral to both our most basic and some of our most sophisticated cognitive processes. Listening to music changes everything from the way we talk to the way we move. It also has an extraordinary ability to evoke and modulate emotions, a deep structural relationship to language, and a profound hold on memory. For scientists of the brain and of human behavior, music offers a unique window on how we comprehend and respond to the world.

Music may be a new way to engage the brain's innate ability to heal itself.

At certain times, clinicians have also seen hints that music has the power to cure. Melody and rhythm sometimes activate neurological abilities that have been lost to disease or damage. People who can no longer speak may still be able to sing. Patients who have lost neurological control of their muscles may move fluidly to a beat. These isolated observations suggest that music may have broader applications in therapy. Music may turn out to be a new way to engage the brain's innate ability to heal itself. In addition, observing how music molds the brain's response to injury may answer more basic questions about neuroplasticity, the ability of the nervous system to reshape and reorganize itself in response to environmental changes.

Music's capacity to reshape the brain was the main focus of a conference titled The Neurosciences and Music III: Disorders and Plasticity, promoted by the Pierfranco and Luisa Mariani Foundation in partnership with BRAMS (International Laboratory for Brain, Music, and Sound Research), McGill University, the Montreal Neurological Institute and Hospital, and the Université de Montréal, with cooperation from the New York Academy of Sciences. This four-day conference, held at McGill in Montreal, Canada on June 25–28, 2008, brought together neurophysiologists, brain imaging researchers, rehabilitation specialists, musicologists, music educators, musicians, and psychologists, among others. It will also be the focus of an upcoming volume of the Annals of the New York Academy of Sciences.

These researchers' investigations into our ability to perceive and produce music—the basic hardware of music processing—are revealing the relationship between human musical abilities and faculties such as speech, emotion, memory, and motor control. "In music, we think in sound," said Michael H. Thaut of Colorado State University. "Music is probably a biological language of the brain, and may be a precursor to higher cognitive function."

Advances in techniques such as functional magnetic resonance imaging (fMRI), which visualizes activity within the brain with great spatial precision, diffusion tensor imaging (DTI), which depicts the strength of connections between different brain areas, and electroencephalography (EEG) and magnetoencephalography (MEG), which track brain responses with near real-time accuracy, allow new visualization of these circuits and responses. Investigating the responses of infants, professionally trained musicians, and people with music-specific deficits such as tone-deafness provides a wealth of information about the substrates and mechanisms of musical processing. But why we are such an intensely musical species—why we feel almost universally compelled to play, listen, sing, and dance—is still an open question.

Highlights from the meeting:

  • Imaging studies are revealing the regions in the brain and the neural circuitry involved in human perceptions and responses to pitch, rhythm, and timbre in music.
  • Studies of individuals with absolute pitch or deficits such as tone deafness or an inability to perceive music can offer unique insights into the genetics and physiology of music processing.
  • The brain has a remarkable ability to remember and recognize musical phrases. Researchers are investigating how this occurs.
  • The ability to perceive and respond to music appears to be present from birth. The musical quality of infant-directed speech can make it easier to acquire language.
  • Studies have linked musical training to improvements in language, mathematical ability, and spatial reasoning. The question of whether musical training causes these improvements has not been answered definitively, but researchers are looking for activity in the brain that might provide some clues.
  • Perhaps paradoxically, people with autism are often very engaged with music; researchers are seeking to understand why.
  • Using music in rehabilitation programs for victims of stroke and traumatic brain injury has shown a number of positive outcomes.
  • Music may have played an important role in the evolution of language and communication in our hominin ancestors, and persisted as a method of emotional communication and a means to facilitate social bonding.

Continue reading for a more detailed report on the conference, as well as slides and audio from several of the speakers' presentations.