New research suggests that some stroke victims who have lost the ability to speak can be helped by a therapy that connects speech to music, according to a presentation on 20 February at the 2010 AAAS meeting in San Diego.
Researchers at the meeting discussed this therapy alongside other studies investigating the connection between music and language.
Gottfried Schlaug of Beth Israel Deaconess Medical Center and Harvard Medical School in Boston is directing a clinical trial of a speech therapy for stroke victims with Broca's aphasia, a condition caused by damage to the left hemisphere of the brain. Patients lose the ability to produce meaningful speech and are often left communicating in strings of unintelligible sounds.
"Giving patients who cannot speak the ability to answer simple questions can be very powerful," said Schlaug at the AAAS meeting. "Quality of life improves if you can tell people what you need."
Schlaug's treatment, Melodic Intonation Therapy, can give patients that crucial ability. Harnessing the music-language connection in the brain, the therapy teaches patients to speak by assigning melodic tones to syllables. Simple phrases are repeated melodically in a series of 75 one-on-one sessions, and over the course of the therapy, utterances that had been impossible after the stroke are relearned through music.
Not only do patients regain the phrases they practice in the sessions, Schlaug pointed out, but they can also apply the skills they learn to phrases never covered in the one-on-one meetings. In that way, music serves as a powerful scaffold for language learning: a patient who masters the skill of speaking with melodic tones can go on to use it to produce new phrases, not just the ones rehearsed in therapy.
"This is not a new idea. It has been in the literature for over 100 years that stroke victims with aphasia can sing," said Schlaug. "Now we begin to understand why and we begin to apply it clinically."
A full-scale randomized clinical trial of Melodic Intonation Therapy is ongoing at Beth Israel Deaconess Medical Center in Boston.
Musicians and scientists dating back to Darwin himself have theorized about an evolutionary and functional connection between music and language. At AAAS, several scientists highlighted the similarities in how the human brain processes musical and linguistic data. Nina Kraus, director of the Auditory Neuroscience Laboratory at Northwestern University, showed how the oldest parts of the brain involved in processing sounds are strengthened in trained musicians. The same areas are also used to detect patterns within complex sounds like language and music.
"Music has grammar, just like language," explained symposium colleague Aniruddh Patel of the Neurosciences Institute in San Diego. "Pattern recognition, whether it's in music or language, is the method our brains use to understand complex systems of sounds and meanings." Patel presented data showing the importance of context to the human perception of sounds in both music and language.
In addition, Kraus explained, "Musicians are better at hearing speech in noise, at picking out a friend's voice in a crowd." She compared this selectively enhanced pattern recognition in language to a listener's ability to isolate a single instrument's sound within a musical ensemble. Musical training, she said, makes it easier to understand complex tonal input, such as language, in context.
Rebecca Hersher is studying neurobiology in the class of 2011 at Harvard College. Reach her at rhersher@fas.harvard.edu.