Speech and music ‘processed in two different brain hemispheres’
Speech and music are processed in two different “hemispheres” of the human brain uniquely adapted to respond to the different sound types, scientists have found.
These specialised neural systems, located in each hemisphere in the region of the brain known as the auditory cortex, process sound differently depending on whether it is melody or speech.
Dr Philippe Albouy, of McGill University in Canada, the study’s first author, said: “It has been known for decades that the two hemispheres respond to speech and music differently, but the physiological basis for this difference remained a mystery.
“Here we show that this hemispheric specialisation is linked to basic acoustical features that are relevant for speech and music, thus tying the finding to basic knowledge of neural organisation.”
A team of researchers from France and Canada combined 10 original sentences with 10 original melodies to create a collection of 100 a cappella songs, sung without instrumental accompaniment.
They then degraded these recordings using time (temporal) and frequency (spectral) distortions.
The researchers invited 49 participants, made up of English and French speakers, to distinguish the words or the melodies of these distorted recordings.
Dr Albouy said: “We degraded the songs in the temporal dimension or the spectral dimension with the hypothesis that the temporal degradations would affect participants’ recognition of the sentences while spectral degradations would affect the participants’ recognition of the music.”
Next, the researchers set out to analyse how these degraded sounds would affect brain activity.
Brain scans showed that the temporal distortion of sound affected activity in the left auditory cortex of the brain. In this scenario, the participants had trouble distinguishing the speech content, but not the melody.
Conversely, when spectral information was distorted, it affected activity in the brain’s right auditory cortex, and the test subjects struggled to distinguish the melody, but not the speech.
According to the researchers, the findings show the human brain developed complementary neural systems in each hemisphere to process auditory stimulation.
Dr Albouy said: “This can be considered an elegant solution by the central nervous system to optimise the processing of two important communicative signals in the human brain that are speech and music.”
The findings are published in the journal Science.