Words and Melodies Are Decoded in Different Parts of the Brain
Listening to songs is a common activity, yet it relies on our brain’s ability to carry out the difficult task of processing both speech and music at the same time. A new study examined how the brain manages to do this, and it turns out we have different sides of our brain to thank for distinguishing between the words and the melodies in songs.
Katy Pallister
Inspired by a songbird’s ability to separate sounds along two dimensions (time and frequency), Robert Zatorre, a professor at McGill University’s Montreal Neurological Institute and co-author of the study published in Science, told NPR that the team wanted to see whether the same was true in humans.
To do so, the researchers first enlisted the help of a composer and a soprano, who helped create 100 unique a cappella songs, each only a few seconds long, by pairing each of 10 sentences with each of 10 original melodies. Then the researchers had some fun. They altered the timing and frequency patterns of some of the recordings before asking 49 participants whether the melody and the words were the same or different between pairs of tunes.
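To give a rough sense of what a timing-versus-frequency manipulation can look like, here is a minimal sketch, not the study’s actual processing pipeline, of one way to selectively blur temporal or spectral detail in a short recording. The waveform `y`, sample rate `sr`, filename, and smoothing width are all illustrative assumptions.

```python
# A minimal sketch (not the study's actual pipeline) of selectively blurring
# temporal or spectral detail in a recording, using only NumPy and SciPy.
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d


def degrade(y, sr, axis, width=20):
    """Smooth the magnitude spectrogram along one axis.

    axis=1 blurs across time frames (words become hard to follow),
    axis=0 blurs across frequency bins (the melody becomes hard to follow).
    """
    f, t, Z = stft(y, fs=sr, nperseg=1024)
    mag, phase = np.abs(Z), np.angle(Z)
    mag = uniform_filter1d(mag, size=width, axis=axis)  # blur one dimension only
    _, y_degraded = istft(mag * np.exp(1j * phase), fs=sr, nperseg=1024)
    return y_degraded


# Hypothetical usage with one of the a cappella recordings:
# y, sr = soundfile.read("acappella_song.wav")       # illustrative filename
# timing_degraded    = degrade(y, sr, axis=1)        # lyrics lost, melody kept
# frequency_degraded = degrade(y, sr, axis=0)        # melody lost, lyrics kept
```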
The researchers found that when the timing was altered, participants could no longer understand the lyrics but could still recognize the melody. On the other hand, when the frequencies in the songs had been distorted, the lyrics were still recognizable but the melodies no longer were.
While the participants listened to the songs, their brains were scanned using functional MRI. The results showed that each side of the brain was more heavily involved in decoding one of the two elements: speech content was processed primarily in the left auditory cortex, whilst the melodic part was handled primarily in the right.
The idea that the left and right sides of the brain respond differently to speech and music is not new. Speaking to NPR, Daniela Sammler, a researcher at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, who was not involved in the study, explained that “If you have a stroke in the left hemisphere you are much more likely to have a language impairment than if you have a stroke in the right hemisphere.” Other studies also show that damage to parts of the right hemisphere can impair a person’s ability to perceive music.
The study adds to our existing understanding of why this specialization exists: it comes down to the type of acoustical information (in this case, timing and frequency patterns) contained in the source’s sound wave.
Whilst this study used sentences in both French and English, in the future the team would like to use tonal languages, such as Thai and Mandarin, in which pitch also carries word meaning, to see how this may affect the results.