Your native language may affect your musical ability
27 April 2023
Speakers of tonal and non-tonal languages may perceive music differently, according to research conducted through The Music Lab.
Your native language may affect your musical ability, according to University of Auckland and Yale University research.
A study comparing the melodic and rhythmic abilities of almost half a million people speaking 54 different languages found that tonal speakers are better able to discern between subtly different melodies, while non-tonal speakers are better able to tell whether a rhythm is beating in time with the music.
These advantages — in melodic perception for tonal speakers and rhythm perception for non-tonal speakers — were equivalent to about half the boost that you would have from taking music lessons, the researchers reported in the journal Current Biology.
“We grow up speaking and hearing one or more languages, and we think that experience not only tunes our mind into hearing the sounds of those languages but might also influence how we perceive musical sounds like melodies and rhythms,” says Dr Courtney Hilton, a cognitive scientist at Waipapa Taumata Rau (University of Auckland) and Yale who co-led the study.
The research was conducted via The Music Lab, a University of Auckland and Yale collaboration investigating how the human mind creates and perceives music. The head of the lab, Dr Samuel Mehr, recently joined the University of Auckland’s School of Psychology.
While non-tonal languages like English might use pitch to inflect emotion or to signify a question, raising or lowering the pitch of a syllable never changes the meaning of a word. In contrast, tonal languages like Mandarin use sound patterns to distinguish syllables and words.
“This property requires pitch sensitivity in both speakers and listeners, lest one scold (mà) one’s mother (mā) instead of one’s horse (mǎ),” says research co-leader Jingxuan Liu. She’s a native Mandarin speaker who started working on the project as an undergraduate student at Duke University in the US.
The scientists used The Music Lab’s online citizen science platform to test whether speaking a tonal versus non-tonal language impacts people’s musical ability. They recruited almost half a million participants from 203 countries and native speakers of 54 different languages, including 19 geographically dispersed tonal languages such as Burmese, Punjabi, and Igbo.
From New Zealand, the participants included speakers of Mandarin, Cantonese, Thai and Vietnamese (all tonal) and speakers of English (non-tonal).
Participants were given three different musical tasks that tested their ability to discern subtle differences in melody (is this melody the same as the others?), rhythm (is the drum beating in time with the song?), and fine-grained pitch perception (is the vocalist singing in tune?).
Depending on how well they performed, the participants were given increasingly difficult tests, where the differences in melody were more subtle, the mismatched rhythms were almost on beat, and the mis-tuned vocals were closer to being in tune.
Overall, the researchers found that the type of language spoken impacted melodic and rhythmic ability but did not affect people’s capacity to tell whether someone was singing in tune or not. “Native speakers across our 19 tonal languages were better on average at discriminating between melodies than speakers of non-tonal languages, and similarly, all 19 were worse at doing the beat-based task,” says Liu.
That tonal speakers have a slight rhythmic disadvantage was a surprise, but the authors think that may be due to a trade-off in attention. “It’s potentially the case that tonal speakers pay less attention to rhythm and more to pitch, because pitch patterns are more important to communication when you speak a tonal language,” says Hilton.
The question of whether tonal speakers might have an edge when it comes to musicality has been explored before, but earlier studies were unable to separate linguistic influences from other cultural influences.
“Prior studies mostly just compared speakers of one language to another, usually English versus Mandarin or Cantonese,” says Liu. “English and Chinese speakers also differ in their cultural background, and possibly their music exposure and training in school, so it's very difficult to rule out those cultural factors if you're just comparing those two groups.”
“We still find this effect even with a wide range of different languages and with speakers who vary a lot in their culture and background, which really supports the idea that the difference in musical processing in tonal language speakers is driven by their common tonal language experience rather than cultural differences,” says Liu.
“Music has a lot of universal features across different cultures, but this paper shows that those universals can underlie inter-individual and cross-cultural variability,” says senior author Mehr, of the University of Auckland and Yale.
Speaking a given type of language is no substitute for music lessons, however. “Tonal language speakers had a boost in their abilities proportional to about half the boost that you would have on average if you had music lessons,” says Hilton, “but non-tonal language speakers were better at rhythm, and both melody and rhythm are important parts of music.”
There was variation in musical processing and ability between the different tonal languages and between the different non-tonal languages, but the authors say that more study is needed to dig into these smaller-scale patterns. Likewise, more research is needed to understand the mechanisms and developmental pathways behind the differences.
“One huge challenge for understanding how humans process the world is breaking down big topics like music or language into their components like pitch or beat or melody,” says another senior author, Professor Elika Bergelson, of Duke University. “A second challenge is sampling large enough samples of participants that are diverse enough in their experiences to actually be able to draw confident conclusions. This work takes an important step in both of these directions.”
The research was supported by a Royal Society of New Zealand Te Apārangi Rutherford Discovery Fellowship, the Royal Society of New Zealand Te Apārangi Marsden Fund, the Duke Career Center Internship Funding Program for Undergraduate Students, the Harvard Data Science Initiative, and the National Institutes of Health Director’s Early Independence Award.
Media contact
Paul Panckhurst | media adviser
M: 022 032 8475
E: paul.panckhurst@auckland.ac.nz