Yuto Ozaki et al., Globally, songs and instrumental melodies are slower and higher and use more stable pitches than speech: A Registered Report. Science Advances 10, eadm9797 (2024). DOI: 10.1126/sciadv.adm9797
Abstract: Both music and language are found in all known human societies, yet no studies have compared similarities and differences between song, speech, and instrumental music on a global scale. In this Registered Report, we analyzed two global datasets: (i) 300 annotated audio recordings representing matched sets of traditional songs, recited lyrics, conversational speech, and instrumental melodies from our 75 coauthors speaking 55 languages; and (ii) 418 previously published adult-directed song and speech recordings from 209 individuals speaking 16 languages. Of our six preregistered predictions, five were strongly supported: Relative to speech, songs use (i) higher pitch, (ii) slower temporal rate, and (iii) more stable pitches, while both songs and speech used similar (iv) pitch interval size and (v) timbral brightness. Exploratory analyses suggest that features vary along a “musi-linguistic” continuum when including instrumental melodies and recited lyrics. Our study provides strong empirical evidence of cross-cultural regularities in music and speech.
From the discussion:
“Discrete pitches or regular rhythmic patterns” are often considered defining features of music that distinguish it from speech [(83) and (25) block quote in the introduction], and our analyses confirmed this using a diverse cross-cultural sample. At the same time, we were surprised to find that the two features that differed most between song and speech were not pitch stability and rhythmic regularity but rather pitch height and temporal rate (Fig. 4). Pitch stability was the feature differing most between instrumental music and spoken description, but sung pitches were substantially less stable than instrumental ones. Given that the voice is the oldest and most universal instrument, we suggest that future theories of the evolution of musicality should focus more on explaining the differences we have identified in temporal rate and pitch height.
This research was discussed in an article in The New York Times by Carl Zimmer: Why Do People Make Music? (May 15, 2024).
The article opens:
Music baffled Charles Darwin. Mankind’s ability to produce and enjoy melodies, he wrote in 1874, “must be ranked amongst the most mysterious with which he is endowed.”
All human societies made music, and yet, for Darwin, it seemed to offer no advantage to our survival. He speculated that music evolved as a way to win over potential mates. Our “half-human ancestors,” as he called them, “aroused each other’s ardent passions during their courtship and rivalry.”
Other Victorian scientists were skeptical. William James brushed off Darwin’s idea, arguing that music is simply a byproduct of how our minds work — a “mere incidental peculiarity of the nervous system.”
That debate continues to this day. Some researchers are developing new evolutionary explanations for music. Others maintain that music is a cultural invention, like writing, that did not need natural selection to come into existence.
A bit later:
The team, which comprised musicologists, psychologists, linguists, evolutionary biologists and professional musicians, recorded songs in 55 languages, including Arabic, Balinese, Basque, Cherokee, Maori, Ukrainian and Yoruba. Across cultures, the researchers found, songs share certain features not found in speech, suggesting that Darwin might have been right: Despite its diversity today, music might have evolved in our distant ancestors.
“It shows us that there may be really something that is universal to all humans that cannot simply be explained by culture,” said Daniela Sammler, a neuroscientist at the Max Planck Institute for Empirical Aesthetics in Frankfurt who was not involved in the study.
Databases of songs collected by ethnomusicologists sometimes lack important details. It can also be hard for researchers to make sense of the structure and lyrics of songs from other cultures. Computers, likewise, are not very good at recognizing many features of music.
“We thought we should involve the insiders,” said Yuto Ozaki, who earned his doctorate at Keio University in Japan by helping to lead the project.
The article contains samples of music used in the research.