
Elizabeth Margulis, author of books about music and your brain

By Elizabeth Hellmuth Margulis, Aeon

You might not be a virtuoso, but you have remarkable musical abilities. You just don’t know about them yet.

Twenty years ago, a pair of psychologists hooked up a shoe to a computer. They were trying to teach it to tap in time with a national anthem. However, the job was proving much tougher than anticipated. Just moving to beat-dominated music, they found, required a grasp of tonal organisation and musical structure that seemed beyond the reach of an ordinary person without special training. But how could that be? Any partygoer can fake a smile, reach for a cheese cube and tap her heel to an unfamiliar song without so much as a thought. Yet when the guy she’s been chatting with tells her that he’s a musician, she might reply: ‘Music? I don’t know anything about that.’

Maybe you’ve heard a variation on this theme: ‘I can’t carry a tune to save my life.’ Or: ‘I don’t have a musical bone in my body.’ Most of us end up making music publicly just a few times a year, when it’s someone’s birthday and the cake comes out. Privately, it’s a different story – we belt out tunes in the shower and create elaborate rhythm tracks on our steering wheel. But when we think about musical expertise, we tend to imagine professionals who specialise in performance, people we’d pay to hear. As for the rest of us, our bumbling, private efforts — rather than illustrating that we share an irresistible impulse to make music — seem only to demonstrate that we lack some essential musical capacity.

But the more psychologists investigate musicality, the more it seems that nearly all of us are musical experts, in quite a startling sense. The difference between a virtuoso performer and an ordinary music fan is much smaller than the gulf between that fan and someone with no musical knowledge at all. What’s more, a lot of the most interesting and substantial elements of musicality are things that we (nearly) all share. We aren’t talking about instinctive, inborn universals here. Our musical knowledge is learned, the product of long experience; maybe not years spent over an instrument, but a lifetime spent absorbing music from the open window of every passing car.

So why don’t we realise how much we know? And what does that hidden mass of knowledge tell us about the nature of music itself? The answers to these questions are just starting to fall into place.

The first is relatively simple. Much of our knowledge about music is implicit: it only emerges in behaviours that seem effortless, like clapping along to a beat or experiencing chills at the entry of a certain chord. And while we might not give a thought to the hidden cognitions that made these feats possible, psychologists and neuroscientists have begun to peek under the hood to discover just how much expertise these basic skills rely on. What they are discovering is that musicality emerges in ways that parallel the development of language. In particular, the capacity to respond to music and the ability to learn language rest upon an amazing piece of statistical machinery, one that keeps whirring away in the background of our minds, hidden from view.

Consider the situation of infants learning to segment the speech stream – that is, learning to break up the continuous babble around them into individual words. You can’t ask babies if they know where one word stops and a new one begins, but you can see this knowledge emerge in their responses to the world around them. They might, for example, start to shake their heads when you ask if they’d like squash.

To investigate how this kind of verbal knowledge takes shape, in 1996 the psychologists Jenny Saffran, Richard Aslin and Elissa Newport, then all at the University of Rochester in New York, came up with an ingenious experiment. They played infants strings of nonsense syllables – sound-sequences such as bidakupado. This stream of syllables was organised according to strict rules: da followed bi 100 per cent of the time, for example, but pa followed ku only a third of the time. These low-probability transitions were the only boundaries between ‘words’. There were no pauses or other distinguishing features to demarcate the units of sound.
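To see how transition probabilities alone can mark word boundaries, here is a minimal Python sketch. The four made-up six-letter ‘words’ and the stream-building procedure are illustrative stand-ins, not the actual stimuli from the 1996 study; the point is only that within-word transitions approach certainty while boundary transitions sit near one in three.

```python
import random
from collections import Counter, defaultdict

# Illustrative pseudolanguage: four made-up six-letter 'words', each built from
# three two-letter syllables (loosely modelled on the design, not the real stimuli).
words = ["bidaku", "padoti", "golabu", "tupiro"]

random.seed(0)
prev = None
stream_words = []
for _ in range(300):
    w = random.choice([x for x in words if x != prev])  # no immediate repeats
    stream_words.append(w)
    prev = w
stream = "".join(stream_words)                           # continuous, pause-free stream
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

# Tally how often each syllable is followed by each other syllable.
follows = defaultdict(Counter)
for a, b in zip(syllables, syllables[1:]):
    follows[a][b] += 1

def transition_prob(a, b):
    """Estimated probability that syllable b follows syllable a in the stream."""
    total = sum(follows[a].values())
    return follows[a][b] / total if total else 0.0

# Within-word transitions (bi -> da, da -> ku) come out near 1.0;
# transitions across word boundaries (ku -> pa, ku -> go) come out near 1/3.
for a, b in [("bi", "da"), ("da", "ku"), ("ku", "pa"), ("ku", "go")]:
    print(f"P({b} | {a}) = {transition_prob(a, b):.2f}")
```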

It has long been observed that eight-month-old infants attend reliably longer to stimuli that are new to them. The researchers ran a test that took advantage of this peculiar fact. After the babies had been exposed to this pseudolanguage for an extended period of time, the psychologists measured how long babies spent turning their heads toward three-syllable units drawn from the stream. The babies tended to listen only briefly to ‘words’ (units within which the probability of each syllabic transition had been 100 per cent) but to stare curiously in the direction of the ‘non-words’ (that is, units which included low-probability transitions). And since absolutely the only thing distinguishing words from non-words within this onslaught of gibberish was the transition probabilities from syllable to syllable, the infants’ reactions revealed that they had absorbed the statistical properties of the language.

This ability to track statistics about our environment without knowing we’re doing so turns out to be a general feature of human cognition. It is called statistical learning, and it is thought to underlie our earliest ability to understand what combinations of syllables count as words in the complex linguistic environment that surrounds us during infancy. What’s more, something similar seems to happen with music.

In 1999, the same authors, working with their colleague Elizabeth Johnson, demonstrated that infants and adults alike track the statistical properties of tone sequences. In other words, you don’t have to play the guitar or study music theory to build up a nuanced sense of which notes tend to follow which other notes in a particular repertoire: simply being exposed to music is enough. And just as a baby cannot describe her verbal learning process, only revealing her achievement by frowning at the word squash, the adult who has used statistical learning to make sense of music will reveal her knowledge expressively, clenching her teeth when a particularly fraught chord arises and relaxing when it resolves. She has acquired a deep, unconscious understanding of how chords relate to one another.

It’s easy to test out the basics of this acquired knowledge on your friends. Play someone a simple major scale, Do-Re-Mi-Fa-Sol-La-Ti, but withhold the final Do and watch even the most avowed musical ignoramus start to squirm or even finish the scale for you. Living in a culture where most music is built on this scale is enough to develop what seems less like the knowledge and more like the feeling that this Ti must resolve to a Do.
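If you want to try the demonstration yourself, here is a minimal Python sketch (standard library only) that writes a short WAV file of a major scale with the final Do withheld. The starting note (middle C), the half-second note length and the sine-wave timbre are arbitrary illustrative choices, not anything prescribed by the studies discussed here.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION = 0.5  # seconds per note

def freq(midi_note):
    """Equal-temperament frequency for a MIDI note number (A4 = 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# C major scale from middle C (MIDI 60): Do Re Mi Fa Sol La Ti -- the final Do (72) is withheld.
scale = [60, 62, 64, 65, 67, 69, 71]

frames = bytearray()
for note in scale:
    f = freq(note)
    for i in range(int(SAMPLE_RATE * DURATION)):
        sample = int(0.4 * 32767 * math.sin(2 * math.pi * f * i / SAMPLE_RATE))
        frames += struct.pack("<h", sample)            # 16-bit signed samples

with wave.open("unresolved_scale.wav", "w") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```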

Psychologists such as Emmanuel Bigand of the University of Burgundy in France and Carol Lynne Krumhansl of Cornell University in New York have used more formal methods to demonstrate implicit knowledge of tonal structure. In experiments that asked people to rate how well individual tones fitted with an established context, people without any training demonstrated a robust feel for pitch that seemed to indicate a complex understanding of tonal theory. That might surprise most music majors at US universities, who often don’t learn to analyse and describe the tonal system until they get there, and struggle with it then. Yet what’s difficult is not understanding the tonal system itself – it’s making this knowledge explicit. We all know the basics of how pitches relate to each other in Western tonal systems; we simply don’t know that we know.

Studies in my lab at the University of Arkansas have shown that people without any special training can even hear a pause in music as either tense or relaxed, short or long, depending on the position of the preceding sounds within the governing tonality. In other words, our implicit understanding of tonal properties can infuse even moments of silence with musical power. And it’s worth emphasising that these seemingly natural responses arise after years of exposure to tonal music.

When people grow up in places where music is constructed out of different scales, they acquire similarly natural responses to quite different musical elements. Research I’ve done with Patrick Wong of Northwestern University in Illinois has demonstrated that people raised in households where they listen to music using different tonal systems (both Indian classical and Western classical music, for example) acquire a convincing kind of bi-musicality, without having played a note on a sitar or a violin. So strong is our proclivity for making sense of sound that mere listening is enough to build a deeply internalised mastery of the basic materials of whatever music surrounds us.

Other, subtler musical accomplishments also seem to be widespread in the population. By definition, hearing tonally means hearing pitches in reference to a central governing pitch, the tonic. Your fellow partygoers might start a round of Happy Birthday on one pitch this weekend and another pitch the next, and the reason both renditions sound like the same song is that each pitch is heard most saliently not in terms of its particular frequency, but in terms of how it relates to the pitches around it. As long as the pattern is the same, it doesn’t matter if the individual notes are different. The capacity to hear these shared patterns is called relative pitch.
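A quick way to see what relative pitch is tracking: write the tune as note numbers and look at the successive differences. The sketch below encodes the opening of Happy Birthday in MIDI note numbers (one plausible transcription, offered only for illustration) and shows that transposition changes every pitch while leaving the interval pattern untouched.

```python
# Opening of 'Happy Birthday' ('Hap-py birth-day to you') as MIDI note numbers,
# in one plausible transcription starting on C.
melody = [60, 60, 62, 60, 65, 64]
transposed = [n + 5 for n in melody]   # the same tune begun five semitones higher

def intervals(notes):
    """Successive pitch differences in semitones: the pattern relative pitch tracks."""
    return [b - a for a, b in zip(notes, notes[1:])]

print(intervals(melody))       # [0, 2, -2, 5, -1]
print(intervals(transposed))   # identical: the pattern, not the absolute pitches, is the song
```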

Relative pitch is a commonplace skill, one that develops naturally on exposure to the ordinary musical environment. People tend to invest more prestige in absolute pitch, because it’s rare. Shared by approximately 1 in 10,000 people, absolute pitch is the ability to recognise not a note’s relations to its neighbours, but its approximate acoustic frequency. People with absolute or ‘perfect’ pitch can tell you that your vacuum cleaner buzzes on an F# or your doorbell starts ringing on a B. This can seem prodigious. And yet it turns out not to be so far from what the rest of us can do normally.

A number of studies have shown that many of the other 9,999 people retain some vestige of absolute pitch. The psychologists Andrea Halpern of Bucknell University in Pennsylvania and Daniel Levitin of McGill University in Quebec both independently demonstrated that people without special training tend to start familiar songs on or very near the correct note. When people start humming Hotel California, for example, they do it at pretty much the same pitch as the Eagles. Similarly, E Glenn Schellenberg and Sandra Trehub, psychologists at the University of Toronto, have shown that people without special training can distinguish the original versions of familiar TV theme songs from versions that have been transposed to start on a different pitch. ‘The Siiiiiimp-sons’ just doesn’t sound right any other way.


Another vastly undervalued skill is just tapping along to a tune. When in 1994 Peter Desain of Radboud University in the Netherlands and Henkjan Honing of the University of Amsterdam hooked up a shoe to a computer, they found what many studies since have demonstrated: that to get a computer to find the beat in even something as plodding and steady as most national anthems you have to teach it some pretty sophisticated music theory.

For example, it has to recognise when phrases start and stop, and which count as repetitions of others, and it has to understand which pitches are more and less stable in the prevailing tonal context. Beats, which seem so real and evident when we’re tapping them out on the steering wheel or stomping them out on the dance floor, are just not physically present in any straightforward way in the acoustic signal.
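For contrast, here is the kind of purely signal-level shortcut a computer might try: autocorrelate a pattern of note onsets and call the best-matching lag the beat. This is emphatically not Desain and Honing’s model, and the rhythm below is invented for illustration; the point is that the statistically best period need not be the beat a listener actually feels.

```python
# A naive beat estimator: shift a binary onset pattern against itself and take the lag
# with the strongest overlap as the beat period. Real music defeats this kind of
# strategy, because beats need not coincide with onsets at all.

def autocorrelation(onsets, lag):
    """How strongly the onset pattern lines up with itself shifted by `lag` steps."""
    return sum(a * b for a, b in zip(onsets, onsets[lag:]))

# 16 time steps; 1 marks a note onset. A syncopated pattern: some beats carry no onset.
onsets = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0]

scores = {lag: autocorrelation(onsets, lag) for lag in range(2, 9)}
best = max(scores, key=scores.get)
print(scores)
print(f"best lag = {best} steps")  # a plausible period, but not necessarily the felt beat
```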

So, the next time you’re tempted to claim you don’t know anything about music, pause to consider the substantial expertise you’ve acquired simply through a lifetime of exposure. Think about the many ways this knowledge manifests itself: in your ability to pick out a playlist, or get pumped up by a favourite gym song, or clap along at a performance. Just as you can hold your own in a conversation even if you don’t know how to diagram a sentence, you have an implicit understanding of music even if you don’t know a submediant from a subdominant.

In fact, for all its remarkable power, music is in good company here. Many of our most fundamental behaviours and modes of understanding are governed by similarly implicit processes. We don’t know how we come to like certain people more than others; we don’t know how we develop a sense of the goals that define our lives; we don’t know why we fall in love; yet in the very act of making these choices we reveal the effects of a host of subterranean mental processes. The fact that these responses seem so natural and normal actually speaks to their strength and universality.

When we acknowledge how, just by living and listening, we have all acquired deep musical knowledge, we must also recognise that music is not the special purview of professionals. Rather, music professionals owe their existence to the fact that we, too, are musical. Without that profound shared understanding, music would have no power to move us.

Elizabeth Hellmuth Margulis is director of the music cognition lab at the University of Arkansas, a trained concert pianist, and the author of On Repeat: How Music Plays the Mind (2013).

Read the whole essay here – very informative and interesting:
https://getpocket.com/explore/item/the-music-in-you
