NEA Arts Magazine

Decoding Music's Resonance

A Look at Researcher and Performer Parag Chordia


Man playing the stringed instrument, the sarod.

Parag Chordia performing on the sarod. Photo by Preerna Gupta

Parag Chordia has spent much of his life thinking about music, first as a performer and researcher, and now, as an app developer. This combination has led him to pursue questions that most listeners—and even most performers—simply take for granted. “Most of us are musicians or deeply touched by music,” said Chordia of the researchers in his field. “And we also have this kind of engineering or scientific drive to understand why.”

Music became a central part of Chordia’s life during high school in South Salem, New York, when he attended his first Indian classical music concert with his father. He was so moved that by college, he’d decided to pursue Indian classical music performance, and took a year off from school to live in India and study the sarod, a fretless, stringed instrument. (He eventually returned to school, receiving a BS in mathematics from Yale and a PhD in artificial intelligence and music from Stanford University.)

Years later—and after a decade of studying with renowned sarod teacher Pandit Buddhadev Das Gupta—Chordia has become an experienced performer. What’s more, his intense connection to music has blossomed into a career off-stage as well. Prior to taking on his current role as chief scientist of the music app developer Smule last spring, Chordia founded and directed the Music Intelligence Group at the Georgia Institute of Technology. His work, partly funded by the National Science Foundation, has focused on a number of questions: “How is sound produced, how can it be manipulated—and, also, how is it perceived?” Chordia said. “How does the brain organize sound, and why does it elicit the types of responses and emotions that it does?”

At Georgia Tech, Chordia and his colleagues wanted to better understand the connection between music and the voice. “We said, okay, when a person is happy, their speech sounds different than when they’re sad,” he explained. A sad person speaks softly, slowly, often mumbles, and has a darker tone; a happy person speaks more quickly and brightly. “We started to wonder, is music bootstrapping off of the same processes? In other words, are those fundamental acoustic cues being used to signify happiness and sadness in music?”

Chordia’s team created an artificial melody, then shifted it to sound either slightly higher or slightly lower. One group of participants heard the higher melody, followed by the original; the second group heard the lower melody, followed by the original—so the second melody both groups heard was exactly the same. The surprising results: the participants experienced that identical melody differently. Those in the first group described the original melody as sad, because it was lower than the first sample they heard, while those in the second group described it as happy, because it was higher than the first sample they heard. The upshot was that pitch does confer emotion in music in a way that mimics our response to vocal expression. This is, Chordia explained, why a tremolo in music registers as intense: it reminds us of the way an angry, adrenaline-spurred voice shakes. Indian classical music’s overlap with human vocal properties is also part of what makes it “so emotive and expressive,” Chordia said.
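The arithmetic behind stimuli like these is simple: in equal temperament, raising a melody by n semitones multiplies every note’s frequency by 2^(n/12). A minimal sketch of that transformation (the four-note melody and two-semitone shift here are hypothetical illustrations, not the study’s actual stimuli):

```python
# Transpose a melody by a whole number of semitones.
# In equal temperament, each semitone multiplies frequency by 2**(1/12).

SEMITONE = 2 ** (1 / 12)

def shift_melody(freqs_hz, n_semitones):
    """Return the melody with every note raised (or lowered) by n semitones."""
    return [f * SEMITONE ** n_semitones for f in freqs_hz]

# A hypothetical four-note melody (frequencies in Hz): A4, C5, E5, A5
original = [440.0, 523.25, 659.25, 880.0]

higher = shift_melody(original, +2)   # the version one group heard first
lower = shift_melody(original, -2)    # the version the other group heard first
```

Because both groups then heard `original`, any difference in how they described it reflects the contrast with what came before, not the melody itself.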

The study’s other takeaway is that our experience of music is relative to what we’ve heard before—our perception of music isn’t static. Neither is music itself. Chordia explained that music strikes a remarkable balance between predictability and novelty. Humans are simultaneously attracted to both elements. On the one hand, evolutionarily speaking, there is a reward for accurately predicting what’s to come: if we can anticipate threats, we’re in better shape than if we can’t. On the other hand, the drive toward novelty is vital: if we never sought out new sources of food or new social connections, we’d be less successful. Correspondingly, our reward systems kick in—that is, we experience pleasure—in both instances.

Scan of brains showing different areas highlighted.

These fMRI images show areas of the fronto-parietal cortex that responded in similar ways across study participants as they listened to three variations of a symphony. Synchronization was strongest when participants listened to the original, unaltered symphony. Image courtesy of Parag Chordia

“I think what’s really interesting about music is that it plays off of both these things,” said Chordia, who has studied this phenomenon through computational and statistical modeling of music’s structure. “One of the ways that we describe music is ‘safe thrills.’ It’s like a roller coaster. On the one hand, you know nothing really bad is going to happen, but there are all these pleasant surprises along the way. A lot of music is like that: you set up a pattern and expectation, and then you play with it.” That might mean slightly varying the drumbeat, changing the chord pattern, or adding or removing instruments. “Those little surprises, it turns out, can be very pleasurable.” They result in what Chordia calls a “supercharged stimulus.”
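One common way to model musical expectation statistically—a general technique, not necessarily Chordia’s exact method—is to estimate how probable each note is given the note before it, and measure “surprise” as the negative log of that probability. In this toy sketch (the note sequence is invented), a melody repeats a pattern and then deviates on its final note, and the deviant note scores as the most surprising:

```python
import math
from collections import Counter, defaultdict

def bigram_surprisal(sequence):
    """Surprisal (-log2 probability) of each event given the previous one,
    with probabilities estimated from the sequence itself (add-one smoothed)."""
    vocab = sorted(set(sequence))
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    surprisals = []
    for prev, nxt in zip(sequence, sequence[1:]):
        total = sum(counts[prev].values()) + len(vocab)
        p = (counts[prev][nxt] + 1) / total
        surprisals.append(-math.log2(p))
    return surprisals

# A repetitive pattern with one deviation at the end: the final note
# breaks the established C-E-G pattern.
melody = ["C", "E", "G", "C", "E", "G", "C", "E", "A"]
s = bigram_surprisal(melody)
```

The well-predicted transitions get low surprisal; the pattern-breaking final note gets the highest—a small-scale version of the “pleasant surprise” the passage describes.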

The surprises aren’t reserved for the first time we hear a song, either. “If you play a segment of music ten times,” Chordia said, “at points of high surprise, there’s a distinct pattern you can see in the brain, and what’s interesting is that that low-level surprise doesn’t disappear.” There’s some habituation, but a piece of music can give us that little jolt of surprised pleasure even if we know it very well.

As a performer, Chordia isn’t just interested in how we perceive music. His research also investigates what happens to us while we play it. In one study, Chordia and his colleagues hooked trained musicians up to an EEG machine, which measures electrical activity in the brain, while they played simple, familiar songs, and then improvised. Based on preliminary data, it appeared that when they improvised, certain areas of their brains actually muted. That is, rather than requiring more activity across the brain, a highly creative state benefits from fewer active areas, so that more disparate regions can communicate with each other and create unexpected new insights. (This is perhaps one reason, Chordia suggested, that alcohol and music often go hand-in-hand.)

But making music doesn’t just enable new kinds of communication within our brains; it also enables an incredible level of synchronicity between people. If you’ve ever sung in a chorus, been to a concert, or played in a band, you probably recall the camaraderie. Chordia and his colleagues wanted to figure out whether there was a neurological basis for this sensation.

Man with wires connected to his face and head.

One of the trained musicians whose brain activity was scanned while playing familiar songs and while improvising. Photo by Parag Chordia

Using fMRI scans, which measure changes in neural blood flow, the study revealed that people who listened to the same piece of music had activity in similar areas of the brain at the same time. “If you think about it, this is pretty amazing,” Chordia said, pointing out that an fMRI of two people talking or writing or gazing out the window together wouldn’t yield this kind of coordinated brain activity. “I think our powerful intuition that it is a shared experience is true.”

In recent years, Chordia’s interest in the roles of performer and audience, and how the two overlap, has led to his latest endeavor: creating apps that allow listeners to become performers.

Chordia’s main missions in his current role at Smule are encouraging people who don’t think of themselves as musicians to sing and play anyhow, and enabling people to connect with each other through music. He aims to accomplish both using smartphones: “How can we create a 21st-century folk music through technology?” Yes, there’s the irony of fighting isolation via the devices that enable it. But in another sense, this is a natural next step in musical evolution: every instrument is a kind of technology. Smartphones are simply a digital kind.

LaDiDa, one of Smule’s apps that grew out of Chordia’s academic research, creates background music for users’ vocal samples, a sort of reverse karaoke. Songify turns speech into a song, while AutoRap turns speech into rapping. Creating each involved extensive research into the fundamentals of how music works (what is rap, exactly, and how can a computer create it?). The broad message is that everyone can sing—you included.

Other apps help advance the collaborative-music piece of Smule’s mission. Sing! Karaoke allows users to perform karaoke with friends logged in on smartphones far away. On Guitar!, users can create the background music for others’ vocal samples.

Given Chordia’s academic discoveries, as well as his history of playing Indian classical music, his passion for reviving shared music-making experiences isn’t surprising. “Playing classical music is less about performing and more about immersing yourself in it,” Chordia said.

But despite all his study of music, both onstage and in the lab, there are some aspects of this emotional resonance that may never be fully understood. “At the most fundamental level, my research really stems from this question: Why are we as humans so attracted to musical sounds? What is it about music that moves us? Why does this abstract pattern of sonic activity give rise to some of our most cherished human emotions? It’s really weird, actually, if you think about it.”

Jessica Gross is a freelance writer in New York City. She has contributed to the New York Times Magazine, the Paris Review Daily, Kirkus, and other publications.