Music professor Michael Casey is working with a new kind of musical instrument: the human brain. He is the director of Bregman Media Labs at Dartmouth, where music, computer science and neuroscience meet.
“It’s a place where the ‘mad scientist’ combines with the ‘intrepid composer,’” Casey told me.
The lab is located in Hallgarten Hall, an academic building that’s so small it looks like a house, complete with a doorbell at the front door. For outsiders, it’s one of Dartmouth’s best kept secrets, a place that I’ve mindlessly walked by hundreds of times. But inside, there are people asking big questions: What is music for? What can music be?
Casey wants to entertain these questions in relation to technology. One of the projects he’s working on right now uses electroencephalography devices to measure electrical activity in the brain and convert that activity into sounds. The result is a “brain ensemble,” in which musicians sit together just as the members of a string quartet would. By relaxing or concentrating as a group, they try to synchronize their brain wave activity and control certain aspects of the sound.
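To get a rough sense of how a brain signal might become a musical one, here is a minimal Python sketch. The band ranges, the pitch mapping and the function names are my own illustration of the general idea, not a description of the lab’s actual system:

```python
import math

def band_power(samples, fs, lo, hi):
    """Estimate signal power in a frequency band with a naive DFT.

    Illustrative only -- a real EEG pipeline would use a proper
    spectral estimator such as Welch's method.
    """
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def alpha_to_pitch(samples, fs, base_midi=60, span=24):
    """Map relative alpha-band (8-12 Hz) power to a MIDI pitch.

    Alpha activity rises when we relax, so a relaxing "player"
    would drive the pitch upward in this toy mapping.
    """
    alpha = band_power(samples, fs, 8, 12)
    total = band_power(samples, fs, 1, 40) or 1.0
    ratio = min(alpha / total, 1.0)
    return base_midi + int(round(ratio * span))
```

A pure 10 Hz test signal, squarely in the alpha band, maps to the top of the pitch range, while a signal dominated by other frequencies maps lower; a group of players relaxing together would, in this sketch, converge on the same notes.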
“So, there may be a very new kind of musician, where the agility resides in precise thought control, rather than in motor skills,” Casey explained.
Why do we care so much? Why is it so exciting to push the boundaries of what music can be? Why do humans in every known culture enjoy music?
In the late 1990s, Steven Pinker, the prominent Harvard University cognitive scientist, declared that music is simply “auditory cheesecake.” It’s something we all like to indulge in, but it holds little practical value. Casey disagrees with Pinker. When asked about auditory cheesecake, he offered a couple of more promising theories.
Music helps us communicate. One idea, Casey explained, is that music and language evolved together. Evidence for this comes from the way intonation changes the meaning of our words. In mother-infant communication, for example, it is the prosody and melody of the voice that convey meaning, not the actual words. And inflection keeps shaping meaning throughout our lives. Casey gave me an example.
“When I was growing up, if my mother would say my name in harsh staccato tones, I would know I was in trouble,” Casey said, laughing. “It’s the same word, but it has very different meanings depending on how you deliver it — that is the music of language.”
Music also influences nonverbal communication by strengthening bonds between listeners or musicians. Daniel Shanker ’16, who co-wrote the anti-musical “Legally Drew,” said that music has become the main way he connects with people.
“Right before my dad dropped me off at college, he gave me two tickets to a Dispatch concert in Boston and told me to make a new friend,” Shanker said. “The friend I ended up inviting, someone I had jammed with once or twice by the time the concert rolled around, ended up becoming one of my best friends and current bandmates.”
Casey is interested in the notion that music creates an opportunity to socialize in difficult scenarios. Music is a common experience that bridges gaps across languages, generations and cultures.
“It may well be that the very propensity of the brain towards music is what draws us together as social beings and allows us to have societies in the first place,” Casey said. “It’s big stuff.”
Music also bridges gaps across disciplines. The students that come to Bregman Media Labs usually have strong backgrounds in music, but they also bring extensive knowledge from other fields like philosophy, math and neuroscience. Casey himself teaches classes in music, computer science and cognitive science.
“The students I work with are just like I was,” he told me. “I was always a double major and I still am.”
Something about music is inherently interdisciplinary. Autumn Chuang ’16, a double major in engineering and music, has been playing music for as long as she can remember. At Dartmouth, she is part of the Dartmouth Symphony Orchestra and the Subtleties, an all-female a cappella group. When asked whether music and engineering require similar types of thinking, she didn’t hesitate.
“Yes!” Chuang said. “Music is very analytical and requires you to be able to think and perform on your feet.”
Similarly, Shanker drew parallels between music, math and computer science. They all have rules, he explained. For example, programming languages have strict syntactic rules, and musical notes fall into scales. For Shanker, the exciting part is breaking the rules in unexpected ways.
“I think seeing the brilliant way certain theorems are used in the most exciting proofs is the same as, say, the way The Beatles’ ‘All You Need Is Love’ fits the verse into 29 beats,” Shanker said.
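Shanker’s point about rules is easy to see in code. A major scale, for instance, is just a fixed pattern of whole and half steps applied to any starting note; this short Python sketch (the function name is mine, for illustration) builds one in MIDI note numbers:

```python
def major_scale(root_midi):
    """Build a one-octave major scale from a root MIDI note.

    Every major scale follows the same interval pattern:
    whole, whole, half, whole, whole, whole, half.
    """
    steps = [2, 2, 1, 2, 2, 2, 1]  # semitones between successive notes
    notes = [root_midi]
    for step in steps:
        notes.append(notes[-1] + step)
    return notes
```

Starting from middle C (MIDI note 60), the pattern yields C major; start from any other note and the same rule yields that key’s scale. The interesting music, as Shanker says, comes from knowing when to step outside the pattern.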
Casey was excited to explain how music and computing overlap every time we open up Spotify or Pandora. He called it “everyday magic.”
“For us to be able to enjoy music in streaming format, someone had to figure out how to break the sound into little packets that could be sent over the internet and then reassembled on a device and turned back into sound,” Casey said.
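The process Casey describes can be sketched in a few lines of Python. This is a toy model, not how Spotify or Pandora actually works: the byte stream stands in for encoded audio, and numbered packets let the receiver restore the original order even if packets arrive shuffled by the network:

```python
import random

def packetize(data, size):
    """Split a byte stream into (sequence_number, chunk) packets."""
    count = (len(data) + size - 1) // size  # ceiling division
    return [(i, data[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(packets):
    """Restore the stream by sorting packets back into sequence order."""
    return b"".join(chunk for _, chunk in sorted(packets))

# Packets can arrive out of order, yet the stream survives intact.
audio = bytes(range(100))          # stand-in for encoded audio
packets = packetize(audio, 7)
random.shuffle(packets)            # simulate network reordering
restored = reassemble(packets)
```

Real streaming protocols add timestamps, buffering and loss recovery on top of this, but the core trick is the same: break the sound apart, label the pieces, and put them back together on the device.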
With the advent of new technologies, music has changed dramatically over the past half century. In 1967, Dartmouth became one of the first universities to open an electronic music studio, the Bregman studio. In the 1970s, a joint project of the Dartmouth music department and the Thayer School of Engineering produced the Synclavier, the world’s first commercial digital musical instrument. It became popular when artists like Michael Jackson and Sting began incorporating it into their music.
Now, according to Casey, maybe as much as 90 percent of the music we listen to has been created at least in part by electronic means. In Japan, for example, there are virtual pop artists like Hatsune Miku created entirely using Vocaloid, a singing voice synthesizer. They have become major superstars, and they’re not even human.
Technology clearly shapes how we consume and create music, but it might also help us understand how music works in our brains. Mimi Fiertz ’18 is learning about the interactions between music and the brain in her cognition class. She was fascinated by the complexity of how the brain processes auditory information.
“I think that we often view listening to music as a passive escape,” Fiertz said. “But there are so many things occurring in your auditory cortex as you listen, just to process pitch, for example.”
We listen to music for lots of different reasons — to study, to party, to work out and to fall asleep. We listen to music for relaxation and enjoyment. When we start to look at the effects of music on the mind and body, Casey explained, there’s a potential for music to promote well-being. He takes this idea a step further in his work with the relationship between music and medicine.
Barbara Jobst, a neurology professor who leads the Dartmouth-Hitchcock Medical Center epilepsy center, is a highly trained classical musician. She grew up in Bayreuth, Germany, where the annual Richard Wagner festival is held, spending her childhood playing piano and violin. Over dinner one night, she and Casey began to talk about the relationship between music and the electrical stimulation used to treat epilepsy. They wondered if listening to certain types of music could suppress some of the abnormal electrical activity in the brain, simply by kicking it into a different electrical rhythm.
“These are uncharted waters for the relationship between music and medicine,” Casey said. “But with the equipment that is available right now, we may be able to systematically investigate how music can have a positive medical effect on the human brain.”
Casey emphasized that we live in a very interesting time. He urges students in our generation to think carefully about our relationship with music and with all forms of communication. In an era of instant gratification, we have a world of music available to us at the touch of a button. It’s important that we pause to consciously experience it.
“You are part of a generation that may need to fight to reclaim authentic musical experiences,” Casey told me at the end of our interview.
The question remains: in a world of constant communication, can we appreciate and utilize the power of music to connect us?