The Color of Music


Vanessa Wamsley is a journalist who writes science, nature and education stories. After years of misunderstanding science, she brings a keen sense of wonder to her observations of the world. Vanessa is also a content and SEO editor intern at BiohacksBlog.com and studies science and medical writing in the Johns Hopkins University Master's in Science Writing program.

A former teacher, Vanessa penned an education column for WomanScope News Magazine. She worked as a reporter, photographer, and copy editor for a weekly paper, The VOICE News, and she wrote obituaries for the Lincoln Journal-Star. Originally from Nebraska, Vanessa lives in Reston, Virginia, but she has a nomadic lifestyle. In the last 10 years, Vanessa has called Texas, Alabama, Illinois, California, and Maryland home.

The Color of Music: Research on timbre may improve cochlear implants and searches in musical databases

Timbre, in contrast to the other fundamental elements of sound, is defined by what it is not. It is not the pitch, loudness, intensity, or duration of sound. It is everything else. Timbre is the color of sound. Every instrument, for example, has its own shade, its own nuances. Obviously, the colors are not visual. A clarinet does not sound green or purple, but a clarinet does have a unique auditory texture or color that psychoacousticians, scientists who study sound perception, call timbre.

Despite analysis going back to the 1800s, scientists have been unable to pinpoint exactly where timbre resides in the sound wave. However, Dr. Mounya Elhilali, an assistant professor of electrical and computer engineering at Johns Hopkins University, has created a computer model that may mimic how the brain analyzes timbre.

“Choose any two instruments, make them play the exact same note with the same level, and they still sound different. This is timbre,” Elhilali says. “The problem with the study of timbre is that we don’t know where in the sound a violin leaves its signature.”

Elhilali says unlocking timbre could open up new technologies for improving hearing devices like cochlear implants. She also thinks her research could be applied to improve music services like Spotify and Pandora. Elhilali’s research, published in PLOS Computational Biology, set out to pinpoint the biological basis of timbre perception. Her team created a computer model of how the brain analyzes music to perceive timbre.

In Elhilali’s model, each neuron in the cortex, the outer layer of the brain, is like a camera lens. When the ear hears music, the neurons work as if millions of tiny lenses, each a different size with a different resolution, collaborate to analyze the sound and discern what instrument is playing. Each neuron-lens is one among billions, focusing only on its own small patch and illuminating a different part of the same picture. Some neurons prefer to respond to violins while others respond to trumpets. When a violin plays, the neurons in one area light up. When a trumpet plays, a different area lights up. When two instruments sound similar, like a violin and a cello, the lit-up areas may overlap.

All of the lenses are linked to one another, constantly communicating a full image of the musical instruments being played and lighting up or going dark in turn as the song plays.

“The close ones talk more, the far away ones talk less, but they all talk together because they all together are what allow you to hear sound,”

says Elhilali. The approach is called multi-resolution analysis.

Elhilali’s team built a smaller-scale computer system with only a few thousand model neurons and asked it to discern instruments using her multi-resolution approach. It worked. Now the team thinks the brain analyzes instruments in the same way.

Previous research on timbre looked at the frequency of the sound, a one-dimensional approach.

“We’re taking the sound and analyzing it through a zillion different lenses. Once you analyze the sound in this way, you take it from one dimension to a complicated space,”

Elhilali says.
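
To get a feel for what multi-resolution analysis means in practice, here is a minimal sketch in Python. It illustrates the general idea only, not Elhilali’s published model: the same tone is analyzed through several short-time Fourier “lenses” with different window lengths, and the sample rate, tone, and window sizes are assumptions made for the example.

```python
# A minimal sketch of the multi-resolution idea, not Elhilali's published model.
# The same sound is analyzed through several "lenses": short-time Fourier
# transforms with different window lengths. The tone and window sizes below
# are illustrative assumptions.
import numpy as np
from scipy.signal import stft

fs = 16000                              # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)      # stand-in for a recorded instrument note

for nperseg in (64, 256, 1024, 4096):   # "lenses" of different sizes
    freqs, times, Z = stft(tone, fs=fs, nperseg=nperseg)
    view = np.abs(Z)                    # magnitude spectrogram at this resolution
    print(f"window {nperseg:4d}: {view.shape[0]} frequency bins x {view.shape[1]} time frames")

# Short windows resolve fast temporal detail (attacks, vibrato); long windows
# resolve fine harmonic detail. Stacking these views turns a one-dimensional
# waveform into the richer, multi-dimensional space described above.
```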

Elhilali’s model has applications in both medicine and entertainment. At Johns Hopkins, Elhilali has worked with Dr. Charles Limb, an associate professor at the Johns Hopkins School of Medicine and a faculty member at the Peabody Conservatory of Music.

One aspect of Limb’s medical practice and research is how people with cochlear implants perceive music. A cochlear implant is a complex device that can provide some sense of sound to a person who is deaf or severely hard of hearing. The device detects sounds through a microphone and speech processor worn on the ear like a tiny Bluetooth earpiece. A wire runs from the earpiece to a transmitter, which sends signals to a receiver implanted just above and behind the ear. Another wire runs from the receiver into the inner ear, where it connects to an electrode array that a surgeon like Limb delicately inserts into the cochlea. Signals from the electrodes bypass the mechanics of the ear altogether and directly stimulate the auditory nerve, which carries the messages to the cortex.
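
To make that signal path concrete, here is a simplified sketch loosely modeled on the envelope-based strategies many implant processors use. It is a hedged illustration, not the workings of any real device, and the band edges, channel count, and toy input are assumptions.

```python
# A simplified, assumption-laden sketch of cochlear-implant-style processing,
# loosely modeled on common envelope-based stimulation strategies (not any
# specific device): split sound into a few frequency bands, extract each
# band's envelope, and reduce it to a per-channel stimulation level.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)  # toy input

band_edges = [200, 400, 800, 1600, 3200, 6400]   # Hz; channel count is illustrative

def band_envelope(band, fs, cutoff=300.0):
    """Rectify a band-limited signal and low-pass it to get its slow envelope."""
    sos = butter(2, cutoff, btype="low", fs=fs, output="sos")
    return sosfilt(sos, np.abs(band))

levels = []
for low, high in zip(band_edges[:-1], band_edges[1:]):
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, audio)
    levels.append(band_envelope(band, fs).mean())  # crude per-electrode level

print("per-channel stimulation levels:", np.round(levels, 4))
# Only a handful of coarse envelopes reach the auditory nerve, which is one
# reason fine timbral detail is so hard to convey through an implant.
```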

But hearing through a cochlear implant is not the same as normal hearing, and much of Limb’s work focuses on helping cochlear implant users hear music. In his research, he trained a deaf cat fitted with a cochlear implant to respond to a particular bugle call for food. But when he tried a similar technique with people, he found that they had a very difficult time telling instruments apart.

“Timbre perception in cochlear implant users is such a difficult problem,”

Limb says.

“Frankly, cochlear implants are essentially speech processing devices. They were not meant for music.”

According to Limb, Elhilali’s model of timbre perception could be used to refine cochlear implants so that the signals they send to the brain let the neuron-lenses analyze timbre more effectively.

“Music is the hardest thing in the world to hear,” he says. “It’s the pinnacle of hearing. Likewise, if you take someone who’s deaf, and you get them to hear music, it’s like going from nothing to everything. It’s the most heroic effort you could do.”

Elhilali sees another application for her work in the music industry. Music information retrieval (MIR) deals with digitally extracting and using musical information. Audio engineers use MIR to design programs that separate a drumbeat from an electric guitar and the vocals in a song. They create programs that transcribe a song from performance to written sheet music. They also try to find ways to categorize music to make it more searchable.

Jay LeBoeuf is the Strategic Technology Director at iZotope, Inc., a company in Cambridge, Mass., that develops audio technologies. He also lectures on audio and machine learning at Stanford University. LeBoeuf says there is currently no good way of searching through massive collections of sound, music, or media without first compiling massive collections of data about the music. LeBoeuf calls the data collection “laborious and uninspiring.” The data might include the duration and pitch of notes, the rhythm in the song, or what instruments are played. Most systems that filter music, like Pandora or Spotify, also use similarities between users’ listening habits to suggest new music.

But LeBoeuf says that using timbre would add a new dimension to searching for music that could increase the value of existing audio collections. Instead of an audio technician painstakingly gleaning raw data that can then be catalogued and searched, a computer model could listen to the music and sort it into a searchable database. Elhilali’s model of timbre could drastically improve machine hearing, a computer’s ability to listen to and understand sound.
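
As a rough sketch of what searching by timbre could look like, a program can reduce each clip to a few spectral descriptors and rank a collection by similarity to a query clip. This is a toy illustration of the general idea, not Elhilali’s model or any existing product, and every number and name below is an assumption for the example.

```python
# A minimal sketch of timbre-based search over an audio collection -- a toy
# illustration of the general idea, not Elhilali's model or any product's API.
# Each clip is reduced to two coarse spectral descriptors (centroid and
# bandwidth), and the collection is ranked by distance to a query clip.
import numpy as np

def timbre_descriptor(clip, fs):
    """Coarse timbre summary: spectral centroid and bandwidth of the clip."""
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.fft.rfftfreq(len(clip), 1 / fs)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    bandwidth = np.sqrt(np.sum((freqs - centroid) ** 2 * spectrum) / np.sum(spectrum))
    return np.array([centroid, bandwidth])

def rank_by_similarity(query, collection, fs):
    """Return indices of the collection, most timbrally similar first."""
    q = timbre_descriptor(query, fs)
    distances = [np.linalg.norm(q - timbre_descriptor(c, fs)) for c in collection]
    return np.argsort(distances)

# Toy usage: a bright query clip matches the brighter of two synthetic clips.
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
dull = np.sin(2 * np.pi * 220 * t)
bright = dull + 0.8 * np.sin(2 * np.pi * 3000 * t)
query = dull + 0.7 * np.sin(2 * np.pi * 2800 * t)
print(rank_by_similarity(query, [dull, bright], fs))   # expected: [1 0]
```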

“Musicians could find the loops, sounds and grooves they are seeking quickly and creatively,”

LeBoeuf says.

“This capability would allow content creators, owners, and consumers a radically better way of interacting with content.”

George Tzanetakis, an associate professor in the computer science department at the University of Victoria, agrees that Elhilali’s model could be useful to organize and search large collections of sound. However, he is less enthusiastic about its applications for computer software.

“I can’t think of any specific software that would benefit from something like this model. Most MIR companies do not sell software but rather provide data analysis services for other companies,”

he says.

So whether Elhilali’s model is used to improve a cochlear implant device or make digital music collections easier to search, she has given science a new understanding of the color of music.

List of Sources

  1. Mounya Elhilali, personal interview, February 6, 2014.
  2. Patil, K., Pressnitzer, D., Shamma, S., & Elhilali, M. (2012). Music in Our Ears: The Biological Bases of Musical Timbre Perception. PLOS Computational Biology, 8(11), 1–16. doi:10.1371/journal.pcbi.1002759
  3. Charles Limb, Music, Mind, Meaning conference presentation, January 31, 2014.
  4. Jay LeBoeuf, email interview, March 6, 2014.
  5. George Tzanetakis, email interview, March 6, 2014.
