Encoding wiggly air

June 14, 2019

Writing speech or a narrative is easy. We use words. But what about musical language? How do you put musical emotion down on paper?

In 1967 at Atlantic Studios in New York, two very different musical stars – at very different points in their careers – found themselves, briefly, aligned. On one side there was Aretha Franklin, riding high from the spring breakthrough of Respect and deep into recording her twelfth studio album, Lady Soul. On the other was then-Cream axeman Eric Clapton, still only 22 but already showing the prowess that would indelibly etch his name in the Big Book Of Important Rock Guitarists.

Clapton had been drafted in to play on the song Good to Me as I Am to You. Guitars were readied and the other players prepared themselves as the young Brit entered the room. A hush fell. But there was just one problem. Clapton looked at the musical notes set before him and saw nothing but meaningless squiggles. “I was so nervous,” wrote the guitarist, remembering it years later. “[Because] I couldn’t read music and they were all playing from music sheets on stands.”

Clapton, unsurprisingly perhaps, was able to play his part by ear and save any embarrassment. But here in miniature we have an illustration of one of the more fascinating dichotomies in music. Though notation is one of the most important inventions in the history of performed and recorded sound, it effectively sorts musicians into two distinct camps: those who can read sheet music easily and those who either can’t or find it a struggle. And, though you’d imagine a degree of musical literacy would be essential for any serious performer, Clapton is in pretty esteemed company when it comes to non-reading. Jimi Hendrix. Dave Grohl. Paul McCartney. Time after time, instinctive players have shown that a lack of formal training doesn’t automatically put a cap on your abilities or ambitions. And even Grammy-winning saxophonist Kirk Whalum – who can read music – cites it as something he has always had to work at.

“In the beginning I made a lot of mistakes and that was part of having to read a lot,” he says. “I would find myself in the back section of a big band and I would be struggling. To this day, it isn’t a forte of mine – but it is part of what we do.” 

Time travel

So just why – if it is not too simple a question – is the centuries-old practice of notation so important to modern music creation? How does the hierarchy between those who can read and those who can’t play out? And, given the rise of computerised production tools and alternative forms of notation, is the old system of notes and staves unnecessarily restrictive and exclusionary to those who can feel it, even if they can’t transcribe it?

Well, the first thing to acknowledge is notation’s immeasurable significance as a cultural breakthrough. Notation of some form has existed for as long as music has, but what we understand as written music began in the 11th century with an Italian monk, known as Guido of Arezzo, and a desire to bring some consistency to the way pieces of religious chant were performed. It was Guido of Arezzo who first devised the stave, and his other inventions are, remarkably, still in use today (he essentially came up with the ‘Do, Re, Mi’ scale).

But the ramifications were deeper than standardising the pitch of a few choirboys; notation was a way to capture the unseen and the ephemeral. It was a way – through its ability to translate sounds that were first born many years and many miles away – to almost travel through space and time.

As Thomas Forrest Kelly, medieval music expert and author of ‘Capturing Music’, has put it: “The people who developed this technology also prayed, sang, studied, read, and wrote. They travelled, they danced, they married… they got sick, they grew old, they lived in a world that is not our world but that was very real. To the extent that we can decode the music they wrote, we can hear the music they heard, and we can transport ourselves to a world that teaches us much about them, and even more about ourselves.”

Sometimes you have to let loose

So, yes, notation is important in terms of our connection to the musical past. But here and now, in the present, it is still – in all its dense complexity – the most efficient means of quickly relaying intricate information to musicians, particularly in the classical field.

Notation gives us a lot. But what do we lose in the process of setting things down and codifying that which is improvised, unpredictable and felt in the gut rather than the brain? Wasn’t Michael Jackson’s inability to read music – his reliance on tape recorders and hummed harmonies to build melodies – precisely what made him such an unconventional, ingenious songwriter? As Slash, another prodigiously gifted non-reader, has said: “I just try to make what’s in my head come out [of] my hands and in the guitar.”

Surprisingly, there is some evidence to back up this anecdotal theory. In 2008, a Johns Hopkins University study found that when jazz musicians improvised – as opposed to sight-reading – MRI readings showed their brains switching off areas linked to self-censoring and inhibition, allowing greater freedom of expression. Simply put, there are avenues of creativity that only open up when you are not squinting at a music stand.

The mention of jazz is key, too. Though – as Whalum’s words tell us – lightning-quick literacy is important in the genre, it is a skill that has to coexist with the loose-limbed free association that is also part of jazz’s DNA. The same is perhaps not true of classical composition (where composers who might not read at lightning speed are colloquially dismissed as ‘whistlers’ who have to hum their tunes to a transcriber), but these pecking orders between readers and non-readers are always there. “In jazz you can still get away with a lot without sight reading that well,” says Asaf Peres, musician, composer and founder of songwriting-analysis platform Top40 Theory. “But you still will kind of be looked down upon when you’re playing with saxophone and piano players who are great sight-readers.”

It’s worth noting, however, that this old binary – readers on one side, non-readers on the other – might be changing slightly. On one hand, traditional notation is being adapted into a modern, alternative system known as graphic notation: beautiful, idiosyncratic scores that contemporary composers use to transmit information about things like ‘tongue-ram’ and striking a particular part of the instrument for a percussive beat. On the other, DAWs – or digital audio workstations – are levelling the playing field for songwriters who don’t know their quavers from their clefs. “Mozart in the 1700s could only use a paper and pen to signify pitch and rhythm,” explains Peres. “But now, with modern technology, you have the tools to not just control those but also to surgically control the entire sound. Mozart’s final product was a score. Whereas what people produce now, in the most popular genres, is a recording.”

We have reached, you might hope, a point of equilibrium; a place where the musical past is respected and easily accessible but technology is also empowering people to create sound in new, exciting ways. But it raises a question. Will notation ever fall out of favour? Will there be a point when the Eric Clapton of the future doesn’t need to worry about potential embarrassment lying beyond the door of an unfamiliar studio? In short, no. The new doesn’t have to sweep away the old. “As long as the genres that require it are alive, traditional notation will be alive too,” says Peres, with a chuckle. “It’s kind of like evolution. We evolved from monkeys, but monkeys still exist. The old things may be lower in the hierarchy, but they won’t die out.”