How can a wearable device train your brain to become a better guitarist?
According to the ‘Making Music’ research report by the ABRSM, the UK’s largest music education body, 17.2 million adults in the UK currently play an instrument. However, only 5% currently have any form of lessons.
If you are a guitar player, like me, this probably seems completely reasonable. Though a few take lessons, the vast majority have enjoyed the benefits of the millions of great tutorials available online, especially on YouTube.
The music learning and production industry has been forever changed by the internet, and the guitar has taken centre stage. Picking up your guitar and being able to strum or pluck your way through one of your favourite songs is one of the great joys, especially when you are just starting to learn.
The process has become even easier with the introduction of guitar tablature. Suddenly, those pesky time signatures, staves and semi-quavers were a thing of the past. The stave was replaced by guitar strings, and notes by numbers. It was elegantly simple and easy. The old, stuffy guard of music had fallen away, and now anyone could be playing ‘Wonderwall’ in a short space of time.
So surely this revolution in guitar learning is nothing short of fantastic? Though the benefits are huge, there is also one large drawback, and, depending on your guitar experience, you may not have even noticed it yet.
After 13 years of tablature books and online tutorials, I first noticed it when I was asked to solo and improvise. Suddenly I was adrift, scratching around the minor pentatonic scale for a blues riff. I resorted to repeating riffs I had already learnt, desperately searching for some Muddy Waters inspiration.
Many might first notice the problem when trying in vain to play a song by ear. So what has happened? With the quick fixes of tablature and online tutorials, many have never gained the ability to actually ‘hear’ music. The tablature books I read meant that my ear was never engaged in the process. The numbers and strings on the page or screen were processed by my visual brain into motor responses, almost completely bypassing my auditory system. Yet the ability to recognise the difference between musical notes, and how these intervals are used to build chords and scales, is the foundation of musical knowledge.
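To make the idea of intervals building chords and scales concrete, here is a minimal sketch in Python. It assumes standard Western music theory (twelve semitones per octave, the whole/half-step pattern of the major scale, and the root–third–fifth structure of a major triad); the function names are illustrative, not from any real library.

```python
# Intervals, measured in semitones, are the building blocks of
# scales and chords in twelve-tone Western music.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

MAJOR_SCALE = [2, 2, 1, 2, 2, 2, 1]   # whole/half-step pattern
MAJOR_TRIAD = [0, 4, 7]               # root, major third, perfect fifth

def scale(root, steps=MAJOR_SCALE):
    i = NOTES.index(root)
    out = [root]
    for step in steps[:-1]:           # the last step returns to the octave
        i = (i + step) % 12
        out.append(NOTES[i])
    return out

def chord(root, intervals=MAJOR_TRIAD):
    i = NOTES.index(root)
    return [NOTES[(i + iv) % 12] for iv in intervals]

print(scale("C"))   # ['C', 'D', 'E', 'F', 'G', 'A', 'B']
print(chord("G"))   # ['G', 'B', 'D']
```

The same interval patterns transposed to any root give every major scale and major chord, which is exactly the pattern-matching a trained ear performs.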
There are plenty of ear training solutions out there, but in today’s time-precious world many learners still struggle to improve and often give up. It is hard to overstate the importance of this knowledge, as it opens up an amazing new world of possibilities in your guitar playing. The ability to recognise musical notes normally takes years of practice, so why do so many people struggle?
To explain why ear training is so hard, let’s take a step back and look at the science behind hearing music. In the case of the acoustic guitar, a vibrating string compresses the surrounding air molecules, changing the air pressure. These pressure variations propagate out from the source at the frequency of the vibration, creating sound waves.
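The frequency of that vibration is set by the string itself. A minimal sketch, using the standard string equation f = (1/2L)·√(T/μ); the tension and linear density below are illustrative values chosen to land near a guitar’s low E string, not measurements from a real instrument.

```python
import math

def string_frequency(length_m, tension_n, mu_kg_per_m):
    """Fundamental frequency of a vibrating string: f = sqrt(T/mu) / (2L)."""
    return math.sqrt(tension_n / mu_kg_per_m) / (2 * length_m)

# Roughly the scale length of a full-size guitar (0.648 m), with
# illustrative tension and mass-per-length values for the low E string.
f = string_frequency(0.648, 71.9, 0.0063)
print(round(f, 1))  # ~82.4 Hz, the low E of standard tuning
```

Shortening the string (fretting) or raising the tension (tuning up) raises the frequency, which is why the same equation governs every note on the fretboard.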
It is important to note that the term ‘sound’ in physics is different from that used in physiology and psychology, where it refers to the brain’s perception (think of the old thought experiment about the tree falling in the wood!). Humans normally hear sound frequencies between 20Hz and 20,000Hz. To cut a long story short, these sound waves enter the ear, pass through a complex chain of bones, and are transferred to the inner ear and the fluid within a snail-shaped structure called the cochlea. Within the cochlea are tiny hair cells that are made to vibrate by the surrounding fluid, converting mechanical motion into electrical signals that are carried to the brain. The interesting point here is that the hair cells are arranged by frequency: higher frequencies move hair cells at the base of the cochlea, while lower frequencies act at the apex.
From there, individual frequencies are routed to the auditory cortex within the brain. The perception of these frequencies is what we call pitch. Remarkably, the brain represents pitches directly when processing sound: from the brain activity alone, one can tell which pitches are being played. In his book “This is Your Brain on Music: Understanding a Human Obsession”, Daniel Levitin uses the example of a red tomato. If, he says, he showed you a red tomato while electrodes were in your visual cortex, no neurons would make the electrodes turn red. But if the electrodes were in your auditory cortex and a pure tone of 440Hz was played, neurons there would fire at precisely that frequency, and the electrodes would register activity at exactly 440Hz.
However, he also points out that, interestingly, music is perceived through pitch relations rather than absolute pitch values. In short, “for pitch, what goes into the ear comes out of the brain!”. The brain’s processing of music is a fascinating and complex process, and if you are interested you should definitely give the book a read.
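The point about pitch relations can be made precise with a little arithmetic. A sketch assuming twelve-tone equal temperament with A4 = 440Hz (the standard MIDI convention, note 69); the function names are illustrative. Because an interval is a frequency ratio, not a difference, it falls out of a logarithm:

```python
import math

def midi_to_freq(n):
    """Frequency of MIDI note n, with A4 = note 69 = 440 Hz."""
    return 440.0 * 2 ** ((n - 69) / 12)

def interval_semitones(f1, f2):
    """Interval between two frequencies, in semitones: a ratio, not a difference."""
    return 12 * math.log2(f2 / f1)

print(round(midi_to_freq(60), 2))            # middle C: 261.63 Hz
print(round(interval_semitones(440, 880)))   # 440 -> 880 Hz is an octave: 12
print(round(interval_semitones(220, 440)))   # so is 220 -> 440 Hz: 12
```

The last two lines show why a melody sounds the same when transposed: the ratios between its notes, and hence the intervals, are unchanged even though every absolute frequency has moved.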
Why is it still so hard to recognise pitches?
The question we now need to answer is: if sound enters your brain already divided by frequency, why is it still so hard to recognise pitches? Though the auditory cortex processes unique frequencies, it has to match these inputs to pitches stored in short- or long-term memory. The storing, retrieval and matching of pitch is not perfect, and it is here that most of the problems arise. So how can we improve this memory process?
There are two main ways our brain learns: by repetition or by association. According to Gretchen Schmelzer, both repetition and association can consolidate neural connections into long-term memory, but there are differences. Repetition creates long-term memory by “eliciting or enacting strong chemical interactions at the synapse of your neuron.” As the old saying goes, “repetition is the mother of all learning”; though it is seen as creating the strongest learning, it takes a long time to achieve.
Association learning, on the other hand, relies on “a piece of information to tap into a neural connection that already exists.” This explains why people struggle to remember a random string of digits yet easily recall their own mobile number. There is no definitive answer on which method is better, as both are important to learning and memory. This is supported by Reed’s paper “Repetition and Association in Learning”, which states that “repetition has been called the golden rule of learning, but a study of the factor of association will convince us, I think, that association is entitled to a place of equal, if not of greater importance.”
What we can say is that repetition takes considerably more time, so if we are looking for a more efficient way of learning, association is the way to go.
In fact, the world record for memorising playing cards in 30 minutes stands at 884 cards, all accomplished using association techniques. Yet most ear training systems and apps work by repetition alone, building the required neural connections through constant practice. As shown above, the downside is that this takes a long time.
How does all this make us better at recognising intervals and subsequently better guitar players?
For music recognition, what you need is a tool that helps you achieve better results faster and with less effort. I founded Vibes with the intent of helping people enjoy playing and learning music. Vibes applies the latest neuroscience to a combination of wearable technology and a mobile app, which together will reduce the time it takes users to recognise music, play songs by ear, sight-sing, and solo and improvise.
The innovative part of Vibes lies mainly in the wearable device, which applies the neuroscientific concept of sensory augmentation. Driven by the output of the mobile app, the device simultaneously engages multiple senses, creating associations in your brain that help you learn to recognise musical notes and transfer those melodies or chords to your instrument.
By definition, sensory augmentation works by stimulating the sensory apparatus of one sense in an attempt to evoke the feel of another. It might sound very scientific or clinical, but the concept has already been applied in various industries and projects, such as VEST, which gives deaf people a ‘sense’ of hearing by turning sound into touch, and Brainport, which allows blind people to ‘see’ by converting a video feed into stimulation received on the tongue.
We are working with a leading London university to validate the concept and build the prototype. By bringing this novel technology to market, we hope to give those who once thought they were not talented enough, and gave up on learning music, the motivation to pick up their instrument and start enjoying music again.
Vibes is still at its early prototyping stage, testing the scientific concept. This will be followed by larger controlled trials and beta tests to ensure the approach works for the general public. In the meantime, Vibes hopes to spread the word about this promising technology to all the music and guitar learners out there: whatever your age or level, we welcome all fellow music and guitar enthusiasts to join our new but vibrant community.
Sign up now at www.vibes-science.com to be a part of it and show your support.
Reed, H. (1924). Repetition and Association in Learning. The Pedagogical Seminary, 31(2), pp.147-155.
Levitin, D. (2011). This is Your Brain on Music: Understanding A Human Obsession. 1st ed. New York: Atlantic Books Ltd.