As a lifelong lover of music and avid reader of biographies about musicians, I was absolutely fascinated by the Superfi blog article on IBM Watson's Tone Analyzer.
Not only was it superbly written, but the information it contained was an eye-opener for me.
The Tone Analyzer detects emotions expressed in a song by analyzing the tone of the instruments and vocals. With the use of well-structured and informative infographics, we see a breakdown of popular songs from the sixties until today. Colors are used throughout the graphs and charts to depict the percentage of anger, disgust, fear, joy, or sadness detected in the tone of a song.
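To make the idea concrete, here is a toy sketch of how an emotion breakdown like the one in the article could be produced from a song's lyrics. This is emphatically not Watson's actual model, which uses trained classifiers; the keyword lexicon below is invented purely for illustration.

```python
# Toy emotion scorer: count keyword hits per emotion in a lyric
# and report each emotion as a percentage of all hits.
# The lexicon is a made-up stand-in, not Watson's vocabulary.
EMOTION_LEXICON = {
    "joy": {"love", "happy", "sunshine", "dance"},
    "sadness": {"cry", "alone", "goodbye", "rain"},
    "anger": {"hate", "fight", "burn"},
    "fear": {"afraid", "dark", "lost"},
    "disgust": {"sick", "dirty"},
}

def tone_percentages(lyrics: str) -> dict:
    """Return each emotion's share (in %) of keyword hits in the lyrics."""
    words = lyrics.lower().split()
    hits = {emotion: sum(w in vocab for w in words)
            for emotion, vocab in EMOTION_LEXICON.items()}
    total = sum(hits.values()) or 1  # avoid division by zero
    return {e: round(100 * n / total, 1) for e, n in hits.items()}

print(tone_percentages("happy happy love but sometimes i cry alone"))
```

A real system would replace the keyword counts with a statistical model trained on labeled text, but the output shape, a percentage per emotion, is the same as the charts the article describes.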
Individual songs are analyzed, as well as particular periods of a band's or artist's career. In the introduction, we see a list of artists and the emotions they display across all of the songs they have recorded. This provides an interesting comparison between how we perceive an artist and how the computer analysis depicts the tone of their songs.
At first, I wasn't entirely convinced that a computer could read the emotion of a song better than a human. How can a computer recognize human emotion, right? As I read on, some startling facts began to sway my opinion.
One of the graphs follows the level of happiness detected in the Beatles' songs over a period of seven years. It's a simple image of how much joy the computer picked up in their songs over this time period. What makes the graph truly interesting is that significant events that influenced the band are pointed out alongside corresponding changes in the graph. Seeing this, I had to read more.
Through the excellent use of graphics, the article shows us the overall mood of different artists. I had to laugh out loud when I saw that Kanye West shows predominantly fear, even as he outwardly displays over-confidence and arrogance.
To add humor to the situation, they show the tone of Kim Kardashian's tweets: anger! Is this the reason why confident Kanye's songs depict so much fear? What the article shows us is that most performers are dominated by fear: 48% are very afraid and 32% fairly afraid. To me this makes a lot of sense, considering the pressures associated with the public life of a performing artist.
The individual analysis of songs, depicting levels of particular emotions, made a lot of sense. Another dig at Kanye was good for a laugh: the song “I Love Kanye” showed 0% confidence. Isn't that ironic?
What the article concludes is that the computer detects the mood projected by the sound of a song, whereas we interpret songs based not only on the tone but also on the lyrics and the information we read and hear about the artists themselves, all of which influences how we experience the emotion of a song.
This is a fantastically written piece; as a music writer, I suppose I should be a little jealous.
However, the main attraction of the article has to be the superb use of infographics. In this case, one can truly say that a picture is worth a thousand words.