Music’s Newest Composer

Madeline Wong ‘17, Opinions Editor

Less than three minutes long, the piece, softly, soothingly, sensitively played on a perfectly tuned piano, seems unremarkable in the best possible way—unique but vaguely familiar, quiet but not easily forgotten.  Its delicate use of the upper range of the instrument echoes Yiruma’s pensive compositions; its placid ostinato melody, Thomas Newman’s poignant “Finding Nemo” theme.  Certainly, this piece is no Beethoven sonata or Chopin etude.  Still, it possesses an unassuming charm that warrants a second and third listen.

Incongruously named “Berlin Graffiti,” this piece is the product of two men’s two years of work, a stretch that seems egregious considering the piece’s brevity and simplicity.  Then again, to call “Berlin Graffiti” the brainchild of two men—specifically, Ed Newton-Rex and Patrick Stobbs—obscures the minor detail that the men did not, in fact, compose “Berlin Graffiti.”  No man did.  The piece, which can be found on SoundCloud, was written by Jukedeck, an artificially intelligent composer.

Jukedeck is not the first artificial intelligence to produce a musical composition (“Berlin Graffiti” was created last August and released last month). Most consider the “Illiac Suite,” a string quartet composed in the 1950s, to be the first computer-generated score; its four distinct movements were produced from varied user-inputted data and commands. This past June, Google’s Project Magenta revealed its own AI’s creation, a 90-second piano melody generated from a mere four primer notes. Several months ago, Sony researchers released several songs created by their software, Flow Machines, in the style of artists like the Beatles and Duke Ellington (admittedly, the voices sound heavily autotuned and evidently inhuman, but the work itself is not totally unpleasant). A paper published last December introduced a statistical model, DeepBach, which uses machine learning to create polyphonic Bach-esque harmonies, some indistinguishable from the composer’s own works. Many of these compositions, like “Berlin Graffiti,” could easily, perhaps unsettlingly, pass as the handiwork of a human.
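To make the idea of a statistical composer concrete, here is a toy sketch of data-driven note generation. This is not DeepBach’s actual model (which uses deep neural networks); the transition table and note choices below are invented purely for illustration of the underlying principle: learn which notes tend to follow which, then walk those probabilities to produce a melody.

```python
import random

# A toy first-order Markov model over note names: the next note depends
# only on the current one. The "learned" transitions here are made up;
# a real system would estimate them from a corpus of scores.
TRANSITIONS = {
    "C": ["D", "E", "G"],
    "D": ["C", "E"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "F"],
}

def generate_melody(start="C", length=8, seed=0):
    """Walk the transition table to produce a sequence of notes."""
    rng = random.Random(seed)  # seeded so the melody is reproducible
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

print(generate_melody())
```

Each note in the output is a legal successor of the one before it, so the melody sounds loosely coherent even though no human chose a single note.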

The majority of these artificially intelligent composers rely on artificial neural networks (ANNs) to create music. ANNs imitate, albeit on a much smaller scale, human neural networks: they can recognize patterns, correct errors, and, most importantly, learn.  In two years, Jukedeck’s creations have evolved drastically, from pieces like “Levet,” a synthesized concoction that sounds like a perverted version of Mario’s theme song, to pieces like “Berlin Graffiti,” a complex and pleasantly harmonized composition.  Through tools such as learning algorithms, these ANNs—like humans—can improve in efficiency and efficacy each time they complete a task.
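That learning loop can be sketched in miniature. The example below trains a single hypothetical “neuron” (none of this is Jukedeck’s code): after each mistake it nudges its one weight toward the right answer, so its total error shrinks with every pass over the data.

```python
# A single artificial "neuron" learning by trial and error: a minimal,
# hypothetical sketch of the learning loop, not any real system's code.
def train(pairs, passes=50, lr=0.1):
    weight = 0.0
    errors = []  # total error per pass, to show improvement over time
    for _ in range(passes):
        total_error = 0.0
        for x, target in pairs:
            prediction = weight * x
            error = target - prediction
            weight += lr * error * x   # nudge the weight toward the target
            total_error += abs(error)
        errors.append(total_error)
    return weight, errors

# Learn the rule "output = 2 * input" from three examples.
weight, errors = train([(1, 2), (2, 4), (3, 6)])
print(round(weight, 2))          # converges close to 2.0
print(errors[0] > errors[-1])    # True: error shrinks as the neuron learns
```

Scale this up to millions of weights and layers, and the same correct-and-repeat principle lets a network learn patterns as intricate as harmony and melody.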

ANNs have a vast number of applications, from photo identification and accurate translation, to rapid medical diagnosis and space exploration, to the creation of music; from analysis to art. Yet, while most people accept the role of an AI in YouTube’s post-video suggestions as a function of data and algorithms, it is more difficult to acknowledge an AI as a composer.

For some people, music is a uniquely human creation that transcends mere sound waves in the air.  To them, pieces like “Berlin Graffiti” may sound like music, but they are no more than notes thrown together, albeit in a pleasing manner.

“Music… is derived from one thing an AI lacks, which is spirit,” asserts Evan Li ‘19.  “[An AI] could make a bunch of notes, but those notes wouldn’t equate to what we call music.”

Nonetheless, rules and formulas, governing the expression of composers’ so-called spirits, have long provided the backbone for most music.  There are major and minor keys, expositions and developments, and chord progressions.  Deviation from these rules has often led audiences to dismiss a piece as not being “music” at all—take, for example, Stravinsky’s ballet The Rite of Spring, whose manic dissonance caused a riot at its premiere.

In a society that values harmonizing similarity over spirited dissonance, it seems that listeners, not composers and their spirits, determine music’s inherent value.

“Does it even matter if the music doesn’t mean anything to the creator?” asks Lillian Usadi ‘17.  “[Even if a computer creates] a piece that doesn’t mean anything emotionally per se to the computer, the music could be quite touching to the listener.”

If a piece is beautiful to at least one person, then the sentience of its composer should not detract from its musical merit.

In this regard, as AIs become more adept at producing pleasing, intricate soundtracks, the debate over whether an AI can produce music will become irrelevant.  The question of whether it should, however, grows more pressing.  In less than a minute, Jukedeck, if given a genre, mood, and duration, can compose a piece—its instrumentation realistic, its title creative (if somewhat formulaic), and its cost as low as 99 cents.  It is cheaper, faster, and, arguably, smarter than any human composer.  Even setting aside the irrational fear of sentient AIs terrorizing the world, there are those who simply believe that AI will steal the careers of deserving humans.

And yet, in spite of their superior traits, the chances that AIs will replace all human composers are slim.  There will always be people who insist that true music requires a spirit more than it does an audience, who will dismiss all AI music as nothing but notes.  There will always be people who value the person behind the performance, someone whose appearance and lifestyle they can admire and envy.  There will always be people who appreciate musicians, and will willingly commission enthusiastic young composers and performers.

We have long thought of music as one of the defining features of humanity, as evidence that we are more than neurons firing or cells multiplying; that we are creative and innovative and, well, special.  But if all it takes to write music is an algorithm and a lot of data, then perhaps we should define our humanness in our appreciation not of the music itself, but of what surrounds the music: the composer’s labor, the performer’s passion, the listener’s experience… all intangible elements that an AI will never be able to create.

 


Ridge High School News At Its Finest