Last week, my younger cousin Armanny sent an Apple Music link to our cousins’ group chat. Sending music links has become our way of saying “Hey, here’s a song to show that I’m thinking of you.” As the group chat’s elder statesman, I usually send songs I listened to during my teenage years. Armanny usually replies, “Who’s this?” Last week, however, the tables turned. The song Armanny sent was “Heart On My Sleeve” by “ghostwriter.” Already perplexed as to who this new artist was, I was floored when Armanny wrote, “This Drake x Weeknd song is AI-generated. It’s insane now. Still bumping though.” (Apparently, “still bumping though” means “I’m still going to listen to it.”)
When I pressed play, it turned out Armanny was right—it did sound like Drake and The Weeknd. But it wasn’t. I then tried to explain to Armanny that AI-generated music may have the biggest impact on the music industry since Napster. Of course, 17-year-old Armanny responded, “What’s Napster?” (Sigh)
About AI Music
While most of the world has been focused on AI “chatbots” such as ChatGPT, AI has been growing in many other areas and industries. Recent songs from “artists” such as “ghostwriter” show that music is not exempt. Despite the recent surge in attention to AI-generated music, the concept didn’t happen overnight. In fact, research suggests that the concept of computer-generated music dates back to the 1800s, when mathematician and musician Ada Lovelace wrote, “[s]upposing, for instance, that the fundamental relations of pitched sounds in the science of harmony and of musical composition were susceptible of such expression and adaptations, the Engine might compose elaborate and scientific pieces of music of any degree of complexity or extent.”
I doubt Lovelace was some sort of time traveler, but how else could we explain how spot-on she was? Today, lyrics and recorded music are being entirely authored by AI. Not only that, AI is generating music that sounds like already-existing music artists—e.g., Drake and The Weeknd. This is actually a pretty straightforward process. Once a user feeds a singer’s existing music into an AI system, the machine learns to detect the musician’s patterns (essentially cloning) and then produces new music replicating that artist’s voice.
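For the curious, the learn-then-generate loop described above can be illustrated with a deliberately simplified sketch. This is a toy Markov chain over note names, not an actual deep-learning voice clone (real systems use neural networks trained on audio); the melody, function names, and note values here are invented purely for illustration:

```python
import random

def learn_patterns(melody):
    """'Ingest' an existing melody: record which notes tend to follow which."""
    transitions = {}
    for current, following in zip(melody, melody[1:]):
        transitions.setdefault(current, []).append(following)
    return transitions

def generate(transitions, start, length, seed=0):
    """Produce a new melody that mimics the learned note-to-note patterns."""
    rng = random.Random(seed)
    note = start
    output = [note]
    for _ in range(length - 1):
        # Pick a next note the way the 'artist' would have.
        note = rng.choice(transitions.get(note, [start]))
        output.append(note)
    return output

# "Feed" an existing song into the system...
existing = ["C", "E", "G", "E", "C", "E", "G", "C"]
model = learn_patterns(existing)

# ...then produce a new sequence replicating that style.
new_melody = generate(model, start="C", length=8)
print(new_melody)
```

The same idea scales up: replace note names with audio features and the lookup table with a neural network, and you get something closer to the systems behind songs like “Heart On My Sleeve.”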
What’s The Big Deal?
“How could Armanny not be fazed by this?” “What about the authenticity of music?” “What about the story behind the creation of the song?” “What about the legal implications?”
Recently, Universal Music Group told streaming platforms—including Apple Music and Spotify—to block AI systems from ingesting the melodies, voices, and lyrics in their copyrighted music. As mentioned above, the way in which an AI system can create a song that sounds like Drake, The Weeknd, or even Jay-Z, is to first upload an already-existing song from the actual artist to the AI system and then create a new work. Thus, is AI-generated music a potentially infringing derivative work?
Or maybe this is fair use? Copyright law states that it is permissible to use limited portions of a work, including quotes, for purposes such as commentary, criticism, news reporting, and scholarly reports. As it pertains to music, the fair use doctrine allows someone (or AI?) to use copyrighted music without permission from the owner if the use has a transformative purpose, such as parody, criticism, or commentary on the original work. Whether a particular use qualifies as fair use depends on the circumstances and is examined on a case-by-case basis.
What about the right of publicity of artists such as Drake and The Weeknd? The right of publicity refers to a person’s right to control the commercial use of his or her personal characteristics and prevents the unauthorized commercial use of said characteristics. Such characteristics may include the person’s name, signature, distinctive appearance, gestures or mannerisms, and voice. Legal issues regarding the imitative use of a musician’s voice are nothing new. For example, in Midler v. Ford (1988), the United States Court of Appeals for the Ninth Circuit found that when a distinctive voice of a professional singer (Bette Midler) is widely known and deliberately imitated in order to sell a product, the seller may be held liable for a right of publicity violation.
Is AI-Generated Music Copyrightable?
Speaking of copyright, assuming that AI-generated music passes the aforementioned legal hurdles, is AI-generated music copyrightable? If so, who owns AI-generated music? Copyright law protects “works of authorship.” The United States Copyright Office states that “to qualify as a work of ‘authorship’ a work must be created by a human being.” The Copyright Office recently released its Formal Guidance concerning Works Containing Material Generated by Artificial Intelligence. The Office’s Guidance states that it “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” Accordingly, someone who wishes to register an AI-generated song must meet a human authorship requirement by disclosing the inclusion of AI-generated content in their work and providing a brief explanation of the human author’s contributions to the work. Currently, there is no bright line rule to determine how much input from a human is needed to register an AI-generated work.
The Million-Dollar Question
Some view AI-generated music positively. For example, chart-topping DJ David Guetta thinks “the future of music is in AI.” Guetta, who, on at least one occasion, has used AI websites to create lyrics and music played during his live show, compares AI-generated music to instruments that have led to musical revolutions in the past. Guetta stated, “Probably there would be no rock ‘n’ roll if there was no electric guitar. There would be no acid house without the Roland TB-303 [bass synthesizer] or the Roland TR-909 drum machine. There would be no hip-hop without the sampler.”
Is AI a threat to the music industry? Or is it simply a new iteration of music creation?