We soon won’t tell the difference between AI and human music – so can pop survive?
We’re at an inflection point for AI, where it goes from nerdish fixation to general talking point, like the metaverse and NFTs before it. More and more workers in various industries are fretting about it impinging on their livelihoods, and ChatGPT, Bard, Midjourney and other AI applications are creeping into our awareness.
In music, this tech has been percolating since the 1950s when programmer-composer Lejaren Hiller’s algorithm allowed a University of Illinois computer to compose its own music, but has really grabbed the popular imagination this month with a number of high-profile fakes. A “collaboration” between convincing AI-derived imitations of Drake and the Weeknd earned hundreds of thousands of streams before being scrubbed from streaming services; Drake was also made to imitate fellow rapper Ice Spice via AI, prompting him to respond: “this is the final straw”. An AI version of Kanye West has atoned for his antisemitism in witless verse, and AIsis released an album of all-too-human indie rock with software doing bad Liam Gallagher karaoke over the top of it.
The fear is: could the AI end up doing a better job than the artists it is imitating?
Snarky wags will say that’s easily done when it’s Drake – and admittedly, an AI could replicate not just the sound of his voice but also his lyrics when he’s at his least imaginative. But put the fake Drake next to the real thing’s excellent latest single Search & Rescue: there’s a delicacy, freedom and inimitable humanity to Drake’s dejected singsong flow that the boringly precise AI can’t evoke.
He’s right to be annoyed – these tracks are a violation of an artist’s creativity and personhood – and the fakes are noticeably more sophisticated than those from a few years ago, when Jay-Z was made to rap Shakespeare (this is the kind of humour beloved of AI dorks). The tech will continue to improve to the point where real and fake become indistinguishable. Perhaps lazy artists will soon use AI to generate their latest album, not so much phoning it in as texting it. AI composes its music by regurgitating things it’s been trained to listen to in vast song databases, and that’s not so different from the way human-composed pop music is recombined from prior influences. Producers, engineers, lyricists and all the other people who work behind a star could be usurped, or at least have their value driven down, by cheap AI tools.
But, for now, music is insulated from the effects of AI in a way that, say, accountancy isn’t, because enjoyment of music is so reliant on our very humanity. The situation oddly reminds me of OnlyFans, whose multibillion-dollar success is down to loneliness more than anything. Free pornography is rife online – indeed, AI will be used to produce even more of it – so why would anyone pay to subscribe to someone’s pics on OnlyFans? It’s because there’s a parasocial relationship at play: subscribers feel as if they are making a connection with someone real, however ersatz or creepy that connection may be.
In a more wholesome way, it’s the same with music. We don’t love it because it’s a digitised accumulation of chords and lyrics arranged in a pleasing order, but because it has necessarily come from a human being. The matrix of gossip in Taylor Swift’s music, how she is so frank and so withholding all at once, is what supercharges her appeal beyond her very fine melodies; when Rihanna sang “nobody text me in a crisis” people felt it so deeply because she was telling us something about herself, the Robyn Fenty behind the star name. I can’t yet imagine how an AI could write something like the strident storytelling of Richard Dawson, or the pileup of cultural detritus in the work of rappers such as Jpegmafia or Billy Woods, or thousands of other human dramas that spill beyond the bounds of a stream.
But will an AI experience these dramas itself one day – and if not, will it simulate them so accurately that they affect us just as strongly? It’s the central preoccupation of Blade Runner and so much other sci-fi, and we are creeping towards that future. Avatar-like pop stars such as Miquela are currently very crude and not really artificially intelligent at all, but soon enough they will have an artistry, agency and simulated humanity that will resemble that of real performers.
Those actual humans will react by trumpeting their flesh and blood realness; just as the electric guitar was once seen as perverting the acoustic guitar, or Auto-Tune the rawness of the human voice, we’ll have the most fevered arguments yet about authenticity in music. Some musicians will choose to withhold their music from datasets used by AI to learn how to compose, to keep it ringfenced for human listeners – the Source+ project already allows artists to opt their work out of databases used by AI imaging applications.
Another option for musicians will be to lean into the emotional, poetic possibilities of AI, as the British producer Patten has done with his fascinating album Mirage FM, released last week and made using artificially intelligent production software. He entered text commands and the AI – a program called Riffusion – composed music from it combined from its database of sound, with Patten editing and arranging what it came up with. He has dredged the past, just as Burial or Madlib do with their sampling: the twist is that he’s taking from records that haven’t been made by humans, but rather imagined by machines. It’s a dizzying headspace to be in.
The march of progress is somewhat slowed by the fact that an AI can’t perform live, though the tech will certainly inform live performance. We will see pop stars motion-capturing their likenesses as Abba did, with AI used to accurately replicate their very way of walking across a stage as well as their voice, for use after they die, even writing new material in their name (or, conversely, their wills will forbid any posthumous AI reanimation).
These collaborative creative roles, much more than fake versions of extant stars, will be how AI is predominantly deployed in music. There are already dozens of highly intelligent applications that will apply effects, provide draft vocals or add live-sounding drums. The instances of a song being unwittingly written with the same melody as a prior one, and the attendant plagiarism court cases, could be avoided by an AI scanning a century of pop to create a previously unwritten melody – something Google’s AI Duet is already hinting at.
The next step is that these tools compose entire songs themselves, and as AI is capable of absorbing even more music and influence than a human being can, it’s difficult to argue that it will all be generic or hackneyed. The fakes we hear today are a sideshow, or proof of concept, for the much more profound and insidious ways AI will come to bear on music.
But, because of the way it is trained, AI will always be a tribute act. It may be a very good tribute act, the type that, were it a human, would get year-round bookings on cruise ships and in Las Vegas casinos. But it cannot, by its nature, make something wholly original, much less yearn, or be broken up with, or catch an eye across a dancefloor: all the stuff that music is written about and which makes it resonate. AI makes music in a vacuum, totally aware of musical history without having lived through it. We won’t always be able to spot the difference between humans and AI – yet I hope we can feel it.