The music industry is pushing back against AI. Universal Music Group, home to superstars like Taylor Swift, Nicki Minaj, and Bob Dylan, has urged Spotify and Apple to stop AI tools from scraping lyrics and melodies from its artists' copyrighted songs, the Financial Times reported last week. UMG executive vice president Michael Nash wrote in a recent op-ed that AI music "dilutes the market, makes original creations harder to find, and violates artists' legal rights to compensation for their work."
Neither Apple nor Spotify responded to requests for comment on the number of AI-generated songs on their platforms or on whether AI has created more copyright infringement issues.
The news came on the heels of a request from UMG to have an Eminem-style cat rap removed from YouTube for copyright infringement. But the music industry worries about more than AI copying a vocal performance; it also worries about machines learning from its artists' songs. Last year, the Recording Industry Association of America submitted a list of AI scrapers to the US government, claiming that their "use is unauthorized and infringes the rights of our members" when they train models on copyrighted works.
The argument is similar to one made by artists in a lawsuit filed against AI image generators earlier this year. As in that case, many questions about the legality of AI-generated art remain unanswered, but Los Angeles music attorney Erin Jacobson notes that those who upload AI-generated material that clearly infringes copyright could be held liable. Whether streaming services would also be liable is more nuanced.
The new generative technology shows a tendency toward mimicry. Earlier this year, Google announced an AI tool called MusicLM that can generate music from text. Enter a prompt asking for "a fusion of reggaeton and electronic dance music, with a spacey, otherworldly sound," and the generator delivers a clip. But Google hasn't made the tool widely available, noting in its paper that approximately 1 percent of the music it generated was copied directly from existing recordings.
Much of this AI music could take over mood-based genres, like ambient piano or lo-fi beats. And it may be cheaper for streamers to fill playlists with AI-generated music than to pay even paltry royalties. Clancy says he doesn't think AI is moving too fast, but people may be moving too slowly to adapt, which could leave human artists without the fair treatment they deserve in the industry. Changing that means making clear distinctions between AI-created and human-created music. "I don't think it's fair to say 'AI music is bad' or 'human music is good,'" Clancy says. "But one thing I think we can all agree on is that we like to know what we're listening to."
But there are plenty of examples of artists working with AI rather than in competition with it. Musician Holly Herndon used AI to create a clone of her voice, which she calls Holly+; it can sing in languages and styles she can't. Herndon built it to retain sovereignty over her own voice, but as she told WIRED late last year, she also did so in the hope that other artists would follow her example. BandLab offers a SongStarter feature that lets users work with AI to create royalty-free beats, meant to lower some of the barriers to songwriting.
AI can become a perfect imitator, but it cannot, on its own, create music that resonates with listeners. Our favorite songs capture grief, or speak to and shape the culture of the moment; they innovate in times of political upheaval. AI will play a role in writing, recording, and performing songs. But if people open their music streamers and see too many AI-created songs, they may simply tune out.