Spotify & AI: Protecting Artists or Empty Promises?

by Sofia Alvarez

Spotify, Music Labels Unite to Combat AI-Driven Fraud and Protect Artists’ Revenue

Spotify is partnering with major record labels to develop artificial intelligence tools aimed at safeguarding artists from fraud and ensuring fair compensation in an era increasingly complicated by AI-generated music.

After years of battling scams and royalty abuse, the music streaming giant announced Thursday a collaborative effort with Sony Music Group, Universal Music Group, and Warner Music Group, alongside digital licensing company Merlin and distributor Believe. The initiative seeks to leverage AI to both connect artists with fans and identify malicious actors exploiting the platform.

The Rising Threat of AI-Fueled Fraud

The move comes as the music industry grapples with a surge in fraudulent activity enabled by artificial intelligence. Last year, federal prosecutors revealed a scheme where a fraudster allegedly siphoned at least $10 million in royalties from multiple streaming services using bots to inflate streams on AI-generated songs. This incident, and others like it, highlighted the vulnerability of streaming platforms to manipulation.

“Spotify is probably the worst thing that has happened to musicians,” Icelandic singer Björk famously stated, reflecting a broader sentiment among artists regarding the platform’s compensation model. While Spotify maintains it is committed to supporting artists, criticism persists over the relatively small share of revenue reaching the majority of musicians.

A $12 Billion Threat to Artist Revenue

The potential financial impact of AI-driven disruption is substantial. The International Confederation of Societies of Authors and Composers estimates that AI-generated music could divert nearly $12 billion in revenue from human artists over the next five years. This threat extends beyond outright fraud, as evidenced by the emergence of AI-generated bands like the Velvet Sundown, which garnered millions of streams with its 1970s-inspired music.

Spotify acknowledged the dual nature of AI, stating, “At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push ‘slop’ into the ecosystem, and interfere with authentic artists working to build their careers.”

Spotify’s Response and Future Plans

The company has already begun taking steps to address the issue, removing an unauthorized AI-generated song falsely attributed to the late outlaw-country singer Blaze Foley earlier this year. In September, Spotify announced it was strengthening its spam filters and clarifying rules regarding impersonation and the labeling of AI-generated content.

While Spotify has no plans to ban AI-generated music outright, it is focused on identifying and removing fraudulent activity and violations of its user agreement. The newly formed research team will work to develop tools that better protect artists and ensure a more equitable distribution of royalties.

Although the company paid out $10 billion in royalties in 2024, Spotify's own data shows that only about 4% of the 225,000 artists on the platform earned a "comfortable living" from their music. This disparity underscores the urgency of finding solutions to ensure artists are fairly compensated in the evolving digital landscape.

“If the music industry doesn’t lead in this moment, AI-powered innovation will happen elsewhere, without rights, consent, or compensation,” a company release stated, emphasizing the need for proactive industry leadership.

The specifics of the new AI products remain undisclosed, but the partnership signals a significant commitment to addressing the challenges and opportunities presented by artificial intelligence in the music industry.
