Spotify has taken a significant step in combating the growing issue of AI-generated music abuse, removing a staggering 75 million spam tracks from its platform over the past 12 months. Announced on September 25, the company’s new measures aim to address the rise of unauthorized AI-generated content while protecting legitimate artists and listeners.
"At its best, AI is unlocking incredible new ways for artists to create music and for listeners to discover it. At its worst, AI can be used by bad actors and content farms to confuse or deceive listeners, push ‘slop’ into the ecosystem, and interfere with authentic artists working to build their careers. That kind of harmful AI content degrades the user experience for listeners and often attempts to divert royalties to bad actors", explained Spotify in its announcement.
Tackling AI abuse with a three-step plan
Spotify’s new policies attack the problem on three fronts: spammy uploads, unauthorized vocal impersonations, and a lack of transparency about AI-generated content.
The first step involves the rollout of a music spam filter designed to identify and block mass uploads of low-quality or duplicate tracks. By flagging bad actors, Spotify aims to prevent "junk" music from flooding listeners’ playlists.
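Spotify has not published how its filter works internally, but the general approach to catching duplicate uploads is well known. The Python sketch below is purely illustrative (the account names, titles, and threshold are all invented): it flags an account that repeatedly uploads the same audio under different titles by hashing a stand-in for an acoustic fingerprint.

```python
# Illustrative sketch only -- Spotify has not disclosed its actual filter.
# Flags accounts that mass-upload the same audio under different titles.
import hashlib
from collections import defaultdict

# Hypothetical upload records: (uploader_id, track_title, raw_audio_bytes)
uploads = [
    ("acct_1", "Rainy Mood 01", b"\x01\x02\x03\x04"),
    ("acct_1", "Rainy Mood 02", b"\x01\x02\x03\x04"),  # same audio, new title
    ("acct_1", "Rainy Mood 03", b"\x01\x02\x03\x04"),
    ("acct_2", "Original Song", b"\x09\x08\x07\x06"),
]

DUPLICATE_THRESHOLD = 2  # invented cutoff for flagging an account

def fingerprint(audio: bytes) -> str:
    """Stand-in for a real acoustic fingerprint; a byte hash here."""
    return hashlib.sha256(audio).hexdigest()

seen = defaultdict(set)          # fingerprint -> {(uploader, title), ...}
dupe_counts = defaultdict(int)   # uploader -> count of repeat uploads

for uploader, title, audio in uploads:
    fp = fingerprint(audio)
    if any(u == uploader for u, _ in seen[fp]):
        dupe_counts[uploader] += 1
    seen[fp].add((uploader, title))

flagged = [u for u, n in dupe_counts.items() if n >= DUPLICATE_THRESHOLD]
print("Flagged accounts:", flagged)  # -> ['acct_1']
```

A production system would rely on perceptual fingerprints that survive re-encoding and small edits rather than raw byte hashes, but the flagging logic would be analogous.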
Second, the company has updated its impersonation policy to explicitly ban AI voice cloning, deepfakes, and other unauthorized vocal impersonations. According to Spotify, such impersonations will be permitted only when authorized by the original artist, providing a clearer framework for protecting artists’ rights.
Finally, Spotify is introducing transparency through standardized labeling. In collaboration with DDEX, a group that develops standards for the digital music supply chain, the company is building a metadata system to disclose AI involvement in music creation, including AI-generated vocals, instrumentation, and post-production. Fifteen labels and distributors are already on board with the initiative.
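The schema is still being developed, so the field names below are invented for illustration only. The sketch shows the kind of per-track disclosure the initiative describes, with separate flags for vocals, instrumentation, and post-production:

```python
# Hypothetical illustration: these field names are NOT the published DDEX
# schema, just a sketch of the per-track AI-involvement disclosure described.
import json

track_metadata = {
    "title": "Example Track",
    "artist": "Example Artist",
    "ai_disclosure": {
        "vocals": "ai_generated",          # fully synthesized voice
        "instrumentation": "ai_assisted",  # human-composed, AI-arranged
        "post_production": "none",         # no AI in mixing/mastering
    },
}

print(json.dumps(track_metadata, indent=2))
```

Standardized fields like these would let streaming services surface AI involvement to listeners consistently, regardless of which label or distributor delivered the track.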
The scale of the AI music problem
While AI-generated music currently represents a small percentage of overall streams, the potential for abuse is enormous. Rival platform Deezer, for example, reported receiving over 30,000 fully AI-generated tracks daily, with up to 70% of streams for such content identified as fraudulent. The financial incentive for bad actors has grown alongside Spotify’s overall royalty payouts, which have risen from $1 billion in 2014 to $10 billion in 2024.
Sam Duboff, Spotify’s global head of marketing and policy, emphasized that low-quality, AI-generated tracks generally fail to gain traction. "It’s really a small percentage of streams. In general, when the music doesn’t take much effort to create, it tends to be low quality and doesn’t find an audience," Duboff said. However, the sheer scale of AI-related fraud remains a pressing concern for Spotify and the broader music industry.
Implications for artists and the music ecosystem
The measures have been welcomed by industry leaders like Universal Music Group, which called Spotify’s actions an important step toward combating spam, infringement, and misuse of royalties. Spotify has also reassured artists who use AI responsibly that they will not be penalized; its efforts are aimed at fostering creativity while curbing malicious activity.
To further support artists, Spotify is streamlining its content mismatch review process, reducing wait times for disputes and enabling artists to report issues before a track’s release. This proactive approach is intended to prevent bad actors from exploiting legitimate profiles for their own benefit.
As Spotify continues to refine its spam filter and develop its transparency framework, the company hopes to safeguard the music ecosystem, ensuring that royalties go to deserving creators and audiences encounter authentic, high-quality content. If successful, these moves could represent a turning point in the fight against AI-generated music abuse.