Photo Credit: Andre Hunter
Far-right extremists have been using AI-generated earworms to spread hateful rhetoric on Spotify, TikTok, and anywhere else you consume music online. Now that music is dominating the Dutch Spotify charts.
If you thought the onslaught of AI-generated far-right extremism online was almost exclusively a U.S. problem, you’d be dead wrong. It’s not just memes, either. AI-generated music promoting far-right extremist views has been spreading like wildfire in places like the Netherlands and France, on platforms like Spotify and TikTok.
For months, Europeans have seen an increase in AI-generated far-right extremist content—perhaps most prominently during the French elections last year, but it certainly didn’t stop there. Now, users on Reddit have reported a “disturbing trend” of AI-generated right-wing songs spreading hate speech against immigrants climbing the Dutch Spotify charts.
“Currently, one of these AI anti-immigrant songs is in the Dutch Top 5, and about 8 out of 10 in the viral chart are in the same hateful category,” writes Reddit user EuroMEK. “These are not real musicians; they are AI-generated tracks spreading extremist messages.”
“What really worries me is that Spotify is allowing this, even as these songs go viral,” the post continued. “How can they justify platforming AI-generated hate speech or letting AI content out-compete real artists on their charts?”
Meanwhile, new research from Cornell University published over the summer examined “thousands of clips from German, British, and Dutch TikTok feeds” and found that over three-quarters of videos using extremist audio were still accessible four months after they were first posted. But what makes this particular trend so nefarious is the exploitation of TikTok’s “use-this-sound” feature “as a Trojan horse for hate speech.”
Marcus Bösch at Heinrich-Heine University in Düsseldorf describes “dozens of trends” in which seemingly harmless memes, such as users guessing what comes next in a song, mask “brutal, racist, misogynist and death fantasy lyrics” in the underlying tracks. Though not all of these tracks are AI-generated, many of them attach “hateful messages” to ‘90s club hits from Aqua or Gigi D’Agostino.
“There’s Nazi techno, Nazi pop, Nazi folk—something for everyone,” Bösch explained, adding that the goal of injecting this content into TikTok videos is to direct users toward off-platform content “intended to indoctrinate them into Nazi ideology.”
TikTok’s moderation evidently struggles with audio-based hate content, whereas text-based hate speech is often removed right away. Even overtly offensive material, Bösch said—citing clips of a Hitler speech reused in over 1,000 videos—can avoid detection for months.
“You can’t argue that’s hard to see, hear, or feel,” he added. “It shouldn’t be too hard to actually find these.”
At the time, a TikTok spokesperson said the company employs a combination of technology and human moderation to detect and remove content that promotes hate speech or hateful ideologies. They added that 94% of such content is taken down before it’s ever reported.
As artificial intelligence becomes more sophisticated, the bad actors who use it to spread hateful rhetoric will almost certainly become better at evading detection. But can legislation tackling unethical AI actually do anything to address the creation and spread of AI-generated hate speech? Unfortunately, probably not. While the law desperately needs to catch up to the technology, bad seeds will continue to spread. It is the internet, after all.

