(Hypebot) – First Spotify said it was pulling down music from neo-Nazi, explicitly racist or white supremacist bands in the wake of last weekend's violence in Charlottesville, VA. Next, Deezer and CD Baby pledged to do the same, and Google Play and YouTube reaffirmed policies that "prohibit content like hate speech and incitement to commit violent acts."
How did this music find a home on Spotify and other music streamers? Who is to blame for racist, white supremacist and Neo-Nazi music being available online?
Music by independent artists becomes available on Spotify, Deezer, Apple Music and other streaming and download sites through an aggregator. Many musicians use low-fee, self-service digital music distribution platforms like TuneCore, CD Baby and DistroKid. Others are handled by full-service distributors like The Orchard, inGrooves and Believe.
“We carry over 8 million songs that hundreds of thousands of artists self-distribute on the CD Baby platform, and it is impossible to screen every song for objectionable content,” CD Baby CEO Tracy Maddux told Variety. “Our practice has been to encourage our community to let us know if there is content available on our site that violates these guidelines. Reports of hate-promoting music are taken very seriously and we are making every effort to flag and vet tracks of concern. In the event we find content in violation of these guidelines, we will take it down.”
Off-the-record inquiries to several self-service and full-service distributors confirmed that, unless there is a complaint, none of the music uploaded to services around the globe is screened. "We only check someone's content if we've gotten complaints," a distribution company staffer told Hypebot, "and then we usually just kick them off the system."
So, the first line of defense against this offensive music – the digital music distributor – is really no defense at all.
Spotify, Apple Music and YouTube, where music videos are uploaded directly by the artists, could review each track before it's posted. But with music services adding thousands or even tens of thousands of new tracks weekly, and an hour of new video being uploaded to YouTube every second, that solution seems impractical.
All streaming music services have sophisticated analytics tools, however, like Spotify's Echo Nest and Pandora's Music Genome. They can deliver eerily accurate personalized music recommendations and playlists, and Google's search functions can ferret out almost anything. So why aren't these tools being used to eliminate this offending music before, or at least very shortly after, it is uploaded? And couldn't the distributors develop similar tools to identify offending music?
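At its simplest, the kind of screening the distributors could run is an automated check of track metadata at upload time. The sketch below is purely illustrative: the blocklist terms, metadata fields and `flag_track` function are all invented here, and a production system would rely on trained classifiers, audio fingerprinting and human review rather than string matching.

```python
# Hypothetical sketch of a metadata screen a distributor could run when an
# artist submits a release. Flagged uploads would be routed to human review
# instead of going straight to the streaming services.

BLOCKLIST = {"hate-term-a", "hate-term-b"}  # placeholder terms, not a real list

def flag_track(metadata: dict) -> bool:
    """Return True if any metadata field contains a blocklisted term."""
    text = " ".join(str(value).lower() for value in metadata.values())
    return any(term in text for term in BLOCKLIST)

# Example: a submission whose title contains a blocklisted term gets flagged.
track = {"artist": "Example Band", "title": "Hate-Term-A Anthem"}
print(flag_track(track))  # True
```

Even a crude filter like this would catch only the most obvious cases, which is why the analytics systems the services already run on listening data would be far better suited to the job.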
Why Not Fix The Problem?
The reason that policing uploaded music has not been a priority would seem to lie in the same safe-harbor, free-internet ethos that allowed Spotify and others to make thousands of tracks available online without the proper licenses.
That policy just cost Spotify $43.4 million in a class action settlement on the heels of a much smaller $5 million settlement with the National Music Publishers Association (NMPA).
Allowing neo-Nazi and white supremacist music to find a home online will likely cost these music services nothing.