Over the last few hours, news has been spreading that Spotify has removed from its catalog a few thousand songs whose common denominator is that they were composed by a generative artificial intelligence model. In some cases, if not the majority, the framing of that coverage suggests that the reason for removing these songs is precisely that, the one stated in the headlines: being the work of artificial intelligence. A framing that, of course, invites an interesting conversation about whether such a measure would be justified.
The problem is that, as on so many other occasions, a tendentious, misleading headline sells better than one that reports the real circumstances, even though that framing makes no sense. As I have noted before, what grounds would Spotify have to remove songs generated by artificial intelligence, as long as they do not plagiarize fragments of the music used to train the models (and that is not the case here)? We can debate their artistic quality, what they do or do not contribute, how they may shape the future of music and more, but none of that justifies expulsion from the platform.
So has Spotify decided to launch a crusade against artificial intelligence? That seems unlikely, since the company has already experimented with this technology, for other purposes, in the past. The answer is no: the real reason the streaming service has carried out this cleanup is much simpler, and one that absolutely anyone can understand: like any other company, Spotify does not like being scammed.
The real explanation can be found in The Next Web, which has devoted an interesting article to the matter, clarifying what actually happened, something we have already seen in the past. If you know the platform well, you are probably familiar with the replay bot farm fraud method. If not, here is a schematic outline of how it works:
- An “artist” registers as such on the platform and uploads their compositions.
- A bot farm running premium accounts plays those compositions on the service 24/7.
- When royalties for those plays are paid out, the “artist” collects an amount well above the cost of running the bot farm and its premium accounts (a rough sketch of that arithmetic follows below).
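To see why this pays off, here is a minimal back-of-the-envelope sketch. All the figures are assumptions chosen only for illustration: Spotify's actual per-stream payouts and the real cost of running a bot farm are not public and vary widely by market.

```python
# Back-of-the-envelope sketch of why the replay fraud can be profitable.
# All figures are assumptions for illustration, NOT Spotify's real payout
# rates or costs, which are not public and vary by market and account type.

PAYOUT_PER_STREAM = 0.003      # assumed royalty per play, in dollars
TRACK_LENGTH_MIN = 0.5         # assumed 30-second tracks (roughly the commonly
                               # cited minimum length that counts as a play)
BOT_ACCOUNTS = 100             # assumed number of premium accounts in the farm
PREMIUM_COST_MONTHLY = 10.0    # assumed monthly cost of one premium account

# An account streaming back-to-back 24/7 registers this many plays per month.
plays_per_account = (24 * 60 / TRACK_LENGTH_MIN) * 30
total_plays = plays_per_account * BOT_ACCOUNTS

revenue = total_plays * PAYOUT_PER_STREAM
cost = BOT_ACCOUNTS * PREMIUM_COST_MONTHLY

print(f"Monthly plays:    {total_plays:,.0f}")
print(f"Royalties earned: ${revenue:,.2f}")
print(f"Bot farm cost:    ${cost:,.2f}")
print(f"Profit:           ${revenue - cost:,.2f}")
```

Under these assumed numbers the farm generates several million plays a month, and the royalties comfortably exceed the cost of the premium subscriptions, which is the whole point of the scheme.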
As I said before, Spotify has suffered this type of fraud in the past, but this is where AI models capable of generating music in seconds change the game: creating not just one song but a large number of them is now within anyone's reach. In other words, the number of potential scammers using this technique has grown by several orders of magnitude, and Spotify is therefore obliged to improve its systems for detecting it.
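Spotify has never published how its detection systems work, so the following is a purely hypothetical sketch of the kind of signal such a system might look at: tracks whose plays are concentrated in a small pool of accounts, each streaming at an implausibly high, near-continuous rate. The function name, thresholds and data layout are all assumptions for illustration.

```python
# Hypothetical sketch of a streaming-fraud heuristic; not Spotify's actual
# system. It flags tracks whose plays come overwhelmingly from a handful of
# accounts that each play the track hundreds of times per day.
from collections import Counter

def suspicious_tracks(play_log, max_share_top_accounts=0.8,
                      max_plays_per_account_per_day=500, top_n=10):
    """play_log: iterable of (track_id, account_id, day) tuples."""
    plays_by_track = {}
    for track_id, account_id, day in play_log:
        plays_by_track.setdefault(track_id, []).append((account_id, day))

    flagged = []
    for track_id, plays in plays_by_track.items():
        total = len(plays)
        accounts = Counter(acc for acc, _ in plays)
        # Share of all plays coming from the top N accounts.
        top_share = sum(c for _, c in accounts.most_common(top_n)) / total
        # Highest number of plays by a single account in a single day.
        per_account_day = Counter((acc, day) for acc, day in plays)
        max_daily = max(per_account_day.values())
        if (top_share >= max_share_top_accounts
                and max_daily >= max_plays_per_account_per_day):
            flagged.append(track_id)
    return flagged
```

Real systems presumably combine many more signals (device fingerprints, listening patterns, payment data), but the underlying idea is the same: organic listening and 24/7 bot replay look very different statistically.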
So, to recap: what has actually happened is that Spotify has removed thousands of AI-created songs used in this fraud scheme. And yes, it is true that as an emergency response the platform temporarily blocked uploads of music generated by services such as Boomy, but what many of these half-told news stories fail to mention is that uploads were restored shortly afterwards.
The massive streaming of these compositions by bot farms does not only affect Spotify's accounts; it can also distort the service's music recommendations, since its algorithms may infer from the high play counts that these songs are far more worth recommending than they really are. So the company going after this content, and after the bot farms that play it, is good and entirely understandable news.
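As a toy illustration of that distortion (not Spotify's recommendation algorithm, whose details are not public, and with made-up numbers), consider a recommender that ranks songs purely by raw play counts:

```python
# Toy illustration of how inflated play counts skew a naive popularity-based
# ranking. The songs and numbers are invented for the example.
organic_plays = {"song_a": 12_000, "song_b": 9_500, "bot_farmed_song": 300}
bot_plays = {"bot_farmed_song": 500_000}  # assumed bot farm volume

def popularity_score(song):
    # A ranker that only sees raw play counts cannot tell organic
    # listening apart from bot replay.
    return organic_plays.get(song, 0) + bot_plays.get(song, 0)

ranked = sorted(organic_plays, key=popularity_score, reverse=True)
print(ranked)  # the bot-farmed track jumps to the top of the ranking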