
Facebook promoted harmful content by mistake

Sometimes I think that Groundhog Day (yes, the Bill Murray movie) describes my relationship with Facebook. With one terrible disadvantage for me: in the film, the protagonist had to change his attitude towards life and become a good person, whereas in my case it is Facebook that has to change. And it has promised to do so so many times without the changes ever actually arriving that, at this point, I have little expectation left. Bill Murray had it in his own hands to escape the loop; in my case, that is much harder.

I could draw up a list of everything reprehensible Facebook has done in recent years, starting, for example, with the Cambridge Analytica scandal and its influence on Brexit, and ending (for now) with its disinformation campaign against TikTok, but we would be here until New Year’s Eve, and April has only just begun. Besides, luckily (for users) or unfortunately (for Meta), the scandals of Facebook and the rest of the company’s services are quite well known, and their impact on the company’s reputation is, without a doubt, pronounced.

But well, let’s go with the latest one (until next week or the one after, I imagine): The Verge tells us that, for a period of six months beginning last October, a flaw in the system that analyzes content published on Facebook has been promoting harmful posts and disinformation. According to the publication, the social network’s engineers detected the anomaly six months ago and decided to analyze it, but, for some reason I cannot understand, they neither removed those posts nor put the filtering system on standby until the cause of the problem was found.

This way of proceeding, with Facebook sending the posts to independent fact-checkers but keeping them on the social network, caused their visibility to increase by up to 30% globally. During those months, the engineers saw the visibility of these posts drop a few weeks after detecting the problem, only to rise again afterwards. Finally, according to an internal company report, the Facebook content ranking problem was fixed on March 11.
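To make the failure mode a little more concrete, here is a minimal sketch in Python of the kind of demotion step that seems to have been skipped. Everything in it is hypothetical: the Post structure, the rank_score function and the demotion factor are my own illustration, not Facebook’s actual pipeline; the only thing taken from the report is the idea that posts flagged by fact-checkers should have had their distribution reduced and, because of the bug, did not.

```python
# Hypothetical sketch of the failure mode described above. None of this is
# Facebook's actual code; the names and the demotion factor are illustrative.

from dataclasses import dataclass

@dataclass
class Post:
    id: int
    base_score: float          # raw engagement-based ranking score
    flagged_by_checkers: bool  # marked as misinformation by fact-checkers

DEMOTION_FACTOR = 0.1  # flagged posts should keep only ~10% of their reach

def rank_score(post: Post, demotion_enabled: bool = True) -> float:
    """Return the distribution score used to order the feed.

    When the demotion step is skipped (the kind of bug the article
    describes), flagged posts are ranked as if they had never been
    reviewed, so their visibility climbs instead of dropping.
    """
    if post.flagged_by_checkers and demotion_enabled:
        return post.base_score * DEMOTION_FACTOR
    return post.base_score

flagged = Post(id=1, base_score=100.0, flagged_by_checkers=True)

print(rank_score(flagged))                          # 10.0  -> intended behavior
print(rank_score(flagged, demotion_enabled=False))  # 100.0 -> the bug: full reach
```

In a model like this, one skipped multiplication is enough for flagged content to compete in the feed as if it had never been reviewed, which is consistent with the visibility increase described above.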


When we talk about harmful content according to Facebook’s rules, this ranges from fake news to nude images. And no, I’m not going to get into the debate about what should and shouldn’t be allowed on Facebook. The point is that it is the social network itself that sets these rules and, in particular, when it comes to disinformation and fake news, it is constantly taking measures to remove them from its users’ feeds.

However, the social network has been aware of this problem for at least six months, and although I understand that the technical complexity of the algorithm made it difficult to locate and fix, what I cannot get my head around is that, during this half-year, they did not take additional measures to prevent disinformation and fake news from gaining so much visibility (remember, up to 30%) on the social network.

Thus, here we have one more example, and I have lost count of how many have accumulated, of why Facebook should, once and for all, park its algorithms and return to the chronological feed. And yes, it is true that what failed in this case was the filtering system, but I find it hard to believe that, once these posts had erroneously slipped past it, the algorithm had nothing to do with at least part of the extra visibility such harmful content earned.
