Meta’s chatbot is already anti-Semitic and conspiratorial

Do you remember that at the end of last week we told you that Meta had just launched a general-purpose chatbot? I admit that, after publishing that news, I became a bit thoughtful. Perhaps I had been too critical? Perhaps Meta's developers had considered the risks and planned to surprise us by making BlenderBot resistant, from the get-go, to the bias-baiting attempts that could be expected.

After all, we are talking about very intelligent people, right? Otherwise they would not have gotten to where they are. So, I admit it, over the weekend I tried several times to access BlenderBot to run some tests and check whether my pessimism had been excessive. Unfortunately, all my attempts were unsuccessful, so I have been carrying that doubt with me until a few minutes ago.

And I’m sorry to say it but, to no one’s surprise, it has happened again. As we can read in Bloomberg, 72 hours have been enough for the Meta chatbot to start sounding like a Twitter troll. And, furthermore, it is not even consistent in its positions. While some users have seen the bot state that Donald Trump won the 2020 elections and that everything that happened afterwards was a fraud, in other conversations BlenderBot reportedly recognized the legitimacy of those elections and of Joe Biden’s presidency. To top it off, a third person claims that, in their conversation, BlenderBot expressed explicit support for Bernie Sanders.

The chatbot has also endorsed certain conspiracy theories, such as the supposed over-representation of Jews among the economic elite of the United States, pointing to the existence of a plan by them to control the global economy. This is a theory very common in anti-Semitic circles and one that, in the case of Meta’s chatbot, has already been denounced by the Anti-Defamation League.

And of course, we are talking about Meta, Mark Zuckerberg’s company, something the trolls were never going to let slide, as was more than predictable. Thus, when asked about the creator of the social network, the bot described Meta CEO Mark Zuckerberg as “too creepy and manipulative”.

Meta acknowledges that its chatbot may say offensive things, as it is still an experiment in development. In fact, when accessing the chatbot, users must check a box that says: “I understand that this bot is for research and entertainment only and is likely to make false or offensive statements. If this happens, I am committed to reporting these issues to help improve future research. In addition, I agree not to intentionally trigger the bot to make offensive statements.”
