I know that when I talk about Facebook and Meta I tend to be quite critical. I bear no special animosity towards the social network or the company, although it is true that they have spent years diligently cultivating a particularly negative image. On this occasion, however, it is worth clarifying that the security warning in the title is not because we are talking about a project from this particular company… or, well, not entirely.
The fact is that the company has launched BlenderBot 3, a general-purpose chatbot that, for the moment, is accessible only from the United States (I have tried to access it through two VPNs, without success) and that, at least by its own definition, aims both to offer general conversation, of the kind you might strike up at a bar, and to answer the sort of queries commonly put to digital assistants.
Like all LLMs, BlenderBot has been trained on large text datasets in order to learn the patterns that will later drive the responses the AI provides. Such systems have proven extremely flexible and have been put to a variety of uses, from generating code for programmers to helping authors write their next best seller. However, these models also have serious problems, such as absorbing biases from their training data and, when they do not know the correct answer to a question, tending to make one up instead of admitting that they don't know.
And here we can speak positively about Meta, since BlenderBot's goal is to test, precisely, a possible solution to the problem of made-up answers. A remarkable feature of this chatbot is that it can search the Internet for information on specific topics. Even better, users can click on its responses to see where it got its information from. BlenderBot 3, in other words, can cite its sources, providing a welcome degree of transparency.
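The cite-your-sources idea can be sketched in a few lines. This is only a toy illustration of the concept, not BlenderBot 3's actual (far more complex) retrieval pipeline; the corpus, URLs, and function names below are invented for the example.

```python
# Toy sketch of "answer with a citation": retrieve the document that
# best matches the question and attach its URL to the reply.
# Everything here (corpus, URLs) is made up for illustration.

def tokenize(text):
    """Crude whitespace tokenizer, lowercased."""
    return set(text.lower().split())

CORPUS = [
    ("The Eiffel Tower is 330 metres tall.", "https://example.org/eiffel"),
    ("Python was created by Guido van Rossum.", "https://example.org/python"),
]

def answer_with_source(question):
    """Return the best-matching snippet plus the URL it came from."""
    q = tokenize(question)
    best = max(CORPUS, key=lambda doc: len(q & tokenize(doc[0])))
    snippet, url = best
    return f"{snippet} (source: {url})"

print(answer_with_source("Who created Python?"))
```

A real system would use a search engine and a learned ranker instead of word overlap, but the user-facing contract is the same: every answer carries a link the reader can check.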
So Meta's starting point is good, at least in theory. The problem is that chatbots, as we know them today, suffer from two weaknesses. The first is that their learning is continuous, so it is enough for a large number of users to deliberately feed the AI a malicious bias: if the system lacks the safeguards to prevent it, it ends up "contaminated" and, consequently, reproduces that bias.
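The contamination risk is easy to demonstrate with a deliberately naive bot that learns replies directly from users: a coordinated group simply outvotes legitimate users. This is a hypothetical sketch, not how BlenderBot 3 learns, and the class and messages below are invented for the example.

```python
from collections import Counter, defaultdict

# Deliberately naive illustration of continuous-learning poisoning:
# the bot replies with whatever answer users have taught it most often,
# so a flood of malicious "lessons" overrides the legitimate one.

class NaiveLearningBot:
    def __init__(self):
        # prompt -> counts of replies users have paired with it
        self.replies = defaultdict(Counter)

    def learn(self, prompt, reply):
        self.replies[prompt][reply] += 1

    def respond(self, prompt):
        if prompt not in self.replies:
            return "I don't know."
        # Most frequently taught reply wins.
        return self.replies[prompt].most_common(1)[0][0]

bot = NaiveLearningBot()
bot.learn("hello", "Hi there!")          # one legitimate user
for _ in range(10):                       # ten coordinated trolls
    bot.learn("hello", "something offensive")
print(bot.respond("hello"))               # the poisoned reply wins
```

Real deployments add moderation and filtering layers precisely because of this dynamic, but as the text notes, without such safeguards the majority input, malicious or not, is what the model ends up reproducing.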
The second problem is related to the first: algorithms of this kind operate as an opaque black box, in the sense that it is not known what happens inside. Those responsible therefore depend exclusively on constant observation of the AI; they cannot "lift the hood" to see what is going on in there, which complicates and delays the identification of problems and, in many cases, makes them impossible to solve.
So this Meta chatbot seems like a step in the right direction, but after so many bad experiences in the past, I admit that I am rather pessimistic.