OpenAI sued for a ChatGPT invention

Ever since generative artificial intelligence models such as ChatGPT, DALL-E, and Bing began to gain popularity, we have been able to see just how problematic the hallucinations of these systems can be. The issue is well documented, there is a collection of best practices that substantially reduces the risk of them occurring, and, given the impact they have on the reliability of these AIs, extensive research into the problem continues to this day.

If the concept of hallucinations in generative artificial intelligence models is not clear to you, in addition to the guide linked above, you can look at two specific cases. On the one hand, in our first test of ChatGPT, the invented Miguel Hernández poem is a great example of this. And, much more recently, there is the case of the lawyer who got into serious trouble for trusting a text generated by this chatbot.

This explains why OpenAI has configured the service so that, when we access ChatGPT, it shows us (in English) the following text:

«Although we have security measures in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.»

As a general rule, users of this type of service are aware of this problem, which is even more pronounced in those services where the model does not indicate the sources used to produce its answer. That is why Bing stands out from many of its rivals: when you make a query, the answer is not always correct, but since it indicates the sources used, you can (and should) review them to confirm the information.

This, in a country so given to lawsuits over practically any cause, can be a gold mine for some. Thus, as we can read in The Verge, a radio host has decided to sue OpenAI after ChatGPT generated a fake text about him in which he was accused of defrauding and embezzling funds. According to the AI-generated text, produced in response to a third party's query, the plaintiff, Mark Walters, was believed to have embezzled as much as $5 million from a non-profit organization.


The response was obtained by a journalist named Fred Riehl when he asked ChatGPT to review a PDF (an action it cannot perform) and generate a summary of its contents. ChatGPT responded by creating a false summary of the case that was detailed and convincing but wrong on several counts, including the fraud allegations. Riehl, however, never made this response public, so it is unknown how it came into Walters' hands.

This is undoubtedly a somewhat complex situation. On the one hand, we have a service that explicitly states that it may generate incorrect or misleading information, which some will read as an exoneration of responsibility. On the other side of the scale, however, we have a person affected by the fact that the chatbot fabricated misleading information about him, false information that can be tremendously damaging to his image.

Legal experts consulted by the publication say this case is unlikely to go far, and that OpenAI will probably be cleared of Walters' accusations. Nevertheless, it makes sense to put ourselves in the shoes of this radio host and imagine how we would feel on finding out that an AI like ChatGPT is generating false, and also negative, information about us. It certainly cannot be a pleasant situation, and perhaps it does make sense to push OpenAI to redouble its efforts to prevent this type of defamation of individuals and entities.
