
Don’t trust ChatGPT too much, especially at work

Those who read me regularly know that, in general, I have a fairly positive opinion of ChatGPT, the OpenAI chatbot whose launch kicked off the lightning-fast integration of generative artificial intelligence models into all kinds of services. Had it not arrived at the end of last year, the new Bing, Google Bard, the various “copilots” and the rest would surely still be in an embryonic phase.

Now, having a positive opinion of a service does not mean being unaware of its problems and limitations. In fact, the first time I wrote about ChatGPT in MuyComputer, it was precisely to focus on those problems, as you can see here. On that occasion I described three types of causes behind the chatbot’s erroneous responses, and I saved the most worrisome one, as I noted at the time, for last.

I am talking, of course, about hallucinations, a problem that is difficult to solve completely and that, to this day, forces us to check ChatGPT’s answers against an external, reliable source to confirm that we are not dealing with an algorithm that has decided to let its imagination run wild. Otherwise, if we trust an answer blindly, especially in a work context, we expose ourselves to problems and very embarrassing situations.


Such a circumstance is exactly what Steven A. Schwartz, a New York attorney, has experienced after inadvertently using false information generated by ChatGPT in a lawsuit. Specifically, it was a suit in which he represented a plaintiff against Avianca over an incident that occurred on a flight between El Salvador and New York, during which Schwartz’s client was accidentally struck in the leg by one of the trolleys cabin crew use to serve food and drinks and offer on-board shopping.

In his complaint, the lawyer cited several judicial precedents from similar cases but, as you are surely imagining, his source for them was none other than ChatGPT. And yes, as you have probably guessed as well, the references to previous cases provided by the chatbot were either false or incorrect. So now it is the lawyer who has to explain to the court why he submitted false information in his filing, as we can read in Reason, which also reproduces part of the communications between the two parties.

It does not seem that Schwartz acted intentionally, since even the most cursory verification of those rulings would have revealed exactly what has now come to light. So I think his statement that he over-relied on ChatGPT is honest, and I am fairly sure he has learned his lesson and that this will not happen again. And, without a doubt, the episode serves a very useful purpose: it is a great reminder that a chatbot cannot be your only source… unless you are not worried about ending up in trouble.
