
ChatGPT is great, but it’s not foolproof

In recent days, half the world has been swept up by the artificial intelligence of the moment, ChatGPT, an AI capable of answering all kinds of questions with answers that are generally quite accurate. From history to programming, from gastronomy to health, ChatGPT amazes with every answer and, not surprisingly, many practical uses for this artificial intelligence have already begun to be proposed.

Let me clarify, then, that this article is not a criticism of the project as a whole. On the contrary, I start from sincere and unequivocal praise for what has been achieved. I have also personally verified that in matters where caution is necessary, such as health, ChatGPT gives measured answers and directs the user to their doctor, instead of giving advice that, in certain circumstances, could be dangerous.

We have already seen, in the past, how some artificial intelligences began to reproduce biases induced by their social interactions, by bias in the sources used for their training, or by a combination of both factors. The creators of ChatGPT seem to have taken these kinds of problems into account, and their AI appears to have been properly trained to avoid them.

Having made these clarifications, and having restated my more than positive opinion of ChatGPT, it is no less true that I am seeing some opinions claiming that this tool can already be used as a universal source of information. That seems inadequate to me, and even dangerous, since its answers are not always correct and, worse still, the system does not tell us when it is showing real information and when what it tells us is the product of its “imagination”.

Thus, if we start from the erroneous premise that everything ChatGPT tells us is true, and we assume that its answers are always reliable, sooner or later we will find that, even without bad intentions, this artificial intelligence has pulled a fast one on us. Here are three errors with three different causes.

Error due to ignorance

The most logical and understandable of all. ChatGPT does not know everything, however hard it tries, and the information it has can become obsolete over time… it may even be obsolete from the moment the system launches. Want a pretty clear example? Well, this is what happens if you ask about the latest version of Microsoft’s operating system:

[Screenshot: ChatGPT’s answer about the latest version of Windows]

Yes, according to the AI, Windows 11 does not exist; I wonder what operating system we have been reporting on at MuyComputer for more than a year.

Error due to wrong sources

The Valencian paella, the authentic Valencian paella, has a very specific recipe. And no, this is not a criticism of other excellent rice dishes, cooked using a similar procedure, which the most purist do not hesitate to describe as “rice with things”. But just as a true carbonara is not made with cream, to give one example, the Valencian paella does not contain chorizo, peas, chickpeas or, in general, many of the ingredients that we find in this recipe:

[Screenshot: ChatGPT’s Valencian paella recipe]

Joking aside (I look forward to the moment this ChatGPT quirk hits Twitter), in this case it is the use of wrong sources that makes the information supplied by the AI incorrect.

Error due to invented content

As I said before, this is the case that worries me most, and it is connected precisely with one of ChatGPT’s main virtues: creativity. You will surely remember that a couple of months ago we tested DALL-E 2, the AI capable of creating images from a text description. Well, ChatGPT also has creative potential, whether in prose or in verse.

The problem is that it also makes use of it when it shouldn’t, something I was able to verify when asking it about one of my favorite poems, Tristes Guerras, by Miguel Hernández. This is what it answered:

[Screenshot: ChatGPT’s answer about Tristes Guerras]

I admit that the first answer surprised me, because when it affirms “that wars are only sad if you don’t fight for love” it comes close to the meaning of Hernández’s text. However, the article “Las” at the beginning of the title made me suspicious. So I asked again, and that is when this poetic composition appeared, with a somewhat childish flavor, which in no way resembles the real poem:

Sad wars
if the company is not love.
Sad, sad.

Sad guns
if not the words.
Sad, sad.

Sad men
if they don’t die of love.
Sad, sad.

Miguel Hernández – Tristes Guerras (Songbook and ballads of absences)

The problem is not that ChatGPT decides to explore its creative potential; that sounds great to me. The problem is that, when asked about a real work, the artificial intelligence, instead of reproducing it or giving a reference to it, invents a text based on the opening line, without alerting the user.

ChatGPT is a masterpiece of artificial intelligence, and I personally recommend you give it a try (registration is free). When you do, however, keep in mind that, as you have seen here, it is not infallible, so take its answers with a minimum of skepticism and confirm them with other sources before taking them for granted.
