CNET has used artificial intelligence in its articles, and it has gone wrong

The use of artificial intelligence to write text is a hot topic. Much of the spotlight falls, of course, on ChatGPT, but we should not forget that its developer, OpenAI, has offered GPT-3 for some time now. As ChatGPT itself responded when asked about the difference between the two services, "GPT-3 is a much larger and more advanced language model that uses deep transformer architecture and large-scale pretraining to generate high-quality text."

In other words, it writes in a much more human way, and at least in theory the texts generated by GPT-3 can be used in any context, including professional ones. That is something that makes those of us who make a living putting words together somewhat uneasy, because the possibility that, from one day to the next, an artificial intelligence will be able to write as well as or better than we do is one that, much to our regret, is emerging on the horizon.

It seems, however, that the technology is not yet as mature as some thought. And yes, when I say "thought" I really mean "tried to sneak it in without warning." As Futurism reported at the beginning of the month, CNET had started publishing articles written by an artificial intelligence, presumably GPT-3. Only when this became public did the outlet begin to label those articles as such and announce that it was pausing its adoption of the technology. Yes, pausing, which, barring a change of plans, indicates a temporary measure: the truth is that they intend to resume it in the future.
It seems, however, that nothing good has come of the "experiment" (and yes, when I say "experiment" I really mean covert use of artificial intelligence, passing those texts off as original and written by a human being, hence the quotation marks). As we can read in The Verge, more than half of the AI-written articles published by CNET contained errors. In fact, CNET has had to issue corrections in 41 of the 77 pieces written by the artificial intelligence solution the outlet used.

From inaccurate information to "unoriginal" phrases, which is a very nice, very friendly way of referring to plagiarism of the texts used to train the artificial intelligence, the AI-written articles have attracted as much attention as CNET could have hoped, but not for the reasons they would have wanted. With an important caveat: had they disclosed the origin of these texts from the beginning, the reaction would have been far less critical of the outlet. It seems, however, that human beings, at least for the moment, do not take kindly to an outlet trying to pass off an AI that optimizes texts for search engines as a human writer. Thank you, humanity, on behalf of those of us affected.

And although the MuyComputer team includes some real "machines", with whom I am lucky to share space, I guarantee that they are human… and any artificial intelligence would love to write like them, I assure you.