
Google warns its employees about the use of Bard… and it's only logical

I read, with some surprise, that some international media outlets are publishing, with great fanfare, that Google has warned its workers about the potential risks of chatbots based on generative artificial intelligence models, including, of course, its own service, Bard. Unlike other occasions on which I have focused on the attitude of certain media outlets, this time the story is completely true, and its origin can be traced to Reuters.

According to several internal sources cited in that report, Google has given its workers two recommendations. The first is not to include confidential information in their conversations with chatbots, be it Bard, ChatGPT, Bing, Poe or any other. The second is that, if they use Bard to generate programming code, they should never use its output directly; that is, they should review, and if necessary edit and correct, the code generated by the Google chatbot.

“Google tells its employees not to use Bard”, “Google doesn't trust its own chatbot”, “Google recommends its employees not to use the service it offers to the rest of the world”… in short, the collection of biased headlines is enough to make you weep. Even Reuters, which as a general rule does a great job, opens the piece with the sentence: “Alphabet Inc is warning employees about how they use chatbots, including its own Bard, while also marketing the program around the world.”


That chatbots can give wrong answers is nothing new, and the risk that they will later reproduce, in other conversations, what we have previously discussed with these services is also well known. What's more, all the companies that offer this type of service, or at least the main ones, warn every user about these risks. In other words, the warning Google issued to its workers is very similar to the one received by any user of the service.

The fact that a technology company like Google encourages users to adopt a service like Bard is perfectly compatible with informing and warning them about its imperfections, and extending that recommendation to its own workers is… entirely normal. So normal that, personally, I am somewhat surprised Reuters considered it newsworthy. What does not surprise me, however, is that certain media outlets responded by reaching for the most twisted framings to share this news.

Surely, if we set out to, we can find room for improvement in how Google is managing its artificial intelligence services, but accusations of a lack of prudence, rigor and ethics are, at least from what we have been able to see so far, totally out of place. What's more, I think that if Google has sinned at all, it is precisely in being scrupulously prudent, in a world that moves very, very fast, and in which some competitors have taken advantage of the circumstances to try to get ahead.
