Tech

Google accused of training Bard with responses from ChatGPT

ChatGPT, Bing, Claude, Bard… the war of artificial intelligence-based chatbots has already begun. We can assume that it is going to last a long time, since the big technology companies have evidently decided to place a very high bet on this field and, consequently, will not give up at the first difficulties or at possible signs that the market may be starting to become saturated.

Nor can we expect them to heed the open letter, signed by big names in science and technology, asking for a six-month “pause” until the necessary regulations are established (and yes, they really are necessary, since the potential of artificial intelligence is exceptional, but mismanaged it can lead to truly complicated situations). Gaining the upper hand in a market that aims to be so disruptive may end up becoming a golden calf, with all the negative connotations that carries.

Right now, with many chatbots at various points in their testing phases, their creators are competing to give them more features, grow their visibility and, of course, enrich them through a continuous training process so that, every day, they know more about everything. This is essential for preventing wrong responses to user queries, and it also substantially reduces the risk of hallucinations, a common AI problem that we’ll discuss in depth soon.


Be that as it may, technology companies need to feed their training processes with data, whatever its origin. However, according to the accusation of a former employee of the company, Google used responses from ChatGPT to train Bard, as reported by The Information (behind a paywall). The author of this complaint is Jacob Devlin, an engineer specializing in artificial intelligence who worked at Google at the time and is now an OpenAI employee. The accusation is quite explicit, according to said article:

«Devlin resigned after sharing concerns with Pichai, Dean and other senior managers that Bard’s team, which received assistance from Brain employees, was training its machine learning model using OpenAI’s ChatGPT data. Specifically, Devlin believed that Bard’s team seemed to rely heavily on information from ShareGPT, a website where people post conversations they’ve had with ChatGPT.»

This accusation, which Google has denied, as we can read in The Verge, is somewhat worrying because, as we have said on more than one occasion, ChatGPT is not infallible. Relying on its answers to train another AI model therefore seems, without a doubt, a rather ill-advised bet. So much so that, to be honest, I find it hard to believe that a team of professionals could have acted this way, especially considering all that is at stake in how Bard performs. However, it is true that Google has had to step on the accelerator due to pressure from Microsoft, so it cannot be entirely ruled out that, at some point, someone decided to take a shortcut.
