Google search boss Prabhakar Raghavan gave an in-depth look at Bard, the company’s new conversational technology and competitor to OpenAI’s acclaimed ChatGPT.
With Microsoft having just integrated the artificial intelligence behind ChatGPT into its Edge browser and its Bing search engine, it became urgent for Google to present its own alternative. Fortunately, the American giant held a conference in Paris dedicated to artificial intelligence yesterday, during which it detailed how its own solution works.
Prabhakar Raghavan, SVP at Google, said the company will introduce “the magic of generative AI” directly into its main search product and use artificial intelligence to pave the way for “the next frontier of our information products”. Here’s everything you need to know about the next-gen search engine.
How does Bard, Google’s new AI, work?
A few days ago, Google first mentioned a new AI named Bard, which has the daunting task of competing with ChatGPT and its recent integration into Microsoft products. Google had not detailed how it works, but we now know much more about the new artificial intelligence.
During a brief presentation, Raghavan showed slides with new examples of Bard’s abilities. The Google executive indicated that this technology would allow Google’s search engine to offer more complex and conversational responses to queries, including providing bullet points indicating the best times of the year to see various constellations and presenting the advantages and disadvantages of buying an electric vehicle. A slide also showed how Bard can be used to plan a trip to Northern California.
Google’s AI gets it wrong in the middle of its presentation
Unfortunately for Google, its AI doesn’t seem quite ready yet. As Google detailed its capabilities, the company asked “What new discoveries from the James Webb Space Telescope (JWST) can I tell my nine-year-old about?”. Bard answered that the JWST had been used to take the first pictures of a planet outside Earth’s solar system.
The problem is that the information given by Google’s artificial intelligence is not true. The first photo of such an exoplanet was actually taken by the European Southern Observatory’s Very Large Telescope in 2004, as confirmed by NASA. A Google spokesperson defended the AI to Reuters: “This underscores the importance of a rigorous testing process, which we’re launching this week with our Trusted Tester program.”
We don’t know exactly what happened, but it is possible that the AI stumbled because of how recent the information is. Statements about the very recent past may be more error-prone for an AI, because the facts they rest on simply haven’t been repeated as often as older ones.
The market’s reaction was immediate: shares of Alphabet, Google’s parent company, quickly fell 8%, or $8.59 per share, to settle at $99.05. CEO Sundar Pichai therefore told employees that the company would have all of them test Bard in a hackathon, to find any flaws in the AI. After a rigorous testing period, Google will make its new search engine available to the general public, but that is not expected to happen for several months. Until then, Bing will have plenty of time to establish itself as the smartest search engine on the market.
Google unveils multisearch, which combines images and text in a single query
“The potential of generative AI goes far beyond language and text,” added Raghavan, who also shared new details about Google’s technology called “multisearch”, which makes it possible to search for information visually. Google had mentioned the arrival of this feature last year; it leverages the Google Lens tool to let users search for objects they see, while adding a text query for more accurate and useful results.
“With generative AI, we can already automate 360-degree sneaker rotations from a handful of still photos, which previously would have required marketers to use hundreds of product photos and expensive technology,” said Raghavan. “Looking to the future, one can imagine how generative AI will allow people to interact with visual information in entirely new ways.”
Multisearch was previously available only in the US and India but is now rolling out to mobile users across the world. Google also mentioned a new “multisearch near me” option, which allows users to open the app, search for local businesses, and get information about them, such as ratings or how busy they are in real time. However, this feature won’t arrive globally for several months.
Google Translate was also in the spotlight with the announcement of improvements to contextual translation, covering English, French, German, Japanese and Spanish, as well as translation from images.
Finally, note that Maps has also received new features. The app will soon get a feature called “Overview of directions” on Android and iOS, which lets you follow your journey from the route overview or the lock screen, with updated arrival times and upcoming turns. Maps also got a feature we covered a few days ago: the ability for electric car drivers to find fast charging stations along their route.