
Google warns its employees not to share sensitive information with chatbots

Alphabet, Google's parent company, is urging its employees to be careful when using chatbots such as ChatGPT and even its own Bard. Why? For fear of seeing these AIs retain and share confidential information.

Credit: 123RF

Like the vast majority of companies that have entered the AI race, Google wants to move forward cautiously. So does Alphabet, the web giant's parent company.

According to a report from our colleagues at Reuters, the group is calling on its employees to be careful when using chatbots such as ChatGPT and even Bard, Google's own conversational AI. More specifically, Alphabet is asking its employees not to enter confidential information when interacting with these AIs.

Alphabet (Google) is afraid of chatbot leaks

Alphabet’s fears are quite legitimate: these AIs rely on machine learning to improve over time. In practice, nothing would prevent them from reproducing and sharing sensitive data collected during previous conversations.

In addition, the group has asked its engineers not to enter the computer code on which Bard is built directly into the chatbot. The idea, once again, is to avoid potential leaks that could jeopardize the chatbot’s security. Note that Google isn’t the only company warning employees about the use of publicly available chatbots: several major tech players, such as Samsung, Apple and Amazon, have also put safeguards in place.

Also read: Google is forced to postpone the launch of its Bard AI in Europe, here’s why

Many professionals use chatbots without worrying about the risks

These measures are necessary, as shown by a study published in January 2023 by the professional social network Fishbowl. According to this survey of some 12,000 people, 43% of the professionals polled had used public chatbots as part of their work without informing their superiors.

As we know, ChatGPT and other conversational AIs are perfectly capable of writing emails, complex documents and even software. While they are presented as great tools for easing the workload of many professionals, the content they generate may contain erroneous, confidential or copyrighted information.

As early as June 1, Google got ahead of the issue in a memo shared with its employees, which stated: “Do not include confidential or sensitive information in your conversations with Bard.” For Matthew Prince, CEO of Cloudflare, entering sensitive data into public chatbots amounts to “turning a bunch of PhD students loose in all of your private records.” It’s a good summary.
