Generative AI and the headaches it can bring to companies

Generative Artificial Intelligence, and all the innovation related to it, continues to advance by leaps and bounds. Both individuals and companies are already experiencing the advantages it provides in their day-to-day lives, both in their leisure time and, in many cases, in the world of work. But not everything it brings with it is good. Despite the short time companies have had to take advantage of it, generative AI has plenty of light and shade, and it can cause more than one headache for those who use its models, applications and chatbots.

More and more companies, and also governments, are concerned about the problems and risks of generative AI in relation to privacy and data security, among other things. Several regulators have already called for rules and regulations governing AI tools, and technology experts and renowned researchers have called for a pause in the development of AI systems until their safety can be assessed.

But the development of generative AI is not going to stop, or even slow down. For this reason, both companies and institutions need to develop strategies and regulations that properly address the security and risk management of these systems, with the aim of fostering confidence in AI. Tools are also needed to manage the flow of data between users and the companies that offer foundational AI models, and to do so in a duly secure and reliable way.

In addition, the main problems that generative AI currently presents must also be addressed. In many cases, this will fall to those who develop and train the models; in others, the legislators of each country or region will have to intervene.

Main problems that generative AI brings with it

Users of these systems, and especially companies, need tools and measures with which to protect their privacy, and to filter their interactions with models so as to exclude errors, hallucinations, confidential information or material protected by copyright. First of all, it is necessary to know the problems that, according to Gartner, threaten companies that use generative AI.

  • Hallucinations and fabrications: These failures, which also include errors in the reporting of facts, are currently the most common problems appearing in chatbots that use generative AI. Flaws in the training data can lead to biased, wrong or invented answers. Some are easily detected, but other failures are harder to spot, especially as these solutions gain credibility and trust.
  • Deepfakes: These issues arise when generative AI is used to create graphic content with malicious intent, and they are among the most serious problems of these systems, above all for companies. The fake images, videos and voice recordings generated with them are already being used to attack public figures and politicians, as well as to spread false or misleading information, and even to create fake accounts or hijack legitimate ones and access their contents. An example of this is the fake image of the Pope wearing a white puffer coat. In that case the image was harmless, but it gives an idea of the potential that this type of technology has to harm third parties.
  • Data privacy: In companies using generative AI, employees can easily expose sensitive company data when interacting with generative AI solutions through chatbots. These applications can store the information captured from the instructions and data provided by their users indefinitely, which jeopardizes its confidentiality. Furthermore, this data could fall into the wrong hands in the event of a security breach.
  • Copyright issues: Generative AI chatbots are trained on large amounts of data from the Internet, which may include copyrighted material. As a consequence, some of their responses may violate copyright laws or other intellectual property protections. Without source references or transparency about how responses are generated, companies may run into problems with the responses they receive and their subsequent use, and may also find that a chatbot is using their own intellectual property without permission. In any case, without references, the only way to reduce these problems is to examine the answers very carefully to make sure they do not violate copyright or intellectual property laws.
  • Cybersecurity issues: In addition to threats related to phishing and social engineering, attackers can use generative AI tools to generate malicious code more easily. Vendors that offer foundational generative AI models reassure clients that they train their models to reject malicious requests, but they do not give users the tools they need to audit all of the security controls they have put in place. Quite simply, providers of generative AI systems ask their users and customers to trust what they do, almost blindly.

How to address these issues

Once the main problems that companies may face when using chatbots and generative AI tools have been identified, it is time to decide what can be done to avoid them as far as possible. The models are offered to their users as they have been developed, without customization. Therefore, one of the main measures to avoid problems with them is to use prompt engineering systems, with which you can create, adjust and evaluate both the requests made to these systems and the responses they offer.
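By way of illustration, the sketch below shows, in Python, what a minimal prompt engineering layer of this kind could look like. It is only an outline under assumed names: the template text, build_prompt, check_response and the placeholder call_model are all hypothetical, not part of any specific product.

    # Minimal sketch of a prompt engineering layer (all names hypothetical).
    # build_prompt() wraps user input in a fixed, reviewed template, and
    # check_response() applies simple heuristics before an answer is accepted.

    TEMPLATE = (
        "You are a corporate assistant. Answer only from the context below.\n"
        "If the answer is not in the context, say you do not know.\n\n"
        "Context:\n{context}\n\nQuestion:\n{question}\n"
    )

    BANNED_PHRASES = ["as an ai language model"]  # example heuristic only

    def build_prompt(context: str, question: str) -> str:
        """Create a controlled prompt instead of sending raw user input."""
        return TEMPLATE.format(context=context, question=question)

    def check_response(response: str) -> bool:
        """Reject empty answers and answers containing banned phrases."""
        if not response.strip():
            return False
        return not any(p in response.lower() for p in BANNED_PHRASES)

    def call_model(prompt: str) -> str:
        """Stand-in for whichever chatbot API the company actually uses."""
        raise NotImplementedError("replace with the provider's client call")

The point of such a layer is that requests and responses pass through code the company controls, so templates and checks can be adjusted and evaluated over time.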

In the event that companies are going to use generative AI systems and chatbots as offered, it is imperative to put in place systems and protocols for manual review carried out by humans, and to apply them to all the outputs of these AI tools. In this way it will be possible to detect incorrect, biased or false results.
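A very simple way to organize that manual review is a queue that holds every output until a person approves or rejects it. The following Python sketch assumes hypothetical names (Output, submit_for_review, review_next) and is only a starting point:

    # Sketch of a human-in-the-loop review step (all names hypothetical):
    # every model output is queued and only released once a reviewer approves it.

    from dataclasses import dataclass
    from queue import Queue

    @dataclass
    class Output:
        prompt: str
        response: str
        approved: bool = False

    review_queue: Queue = Queue()

    def submit_for_review(prompt: str, response: str) -> None:
        review_queue.put(Output(prompt, response))

    def review_next(approve: bool) -> Output | None:
        """Take the next output off the queue and record the verdict."""
        if review_queue.empty():
            return None
        item = review_queue.get()
        item.approved = approve
        return item  # only approved items would be published downstream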

It is also important to establish a governance and compliance framework for the use of these solutions by and within the company. Among the most important measures is prohibiting employees from asking questions that expose sensitive data of any kind, whether personal or belonging to the company.
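One way to enforce such a prohibition technically is to scan prompts for obviously sensitive data before they ever reach a chatbot. The Python sketch below does this with a few illustrative regular expressions; the patterns and names are assumptions, and a real deployment would need far more robust detection:

    # Sketch of a pre-submission filter (patterns and names are illustrative):
    # prompts are checked for obvious sensitive data before reaching a chatbot.

    import re

    SENSITIVE_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    }

    def redact_prompt(prompt: str) -> tuple[str, list[str]]:
        """Return the redacted prompt and the kinds of data found in it."""
        hits = []
        for label, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(prompt):
                hits.append(label)
                prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
        return prompt, hits

    clean, found = redact_prompt("Email jane.doe@acme.com, key sk-abcdefgh12345678")
    print(found)  # ['email', 'api_key']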

In addition, it is advisable to monitor the use that employees make of ChatGPT and other similar tools in cases where it is not yet regulated, with management systems that make it possible to monitor and check event logs to identify violations of the rules. It is also necessary to secure web gateways so that they can identify and monitor unauthorized API calls.
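In practice, that kind of monitoring can start with something as simple as scanning gateway or proxy logs for requests to known generative AI endpoints from users who are not authorized to make them. The sketch below assumes a simplified "user host path" log format; the endpoint list, user names and log format are all illustrative:

    # Sketch of a log check for unauthorized AI API calls (domains are examples):
    # gateway logs are scanned for requests to known generative AI endpoints.

    AI_ENDPOINTS = {"api.openai.com", "generativelanguage.googleapis.com"}
    ALLOWED_USERS = {"svc-approved-bot"}

    def flag_unauthorized_calls(log_lines: list[str]) -> list[str]:
        """Return log lines where a non-approved user reached an AI endpoint."""
        flagged = []
        for line in log_lines:
            user, host, *_ = line.split()
            if host in AI_ENDPOINTS and user not in ALLOWED_USERS:
                flagged.append(line)
        return flagged

    logs = [
        "alice api.openai.com /v1/chat/completions",
        "svc-approved-bot api.openai.com /v1/chat/completions",
    ]
    print(flag_unauthorized_calls(logs))  # only alice's call is flagged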

All of these risk mitigation measures need to be applied to prompt engineering. In addition, other measures must be taken to protect the internal data and other sensitive information used to generate prompts and requests on third-party infrastructure. It is also worth creating and storing configured prompts so that they can be reused when needed, just as is done with other assets. These are, for the moment, the measures that companies can take to avoid problems caused by the misuse or malicious use of generative AI systems and chatbots.
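For that last point, storing approved prompts like any other asset can be as simple as a small versioned registry. The Python sketch below uses a JSON file; the file name, schema and function names are assumptions made for the example:

    # Sketch of a prompt registry (file name and schema are assumptions):
    # approved prompt templates are versioned and stored like any other asset.

    import json
    from pathlib import Path

    REGISTRY = Path("prompt_registry.json")

    def save_prompt(name: str, template: str, version: int = 1) -> None:
        data = json.loads(REGISTRY.read_text()) if REGISTRY.exists() else {}
        data[name] = {"template": template, "version": version}
        REGISTRY.write_text(json.dumps(data, indent=2))

    def load_prompt(name: str) -> str:
        return json.loads(REGISTRY.read_text())[name]["template"]

    save_prompt("summarize_report", "Summarize the following report:\n{text}")
    print(load_prompt("summarize_report").format(text="Q3 results..."))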
