Generative AI becomes a risk for companies

According to data from the consulting firm Gartner, the general availability of generative AI in systems like ChatGPT or Google Bard has become one of the main concerns of company managers in terms of the risks it entails. In fact, it was among the top five emerging risks most cited by enterprises in a May 2023 Gartner survey.

For that survey, the consultancy asked 249 company executives about the 20 main risks they saw for their companies. The resulting report, called the Quarterly Report on Emerging Risks, includes detailed information on the potential impact of each risk, the level of attention it receives, its time frame, and the perceived opportunities it presents.

Respondents cited third-party viability (67%) as the fastest-emerging risk companies are watching. Generative AI was second, with 66% of respondents concerned, appearing in the top ten for the first time. This highlights both the rapid growth of this area of Artificial Intelligence, with its variety of use cases, and, consequently, the risks it can generate.

Generative AI is followed in third place by uncertainty in financial planning (62%) and in fourth by cloud concentration risk (62%). Trade tensions with China (56%) ranked fifth, so the top five spots on the list reflect both concerns related to advances in technology and those arising from macroeconomic and geopolitical issues.

Gartner had already identified six generative AI risks, as well as four aspects of AI regulation relevant to security functions. With regard to business risk management, three aspects must be taken into account, according to the consultancy's experts.

The first of them is intellectual property: it is important to train business leaders to understand the need to take precautions and act transparently about their use of AI tools, so that intellectual-property risks can be adequately mitigated both when providing information to generative AI tools and when obtaining results from them.

Data privacy is the second important aspect to take into account. Generative AI tools may in many cases share information with third parties, such as system or service providers, and they are likely to do so without prior notice. This can violate privacy laws in many jurisdictions. As a consequence, various regulations and laws have already been implemented, for example in China and the European Union, and there are already several more proposals for related regulation in the United States, the United Kingdom, Canada, and India.

Finally, cybersecurity must be taken into account. Examples of malware and ransomware code produced by generative AI, after its systems were tricked into developing it, have already been recorded. There have also been prompt injection attacks that can cause these tools to reveal information they shouldn't.
