The EU prepares new regulation on AI tools

Although in years past efforts consisted of ethical guidelines and codes of values, everything indicates that 2023 will be the year in which European legislators present new regulations governing the use of artificial intelligence (AI) in the generation of images and text. This comes in the wake of the popularization of tools such as LaMDA, Lensa, Stable Diffusion and ChatGPT, and of recent cyber attacks against them that have endangered the security and privacy of users, companies and public institutions.

In recent years, AI has been seen as a limitless space, accessible to anyone without the need for specialist knowledge. But both US government agencies, such as the Federal Trade Commission, and the EU are reflecting on the matter and consider urgent measures necessary.

Everything indicates that the EU will update its AI Act in the second half of the year, with rules that force companies to be more transparent about how their tools work. As things stand, how malicious content is generated through these tools remains a largely unknown field. Companies will have to decide whether to follow the regulations in order to sell their products on the old continent, or instead face fines of up to 6% of their global annual turnover.

However, it is still hard to predict how these AI models will be regulated under the new legislation. What will certainly be curbed is the use of large language models such as GPT-3 in customer service chatbots to generate misinformation at scale, or of Stable Diffusion to create pornographic images without consent.

Companies like OpenAI, Google and DeepMind have so far been reluctant to reveal the DNA of their tools, although everything indicates that they will be forced to do so in order to comply with the regulations, pointing to a more hopeful future for the sector.

The solution proposed by Europe is for companies to report their activities in detail, that is, to monitor their models' outputs and to prohibit users from abusing the technology to spread toxic content.

The origins of the initiative

The EU began discussing the need to regulate AI at the end of 2018, presenting at that time a coordinated plan with all member states. The European Commission went on to publish the Ethics Guidelines for Trustworthy Artificial Intelligence in 2019 and the White Paper on Artificial Intelligence in 2020. Europe's leadership in the sector was thus promoted, while respecting the fundamental rights of the continent.

In April 2021, the European Commission presented the Artificial Intelligence Act with the purpose of regulating the uses of AI, making the EU a worldwide pioneer in governing its safe use. Companies' obligations were grouped into four risk categories: unacceptable risk (social scoring, mass surveillance or behavior manipulation), which is outright prohibited; high risk (access to employment, education and vehicle safety components), which requires a conformity assessment; limited risk (impersonation, chatbots, deep fakes, etc.), which carries transparency obligations and the labeling of deep fakes; and minimal risk (all other uses), which relies on voluntary codes of conduct and full transparency.

AI models that turn text into images, like OpenAI's DALL-E, have had a revolutionary effect on the sector. Everything indicates that AI will evolve to create images from text in several languages, control robots or produce new medicines.

The latest advances in this regard come from companies such as Microsoft, which wants to use OpenAI's ChatGPT to power its searches and compete with Google. Apple has also launched a catalog of AI-narrated audiobooks. It can therefore be said that, despite the incoming regulations, the sector is going through its best moment and will find no limits.
