
Bells for cats and dragons: the challenge of regulating AI

Although artificial intelligence (AI) is by no means a novelty, the dizzying acceleration of its development in recent months has made it the protagonist of an unprecedented disruption. Governments around the world are announcing measures to limit and control the evolution of AI and its tools, but is it possible to regulate artificial intelligence? We take a detailed look at the difficult balance between fostering innovation and protecting our fundamental rights and values.

When we study history, we often resort to the linear abstraction of evolution and imagine a society where technological innovations are introduced little by little, and where the advantages are so clear that their adoption does not entail great trauma. The reality, as almost always, is quite different.

Before continuing, an illustrative example: in the 19th century, doctors did not recommend traveling by train. Freud, neurologist and “father” of psychoanalysis, even said that it affected mental health, and studies published in prestigious outlets warned that “the human body was not designed to travel above 45 kilometers per hour”. A quick search on the Internet (or in an AI) is enough to find similar examples for the automobile, electricity, television or the printing press.

Perhaps we are not so far from those who watched the mighty “iron horse” cross the Midwestern plains, a technological leap of incomprehensible and threatening magnitude. Surely, at that moment, they felt the cold sweat and the need to limit, control and regulate what was going to change their lives forever.

The possibilities of artificial intelligence are overwhelming and are being widely discussed. But, as in any disruption, and even more so in one of this magnitude, the movement is not without risk:

  • Job displacement. AI-powered automation will replace certain professions and millions of workers, which could generate economic inequality and challenges around retraining.
  • Biases and discrimination. An AI trained on data that contains biases, such as racial or gender discrimination, can produce discriminatory decisions in fields such as human resources, medicine or finance.
  • Privacy and security. Training an AI requires huge amounts of data that, if not handled properly, could have serious consequences for those affected.
  • Lack of transparency, especially with very complex deep neural network models that can be difficult for humans to understand. If an AI’s decisions cannot be explained, significant ethical and legal issues arise.
  • Control and superintelligence. In a scenario where AI advances far enough, an intelligence capable of surpassing us could be developed. How would we control it? What mechanisms should we put in place to prevent that from happening?
  • Manipulation, mass control and disinformation. AI can be a powerful tool for generating false or misleading content with which to manipulate opinions and decisions, or to carry out cyberattacks.

Speed and legislation are terms that are rarely linked, especially in structures as complex as the European Union. However, European legislators have been the first to get down to work, laying the groundwork for what will be the future regulation of artificial intelligence in the European Union.

Almost two years ago, when ChatGPT did not exist and the metaverse was going to revolutionize the world, the European Commission already proposed a draft regulation for an artificial intelligence law. Among other things, it establishes a classification of available tools based on their level of risk, from low to unacceptable. The vote, taking place as these lines are published, already risks being a dead letter.

This is not, of course, the EU’s only initiative, but it is one of the most relevant. Under what it defines as an active policy, the EU has approved strategies and ethical guidelines, allocated millions of euros in funding, and promoted international cooperation for the development of AI initiatives.

Let’s put figures to this speed: according to data published by Intel, AI training in the last year has grown a hundred million times faster than Moore’s law, the mythical postulate that processing power doubles every two years and that, until not long ago, set the benchmark for what to expect.
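To make the baseline concrete, here is a minimal sketch of the arithmetic behind Moore’s law, the yardstick the comparison above uses. The two-year doubling period is the classic formulation; the time spans chosen are purely illustrative.

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor Moore's law predicts after `years` years,
    assuming one doubling every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Over one year Moore's law predicts roughly a 1.41x increase;
# over a decade, 32x. Against that, a hundred-million-fold jump
# in AI training compute shows how far off the old curve we are.
print(f"1 year: {moores_law_factor(1):.2f}x")    # ~1.41x
print(f"10 years: {moores_law_factor(10):.0f}x")  # 32x
```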

Reaching these speeds requires fuel, and there are huge amounts of dollars, euros and yuan behind this exponential growth. IDC estimates that global spending on AI this year will reach $98.4 billion, with a compound annual growth rate of 28.4% from 2018 to 2023. According to Research and Markets, the global AI market is expected to reach $190 billion by 2025. Astronomical figures that explain why we are going so fast.
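As a quick sanity check on those figures, we can work the quoted CAGR backwards to see what 2018 starting point it implies. The numbers come from the IDC estimate cited above; the function name is illustrative.

```python
def implied_start(final_value: float, cagr: float, years: int) -> float:
    """Starting value implied by a final value and a compound
    annual growth rate (CAGR) sustained over `years` years."""
    return final_value / (1 + cagr) ** years

spend_2023 = 98.4  # billions of dollars (IDC, per the article)
base_2018 = implied_start(spend_2023, 0.284, 5)
print(f"Implied 2018 spending: ${base_2018:.1f}B")  # roughly $28B
```

In other words, a 28.4% CAGR means spending more than tripled in five years, consistent with the order of magnitude the article describes.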

In an attempt to gain some perspective, a few months ago a group of 1,000 people, including scientists, engineers, intellectuals, businesspeople, politicians and big names in world technology, signed an open letter requesting a six-month suspension of the development of the largest artificial intelligence projects, given the “profound risks to society and humanity” that they can pose without adequate control and management.

If time is a problem, so is space: an AI obviously does not understand geographical borders, and a legal framework applicable only in Europe not only fails to solve almost any problem, but can also become a significant drag in the innovation race. An aspect, let us remember, in which we are very far from the lead.

In general terms, the proposed legislation focuses on issues such as data protection, transparency, human oversight and the liability derived from the use of AI and the tools built on it. There are also, of course, very complex ethical implications that escape the precision of zeros and ones and delve into the social sciences, psychology or philosophy.

Politics is in a hurry. AI is perceived as an opportunity, but also as a threat to the establishment and, specifically, one that is going to be very difficult to control. The Artificial Intelligence Index 2023, produced by Stanford University and published a month ago, indicates that the West and some Asian countries (mainly those that contribute data) are immersed in a race to regulate AI. Of 127 countries analyzed, 31 already have at least one law regulating AI.

Two worlds, two bells, two ways to regulate AI

If we draw a line along the eastern border of the European Union, we can distinguish two clearly differentiated scenarios:

To the west, on the other side of the Atlantic, they are aware of the importance of not hindering the development of a technology that may be key to the future of the US. Washington has published a set of principles to be taken into account when developing AI-based tools and solutions.

The document expresses the need to develop safe, effective solutions that respect the privacy of the data they use, explain how they work, and always allow for human intervention when necessary.

In parallel, the US Congress is working on comprehensive AI legislation that will be complex to develop and pass, since it would require agreement between Democrats and Republicans.

In the halls of Brussels, a fragmented approach is being taken, adapting legislation to protect specific aspects such as data (the trigger for the ban on ChatGPT in Italy) or the intellectual property of the images, videos and music that an AI can generate.

If we add the G7 to this bloc (with countries as relevant as the United Kingdom, Canada or Japan), we have a geopolitical scenario with many points in common and the protection of its citizens as a common denominator, but also with profound differences that suggest a grand agreement is still little more than a utopia.

In addition, this group includes some of the world’s leading economies, with immense power and influence, but we must not forget that, in recent years, many things have changed: there is a dragon in the room, or why nobody wants to talk about China.

The Asian giant, as has been the case for centuries, continues to play by its own rules. For China, the development of AI is an opportunity it will not miss, but it is also a serious threat it must control. In its case, the limits are clear: AI may go exactly as far as political power allows, but never beyond.

To achieve this, China will combine regulatory experimentation (carrying out tests in certain areas or provinces, for example for the development of autonomous driving), a commitment to standards, and stricter regulation where necessary.

For now, China’s focus is not on comprehensive regulation but on rules that solve specific problems, in an agile manner and with the flexibility to adapt to changes in real time. Let’s not forget that we are talking about a country that took just 48 hours to restrict access to ChatGPT for its entire population.

China cannot afford to be left out of the battle for AI, but it is also aware that it cannot do it alone. It needs the innovation, and the hardware, required to achieve it, and it does not hesitate to jump through whatever hoops are necessary… for now.

Talking about a global regulation of AI without China (or Russia) does not make much sense. Nor do we know how far away we are or what will happen in the future, but a simple analogy with the fight against climate change can help us refine the crystal ball.

At this point, we must take into account problems such as the diversity of approaches and priorities, the speed at which the technology advances, the difficulties of actual implementation (especially in terms of cooperation) and, above all, the different economic and competitive interests of each territory.

Going back to our imaginary line: to the east, technological development prevails over any other factor and is pushed as far as possible, as long as it does not compromise political power. These are also societies with a greater acceptance of surveillance and control, for obvious reasons. To the west, the concern is how to protect citizens while imposing a “controlled” limitation of AI, guaranteeing its transparency and establishing clear sectoral regulations.


Is it necessary to regulate AI?

Artificial intelligence, still in the almost embryonic state in which it finds itself today, is not just any technology. As happened with the Internet, we are a long way from knowing what consequences its development will have in the coming years and how it will affect us; we can only wait to find out.

For the first time in the history of humanity, we are facing an advance that comes to improve or, who knows, perhaps to replace our most precious attribute, the one that has allowed us to get where we are: intelligence.

Laws are a way of regulating our coexistence: external controls on our human will that make life in society easier. If machines are going to behave like people, does it make sense for them to be subject to a regulatory framework?

The challenge for the European Union is huge: establishing a regulatory environment that protects citizens but, at the same time, does not put us at a disadvantage compared to the rest of the world, and that allows for competitiveness, innovation and the attraction (or retention) of specialized talent. Perhaps the problem is not so much belling the cat as taking care of the cat and controlling the dragon.
