
What are the technical – and ethical – limits of Artificial Intelligence?

Recently, Google researchers stated that they are close to developing an artificial intelligence comparable to the human level and entirely different from anything seen so far in the technology world. That was enough to reopen an old debate that has always permeated studies on robotics and digitization: if it is possible to build robots as intelligent as, or even more intelligent than, humans, what are the limits to this development?

Imagining conscious machines that rebel against human beings is the raw material of many science fiction classics. Isaac Asimov, one of the most famous writers of the genre, even proposed the creation of “three laws of robotics”, which state that a robot may not inflict harm on a human being, must always obey the orders it is given, and must protect its own existence, as long as one rule does not violate the others. However, what once seemed unimaginable is beginning to take shape with digitization.

Image: reproduction/Boston Dynamics

Today, robots already exist and perform a range of functions in the most varied sectors of society. Although they do not have the humanoid form we often see in movies and cartoons, it must be recognized that many play a crucial role in the digital transformation we are currently experiencing. These solutions have allowed people to expand their knowledge and take on new tasks and activities.

The point is that these machines have no consciousness. But what if the Google researchers are right and, in the near future, Artificial Intelligence reaches the same level as humans? Could machines develop a cognitive capacity of their own and, therefore, produce thoughts without human intervention?

Image: Andres Urena/Unsplash

Even if this remains a far-fetched, dystopian scenario, the mere fact that researchers and technology professionals are breaking barriers in AI draws attention to what really matters: the need to debate ethical limits and understand the technical challenges surrounding Artificial Intelligence technologies. Like any tool, they can bring many benefits, but also much harm, to people. With rules in place, it is possible to direct them towards the promotion of human life.

Ethics and Artificial Intelligence

In this text, we will understand ethics as the set of values and moral principles that govern human action in life in society. In other words: they are the rules that allow people to live together harmoniously, enjoying their rights and fulfilling their duties. The most attentive reader will already have spotted the paradox: ethics concerns men and women. So what would its role be for actions carried out by machines with Artificial Intelligence?

It is on this point that the main legal discussions around AI and robotics are focused – and for which it is necessary to find alternatives as soon as possible, since technological evolution does not stop. UNESCO, for example, has already set out some important guidelines. The organization points out that human beings are responsible for the creation, planning and development of every stage of an Artificial Intelligence's operation. Therefore, there is the possibility of establishing ethical parameters to prevent a dystopian future.

Image: Urko Dorronsoro from Donostia – San Sebastián, Euskal Herria (Basque Country), CC BY-SA 2.0, via Wikimedia Commons

The main point of contention, of course, concerns culpability for accidents, damages and losses caused by robots. Who will be held civilly and criminally responsible? The companies, the professionals who operate the machines, or those who should supervise them? In the United States, an accident involving an Uber autonomous vehicle in its testing phase killed a pedestrian – and the American justice system had to grapple with this thorny issue through the various stages of the trial.

The Tesla Model S after the accident near Williston, Florida. Image: National Transportation Safety Board, public domain, via Wikimedia Commons

It is a double-edged sword: the longer it takes to settle these pending issues, the greater the risks that authorities and competent bodies run. The lack of maturity on the subject is undoubtedly an obstacle, but it cannot justify delaying the public debate on a legal and ethical framework for Artificial Intelligence. Limits need to be imposed before everything gets out of hand.

And what are the technical challenges?

For now, the emergence of super-intelligent robots with their own consciousness and emotions remains restricted to the universe of science fiction. But could such a scenario be reached, even in the long term? We know that technology evolves ever faster as digital solutions consolidate. The fact is that even this evolution will eventually reach a limit that cannot be exceeded.

The Google researchers cited in the first paragraph of this text even claimed that this level of artificial intelligence would be a kind of “game over”. In other words, it would be such a significant advance that it would overturn the current idea we have of technology development as we know it. It would be a kind of new phase – and, evidently, one with new rules in play.

Of course, technological advancement depends primarily on human creativity – and that really does seem to have no limits. But the development of new solutions and tools needs to keep pace with theoretical and practical knowledge. Some things simply cannot happen yet because the necessary structure does not exist! That is why many innovations take decades to actually materialize.

Today, for example, the Metaverse and its countless possibilities are widely debated. However, the ability to interact through avatars in immersive digital environments will still take several years to become part of everyday life. It will be the same with AI: in addition to ethical limits serving as a barrier, many proposals will be shelved for lack of resources.

Artificial Intelligence needs to be debated

In any case, it is clearly important for authorities, companies and civil society organizations to deepen the public debate around Artificial Intelligence. What we have experienced in the last two years of the covid-19 pandemic, in terms of technology, is unprecedented in the history of humanity.

New habits quickly emerged and consolidated, extinguishing old behaviors around the world. The sooner we organize ourselves and set these necessary limits, the lower the risks to life in society and the more confidently we will be able to enjoy the advantages and benefits that this tool can – and should – offer everyone!

Alessandra Montini is Director of LabData at FIA – Laboratory of Data Analysis. She graduated from the University of São Paulo (USP), holds a PhD in Administration from FEA (2003), and earned a Master's (2000) and Bachelor's (1995) in Statistics from the Institute of Mathematics and Statistics (IME-USP). She is a consultant on Big Data and Artificial Intelligence projects; professor of Big Data, Artificial Intelligence and Analytics; professor in the area of Quantitative Methods and Informatics at the Faculty of Economics, Administration and Accounting of the University of São Paulo (FEA); coordinator of the CNPq research group Study Center for Econometric Models and Big Data Study Center; and a referee for CNPq and Fapesp. Over her 20-year career, Alessandra has received the “Didactic Performance of the Department of Administration at FEA” award 40 times, was named “Best Professor in the Department of Administration” 4 times, and has received the award for “Best-evaluated professor in undergraduate and graduate courses at FEA” more than 20 times.
