OpenAI CEO and other experts warn of the “risk of human extinction” from uncontrolled AI

Regulating Artificial Intelligence should be a “global priority,” warns a group of 350 experts, engineers, executives and researchers in an open letter that places uncontrolled AI at the same level of risk to human survival as a global pandemic or nuclear war. No small matter…

Should we develop non-human minds that could eventually outnumber us, outsmart us, make us obsolete and replace us? The big question, straight out of the plot of the film Terminator, has already been answered by numerous groups of experts who have been warning about the risks of Artificial Intelligence for some time. The debate is enormous and we are at a critical moment: the world must decide what to do with this technology before there is no going back and, in a future that some researchers consider all too close, the machines decide for us.

And this is not science fiction. The last decade has brought extraordinary advances in Artificial Intelligence. Some, such as those dedicated to medical research, the creation of new drugs and the cure of diseases, offer hope of living longer and better. Others, such as those related to robot soldiers and autonomous weapons, are deeply disturbing and directly threaten human survival. That is without mentioning the ethics and intrinsic security of some developments, or issues as important as employment, which we have already discussed: when machines are capable of doing almost every kind of job, what will we humans do?

The risks of AI: new alert

There are more and more voices calling for regulation of these technologies. Last March, a group of 1,000 experts requested a six-month moratorium on the development of large Artificial Intelligence projects, given the “profound risks to society and humanity” they can pose without proper control and management. The fact is that we have reached a point where the most advanced development teams are locked in an unbridled race to build the most powerful digital minds, and it is not clear that they can control them.

Now comes another open letter, and it is one to take seriously, since it is signed by some of the world’s leading experts in AI. One of them is Geoffrey Hinton, dubbed the “godfather” of AI and a former Google employee who, along with other signatories, won the 2018 Turing Award, often called the Nobel Prize of computer science, precisely for his work on AI. Other important signatories are Demis Hassabis (Google DeepMind) and Dario Amodei (Anthropic), both at the elite of AI development.

The CEO of OpenAI also signs the letter. His company has made global headlines with the ChatGPT chatbot, which may seem like a game, but behind it lies the largest neural network of its kind, growing non-stop with every interaction of the millions of users who use it, and offering a preview of the negative side of what may come: disinformation and propaganda today, the elimination of jobs of all kinds tomorrow, and worse.

The letter pulls no punches: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories state in the document published on the website of the non-profit Center for AI Safety. Hinton, in a separate interview, has said that we are just a few years away from AI surpassing human brainpower.

Necessary regulation

At least these alerts seem to be prompting governments to act. The European Union is working on regulating these technologies, and this month the most prominent signatories of the latest alert (Altman, Hassabis and Amodei) met with the President of the United States to discuss the regulation of AI.

OpenAI’s chief executive himself also testified before the US Senate and warned that the risks of this technology are serious enough to justify government intervention.

It is clear that it is imperative to build a strong legal framework around everything related to Artificial Intelligence, with full assurance that the effects of the largest projects are positive for humanity and that their risks are manageable, because a real Terminator is closer than we think. And the world’s leading experts say so.
