Experts claim that AI could lead to extinction

According to experts at leading AI firms, including Google DeepMind and OpenAI, AI could lead to the end of humanity.

To minimize the impact of AI, it has been argued that the issue should be addressed globally alongside other major societal risks like nuclear war or pandemics.

Others believe that AI’s negative effects are exaggerated.

Demis Hassabis of Google DeepMind, Sam Altman of OpenAI, and Dario Amodei of Anthropic all signed the statement.

A website that focuses on AI safety outlines several possible disaster scenarios.

One scenario involves AI being weaponized. For instance, AI could be used to develop chemical weapons.

AI’s power could become concentrated in a few hands, which could enable repressive regimes to carry out surveillance and suppress dissent.

Humans could eventually become dependent on AI, similar to the scenario depicted in the film Wall-E.

Geoffrey Hinton, a prominent scientist who had earlier warned about the dangers of AI, has also backed the statement.

Yoshua Bengio, a computer science professor at the University of Montreal, also signed on to the statement.

For their contributions to the field of computer science, Geoffrey Hinton, Yoshua Bengio, and Yann LeCun are often described as the “godfathers” of AI; the three jointly received the Turing Award in 2018.

According to LeCun, the apocalyptic warnings are over-hyped. He also claims that most researchers respond to such warnings by “face-palming”.

‘Fracturing reality’

Other experts likewise consider the fears about AI unrealistic. They believe its dangers are exaggerated and that the technology simply needs to be managed properly.

According to Arvind Narayanan, a computer scientist at Princeton University, current AI is nowhere near capable enough to bring about such apocalyptic scenarios. He argued that the focus on them has distracted the public from AI’s near-term harms.

Elizabeth Renieris, a research associate at Oxford’s Institute for Ethics in AI, said that she is more concerned about risks closer to the present.

According to Renieris, advances in AI could lead to the development of biased and discriminatory decision-making systems.

The rapid spread of misinformation could also erode public trust and deepen inequality, especially for people on the margins of society.

According to Ms. Renieris, AI tools essentially free-ride on the whole of human experience to date. Many are trained on human-made videos, texts, and music, and their creators have effectively transferred tremendous wealth and influence to a select few private organizations.

Dan Hendrycks, director of the Center for AI Safety, said that present-day harms and future risks should not be treated as competing concerns.

He stated that addressing some of the present issues could help in addressing the dangers of the future.

Superintelligence efforts

Media coverage of the alleged existential threat from AI has intensified since March 2023, when prominent individuals, including Elon Musk, signed an open letter calling for a halt to the development of more powerful AI systems.

The letter asked whether we should develop non-human minds that might eventually outsmart and replace humans.

The new campaign, by contrast, consists of a very short statement intended to open up the discussion.

The statement compared the threat of AI to that of nuclear war. In response, OpenAI suggested that superintelligence should be regulated in the same way as nuclear energy. The firm noted that we might eventually require an international agency for this type of work.

‘Be reassured’

Several tech leaders, including Google’s Sundar Pichai and OpenAI’s Sam Altman, have spoken with the UK prime minister about the need for regulations related to AI.

During a press briefing, Rishi Sunak discussed the benefits of AI and the latest warning about its potential risks.

He noted that AI could potentially help improve the lives of people by developing new antibiotics and assisting paralyzed individuals in walking.

Last week, he met with the heads of major AI organizations to discuss the various regulations that need to be implemented to ensure the safety of the public.

Mr. Sunak acknowledged that reports of AI posing existential threats on the scale of nuclear war or pandemics are likely to worry people, but said he wanted to assure everybody that the government is taking the matter very seriously.

Mr. Sunak said that he had discussed the issue with other leaders during the G7 summit. He noted that he would bring up the issue again in the US.

The G7 has also formed a working group on the subject of AI.
