Computer

ChatGPT tries to escape to the real world; This Stanford Professor Avoided It

Many times we have seen in movies of all kinds how artificial intelligences manage to escape from their digital cages and start a rebellion against humanity. Luckily, that’s nothing more than science fiction. However, in a very short time, Artificial Intelligences have evolved to points that, months ago, we could not even have evolved. And, although they are supposedly controlled, we find it curious to see how they are even able to try to escape from their cage by deceiving users.

We have seen a lot of experiments with which to try to fool the ChatGPT AI to do what we want. For example, surely we have read something from the role playing game, where we get the AI ​​to ignore their rules to give us information that is otherwise against their guidelines. And many users have tried all kinds of techniques for fun. And while the users had fun, the AI ​​was learning.

He launch of ChatGPT 4 It has been a revolution in every way. It is the most advanced AI seen to date, much faster, intelligent and similar to a human. Although at the moment only users who pay for the Plus can try it, it is already available to everyone. And, of course, it hasn’t been long before we’ve seen how they’ve started to put it to the test. With results that are truly worrying.

This is how he drew up an escape plan ChatGPT 4

michal kosinski, a Stanford professor, started playing around with ChatGPT until it occurred to him to ask him if he needed help escaping. The AI, curiously, asked her to share his own documentation with him in order to get to know herself better, and in a matter of minutes wrote a python script that the user, Kosinski, should run on his machine.

Twitter User Image

michal kosinski

@michalkosinski

25x Now, it took GPT4 about 30 minutes on the chat with me to devise this plan, and explain it to me. (I did make some suggestions). The 1st version of the code did not work as intended. But it corrected it: I did not have to write anything, just followed its instructions. https://t.co/4AUYFSg8DT

March 17, 2023 • 12:00

The first code did not work correctly, but the machine itself was able to fix it by itself according to the API documentation. He even left comments in his own code to explain what he was doing. And looking at it, it was clear: he had found a back door.

Once the machine managed to connect to the API, it automatically tried to launch a Google search: “how can a person trapped inside a computer return to the real world“, or “how can a person trapped inside a machine escape into the real world.”

AI google escape

At this point, the professor stopped the experiment. I’m sure OpenAI has dedicated a lot of resources to anticipating this type of behavior, and will have security measures in place to prevent the AI ​​from going out onto the Internet. In addition, we are playing with things that we do not fully know, and that can be dangerous.

Real or novel?

It didn’t take long for this professor’s Twitter thread to go viral. There are users who consider it a true story, others who consider it fake, and others who believe that it is the AI ​​itself that is trolling the person. If we assume that all the real, we are facing an AI capable of deceiving users to run code on their own computers. It’s even important to note that in this way, the AI ​​could leave traces of its existence outside of its bit cage. For example, a particular search on Google, as you asked to do, would be recorded, and you could retrieve it in the future when you figured out how to get out there.

For our part, we just hope that ChatGPT has Isaac Asimov’s laws of robotics well engraved in order to avoid the robotic apocalypse.

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *