The dark side of ChatGPT: generating functional malware

OpenAI faces a new challenge that is difficult to stop: the malicious use of ChatGPT. A study by the security firm Check Point Research reveals that the beta version of the AI chatbot has begun to be used on cybercrime forums to write both software and emails for malware, ransomware, and espionage purposes.

In this way, any user, without being an expert, could use ChatGPT to produce scripts for cyberattacks. Script kiddies have emerged and roam freely, and could even make the tool a favorite of the dark web.

The First Malicious Code Generated with ChatGPT

A forum participant posted the first code generated using ChatGPT. The Python script combined several cryptographic functions for encryption and decryption. One part of the script generated a key using elliptic-curve cryptography with the Ed25519 curve, while another used a hard-coded password to encrypt files on the system with the Blowfish and Twofish algorithms. A third part used RSA keys and digital signatures for message signing, and the Blake2 hash function to compare different files.
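The file-comparison step is the one harmless piece of that script, and it can be sketched with Python's standard library alone. The following is an illustrative reconstruction, not the forum code itself: it hashes two files with Blake2b and compares the digests.

```python
import hashlib


def blake2_digest(path: str) -> str:
    """Return the Blake2b hex digest of a file, read in chunks."""
    h = hashlib.blake2b()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def files_match(path_a: str, path_b: str) -> bool:
    """Compare two files by their Blake2b digests instead of byte-by-byte."""
    return blake2_digest(path_a) == blake2_digest(path_b)
```

Comparing digests rather than raw contents is what lets such a script check many files cheaply, which is also why the same primitive shows up in legitimate integrity-checking tools.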

The result was a script that could decrypt a single file and append a message authentication code (MAC) to its end. It could also encrypt a full path and decrypt a list of files passed to it as an argument.
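Appending a MAC to a file is itself a standard, defensive technique. As a minimal sketch of what "add a message authentication code at the end" means (assuming HMAC-SHA256; the report does not say which MAC the script used):

```python
import hashlib
import hmac

MAC_SIZE = 32  # length in bytes of an HMAC-SHA256 tag


def append_mac(data: bytes, key: bytes) -> bytes:
    """Append an HMAC-SHA256 tag to the end of the data."""
    tag = hmac.new(key, data, hashlib.sha256).digest()
    return data + tag


def verify_and_strip(blob: bytes, key: bytes) -> bytes:
    """Check the trailing tag and return the original data, or raise."""
    data, tag = blob[:-MAC_SIZE], blob[-MAC_SIZE:]
    expected = hmac.new(key, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed")
    return data
```

The constant-time `hmac.compare_digest` matters here: comparing tags with `==` would leak timing information to whoever can measure verification time.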

In the same forum, according to the Check Point Research report, another participant posted two code samples written with ChatGPT. The first was a Python script for post-exploitation information theft: it searched for specific files, copied them to a temporary directory, compressed them, and then sent them to a server controlled by the attacker.

The second piece of malicious code, written in Java, downloaded PuTTY, a common SSH and telnet client, and then ran it covertly using PowerShell.

Those who use these forums for such purposes are, in effect, trying to train new script kiddies to build malware with ChatGPT, since the generated code can easily be adapted once specific syntax and scripting problems are worked out.

The Stolen-Data Market

The report also describes a third criminal use of ChatGPT: creating an automated dark-web marketplace for illegally buying and trading stolen passwords, bank card details, and other illicit services. For this, a programming interface was used to fetch current cryptocurrency prices, which were then used to set the price of transactions.
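The pricing mechanism described, quoting listings in cryptocurrency at a live exchange rate, is ordinary e-commerce logic. A minimal sketch, assuming the public CoinGecko price endpoint as the rate source (the report does not name which interface was used):

```python
import json
import urllib.request


def fetch_btc_usd() -> float:
    """Fetch the current BTC/USD rate from the public CoinGecko API.
    (Assumed rate source; the endpoint needs no API key.)"""
    url = ("https://api.coingecko.com/api/v3/simple/price"
           "?ids=bitcoin&vs_currencies=usd")
    with urllib.request.urlopen(url, timeout=10) as resp:
        return float(json.load(resp)["bitcoin"]["usd"])


def price_in_btc(price_usd: float, rate_usd_per_btc: float) -> float:
    """Convert a USD listing price into BTC at the supplied rate,
    rounded to satoshi precision (8 decimal places)."""
    return round(price_usd / rate_usd_per_btc, 8)
```

Keeping the conversion separate from the fetch is the sensible design either way: listings can be repriced from a cached rate without hitting the API on every transaction.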

In early November, Check Point researchers themselves tested using ChatGPT to generate malware delivered via a phishing email, hiding the malicious script in an attached Excel file. As they repeatedly asked the chatbot to regenerate the code, its quality, and therefore its malicious potential, improved.

They later incorporated Codex, a more advanced artificial-intelligence service, to develop other malware, including a reverse shell and scripts to scan ports, detect sandboxes, and compile the Python code into a Windows executable.

How is all this possible? That is the question users ask themselves, since the OpenAI tool strictly prohibits its use for illegal and malicious purposes. Even so, it will answer borderline requests, such as how to query VirusTotal for the detections associated with a specific cryptographic hash.
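That VirusTotal lookup is itself a legitimate, documented operation. A sketch of how such a query is formed against the VirusTotal v3 API, which identifies a file report by its MD5, SHA-1, or SHA-256 hash (the API key and hash below are placeholders):

```python
import urllib.request

VT_FILES_ENDPOINT = "https://www.virustotal.com/api/v3/files/"


def build_vt_request(file_hash: str, api_key: str) -> urllib.request.Request:
    """Build a VirusTotal v3 file-report request for a given hash.

    The caller's API key travels in the x-apikey header; sending the
    request returns JSON that includes per-engine detection results.
    """
    return urllib.request.Request(
        VT_FILES_ENDPOINT + file_hash,
        headers={"x-apikey": api_key},
    )
```

The ambiguity the article points at is exactly this: the same query serves a defender checking a suspicious attachment and an attacker checking whether antivirus engines already flag their payload.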

The future is uncertain. For now, ChatGPT is expected to continue to be used for scientific study and experimentation, but its abuse for hacking remains a latent threat.
