
The danger of malware stowaways in artificial intelligence

It is the nightmare of every artificial intelligence vendor: a model contaminated by malware. Yet that is exactly what hackers could strive to achieve, with malware that becomes undetectable, according to a new study.

A study conducted by researchers at the University of California, San Diego, and the University of Illinois has caused a stir in the field of artificial intelligence: they succeeded in infecting an AI model with malware that antivirus software cannot detect.

Malicious models

As part of this study, the researchers turned AlexNet, an artificial intelligence specializing in image recognition, into what is called an EvilModel: a model that carries malicious code hidden inside it. AlexNet's accuracy barely suffered, declining by only 1%, so on the user side it is impossible to tell the difference.
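
To give a rough idea of how such a stowaway payload could ride inside a model without hurting its accuracy, here is a minimal sketch of one possible approach: hiding the payload in the least significant byte of each 32-bit weight. The functions, the placeholder payload, and the layer size below are illustrative assumptions, not the exact method used in the study.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bytes in the low-order byte of each float32 weight
    (assumes a little-endian platform)."""
    flat = weights.astype(np.float32).ravel().copy()
    raw = flat.view(np.uint8).reshape(-1, 4)          # 4 bytes per float32 value
    if len(payload) > raw.shape[0]:
        raise ValueError("payload larger than the number of weights")
    raw[: len(payload), 0] = np.frombuffer(payload, dtype=np.uint8)
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int) -> bytes:
    """Recover the hidden bytes from an infected weight tensor."""
    raw = weights.astype(np.float32).ravel().view(np.uint8).reshape(-1, 4)
    return raw[:length, 0].tobytes()

# Stand-in for one large AlexNet layer: the per-weight perturbation is tiny,
# so the network's predictions are almost unchanged.
w = np.random.randn(4096, 4096).astype(np.float32)
secret = b"harmless placeholder payload"
stego = embed_payload(w, secret)
assert extract_payload(stego, len(secret)) == secret
print("max change per weight:", np.abs(stego - w).max())
```

Because only the lowest mantissa bits of each weight are touched, the numerical change per parameter is on the order of one part in a hundred thousand, which is consistent with the small accuracy drop reported in the article.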

The security researchers then built a bigger EvilModel: this time, the AI's performance fell by 10%, which is still barely noticeable for the user. The real problem is that these malicious models are currently undetectable by antivirus software.

Note that the malware's payload is only activated when the application that uses the AI is itself compromised. Yet more and more apps rely on artificial intelligence, from photo editing to voice recognition.

As for hackers, one avenue of contamination would be to publish functional but discreetly infected models on platforms frequented by developers (GitHub, for example), or to slip the payload into an application update. There is plenty of work ahead for antivirus companies.
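
Pending better detection, one basic precaution for developers who download pre-trained models is to verify the file against a checksum published by its author through a separate, trusted channel before loading it. A minimal sketch, in which the file name and the expected hash are hypothetical placeholders:

```python
import hashlib
from pathlib import Path

# Hypothetical values: replace with the real file and the author's published checksum.
MODEL_FILE = Path("alexnet_weights.bin")
PUBLISHED_SHA256 = "checksum-published-by-the-model-author"

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(MODEL_FILE) != PUBLISHED_SHA256:
    raise RuntimeError("Downloaded model does not match its published checksum; do not load it.")
```

Such a check only catches models tampered with after publication, for example through a compromised mirror or update; it does nothing against a model that was infected by its own publisher, which is precisely why detection tools will also be needed.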
