
Most observers cannot tell deepfakes from real humans

In a recent study, a panel of test subjects proved unable to distinguish deepfakes from real humans. Worse: by some metrics, the fake faces even looked more realistic than the real ones!

In recent years, no one has been able to escape the phenomenon of deepfakes, those videos altered to make a person look like someone else. The technology keeps progressing, and the neural networks behind it are now capable of producing results that look more real than life itself… perhaps even a little too real.

This, at least, is what emerges from the work of American researchers, who tested 300 volunteers to determine whether they could distinguish a face entirely synthesized by an AI from a photo of a real human.

Each participant viewed a set of 128 images drawn from the same pool of 400, some real and some synthesized by the AI. The images were distributed evenly in terms of diversity to avoid any racial or gender bias. The results speak for themselves: on average, participants correctly identified less than half of the synthetic faces, 48.2% to be precise.

Humans find it increasingly difficult to tell the difference.

This 1.8-point deviation from chance (50%) might seem anecdotal, but it is far from negligible. From a strictly statistical point of view, it marks a tipping point: as a rule, these artificial faces have become so sophisticated and realistic that they fool our brains the majority of the time.
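To get a feel for why a 1.8-point gap matters, here is a minimal sketch of a two-sided proportion test against chance. The sample size below is a hypothetical placeholder for illustration only; the article does not report the exact number of synthetic-face trials, so the numbers are assumptions, not the study's own analysis.

```python
from statistics import NormalDist

# Hypothetical trial count: 300 viewers x 128 images, assuming ~half were synthetic.
# This is an illustrative assumption, not a figure from the paper.
n_trials = 300 * 128 // 2
p_hat = 0.482   # reported average accuracy on synthetic faces
p0 = 0.5        # chance level for a two-way real/fake judgment

# Two-sided z-test for a proportion against chance
se = (p0 * (1 - p0) / n_trials) ** 0.5
z = (p_hat - p0) / se
p_value = 2 * NormalDist().cdf(-abs(z))

print(f"z = {z:.2f}, p = {p_value:.2g}")
```

Under these assumed numbers, a deviation of less than two percentage points still comes out highly significant, because it is averaged over thousands of judgments.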

These results are corroborated by two complementary experiments on the same subject. In the second, the researchers put the panel through the same protocol, but only after training them on the telltale clues that can betray a deepfake. Even with this head start, participants identified only 59% of the fake images, still far too low to spot fakes reliably.

Finally, and this is perhaps the most striking result, participants had to assign a realism score to each image, on a scale where 0 meant a face that did not look human at all and 10 meant a real human. On average, this realism score was 7.7% higher for the deepfakes!

A gigantic risk of massive misinformation

This reality can be viewed in two distinct ways. On the one hand, the researchers acknowledge a "real success" for artificial intelligence research. And given how interconnected different AI applications are, it also demonstrates the progress of the technology as a whole, not just within this niche.

The first consequences that come to mind, however, are far from encouraging. While deepfakes can serve as harmless entertainment, in practice the technology is also widely used in contexts that are much harder to defend.

There are, for example, countless cases of political speeches being hijacked, whether in good humor, to degrade, or for outright disinformation. There is also deepfake-based pornography, which can do considerable damage to the people whose identities are stolen in this way.

For all these reasons, the researchers believe the authors of these programs will have to weigh the risk-benefit balance of their systems. That means answering an important question: is it really necessary to develop this technology simply because it is possible, especially before identifying a concrete use beyond entertainment?

A risk multiplied by the context

Because, as the researchers explain, the flip side of the coin is that "anyone can now create compelling synthetic content without specialist knowledge". That is an open door to real disasters in terms of misinformation.

Just imagine the current context around Russia, where every actor is on alert and primed to react at a moment's notice; it would only take one clever operator making a particularly convincing deepfake of a president delivering a bellicose speech go viral to set the internet ablaze until the clip is debunked.

The researchers also point out that the realism of these deepfakes tends to affect real photos as well. In a context where it becomes ever harder to tell the two apart, people may also begin to doubt the authenticity of genuinely real documents, with effects just as disastrous as the reverse.

The research paper is available here.
