
Facebook has a new way to guess which tool was used to create a deepfake

From a single deepfake image, Facebook claims to be able to discover which tool was used to create it.

What if you could uncover the secrets of how a deepfake is made in order to detect it better? That is the ambitious idea behind Facebook's research, conducted in partnership with Michigan State University, whose results were published on June 16.

The scientists have developed a machine learning model whose role is to reverse-engineer a deepfake, that is, to guess which model (its architecture, its number of layers, etc.) was used to generate it, even if that model was previously unknown. In other words, the researchers want to uncover the process by which the image was made. If they succeed, they will be able both to improve the detection of deepfakes generated by certain models and to group deepfakes from the same "family", which would help trace them back to the people who created them. "This work will give researchers tools to better investigate cases of coordinated disinformation campaigns that rely on deepfakes," the authors of the article write.

Anything can be the subject of a deepfake. Even cats. // Source: Thesecatsdonotexist

To explain its initiative to the general public, Facebook draws a parallel between a deepfake and… a car: "Different cars may look the same, but under the hood they will have engines with very different components. Our reverse engineering technique is a bit like recognizing the components of the car based on the noise the vehicle makes, even if it's a car we've never heard before."

Discovering an architecture from fingerprints

Reverse engineering and machine learning do not seem like an obvious fit, because of the black box phenomenon. The developers know how their model is built and which images are fed to it to generate the deepfake. They also know the model's final product: the deepfake itself. The details of what happens in between, however, remain relatively unknown; hence the term black box.

Facebook, when trying to detect a deepfake, only has the end product: the image. This is where the company claims to differ from existing methods: in principle, its tool needs only this information to discover the architecture of the model used by the deepfake generator, even if that architecture was previously unknown.

How? Thanks to the "fingerprints" contained in the image. "In digital photography, fingerprints identify which device was used to produce the image," the researchers recall, and they are trying to transpose this principle to deepfakes.
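To make the analogy concrete, here is a minimal, illustrative sketch in Python of what an image "fingerprint" can mean in practice: the faint noise residual that remains once the visible scene content is removed. This is not Facebook's method; the Gaussian denoising step and the averaging over several images are assumptions chosen for simplicity.

```python
# Illustrative only: a "fingerprint" as the high-frequency residual left
# after removing the image content, where device- or generator-specific
# noise patterns are typically sought.
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Return the high-frequency residual of a grayscale image.

    The smooth, denoised part approximates the scene content; subtracting
    it leaves the subtle noise in which fingerprints are usually looked for.
    """
    image = image.astype(np.float64)
    denoised = gaussian_filter(image, sigma=sigma)
    return image - denoised

if __name__ == "__main__":
    # Averaging residuals over many images from the same source reinforces
    # the shared fingerprint and washes out scene-dependent noise.
    rng = np.random.default_rng(0)
    images = rng.uniform(0, 255, size=(10, 64, 64))  # placeholder images
    fingerprint_estimate = np.mean(
        [noise_residual(img) for img in images], axis=0
    )
    print(fingerprint_estimate.shape)  # (64, 64)
```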

The researchers have trained what they call a "fingerprint estimation network" (FEN) to accomplish this task. "Fingerprints are subtle, but they leave unique patterns on each image due to imperfections in the manufacturing process," the authors explain.
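As a rough illustration of what such a network could look like, here is a minimal PyTorch sketch of a convolutional model that maps a deepfake image to a same-sized fingerprint estimate. The layer sizes and structure are assumptions made for illustration; the actual FEN has its own architecture and training constraints.

```python
# A toy fingerprint estimation network: image in, same-sized fingerprint out.
import torch
import torch.nn as nn

class FingerprintEstimationNet(nn.Module):
    def __init__(self, channels: int = 3, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(width, width, kernel_size=3, padding=1),
            nn.ReLU(),
            # Output has the same shape as the input: the estimated fingerprint.
            nn.Conv2d(width, channels, kernel_size=3, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.net(image)

if __name__ == "__main__":
    fen = FingerprintEstimationNet()
    fake_batch = torch.randn(4, 3, 128, 128)  # placeholder deepfake images
    fingerprints = fen(fake_batch)            # per-image fingerprint estimates
    print(fingerprints.shape)                 # torch.Size([4, 3, 128, 128])
```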

The project is still in its early stages

Once this model has extracted the fingerprints, a second model developed by Facebook analyzes them ("parsing", in English) to predict the "hyperparameters" of the deepfake generator, that is, its architecture, its number of layers, the type of operations performed at each layer, and so on.
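Here is a minimal PyTorch sketch of what this parsing step could look like: a network that takes the estimated fingerprint and outputs one prediction per hyperparameter. The specific heads shown (depth as a regression, block type and loss type as classifications) are illustrative assumptions, not the exact set described in the published work.

```python
# A toy hyperparameter "parser": fingerprint in, one prediction per
# hyperparameter of the suspected generator out.
import torch
import torch.nn as nn

class HyperparameterParser(nn.Module):
    def __init__(self, channels: int = 3, num_block_types: int = 5,
                 num_loss_types: int = 4):
        super().__init__()
        # Shared encoder that summarizes the fingerprint into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # One head per hyperparameter (hypothetical examples).
        self.num_layers_head = nn.Linear(64, 1)                 # network depth
        self.block_type_head = nn.Linear(64, num_block_types)   # type of layer/block
        self.loss_type_head = nn.Linear(64, num_loss_types)     # training loss family

    def forward(self, fingerprint: torch.Tensor) -> dict:
        features = self.encoder(fingerprint)
        return {
            "num_layers": self.num_layers_head(features),
            "block_type_logits": self.block_type_head(features),
            "loss_type_logits": self.loss_type_head(features),
        }

if __name__ == "__main__":
    parser = HyperparameterParser()
    fingerprint = torch.randn(4, 3, 128, 128)  # output of the fingerprint network
    predictions = parser(fingerprint)
    print({k: v.shape for k, v in predictions.items()})
```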

From a single image, the tool therefore estimates which model was used to create it, with the hope that the estimate is as close as possible to the model actually used by the creators of the deepfake. The project is only in its infancy, but the Michigan State researchers have already started initial tests using deepfake generators. "Since we are the first to do model parsing, we have no basis for comparison," they conclude.

Though sometimes exaggerated, the threat of deepfakes being used in disinformation campaigns is regularly raised by public authorities. Facebook, for its part, regularly devotes resources to the problem: last year, for example, it held the first edition of its deepfake detection contest.
