The CNIL issues an ultimatum to ClearView AI

Accused of tirelessly vacuuming up photos and videos from the Internet to feed its facial recognition system, ClearView AI will have to answer to the French regulator.

In just four years of existence, ClearView AI has already alarmed European privacy advocates several times. The young facial recognition specialist has turned the Internet to its advantage: since its creation, it has collected more than 10 billion faces to feed its artificial intelligence. These photos and videos, scraped from social platforms and ordinary websites, are of great concern to the CNIL, the French authority responsible for data protection and civil liberties.

After a first action brought in 2020 by Jumbo Privacy, then a complaint from the NGO Privacy International earlier this year, the regulator finally decided to crack down. This Thursday, December 16, the CNIL gave the company formal notice to stop collecting and using data on Internet users residing in France, and to delete all of this biometric information within two months.

What does the CNIL accuse ClearView of?

From a purely technical standpoint, ClearView uses no backdoor or illegal means to obtain the data of millions of Internet users around the world. The company simply mass-extracts publicly available information from targeted sites, an approach known as “scraping” that conflicts with the European General Data Protection Regulation (GDPR). In its formal notice, the CNIL points in particular to the collection and use of biometric data “without legal basis”, as well as “the lack of satisfactory and effective consideration of the rights of individuals, in particular requests for access”.
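To make the term concrete, here is a minimal, hypothetical sketch of what “scraping” looks like in practice: fetching a public page and listing the image URLs it exposes. The placeholder URL and the libraries used (requests, BeautifulSoup) are illustrative assumptions, not a description of ClearView’s actual pipeline, which operates at the scale of billions of images.

```python
# Minimal illustration of "scraping": fetch a public page and extract
# the image URLs it contains. The target URL is a placeholder; this is
# not ClearView's actual system.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

PAGE_URL = "https://example.com/public-profile"  # placeholder page

response = requests.get(PAGE_URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

# Collect the absolute URL of every <img> tag found on the page.
image_urls = [
    urljoin(PAGE_URL, img["src"])
    for img in soup.find_all("img")
    if img.get("src")
]

for url in image_urls:
    print(url)
```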

Even if ClearView merely aggregates data that anyone can view on the Internet, the sheer size of the database the company has built, and the simple fact of marketing it, are problematic. For the CNIL, “a biometric template is thus created without the consent of the persons concerned”. This data is particularly sensitive, the French regulator notes, “especially because it is linked to our physical identity (what we are) and allows us to be uniquely identified”. What is more, “the vast majority of people whose images are fed into the search engine do not know that they are affected by this system”.
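For readers wondering what a “biometric template” means here, the sketch below shows how a single photo can be reduced to a numeric face encoding, assuming an open-source library such as face_recognition is used. It is purely illustrative and says nothing about ClearView’s proprietary system.

```python
# Illustrative only: turning a photo into a "biometric template"
# (a face embedding) with the open-source face_recognition library.
# This is a generic technique, not ClearView's own system.
import face_recognition

# Load a local photo (placeholder filename) and compute an encoding
# for the first face detected in it.
image = face_recognition.load_image_file("photo.jpg")
encodings = face_recognition.face_encodings(image)

if encodings:
    template = encodings[0]   # a 128-dimensional NumPy vector
    print(template.shape)     # (128,)
    # Comparing two such templates is enough to decide whether they
    # belong to the same person, which is what makes the data biometric.
```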

Sale of data to law enforcement

Beyond violating the GDPR, ClearView’s huge database raises many ethical issues. Without even mentioning the potential abuse of this biometric data by unscrupulous advertisers, the company notably sells its services to police forces in several countries. In the United States, for example, ClearView’s facial recognition system was used to identify participants in the violent riot at the Capitol last January. More broadly, the software is also used to identify the faces of suspects or witnesses captured by surveillance cameras in the course of an investigation. The Swedish police are also reported to have used the company’s services.

While the CNIL may be the first European regulator to order ClearView AI to stop its illegal activities, it nevertheless indicates that it has “cooperated with its European counterparts in order to share the results of the investigations, each authority being competent to act on its own territory due to the lack of establishment of the company in Europe”.

ClearView now has two months to get back in line. After this period, the company risks a financial penalty from the CNIL. At the European level, lawmakers are still working to provide an ethical framework for facial recognition and artificial intelligence tools.
