The European Parliament has adopted a resolution rejecting the principle of a state-run social rating system. The European Commission also considers the practice unacceptable.
While it carries no binding legal force, the resolution has the merit of establishing the European Parliament's position ahead of possible future debates. During a plenary vote on October 6, 2021, MEPs overwhelmingly rejected the idea of algorithmically scoring individuals.
The resolution, adopted by 377 votes in favor, 248 against and 62 abstentions, states that the European Parliament "considers that any form of normative rating of citizens on a large scale […] results in a loss of autonomy, threatens the principle of non-discrimination and cannot be considered in line with fundamental rights, in particular human dignity."
MEPs specifically advocate banning any social score established by public authorities, particularly in areas as sensitive as law enforcement and the judiciary. In this, they align with the opinion of the High-Level Expert Group on Artificial Intelligence that Brussels set up in 2018 to examine the ethical issues raised by AI.
An unacceptable practice for Brussels
The Commission appears to be on the same page. In April, it classified state social scoring systems as an unacceptable risk – the highest of its four risk levels, a category that also covers voice-assistant toys that encourage dangerous behavior in minors, behavioral manipulation and the subversion of free will.
It should be noted that rating practices already exist in Europe. In the private sector, the public is routinely asked to rate others – a ride-hailing driver or a delivery person, for example. Customers themselves may also be rated in certain circumstances: Uber, for instance, allows drivers to give a numerical rating to the passengers they transport.
The resolution passed by Parliament takes an equally harsh view of other issues, such as private facial recognition databases (a reference to the Clearview scandal), automated recognition in public spaces, and biometric screening at borders. These practices should be banned, the elected officials judge, as should predictive policing.
Inevitably, the European resolution calls to mind the anticipatory science fiction of a now-famous episode of the series Black Mirror, which follows a character struggling to salvage her personal rating – a score that, when high, entitles her to various benefits and conveniences. But this device also has a real-world counterpart in China: the social credit system.
This system, run by the Chinese government, has been criticized for its standardizing effect on behavior and for the creeping exclusion of part of the population. It has been documented that millions of people, given scores deemed too low, were barred from taking transport, from entering certain establishments, or from carrying out certain everyday activities.