This system, presented as a virtual “prosecutor”, is reportedly 97% accurate from a simple “verbal description” of the case.
Systems based on artificial intelligence are already doing wonders in many areas, from everyday tasks to basic research. But there are still parts of our society, such as health or justice, that we are not yet ready to hand over entirely to machines … at least, in some parts of the world. China is not one of them; the country has recently developed an AI capable of “replacing a prosecutor in the decision-making process, to a certain extent”, according to its designers.
According to them, the machine is currently being tested by the Shanghai Pudong People’s Procuratorate, the country’s largest prosecution office. According to the first results, this virtual “prosecutor” is said to be 97% accurate based on a simple “verbal description” of the case. We don’t know exactly what that 97% refers to, but it appears to be a comparison between the charging documents produced by this system and those produced by humans.
According to the South China Morning Post, China already has such technologies; some institutions use, for example, a program called System 206. That system can assess on its own the “strength of evidence”, the “conditions necessary for an arrest”, and “the dangerousness of the suspect for the public”. But it remains a tool meant to assist humans in decision-making.
Unclogging the judiciary
However, this new system, which can run on any standard computer, goes further. It was trained on a set of 17,000 cases from 2015 to 2020. This allowed it to extract around 1,000 parameters, most of which would be far too abstract for a human to spot.
This program, however, can interpret them perfectly. Once the strength of the case has been validated by System 206, the new program uses these parameters to draft the charging documents. As it stands, it can do this for the most common offenses in the Shanghai area. The list includes, but is not limited to, credit card fraud, dangerous driving, theft, and public disturbance. According to the designers, this frees human specialists to devote more time to more complicated cases.
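The article gives no implementation details, so any concrete picture is speculative. Still, the described pipeline — learn parameters from past cases, then match a new description to a charge — resembles a text classifier. The toy sketch below (invented examples, a crude bag-of-words model, nothing from the real 17,000-case set or its 1,000 parameters) only illustrates that general idea:

```python
# Hypothetical sketch only: the real system's method is not public.
# A naive bag-of-words classifier mapping a case description to a charge.
from collections import Counter, defaultdict
import math

# Toy "past cases" (invented for illustration), as (description, charge) pairs.
CASES = [
    ("used a stolen credit card at several shops", "credit card fraud"),
    ("cloned card numbers and withdrew cash", "credit card fraud"),
    ("drove drunk through a red light", "dangerous driving"),
    ("sped through a school zone at night", "dangerous driving"),
    ("broke into a flat and took jewellery", "theft"),
    ("pickpocketed a wallet on the metro", "theft"),
]

def train(cases):
    """Count word frequencies per charge -- a crude stand-in for the
    abstract parameters the real system is said to extract."""
    counts = defaultdict(Counter)
    for text, label in cases:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, description):
    """Pick the charge whose word distribution best fits the description,
    using an add-one-smoothed log-likelihood score."""
    words = description.lower().split()
    def score(label):
        c = counts[label]
        total = sum(c.values())
        return sum(math.log((c[w] + 1) / (total + 1)) for w in words)
    return max(counts, key=score)

model = train(CASES)
print(classify(model, "paid with a stolen credit card"))  # → credit card fraud
```

A real system would of course work on Chinese-language case files with far richer features, but the train/score split above is the usual shape of such a pipeline.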
AI applied to justice, a sensitive issue
But of course, it also raises a plethora of ethical questions, starting with that 97% accuracy figure. It is certainly impressive, and nothing proves that humans necessarily do better; but even a small margin of error seems difficult to defend in a process where the slightest imprecision can have serious consequences for those involved.
In addition, the SCMP specifies that this program relies on data entered by the humans who compiled the case file; the system therefore does not completely remove the human component. It also means it cannot improvise to find the best answer to an unprecedented case; yet we know from experience that an AI tends to produce completely aberrant results once it strays outside its comfort zone. There is also the risk that such technologies will be misused by unscrupulous actors to legitimize arbitrary court decisions. This is all the more worrying given that the SCMP presents this system as a “prosecutor”, whose role is theoretically to defend the interests of society on behalf of the State.
It will therefore be necessary to follow closely the evolution of this fascinating technology, and above all that of the corresponding legislation governing its use, because similar systems are likely to find a place in many judicial systems in the decades to come.