Researchers use Artificial Intelligence to predict crimes, amid questions about bias and racism

Several groups of researchers have been using Artificial Intelligence to predict crimes before they are committed. But more than a few experts doubt that, at least so far, this has been done free of socioeconomic and racial biases. That is what happened with an AI crime prediction model the Chicago Police Department tested in 2016. It was adopted in part to leave racial bias behind, but it had the opposite effect. The department used an Artificial Intelligence model to deduce who was most at risk of being involved in a shooting, and its results flagged 56% of the city's Black men between the ages of 20 and 29.

Despite this failure, scientists continue searching for a system to predict when and where crime may occur. And now a group of researchers from the University of Chicago claims to have come up with one that is bias-free. Or so they say. According to New Scientist, the team used an Artificial Intelligence model to analyze historical crime data from 2014 to 2016 in Chicago in order to predict the city's crime levels several weeks in advance.

The model predicted the probability of crimes in the city a week in advance with almost 90% accuracy. When it was applied to seven other large cities in the United States, it achieved roughly the same hit rate. In addition, the study allowed the researchers to examine how police respond to crime patterns.

Ishanu Chattopadhyay, a professor at the University of Chicago and a member of the research team, points out: "These predictions allow us to study disturbances in crime patterns that suggest the response to increased crime is biased by neighborhood socioeconomic status, draining police resources from socioeconomically disadvantaged areas, as has been shown in eight large American cities. The resources of state security forces are not infinite, so if you want to use them optimally, it would be great to be able to know where there is a high probability of homicides."

Chattopadhyay has also noted that, although the data used by their model may itself be biased, the researchers worked to reduce this effect by not identifying suspects and instead identifying the places where crimes are committed.

The group has also used the data to analyze the police response and the areas where human bias affects policing. To do this, they analyzed the arrests that followed crimes in Chicago neighborhoods of different socioeconomic levels. According to the results, crimes in wealthier areas led to more arrests than those in poorer areas, suggesting a bias in the police response.

However, there is still concern about racism in this Artificial Intelligence research. Lawrence Sherman of the Cambridge Centre for Evidence-Based Policing has pointed out that, because of the way crimes are recorded (either because people call the police or because the police look for crimes), the entire data system is susceptible to bias, and that it "could be reflecting intentional discrimination by the police in certain areas."

Chattopadhyay points out that AI predictions could be used more safely to inform high-level authorities, rather than to allocate police resources directly. For now, the researchers have made public the data and the algorithm they used, so that others can work with the results.
