AI predicts crime a week in advance with 90% accuracy, but it can also perpetuate racist bias


RoboCop may have gotten a reboot in the 21st century: an algorithm can now predict future crimes a week in advance with 90% accuracy.

Artificial intelligence (AI) tools predict crime by learning temporal and geographic patterns in violent and property crimes.

Data scientists at the University of Chicago trained computer models using public data from eight major cities in the United States.

However, the model has proven controversial because it does not take into account systemic biases in police enforcement or the complex relationship between crime and society.

Similar systems have been shown to perpetuate racist bias in policing, and this model could reproduce the same problem.

However, the researchers argue that the model can be used to reveal such biases, and that it should only be used to inform current policing strategies.

They also found that socio-economically disadvantaged areas may receive disproportionately less police attention than wealthier areas.


Violent crimes (left) and property crimes (right) recorded in Chicago during the two-week period of April 1-15, 2017. These incidents were used to train the computer models.


Accuracy of the model's predictions of violent (left) and property (right) crime in Chicago. Forecasts are made one week in advance; if a crime is recorded within ±1 day of the predicted date, the event is registered as a successful prediction.

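The ±1-day success criterion described in the caption above can be written down as a small check. This is a minimal sketch under stated assumptions: the function name and the sample dates are illustrative, not taken from the study's code.

```python
from datetime import date, timedelta

def is_successful_forecast(predicted: date, recorded_dates: list) -> bool:
    """A forecast counts as a success if a crime of the predicted
    type is recorded within +/- 1 day of the predicted date."""
    window = timedelta(days=1)
    return any(abs(recorded - predicted) <= window for recorded in recorded_dates)

# Illustrative events: a prediction for April 8 is matched by an event on April 9.
recorded = [date(2017, 4, 2), date(2017, 4, 9)]
print(is_successful_forecast(date(2017, 4, 8), recorded))   # True: April 9 is within 1 day
print(is_successful_forecast(date(2017, 4, 12), recorded))  # False: nearest event is 3 days away
```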

The computer model was trained using historical crime data from the city of Chicago from 2014 to the end of 2016.

It then predicted crime levels for the weeks following this training period.

The cases on which it was trained fell into two broad categories of events that are less prone to enforcement bias.

These were violent crimes such as murder, assault and battery, and property crimes such as robbery, theft and car theft.

These crimes were also more likely to be reported to city police, even in areas with a history of distrust and lack of cooperation with law enforcement.

How does the AI work?

The model was trained using historical data from crime cases in Chicago from 2014 to the end of 2016.

It then predicted crime levels in the weeks following the training period.

The cases on which it was trained were classified as either violent crimes or property crimes.

It takes into account the time and spatial coordinates of individual crimes and detects patterns in them to predict future events.

It divides the city into spatial tiles roughly 1,000 feet across and predicts crime within these areas.

The model takes into account the time and spatial coordinates of individual crimes and detects patterns in them to predict future events.

It divides the city into spatial tiles roughly 1,000 feet across and predicts crime within these areas.
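The roughly 1,000-foot tiling can be sketched as a simple grid quantization. This is an illustrative sketch only: the projection of incident locations into planar coordinates in feet, and the helper below, are assumptions, not the study's actual method.

```python
TILE_SIZE_FT = 1000  # approximate tile edge, per the article

def tile_index(x_ft: float, y_ft: float, tile_size: float = TILE_SIZE_FT) -> tuple:
    """Map a crime's planar coordinates (in feet) to the grid tile containing it."""
    return (int(x_ft // tile_size), int(y_ft // tile_size))

# Two incidents 300 ft apart fall in the same tile; one 1,200 ft away does not.
print(tile_index(150, 400))   # (0, 0)
print(tile_index(450, 400))   # (0, 0)
print(tile_index(1350, 400))  # (1, 0)
```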

This contrasts with previous studies, which treated crime as emanating from "hotspots" that spread to surrounding areas.

Hotspots often rely on traditional neighborhood and political boundaries, which are themselves subject to bias.

Co-author Dr. James Evans said: 'Transportation networks respect streets, sidewalks, train and bus routes, and telecommunication networks respect areas of similar socio-economic background.

'Our model enables the discovery of these connections.

'We demonstrate the importance of discovering city-specific patterns for the prediction of reported crime, which gives a fresh view on neighborhoods in the city, allows us to ask new questions, and lets us evaluate police action in new ways.'

According to the results, published yesterday in Nature Human Behaviour, the model performed just as well with data from seven other US cities as it did with Chicago.

A graphic showing the AI tool's modeling approach. The city is divided into small spatial tiles approximately 1.5 times the size of an average block, and the model computes patterns from the sequential event streams recorded on individual tiles.

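The "event streams recorded on individual tiles" in the caption can be pictured as one time series of incident counts per tile. The sketch below shows this aggregation step under an assumed record layout of (tile, week_number) pairs; the data and function name are illustrative, not from the study.

```python
from collections import defaultdict

def per_tile_weekly_counts(events):
    """Aggregate (tile, week_number) incident records into a contiguous
    weekly count series per tile: the raw streams the model learns from."""
    counts = defaultdict(lambda: defaultdict(int))
    for tile, week in events:
        counts[tile][week] += 1
    # Expand each tile's sparse counts into a dense weekly series.
    n_weeks = max(week for _, week in events) + 1
    return {tile: [weeks.get(w, 0) for w in range(n_weeks)]
            for tile, weeks in counts.items()}

# Illustrative events: two tiles, three weeks of data.
events = [((0, 0), 0), ((0, 0), 0), ((0, 0), 2), ((1, 3), 1)]
print(per_tile_weekly_counts(events))
# {(0, 0): [2, 0, 1], (1, 3): [0, 1, 0]}
```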

These were Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland and San Francisco.

The researchers then used the model to study police responses to incidents in areas with different socio-economic backgrounds.

They found that when crimes took place in wealthier areas, they attracted more police resources and resulted in more arrests than crimes in disadvantaged areas.

This suggests bias in police response and enforcement.


Prediction accuracy of the property and violent crime models across major US cities. a: Atlanta, b: Philadelphia, c: San Francisco, d: Detroit, e: Los Angeles, f: Austin. All of these cities show relatively high predictive performance.


The use of computer models in law enforcement has proven controversial due to concerns that they could reinforce existing biases in policing.

However, this tool is not intended to direct officers to areas where crime is expected to occur, but rather to inform current policing strategies and policies.

The data and algorithms used in this study are publicly available for other researchers to investigate the results.

Senior author Dr. Ishanu Chattopadhyay said: 'Feeding it data on what happened in the past tells you what will happen in the future.

'It's not magic, and it has limitations, but we validated it and it works really well.

'You can use this as a simulation tool to see what happens if crime rises in one area of the city, or if enforcement is strengthened in another.

'If you apply all of these different variables, you can see how the system evolves in response.'

Can an AI 'lie detector' that reads faces tell police when a suspect isn't telling the truth?

Forget the old “good cop, bad cop” routine. Soon police could turn to an artificial intelligence system that could reveal the suspect’s true emotions during a cross-examination.

Facial scanning techniques rely on microexpressions: small, involuntary facial movements that betray a person's true feelings and can reveal when they are lying.

Facesoft, a London-based startup, has trained an AI on microexpressions from real people's faces and a database of 300 million expressions.

The company is discussing the potential commercialization of AI technology with police in both the UK and Mumbai.
