‘AI police’ is slowly becoming reality
A machine learning algorithm from the University of Chicago predicts crimes a week in advance with about 90 percent accuracy.
The artificial intelligence was trained on historical Chicago crime data, specifically violent crimes and property crimes, since these are treated with zero tolerance everywhere, unlike some drug-related offences. To avoid bias, the data deliberately ignored political and district boundaries; instead, the city was divided into 300x300 metre tiles, and the model forecast crime in each tile from past patterns.
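To make the grid-and-forecast idea concrete, here is a minimal Python sketch: incidents are binned into 300x300 metre tiles and each tile's next week is forecast from its recent history. The column names (`date`, `lat`, `lon`), the flat-earth tile mapping, and the trailing-mean forecaster are all illustrative assumptions on my part; the published model learns far richer temporal patterns from the event streams.

```python
import numpy as np
import pandas as pd

TILE_M = 300  # tile edge in metres, matching the study's grid

def to_tile(lat, lon, lat0, lon0):
    """Map coordinates to integer tile indices (flat-earth approximation)."""
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * np.cos(np.radians(lat0))
    row = ((lat - lat0) * m_per_deg_lat // TILE_M).astype(int)
    col = ((lon - lon0) * m_per_deg_lon // TILE_M).astype(int)
    return row, col

def weekly_counts(incidents: pd.DataFrame) -> pd.DataFrame:
    """Count incidents per tile per week. `incidents` is a hypothetical
    DataFrame with columns: date, lat, lon."""
    lat0, lon0 = incidents["lat"].min(), incidents["lon"].min()
    incidents = incidents.copy()
    incidents["row"], incidents["col"] = to_tile(
        incidents["lat"], incidents["lon"], lat0, lon0)
    incidents["week"] = pd.to_datetime(incidents["date"]).dt.to_period("W")
    return (incidents.groupby(["row", "col", "week"])
                     .size().rename("events").reset_index())

def forecast_next_week(counts: pd.DataFrame, trailing_weeks: int = 8) -> pd.Series:
    """Toy forecast: trailing mean of each tile's recent weekly counts."""
    recent = counts.sort_values("week").groupby(["row", "col"]).tail(trailing_weeks)
    return recent.groupby(["row", "col"])["events"].mean()
```

In a real system the trailing mean would be replaced by the learned predictive models from the paper, but the tiling step captures the same basic idea: forecast each small patch of the city from its own history.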
Atlanta, Austin, Detroit, Los Angeles, Philadelphia, Portland, and San Francisco are just a few of the American cities where the technique has been tested and proven effective.
"We created a digital copy of the urban environment. If you feed it data about what happened in the past, it tells you what will happen in the future. It's not magic, it has limitations, but we checked it and it works really well," explained Professor Ishanu Chattopadhyay, who published the results in Nature Human Behaviour.
Previous crime prediction systems relied on models borrowed from epidemiology and seismology, treating crime like a contagion or an earthquake whose aftershocks ripple out from an initial event, but they weren't effective enough. The new artificial intelligence, meanwhile, quickly picked up on the connection between the social climate of different areas and the effectiveness of the police presence there.
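For context, those seismology-style systems typically modelled crime as a self-exciting point process, in which every incident temporarily raises the risk of another one nearby, much as an earthquake triggers aftershocks. Here is a hedged sketch of such an intensity function; the parameters `mu`, `alpha`, and `beta` are purely illustrative, not values from any deployed system.

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.5, alpha=0.8, beta=1.2):
    """Self-exciting (Hawkes) intensity: a constant baseline rate `mu` plus
    an exponentially decaying boost from every past event, so risk spikes
    right after a cluster of incidents and then fades, aftershock-style."""
    past = np.asarray(event_times, dtype=float)
    past = past[past < t]
    return mu + alpha * np.sum(np.exp(-beta * (t - past)))

# A burst of incidents at t = 1.0, 1.2, 1.3 elevates near-term risk...
events = [1.0, 1.2, 1.3]
print(hawkes_intensity(1.4, events))  # well above the baseline
# ...but the effect decays back towards the baseline over time.
print(hawkes_intensity(5.0, events))  # close to mu
```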
According to Chattopadhyay, this kind of modelling also showed that when pressure is heightened, police turn up in greater numbers and make more arrests in affluent areas, while the resources concentrated there are missing from poorer areas, which see less action.
Chattopadhyay argues that the machine's foresight is more valuable for city administration and strategic police planning than for flooding a particular area with officers in an attempt to stop the following week's crimes.
There is a really nice video on this topic by Hannah Fry, which approaches it more from the mathematical point of view.
AI is an amazing tool, and as I have written in previous articles, it really can help individuals and humanity as a whole. But it also has weak points; the most relevant one here is bias, which can be costly at best and fatal at worst.
So my hope is that everyone involved in these programs around the world learns about the risks and limitations of these systems before trusting them blindly (or, better yet, never trusts them blindly).
AI, like any other tool, can be a great help or a great harm; the outcome depends only on its users and their intentions.