The Chinese are Looking to AI to Help Predict Crimes
The Chinese are researching ways to use artificial intelligence and facial recognition to help police predict crimes, and they aren’t alone.
In a story that would fit right in on Black Mirror, officials in China are considering new ways to stop crime – including how to stop crimes before they are even committed. By using rudimentary artificial intelligence capable of predictive analytics combined with facial recognition, the Chinese hope to be able to pull off a little Minority Report-style crime fighting, just with significantly fewer precogs, and far more possibility for true horror.
The idea is simple enough, even if the technology is incredibly advanced. The staggering number of CCTV cameras located throughout China would watch for people engaging in suspicious patterns. If someone, for instance, is wandering around Beijing following women until they are alone, it would raise alarms. It could also monitor where people go and what they do. If someone with a criminal record for making explosives goes to a store that sells industrial-strength chemicals, it would probably raise flags. But it also goes far beyond that.
The project is being headed up by the Guangzhou-based company Cloud Walk, using software that is currently integrated into police databases in more than 50 cities and provinces. The software can identify people, even when they change clothes and alter their facial hair. It isn’t perfect, but it is improving all the time thanks to AI. The software can also identify when people are wearing masks or are deliberately obscuring their faces, which sets off an alarm.
“The police are using a big-data rating system to rate highly suspicious groups of people based on where they go and what they do,” a Cloud Walk spokesperson told Financial Times. That rating increases when someone “frequently visits transport hubs and goes to suspicious places like a knife store.”
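The mechanics of such a rating system can be sketched in a few lines. To be clear, this is a purely illustrative toy in Python: every location category, weight, and threshold below is invented for the example, and nothing about Cloud Walk's actual features or scoring is public.

```python
# Toy sketch of a location-based "suspicion rating" heuristic.
# All categories, weights, and the threshold are invented for illustration.

LOCATION_WEIGHTS = {
    "transport_hub": 2,       # "frequently visits transport hubs"
    "knife_store": 5,         # "suspicious places like a knife store"
    "chemical_supplier": 5,
    "grocery_store": 0,       # everyday locations contribute nothing
}

def suspicion_score(visits):
    """Sum weights over a person's visit history (a list of location types)."""
    return sum(LOCATION_WEIGHTS.get(place, 0) for place in visits)

def is_flagged(visits, threshold=8):
    """Flag a person once their cumulative score crosses the (made-up) threshold."""
    return suspicion_score(visits) >= threshold
```

Even this crude version shows the core concern: the score rises from perfectly legal individual acts, and the flag depends entirely on who chose the weights and the threshold.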
While this might sound like a good idea to prevent horrible crimes, it might also be used to identify minor crimes, including littering and jaywalking. Basically, an AI designed without empathy might one day soon be watching every corner of a city, determining what is legal and what isn’t. If you walk into a bar and stay there for a few hours, even if you aren’t drinking, the AI might decide that based on the law of averages you should be pulled over for a DUI check. And while some might actually appreciate that level of scrutiny, it could go deeper.
If you are concerned now about privacy in modern society, this should have the hairs on the back of your neck standing up. While the tech to reach those extremes is still years out, one day it might be like having a police officer following you around all the time, day and night. And in an authoritarian state like China, the monitoring wouldn’t stop there. A citizen might come home and jump online, which would also be monitored – and not just for obvious signs like surfing extremist websites, but minor things like saying something negative about a local official, which could intensify the scrutiny on you.
So far, the Chinese aren’t planning on using it to charge people before they commit crimes, but if it is accurate enough, it might lead to that.
To be fair, those are all extreme examples. The AI might limit itself to only predicting violent crimes. But the potential for creating an oppressive layer of constant observation is at least a little unsettling.
China is currently investing heavily in AI as a whole. It recently announced plans to invest $59 billion in the artificial intelligence industry by 2025, and that number might increase as results begin to materialize. That money will, of course, go towards several projects, not just predictive analysis.
And while China may be leading the charge, it is far from the only country that is looking into using technology to predict crimes. The U.S. is also considering using AI for the same purpose, as are many others.