Police departments across the world have found a new way to keep crime in check: algorithms that predict crime before it occurs and, more importantly, predict who is likely to commit a crime. “Pre-crime” systems, as the media has dubbed them, are being used to build a profile for every citizen, with a score for their likelihood of committing a crime. They do this by feeding personal information, public data, and data gathered from monitoring citizens’ activities into algorithms that predict how likely each person is to offend.
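No vendor has published its scoring code, but the mechanics the article describes can be sketched in a few lines of Python. Everything below (the feature names, the weights, the example citizen) is invented purely for illustration; real systems are proprietary and far more complex.

```python
# Hypothetical sketch of a per-citizen "risk score": weighted features
# from personal records, public data, and monitored activity are
# collapsed into a single number. All fields and weights are invented.

def risk_score(profile: dict) -> float:
    """Combine weighted binary features into a 0-to-1 'likelihood' score."""
    weights = {
        "prior_arrests": 0.4,     # personal records
        "lives_in_hotspot": 0.3,  # public / geographic data
        "flagged_posts": 0.3,     # monitored online activity
    }
    # Missing data counts as 0; any positive value counts fully.
    return sum(w * min(profile.get(k, 0), 1) for k, w in weights.items())

citizen = {"prior_arrests": 1, "lives_in_hotspot": 1, "flagged_posts": 0}
print(risk_score(citizen))
```

The point of the sketch is that the number looks objective while every choice behind it (which features to include, how to weight them) is entirely arbitrary.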
“Pre-Crime”, a documentary film by German filmmakers Matthias Heeder and Monika Hielscher, shows in chilling detail how these systems operate in different cities in the UK and the US. The directors said in an interview, “Forecasting software and algorithms that have been criticised as inaccurate and arbitrary are used to collect information, and monitor and flag people. But predictive tools are only as good as the data they’re fed. With scant evidence of the reliability of the data sources or the accuracy of the data crunching, misfires are a guarantee.”
If this seems like the science fiction film Minority Report, welcome to life imitating art. Or in this case, police drawing their inspiration from Hollywood!
Police departments are not performing these analyses themselves, but in partnership with various private data analytics firms.
These systems are not new. A pilot program in Kent, in the United Kingdom, has been running for four years. The Chicago police have a “heat list” of 1,500 people who are under constant surveillance because an algorithm has “determined” that they are likely to commit a crime. One of the people on the list, Robert McDaniel, had a friend who was murdered in what was declared a “gang murder”. Robert had previously been arrested along with this friend, and was thus put on the list of likely criminals.
With such systems monitoring us, the range of acts that can be considered suspicious becomes vast and arbitrary. If you have been put on a “heat list”, you will be under constant surveillance, including of your social media activity. So if a sentiment analysis of your latest Twitter update indicates that you are angry, and thus more likely to commit a crime, will you be preemptively detained to prevent anything from happening?
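How crude such a sentiment trigger can be is easy to demonstrate. The sketch below is not any real system’s code; it is a deliberately naive keyword-based stand-in for a sentiment model, with an invented word list and threshold, to show how an everyday complaint trips the same flag as a genuine threat.

```python
# Hypothetical illustration: a naive "anger flag" built on keyword
# matching. The keyword list and threshold are invented for this sketch.

ANGRY_WORDS = {"hate", "furious", "destroy", "sick of"}

def naive_sentiment_score(tweet: str) -> int:
    """Count 'angry' keywords -- a crude stand-in for a sentiment model."""
    text = tweet.lower()
    return sum(1 for word in ANGRY_WORDS if word in text)

def flag_for_review(tweet: str, threshold: int = 1) -> bool:
    """Flag a person for extra scrutiny once the score crosses a threshold."""
    return naive_sentiment_score(tweet) >= threshold

# An everyday complaint trips the flag just as easily as a real threat:
print(flag_for_review("I hate Mondays, this traffic is awful"))  # True
print(flag_for_review("Lovely weather in Mumbai today"))         # False
```

A production system would use a trained classifier rather than keywords, but the structural problem is the same: a threshold someone chose decides who gets watched.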
In India, such predictive policing is currently done manually. Just a week ago, a visitor from another country was suspected of being a terrorist by the NIA (National Investigation Agency) after he was captured on CCTV footage taking pictures of Wankhede Stadium in Mumbai, triggering an alarm in security agencies. The photographer was traced to his hotel in Colaba; a team of police officers swooped in to detain him, questioned him about his intent, and finally let him go at the insistence of his country’s embassy. If a manual examination of CCTV footage can lead to such erroneous security alarms, how many alarms will be raised by an algorithm programmed to generate an alert as soon as any deviation from normalcy occurs? How much will our rights as citizens be challenged for behaving in a manner unacceptable to a surveillance state monitored not by humans, but by computers?
The Indian government already has the power to shut down our communication services whenever it wishes, a power it has used more than any other country in the world. These internet bans are justified in the name of law and order: preventing the spread of misinformation and rumours that could result in rioting and other criminal offences. There is no assurance that the government will not take this preventive action further by implementing similar “pre-crime” systems and targeting basic civil rights.
The firms collecting and analysing this data are also quite notorious. Cambridge Analytica (CA), a data analytics firm that worked on Trump’s campaign and the Brexit campaign, devised a pre-crime system that was deployed in Trinidad in 2013. The system recorded citizens’ phone conversations and browsing history to construct a national police database, with a score for each citizen according to their likelihood of committing a crime.
Cambridge Analytica also mines Facebook and Twitter data for such information and uses it in political campaigns, micro-targeting citizens based on their personal preferences.
CA was linked to another data-mining firm, Palantir, for the work on the Trinidad project. Palantir handles huge datasets for the NSA in the United States, GCHQ in the United Kingdom, and other such agencies known for snooping on citizens’ personal data. Steve Bannon, who was Donald Trump’s Chief Strategist not long ago, was vice-president of Cambridge Analytica before he joined the White House. Both CA and Palantir are backed by large investments from ideologically aligned billionaires, Robert Mercer and Peter Thiel. This shows a disturbing trend: billionaires are able to purchase data and sway public mood to their advantage.
Jim Killock, Executive Director of Open Rights Group, a rights group based in the UK, said about pre-crime systems: “Pre-crime detection systems undermine the underlying tenet of our judicial system – that we are innocent until proven guilty. They fail to meet any test of proportionality and threaten our privacy rights.”
These pre-crime systems can be used to monitor any kind of activity the state deems threatening. This includes not only perceived threats of terror attacks, but could also extend to “thought-crimes”, something already on the rise in India. In this scenario, the simple act of criticising the state could be seen as potentially criminal, justifying detention and/or arrest.
Some systems, such as PredPol in Kent, use geographical rather than personal data. So if you are in an area with a high crime rate and you act in any manner seen as deviating from “normal” behaviour, you can be arrested to keep you from committing a crime which, most likely, you had no intent to commit.
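The rough idea behind such place-based systems can be sketched as follows. This is not PredPol’s actual algorithm (which is proprietary); it only shows the general hotspot mechanic: past incidents are binned into grid cells, and the busiest cells are flagged for extra patrols. The coordinates below are fabricated, not real crime data.

```python
# Hypothetical sketch of place-based "hotspot" scoring. Everyone inside
# a flagged cell draws extra scrutiny, regardless of their own behaviour.

from collections import Counter

def hotspot_cells(incidents, cell_size=0.01, top_n=3):
    """Bin incident coordinates (lat, lon) into grid cells and return
    the top_n cells with the most incidents."""
    counts = Counter(
        (round(lat / cell_size), round(lon / cell_size))
        for lat, lon in incidents
    )
    return [cell for cell, _ in counts.most_common(top_n)]

# Fabricated example coordinates (not real data):
past_incidents = [
    (18.938, 72.823), (18.938, 72.824), (18.939, 72.823),  # a cluster
    (19.076, 72.877),                                      # a lone incident
]
print(hotspot_cells(past_incidents, top_n=1))
```

Note what the code never looks at: the people in the cell. Being present in a flagged square, for any reason, is what makes you “suspicious”.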
The question is: to what extent can we allow state surveillance to monitor us on the pretext of security? And can we allow a single definition of “normal” and “permissible” behaviour?
Disclaimer: The views expressed here are the author's personal views, and do not necessarily represent the views of Newsclick.