In 2002, the sci-fi movie Minority Report was released. Fast-forward 15 years and you’ll see eerie similarities when watching the newly released Pre-Crime. But while Minority Report was an action-thriller, Pre-Crime is a documentary. Minority Report envisions a world where we can predict crime and stop perpetrators before they commit the act. Pre-Crime documents real technology that cities are already using to ‘predict’ potential criminals.
The line between reality and fiction is suddenly very blurry.
Pre-Crime premiered at Toronto’s Hot Docs Film Festival in May 2017, and it was unreal. At first, the movie seemed to focus on the philosophical arguments of security vs. privacy, the age of surveillance, and how data from online services such as Google and Facebook can be used to track us. This is common – advertisers pay big money for your personal data in order to tailor ads to your interests, increasing the likelihood of making a sale. There is a famous saying: “If you are not paying for a product, then you are the product!” It could not be more true than with free email and social media services.
But the plot quickly thickens – what if governments started buying this private data and using it to predict crime patterns? What if police started analyzing this data, combined it with other information they may know about you, and then sent you a letter asking you to stop acting in a certain way, or to stop hanging around with the people you know? Amazingly, this is exactly what is happening in at least two cities: Chicago, Illinois, in the US, and Kent in the United Kingdom.
In Chicago, a ‘heat list’ of over 1,500 potential targets has been developed – people considered at high risk of being involved in a violent crime, either as victims or perpetrators. Factors including gang affiliation and a number of arrest and crime-related metrics are used to identify targets, who are then sent letters informing them of their place on the list and the reasons for being on it, along with a warning not to engage in any criminal activity.
Kent, on the other hand, uses a softer approach. Crime data is sent daily to a system for analysis, which produces a report highlighting ‘target’ areas. Police use the list to prioritize their policing activities. The system runs as a feedback loop: newly reported crimes are fed back in, helping improve its accuracy.
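Neither city has published its exact model, but the feedback loop described above can be sketched in a few lines of Python. Everything here – the area names, the decay factor, the scoring rule – is an illustrative assumption, not the real system.

```python
# Hypothetical sketch of a feedback-loop "hotspot" scorer, loosely modelled
# on the Kent-style process described above. All values are illustrative.

DECAY = 0.9  # assumed: older incidents count for a little less each day

def update_scores(scores, todays_incidents):
    """Decay yesterday's scores, then add today's reported crimes."""
    new_scores = {area: s * DECAY for area, s in scores.items()}
    for area in todays_incidents:
        new_scores[area] = new_scores.get(area, 0.0) + 1.0
    return new_scores

def top_targets(scores, n=3):
    """Return the n highest-scoring areas for patrol prioritization."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

# Three days of (made-up) incident reports, fed back in daily:
scores = {}
for day in [["north", "north", "docks"], ["docks"], ["north"]]:
    scores = update_scores(scores, day)

print(top_targets(scores, n=2))  # prints ['north', 'docks']
```

The point of the sketch is only to show why such systems feel self-reinforcing: areas that were flagged yesterday attract more policing, which can generate more reports, which raises tomorrow’s score.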
While these efforts are designed to keep us safe, the movie raises a rather curious question: will we ever reach a point where we are tracked and scored for being good – or bad – citizens, as China is proposing to do by 2020?
On the other hand, we live in a world where ‘smart cities’ are touted as the solution for our future – a connected city filled with sensors, cameras, and self-driving cars, presented as the answer to today’s challenges. At a recent Smart Cities NYC conference, Google’s parent company, Alphabet, announced that one of its subsidiaries, Sidewalk Labs, may build a 12-acre digital district in Toronto. As with any surveillance solution, this will produce tons of data that can be used to maintain safety, but also to invade our privacy. Will we trust Google with that data? Would we trade our privacy for the convenience of an easy life?
These are all chilling thoughts, and they come at a time when societies struggle with recent revelations by Edward Snowden and WikiLeaks’ Julian Assange on mass surveillance. In his novel 1984, George Orwell describes the ‘thought police’ – a secret police force that monitors your thoughts. It may seem ridiculous, but come to think of it, if you are putting your most intimate thoughts online, whether to share or to keep private, aren’t private organizations and government agencies already reading your thoughts? There is a great quote that serves as a stern reminder: “1984 was a warning, not an instruction manual.”
At the end of the day, we realize that the role of police is shifting and new threats are appearing – from cybercrime to the use of encrypted communication. However, can we trust companies and governments to guard and protect our personal data, rather than abuse it? Can we trust that the more privacy we give up, the more secure we will be? These are questions that require deep reflection, and active conversations among all parties involved, as we witness dramatic changes to what privacy means in our time.