Crime Fighting Algorithms

22 Aug 2016

Ethics

Causes of Death in the U.S.

As automation becomes more widespread, machines will have more and more opportunities to make a variety of situations safer for humans. IBM’s Watson is the most publicized attempt to revolutionize healthcare, and there are numerous others, which could potentially extend the life expectancies of billions of people. A lot of attention has also been given to self-driving cars recently, another area ripe for reducing the risk of death. According to a report by Newsweek, the National Safety Council estimates “38,300 people were killed and 4.4 million injured on U.S. roads in 2015, which saw the largest one-year percentage increase in half a century.” These important and much-talked-about areas of research will undoubtedly have a lasting impact on the future of humanity, but for now, I’d like to focus on another danger facing humans that has received a lot of attention lately: crime and crime fighting.

In 2014, the most recent year of available FBI data, an estimated 14,249 people were victims of homicide in the United States. Unfortunately, the number of people killed by law enforcement is not readily available. According to FBI director James Comey, “we can’t have an informed discussion, because we don’t have data.” Based on the amount of media coverage we’ve seen this year, though, the weight each of these deaths carries in public perception is much greater than that of other kinds of deaths. Given these considerations, any way to reduce crime, or to keep police officers accountable for wrongdoing and supported in cases of justified force, would be a great advancement for society.

Using Data to Help (and Monitor) Law Enforcement

Police officers’ duty is to protect and serve citizens and to keep them safe and secure. Can data reliably aid this duty, or are there negative consequences to implementing such systems?

In Chicago, police are using an algorithm to create a “Strategic Subject List” based on people’s likelihood of being involved in violent crime, either as perpetrator or as victim. According to this New York Times piece, they hope that “knowing who is most likely to be involved in violence can bring them a step closer to curtailing it.” Similar programs are in use elsewhere, such as Kansas City’s No Violence Alliance.
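
The details of models like the Strategic Subject List aren’t public, but risk scoring of this kind is typically a supervised classifier trained on historical records. Here’s a minimal sketch in Python of what that might look like; the features, data, and model choice below are all invented for illustration and do not reflect Chicago’s actual system:

```python
# Hypothetical sketch of a risk-scoring model; the actual Strategic
# Subject List algorithm is not public. Features and data are invented
# for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features per person: prior arrests, prior victimizations,
# age at first arrest, number of associates already on the list.
X = rng.poisson(lam=[2, 1, 18, 1], size=(1000, 4)).astype(float)

# Invented labels: 1 if the person was later involved in a violent
# incident (as offender or victim), 0 otherwise.
y = (X[:, 0] + X[:, 3] + rng.normal(size=1000) > 4).astype(int)

model = LogisticRegression().fit(X, y)

# The "risk score" for a new individual is the probability the model
# assigns to future involvement in violence.
person = np.array([[3, 0, 16, 2]])
score = model.predict_proba(person)[0, 1]
print(f"predicted risk: {score:.2f}")
```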

Whether these programs are making people safer is still being debated, and the data needed to answer that question hasn’t fully come in yet. Another question these algorithmic initiatives raise, however, is how the use of such tools will affect the public perception of police forces. While some accounts report that most of the general public supports police, there is evidence of growing public suspicion toward police officers. It’s likely that using algorithmic means to calculate citizens’ violence potential will further fuel this suspicion. The public impression of such technologies is along the lines of Minority Report or George Orwell’s 1984.

In addition to the perception problem, there are problems with implementing automated scoring techniques within the justice system in general. ProPublica found that predictive algorithms used in some courtrooms have racial biases baked into them, despite race not being explicitly accounted for anywhere in the code or data. In that case, the algorithm was trained on prior criminal records, but since past arrests were not racially even-handed, the data was inherently biased anyway. Law enforcement, courtrooms, and the data and statistics communities have not yet found a way to address these ingrained racial biases, so putting too much weight behind the results of an algorithm in these fields is dangerous for minorities.
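
To see how bias can persist even when race is excluded from the inputs, consider a toy example (my own illustration, not ProPublica’s analysis or the actual model they audited). A feature correlated with the protected attribute, such as neighborhood, lets biased arrest labels leak back into the scores:

```python
# Toy illustration of proxy bias -- not ProPublica's analysis or the
# actual model they audited. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Synthetic protected attribute (never shown to the model).
group = rng.integers(0, 2, size=n)

# A proxy feature (e.g., neighborhood) that correlates with group:
# it matches the group 80% of the time.
neighborhood = (group + (rng.random(n) < 0.2)) % 2

# Underlying behavior is identical across groups...
offended = rng.random(n) < 0.3
# ...but historical arrests were not: group 1 is arrested more often
# for the same behavior, so the training labels are biased.
arrest_rate = np.where(group == 1, 0.9, 0.5)
arrested = offended & (rng.random(n) < arrest_rate)

# Train only on the proxy; the protected attribute is nowhere
# in the inputs.
X = neighborhood.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, arrested)

scores = model.predict_proba(X)[:, 1]
print(f"mean risk score, group 0: {scores[group == 0].mean():.2f}")
print(f"mean risk score, group 1: {scores[group == 1].mean():.2f}")
# The scores differ by group even though behavior does not, because
# the biased labels leak through the correlated proxy feature.
```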

On the other side of the equation, there’s a long history of attempted implementations of predictive algorithms meant to keep police from doing harm as well. As Charlotte-Mecklenburg, North Carolina, police Chief Kerr Putney puts it, these algorithms are a way of “balanc[ing] public need versus what officers may want”. Individual officers and their unions haven’t been so receptive to these technologies, though, calling them ludicrous and saying it’s absurd to punish officers for infractions they have not yet committed. However, the article goes on to describe current efforts that are more effective and, therefore, have a better chance of being accepted by police forces.
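
Police departments have long used early-intervention systems of this kind, which in their simplest form flag officers whose complaint counts exceed peer norms; newer efforts layer machine learning on top. Here’s a rough sketch of the simple threshold idea, with invented names and an arbitrary cutoff:

```python
# Hedged sketch of a threshold-style early-warning system for officers.
# Names, counts, and the threshold are all invented for illustration.
from collections import Counter

# Invented record of complaints per officer over the past year.
complaints = Counter({"officer_a": 1, "officer_b": 7, "officer_c": 2,
                      "officer_d": 9, "officer_e": 0})

# Flag anyone whose count is far above the peer average -- here,
# more than twice the mean, an arbitrary cutoff for illustration.
mean = sum(complaints.values()) / len(complaints)
flagged = [name for name, n in complaints.items() if n > 2 * mean]

print(f"peer average: {mean:.1f} complaints")
print("flagged for supervisor review:", flagged)
```

A transparent rule like this is easier for officers and unions to audit than a black-box score, which may be part of why simpler, better-targeted efforts stand a better chance of acceptance.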

Additionally, it’s easier to approve of a behavior-predicting algorithm applied to someone working in a job they chose than of one imposed on a person simply for being a citizen. This is especially true when, in the latter case, it would be even harder to escape the system once labeled a “potentially risky individual.”

Our Data-Driven Future

While it’s fun to talk about a future without disease, or one in which cars do all the driving for us, the truth is that data and machine learning will introduce automation into many aspects of our lives, and we must keep the conversation going about all of these upcoming changes. In this conversation, we must be honest about the limitations of technology. We must also reevaluate what our basic human rights are in the face of an ever-changing technological landscape.
