Artificial Intelligence in Policing: Advice for New Orleans and Palantir

(Bloomberg View) -- The revelation that the New Orleans Police Department quietly used a Silicon Valley company to predict crime raises dilemmas similar to those emerging from artificial intelligence in other spheres, like consumer behavior, medicine and employment. But what's uniquely shocking about the story of New Orleans's partnership with the national-security company Palantir is that it went largely unreported until now.

As an article in The Verge details, James Carville, the well-known Democratic strategist and Bill Clinton adviser, did actually mention the partnership on a radio program back in 2014. He knew about it for a simple reason: It was his idea (at least according to Carville). By his account, Palantir was looking for “pro bono” opportunities, which is often code for a corporate dry run for untested technology. Carville connected Palantir to New Orleans, and the relationship was established on a “philanthropic” basis -- thus effectively circumventing disclosure requirements.

It should go without saying that experimenting with predictive AI in real-world law enforcement demands public oversight and awareness. The debate that is now beginning should have taken place before the technology was used to build indictments, not afterward. Nevertheless, it would also be a mistake if the only outrage were over the failure to make public disclosures. The more important conversation must address the deeper issues this case raises.

Law enforcement -- and criminal justice more broadly -- must be evaluated on two separate criteria: pragmatic effectiveness and legal justice. On the first criterion, it's important to note that there isn't yet any clear evidence that the Palantir-New Orleans partnership works. Palantir would like to take credit for a dip in New Orleans crime, but the data and the timing don't necessarily support that. For now, the efficacy of machine-based crime prediction and prevention must be treated as unproven at best.

Of course, as advocates of big data analysis would surely point out, it takes time for predictive technologies to be refined (or in the case of machine learning, to refine themselves). The more data, the better. Translating prediction into prevention isn’t necessarily simple either. Our conversation could proceed on the assumption that someday, predictive machine learning tools with access to enough data might indeed be able to predict crime better than existing police tools do. After all, crime is a form of human behavior just like any other, and algorithmic AI models are getting better and better at predicting plenty of human behaviors in other realms.

So that brings us to the question of justice: What, if anything, is inherently worrisome about machine predictions of crime? The most obvious worry is that computers could get their predictions wrong and therefore encourage police to target people for investigation and surveillance who aren’t in fact going to commit crimes.

This risk is real, and it needs to be taken seriously. To be sure, in principle, machine predictions can't and shouldn't be used to charge anyone with a crime or to convict them. The Constitution still applies. Police and prosecutors would need non-statistical evidence of probable cause before violating suspects' privacy or arresting them. If a case went to trial, the prosecution would have to provide evidence and the court would have to find guilt beyond a reasonable doubt to convict. For these reasons, the procedural safeguards already in place offer some reassurance against fears of an algorithmic police state.

Yet there are plenty of ways that police attention is undesirable even if it does not lead to a warrant, an arrest or criminal charges. The police are legally empowered to do all kinds of surveillance without a warrant, provided they are operating in public space. It's crucial in a democracy that the police choose their targets on the basis of reasonable suspicion -- not, say, racial bias or class prejudice.

Here is where things get very complicated. As we know, even without artificial intelligence, police use a range of statistical tools to identify suspects and potentially dangerous locations. The tools popularized by William Bratton, who led the police departments of Boston, Los Angeles and New York, famously include a program called CompStat: a simple but powerful system that let police gather statistics on arrests and incident locations and use the aggregated data to deploy their investigative and preventive power. In other words, statistics have been used in crime fighting for at least a quarter century in many jurisdictions. Palantir's reliance on statistics is not what's controversial.
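
To make the mechanics concrete, here is a deliberately minimal sketch of that kind of location-level aggregation. It is a toy illustration written for this column, not CompStat's actual software; the district names and incident data are invented.

```python
from collections import Counter

# Toy incident log: (district, incident_type) pairs.
# Purely invented data, for illustration only.
incidents = [
    ("District 1", "burglary"),
    ("District 3", "assault"),
    ("District 1", "theft"),
    ("District 1", "burglary"),
    ("District 2", "theft"),
    ("District 2", "burglary"),
]

# The CompStat-style step: aggregate raw reports into
# location-level statistics.
counts = Counter(district for district, _ in incidents)

# Rank locations by incident volume to guide where patrols
# and investigative resources are deployed.
for district, n in counts.most_common():
    print(f"{district}: {n} incidents")
```

The point is only that the underlying operation -- counting incidents by place and shifting resources accordingly -- is ordinary statistics, not exotic AI.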

What Palantir’s tools presumably do is scrape publicly available data about crime and other police encounters and use it to build a model that individualizes predictions. Given that police already use tools like the existence of an arrest record or a prior conviction to aid their investigations, how bad is it if a computer does the same, just more accurately?
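
To see what an "individualized" prediction might look like under the hood, consider an equally minimal sketch. Everything in it -- the features, the labels, the data -- is fabricated for illustration; it is not Palantir's method, just a generic statistical classifier of the sort the paragraph above describes in the abstract.

```python
from sklearn.linear_model import LogisticRegression

# Toy features per person: [prior_arrests, prior_convictions, age].
# Entirely fabricated data, for illustration only.
X = [
    [0, 0, 35],
    [3, 1, 22],
    [1, 0, 40],
    [5, 2, 19],
    [0, 0, 28],
    [2, 1, 31],
]
# Invented labels: 1 = later involved in a reported offense, 0 = not.
y = [0, 1, 0, 1, 0, 1]

# Fit a simple classifier on the historical records.
model = LogisticRegression().fit(X, y)

# The model emits a risk score for a new individual -- the step that
# turns aggregate statistics into an individualized prediction.
score = model.predict_proba([[1, 0, 25]])[0][1]
print(f"Predicted risk score: {score:.2f}")
```

Even this toy makes one stake visible: because prior arrests are an input, whatever patterns shaped past arrests flow directly into the "individualized" score.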

In fact, if a computer can make more refined predictions than a person, wouldn't it eventually reduce the number of mistaken investigations and interrogations?

Great questions. Exactly the sort that need to be answered in broad daylight, with full transparency before and during the deployment of AI in policing.

We don't yet know enough about the process and outcomes of AI policing to know whether the New Orleans experiment is a step toward justice or toward an overreaching police state, but we do know the experiment was conducted with too much secrecy and too little accountability. The next steps need more scrutiny, more discussion and more transparency.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Noah Feldman is a Bloomberg View columnist. He is a professor of constitutional and international law at Harvard University and was a clerk to U.S. Supreme Court Justice David Souter. His seven books include “The Three Lives of James Madison: Genius, Partisan, President” and “Cool War: The Future of Global Competition.”

To contact the author of this story: Noah Feldman at nfeldman7@bloomberg.net.

To contact the editor responsible for this story: Philip Gray at philipgray@bloomberg.net.

For more columns from Bloomberg View, visit http://www.bloomberg.com/view.

©2018 Bloomberg L.P.