The Data Scientist Helping to Create Ethical Robots

(Bloomberg Businessweek) -- As the lead statistician at the nonprofit Human Rights Data Analysis Group, Kristian Lum, 33, is trying to make sure the algorithms increasingly controlling our lives are as fair as possible. She’s especially focused on the controversial use of predictive policing and sentencing programs in the criminal justice system. When it comes to bias, Lum isn’t concerned only with algorithms. In a widely read December blog post, she described harassment she’d experienced at academic conferences when she was a doctoral student at Duke University and an assistant research professor at Virginia Tech. No longer in academia, she uses statistics to examine pressing human-rights problems.

What’s the relationship between statistics and AI and machine learning?

AI seems to be a sort of catchall for predictive modeling and computer modeling. There was this great tweet that said something like, “It’s AI when you’re trying to raise money, ML when you’re trying to hire developers, and statistics when you’re actually doing it.” I thought that was pretty accurate.

You’re studying how machine learning is used in the criminal justice system. What drew you to that?

A few years ago, I read a really interesting paper published by a predictive policing company. We reproduced the model they had built and applied it to real data to see what the consequences would be. We applied our model to police records of drug crime in Oakland [Calif.] and compared that to an estimate, based on public-health records, of the demographic profile of people likely to be using drugs. That comparison showed that police enforcement and recording of drug crimes was taking place disproportionately in communities of color. Then we applied the predictive policing algorithm to that data set and found it would perpetuate, or perhaps amplify, the historical bias already in the data.

The move toward using AI, or quantitative methods, in criminal justice is at least in part a response to a growing acknowledgment that there’s racial bias in policing. A selling point for a lot of people is the belief that a computer can’t be racist, that an algorithm can’t be racist. I wanted to challenge the idea that just because a computer is making the predictions, those problems would be solved.

Is that a tough sell, the idea that a computer can be biased?

I feel like I can’t open Twitter without seeing another article about racist AI. What’s hard about this is that there isn’t universal agreement about what fairness means. People disagree about what fairness looks like. That’s true in general, and it’s also true when you try to write down a mathematical equation and say, “This is the definition of fairness.”

In your blog post, you wrote about a 2010 academic conference you attended where one prominent academic groped you. Another commented that you were dressed too sexy, then sent you innuendo-laced messages. You didn’t name the men, but people figured out who they were, and one lost his job. Why did you bring these incidents to light now?

I’ve had a lot of what I wrote firmly in my head for years. Until recently, I didn’t really have a plan to put pen to paper. But I’d been growing increasingly frustrated with the fact that the inappropriate behavior was openly talked about and nothing was happening, even with the changing climate. I thought it was important for people to realize that even things that seem relatively innocuous, like jokes, matter. And it seemed like a cultural moment when people might hear me out more.

What I experienced had an outsize impact on my career, even though I didn’t fully realize it at the time. It’s not like the day after that conference I quit academia. It was one factor among many. I stopped going to certain conferences, which affects your ability to contribute. To be clear, I’m really happy with how my career has turned out. But things would’ve worked out differently if I’d had different experiences.

How did you feel when the article went live?

I was terrified. I still publish academic papers, and these people will be reviewers for me. I was like, “Oh God, what am I doing?” But I thought it was important. By and large, people have been really supportive. Tons of people reached out to me, some with similar stories, sometimes about the same people. At conferences now, people come up to me and say, “Thank you” and give me a hug.

What was it about statistics that first appealed to you when you decided to pursue it?

It’s really cool. You get to learn things from data. That’s really the crux of it, right? You get to either test your hypothesis about the world or, if you’re taking more of an ML approach, you get to try to predict something in the future. I also really appreciate rethinking the framing around data, not thinking of the data as some fixed thing. Honestly, I’m just a big nerd.

To contact the editors responsible for this story: Jeff Muskus at jmuskus@bloomberg.net, Dimitra Kessenides

©2018 Bloomberg L.P.