Even Robot Recruiters Are Biased

Fingers type on a keyboard at the CeBIT trade show in Hanover, Germany, on March 13, 2003. (Photographer: Axel Seidemann/Bloomberg News)

(Bloomberg) -- Financial institutions are looking to shake off their pale, male, and stale reputations. Will handing over hiring decisions to machines make workplaces more diverse? Here we ask experts in the field of artificial intelligence and recruitment whether bias in machine learning models is a problem and, if so, what’s being done about it. 

“I don’t think the goal should be to completely eliminate all possible biases in one fell swoop but to do better than the status quo and keep improving over time.”

Ariel Procaccia 
Associate professor in the computer science department at Carnegie Mellon University in Pittsburgh

Procaccia says significant progress has been made in tackling the issue of bias in machine learning, but a complete fix is still a long way off. Researchers have identified sources of bias, defined formal notions of fairness, and designed AI algorithms that are fair according to those definitions, he says. However, Procaccia sees two obstacles to putting this work into practice. “First, ironically there is an embarrassment of riches when it comes to definitions of fairness and potential fixes, and it’s still unclear how to choose among them,” he says. “Second, researchers have identified inherent trade-offs between notions of fairness and other qualities of AI algorithms; it seems that pushing bias out of algorithms must come at some cost to their effectiveness.”
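
The “embarrassment of riches” Procaccia describes is easy to make concrete. Below is a minimal Python sketch with invented numbers (not drawn from any system mentioned in this story) showing that two widely studied definitions, demographic parity and equal opportunity, can disagree about the very same set of hiring decisions:

    # Each record: (group, qualified, hired) -- all values invented for illustration.
    candidates = [
        ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
        ("B", True, True), ("B", True, False), ("B", False, True), ("B", False, False),
    ]

    def hire_rate(group):
        """P(hired) within a group -- the rate demographic parity compares."""
        rows = [c for c in candidates if c[0] == group]
        return sum(c[2] for c in rows) / len(rows)

    def qualified_hire_rate(group):
        """P(hired | qualified) within a group -- the rate equal opportunity compares."""
        rows = [c for c in candidates if c[0] == group and c[1]]
        return sum(c[2] for c in rows) / len(rows)

    # Demographic parity is satisfied (both groups are hired at a 0.5 rate)...
    print(hire_rate("A"), hire_rate("B"))                      # 0.5 0.5
    # ...yet equal opportunity is violated: qualified candidates in group B
    # are hired half as often as those in group A.
    print(qualified_hire_rate("A"), qualified_hire_rate("B"))  # 1.0 0.5

An algorithm tuned to equalize one of these rates will generally move the other, which is the kind of trade-off Procaccia points to.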

“It’s a multistep process.”

Ashutosh Garg
Chief executive officer and co-founder of Eightfold.ai, an AI-powered recruiting platform based in Mountain View, Calif.

Despite all the skepticism about the technology, Garg says it’s possible to train machines to be unbiased. You start by collecting data and models from thousands of sources, he says. You then remove “anything that can create division like gender, race, and ethnicity” from the data. Machine learning systems can be optimized for equal opportunity, Garg says, while analytics can be used to detect and measure bias. 
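
A rough sketch of that multistep process, in Python. Eightfold.ai has not published its pipeline, so the column names, model choice, and bias metric below are assumptions for illustration, not the company’s implementation:

    # Hypothetical pipeline: (1) strip protected attributes, (2) train,
    # (3) measure bias afterward. Data and column names are invented.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    PROTECTED = ["gender", "race", "ethnicity"]

    def train_screening_model(df: pd.DataFrame) -> LogisticRegression:
        # Step 1: remove fields that "can create division" from the features.
        # Caveat: proxies for these attributes (a zip code, say) can remain.
        X = df.drop(columns=PROTECTED + ["hired"])
        # Step 2: fit a simple screening model on the remaining numeric signals.
        return LogisticRegression(max_iter=1000).fit(X, df["hired"])

    def equal_opportunity_gap(model, df: pd.DataFrame) -> float:
        # Step 3: analytics to detect and measure bias -- compare predicted
        # hire rates among qualified candidates across groups. A gap of 0.0
        # would mean the model satisfies equal opportunity on this data.
        X = df.drop(columns=PROTECTED + ["hired"])
        preds = pd.Series(model.predict(X), index=df.index)
        qualified = df[df["hired"] == 1]
        rates = qualified.groupby("gender").apply(lambda g: preds[g.index].mean())
        return rates.max() - rates.min()

The caveat in step 1 matters: simply dropping protected columns does not remove correlated signals, which is why the measurement step is part of the process rather than an afterthought.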

“You’ll think you’re being neutral because you’re using data, but our research shows that’s not always the case.”

Joy Buolamwini
Founder of the Algorithmic Justice League and a research assistant at the MIT Media Lab in Cambridge, Mass.

Buolamwini says researchers using virtual testing grounds known as “sandboxes” have been finding algorithmic bias in machine learning systems for years. “Now these systems are being sold and incorporated into the tools we use every day,” she says. “This is part of why we’re seeing the algorithmic bias.” Buolamwini says in some cases machine learning tools are still at an early stage and aren’t an appropriate foundation for commercial applications such as recruiting bots. “If you don’t have [the right] foundation for building these systems, you’re going to perpetuate discrimination, perpetuate inequalities,” she says.
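
The kind of check such a sandbox runs can be simple. Here is a minimal, hypothetical Python audit using the four-fifths (80%) rule, a long-standing adverse-impact screen in US hiring practice; the selection rates below are invented:

    # Hypothetical sandbox audit: flag adverse impact before a model ships.
    # The four-fifths rule compares each group's selection rate to the highest.
    def adverse_impact_ratio(selection_rates):
        """Lowest group selection rate divided by the highest."""
        return min(selection_rates.values()) / max(selection_rates.values())

    # Invented selection rates from a model run against sandbox test data.
    rates = {"group_A": 0.40, "group_B": 0.28}
    ratio = adverse_impact_ratio(rates)
    print(f"impact ratio: {ratio:.2f}")  # 0.70, below the 0.80 threshold
    if ratio < 0.8:
        print("Fails the four-fifths rule -- review before deployment.")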

“There’s emphasis on technical problems with AI, but that’s the misleading view. It’s the socio-technical issues.”

Rashida Richardson
Director of policy research at AI Now Institute at New York University in New York City

Richardson says AI hiring tools are only as unbiased as the people who feed the systems data and interpret the results. “Hiring is a multistep process,” she says. “If you’re not looking through the entire pipeline of that process and how this tool will interact with all of the other decision points, then you’re choosing to take a very narrow view on what you think that problem is.” Richardson says research already shows that women and people of color aren’t proportionately represented in higher-paying sectors. “If you apply an AI hiring tool in that environment, it’s only going to accelerate that problem, favoring whoever is currently benefiting from the power structure within a company,” she says.

“It’s important technology is made by diverse individuals.”

Mekala Krishnan
Senior fellow at McKinsey Global Institute in Boston

Krishnan says one potential way to reduce biases in machine learning is to have a more diverse workforce developing the technology. “Women make up about 20% or less of tech workers in developed economies, and so there’s a lot to be done to increase women’s participation in technological fields,” she says.

“We are looking beyond hard data.”

Ilana Weinstein
Chief executive officer of IDW Group, a recruitment firm based in New York City

Weinstein says she doesn’t think recruiting bots will help diversify certain white-male-led workplaces, and not necessarily because of built-in bias. In the hedge fund industry, for example, there just aren’t enough diverse candidates applying for senior-level jobs, she says. This is a shame, Weinstein says, because diversity is becoming a top priority for hedge funds. “Diversity means different ways of looking at things,” which adds value, she says. Weinstein says recruiting bots can’t go out and encourage more women to join the industry. “Hedge fund managers need to broadcast the importance of diversity the same way that banks, consulting firms, and other large institutions do at the entry level.”

To contact the editor responsible for this story: Siobhan Wagner at swagner33@bloomberg.net

©2019 Bloomberg L.P.