Can a Tiny AI Group Stand Up to Google?

Artificial intelligence isn’t always so smart. It has amplified outrage on social media and struggled to flag hate speech. It has designated engineers as male and nurses as female when translating language. It has failed to recognize people with darker skin tones when matching faces.

Systems powered by machine learning are amassing greater influence on human life, and while they work well most of the time, developers are constantly fixing mistakes like a game of whack-a-mole. That means AI's future impact is unpredictable. At best, it will likely continue to harm at least some people because it is often not trained properly; at worst, it will cause harm on a societal scale because its intended use isn't vetted. Think surveillance systems that use facial recognition and pattern matching.

Many say we need independent research into AI, and good news on that came Thursday from Timnit Gebru, a former ethical AI researcher at Alphabet Inc.'s Google. She had been fired exactly a year ago following a dispute over a paper critical of large AI models, including ones developed by Google. Gebru is starting DAIR (Distributed AI Research), which will work on the technology "free from Big Tech's pervasive influence" and probe ways to weed out the harms that are often deeply embedded.

Good luck to her, because this will be a tough battle. Big Tech carries out its own AI research with much more money, effectively sucking the oxygen out of the room for everyone else. In 2019, for instance, Microsoft Corp. invested $1 billion in OpenAI, the research firm co-founded by Elon Musk, to power its development of a massive language-predicting system called GPT-3. A Harvard University study on AI ethics, published Wednesday, said that investment went to a project run by just 150 people, marking "one of the largest capital investments ever exclusively directed by such a small group."

Independent research groups like DAIR will be lucky to get even a fraction of that kind of cash. Gebru has lined up funding from the Ford, MacArthur, Kapor Center, Rockefeller and Open Society foundations, enough to hire five researchers over the next year. But it’s telling that her first research fellow is based in South Africa and not Silicon Valley, where most of the best researchers are working for tech firms.

Google’s artificial intelligence unit DeepMind, for instance, has cornered much of the world’s top talent for AI research, with salaries in the range of $500,000 a year, according to one research scientist. That person said they were offered three times their salary to work at DeepMind. They declined, but many others take the higher pay. For stretched academics and independent researchers, the promise of proper funding is too powerful a lure, especially as many reach an age where they have families to support.

In academia, the growing influence of Big Tech has become stark. A recent study by scientists at multiple universities, including Stanford, found that the share of academic machine-learning research with Big Tech funding or affiliations tripled to more than 70% in the decade to 2019. The industry’s growing presence “closely resembles strategies used by Big Tobacco,” the authors of that study said.

Researchers who want to leave Big Tech also find it almost impossible to disentangle themselves. The founders of Google’s DeepMind sought for years to negotiate more independence from Alphabet to protect their AI research from corporate interests, but those plans were finally nixed by Google in 2021. Several of OpenAI’s top safety researchers also left earlier this year to start their own San Francisco-based company, called Anthropic Inc., but they have gone to venture capital investors for funding. Among the backers: Facebook co-founder Dustin Moskovitz and Google’s former Chief Executive Officer Eric Schmidt. It has raised $124 million to date, according to PitchBook, which tracks venture capital investments.

“[Venture capital investors] make their money from tech hype,” says Meredith Whittaker, a former Google researcher who helped lead employee protests over Google’s work with the military, before resigning in 2019. “Their interests are aligned with tech.”

Whittaker, who says she wouldn’t be comfortable with VC funding, co-founded another independent AI research group at New York University, called the AI Now Institute. Other similar groups that mostly rely on grants for funding include the Algorithmic Justice League, Data for Black Lives and Data & Society.

Gebru at least is not alone. And such groups, though humbly resourced and vastly outgunned, have, through the constant publication of studies, created awareness of previously unknown issues like bias in algorithms. That’s helped inform new legislation like the European Union’s upcoming AI law, which will ban certain AI systems and require others to be more carefully supervised. There’s no single hero in this, says Whittaker. But, she adds, “we have changed the conversation.”

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Parmy Olson is a Bloomberg Opinion columnist covering technology. She previously reported for the Wall Street Journal and Forbes and is the author of "We Are Anonymous."

©2021 Bloomberg L.P.