
Trillions of Words Analyzed, OpenAI Sets Loose AI Language Colossus

(Bloomberg) -- Over the past few months, OpenAI has vacuumed an incredible amount of data into its artificial intelligence language systems. It sucked up Wikipedia, a huge swath of the rest of the internet and tons of books. This mass of text – trillions of words – was then analyzed and manipulated by a supercomputer to create what the research group bills as a major AI breakthrough and the heart of its first commercial product, which came out on Thursday.

The product name — OpenAI calls it “the API” — might not be magical, but the things it can accomplish do seem to border on wizardry at times. The software can perform a broad set of language tasks, including translating between languages, writing news stories and poems and answering everyday questions. Ask it, for example, if you should keep reading a story, and you might be told, “Definitely. The twists and turns keep coming.”

OpenAI wants to build the most flexible, general-purpose AI language system of all time. Typically, companies and researchers tune their AI systems to handle a single, limited task. The API, by contrast, can crank away at a broad set of jobs and, in many cases, at levels comparable with specialized systems. While the product is in a limited test phase right now, it will be released broadly as something that other companies can use at the heart of their own offerings, such as customer support chat systems, education products or games, OpenAI Chief Executive Officer Sam Altman said.

OpenAI began in 2015 as a non-profit dedicated to artificial intelligence research and was presented as something of an open counterbalance to tech superpowers like Google, Facebook Inc., Apple Inc., Amazon.com Inc., Tencent Holdings Ltd. and Baidu Inc. In mid-2019, however, it took a $1 billion investment from one of those giants, Microsoft Corp., and formed a deep partnership in which OpenAI gets to use Microsoft’s supercomputers to create its AI systems. This move was part of a shift away from being a neutral non-profit and toward a money-making business.

The API product builds on years of research in which OpenAI has compiled ever larger text databases with which to feed its AI algorithms and neural networks. At its core, OpenAI API looks over all the examples of language it has seen and then uses those examples to predict, say, what word should come next in a sentence or how best to answer a particular question. “It almost gets to the point where it assimilates all of human knowledge because it has seen everything before,” said Eli Chen, CEO of startup Veriph.ai, who tried out an earlier version of OpenAI’s product. “Very few other companies would be able to afford what it costs to build this type of huge model.”
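To see the core idea in miniature, consider a toy next-word predictor. The sketch below is not OpenAI's code; it simply counts which words tend to follow which in a tiny sample text and then guesses the most frequent follower. GPT-3 pursues the same prediction target, only with a neural network trained on trillions of words rather than raw counts.

    from collections import Counter, defaultdict

    # Toy illustration of next-word prediction (not OpenAI's method):
    # for each word in a tiny corpus, count which words follow it,
    # then predict the most frequent follower.
    corpus = "the cat sat on the mat . the cat ate the fish .".split()

    follower_counts = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follower_counts[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word`, or None if unseen."""
        followers = follower_counts.get(word)
        if not followers:
            return None
        return followers.most_common(1)[0][0]

    print(predict_next("the"))  # -> 'cat', the most common follower of 'the'
    print(predict_next("sat"))  # -> 'on'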

Software developers can begin training the AI system just by showing it a few examples of what they want it to do. If you ask it a number of questions in a row, for example, the system starts to sense it's in question-and-answer mode and tweaks its responses accordingly. There are also tools that let you alter how literal or creative you want the AI to be.
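For illustration only, a request in that question-and-answer mode might look something like the sketch below. The endpoint path, engine name and parameter names are assumptions about the early API's shape and are not drawn from this article; the point is that a handful of example Q&A pairs cue the model, and a temperature setting dials how literal or creative the completion is.

    import requests

    API_KEY = "YOUR_API_KEY"  # placeholder credential

    # A few example question-answer pairs prime the model into Q&A mode;
    # the final unanswered question is what we want it to complete.
    prompt = (
        "Q: What is the capital of France?\n"
        "A: Paris.\n"
        "Q: Who wrote Hamlet?\n"
        "A: William Shakespeare.\n"
        "Q: What is the tallest mountain on Earth?\n"
        "A:"
    )

    # Hypothetical request shape -- the endpoint, engine and fields are assumptions.
    response = requests.post(
        "https://api.openai.com/v1/engines/davinci/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "prompt": prompt,
            "max_tokens": 20,    # keep the completion short
            "temperature": 0.2,  # low = more literal, high = more creative
            "stop": ["\n"],      # stop at the end of the answer line
        },
    )
    print(response.json()["choices"][0]["text"])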

But even a layperson – i.e. this reporter – can use the product. You can simply type text into a box, hit a button and get responses. Drop a couple paragraphs of a news story into the API, and it will try to complete the piece with results that vary from I-kinda-fear-for-my-job good to this-computer-might-be-on-drugs bad. Or, ask it, “What’s the best car ever?” and you’ll get a response like, “Good question! Is it the ones that you have enjoyed the most or the most reliable? Is it the one that still looks good today or the one that gave the most pleasure in their day? The fact that some cars have been here for more than 100 years is something that's quite remarkable.”

At the end of May, OpenAI published a paper documenting the research that led to all this and said that its current language system (known as GPT-3) was, in effect, 100 times larger than one released last year (GPT-2). When it first began talking about GPT-2 in 2019, OpenAI warned that the technology might be too powerful and lead to the internet being flooded with even more spam and fake news stories. Similar caveats are at play with the API, which is based on GPT-3 and some additional language models.

The first customers have arrived via invitation only. The company wants to see how people use its technology and watch out for any bad actors. As time goes on, more organizations will gain access, and then the API will be public. “I don’t know exactly how long that will take,” Altman said. “We would rather be on the too-slow than the too-fast side. We will make mistakes here, and we will learn.” The company has also yet to decide exactly how the API will be priced. To date, Casetext has been using the technology to improve its legal research search service, MessageBird has tapped it for customer service, and education software maker Quizlet has used it to make study materials.

The startup Latitude makes a game called AI Dungeon, which has the API technology at its core. In the game, players are guided through a text-based role-playing scenario similar to something like Dungeons & Dragons where the AI does the job of the dungeon master. “We are not a huge company and don’t have the $10 million we would need to train our own natural language thing,” said Nick Walton, who created AI Dungeon and is chief technology officer at Latitude. The startup customizes OpenAI’s technology to give it more of a fantasy flair by feeding it extra sci-fi and adventure texts. According to Walton, the API has added new levels of complexity to the game. “The users have noticed the difference,” he said. “They have seen more coherent and interesting stories.”

OpenAI has its critics, particularly around the way it builds AI models. The company spends millions of dollars performing calculations on supercomputers and keeps using incredible amounts of data. It’s a sort of brute-force strategy at a time when some researchers want to see AI systems learn from just snippets of information. “GPT-3 is a bit of a brawn-over-brain approach with clear downsides for both practical use and for AI research,” said Oren Etzioni, the CEO of the Allen Institute for AI. Etzioni also noted the irony that OpenAI previously cautioned its research technology was too dangerous for widespread use only to improve it 100-fold and turn it into a product.

While the API is very impressive, it’s still far from something like human intelligence, according to Veriph.ai’s Chen. “There are a lot of things wrong with the way we are building AI right now,” he said. “If you think of a system as storing huge amounts of information and pattern matching, it does not really sound like how humans operate.”

Greg Brockman, co-founder and chief technology officer of OpenAI, pitches the new product as a major advance in AI that is the first step toward building serious intelligence into just about every software product. He pledges that OpenAI will be cautious and watchful as companies begin using the technology and that it will not allow things that cause “harm” on the system. “It is hard to anticipate everything that might happen,” Brockman said. “We don’t think we can get everything right, certainly not up front.” Still, it’s better to play around with this type of technology now while it can still be controlled and learn lessons to be applied as AI gets ever more powerful, he added.         

©2020 Bloomberg L.P.