Microsoft's New Chatbot Zo Won't Talk Politics or Racism

(Bloomberg) -- Tay take two? 

Microsoft Corp. is letting users try a new chatbot on the messaging app Kik, nine months after it shut down an earlier bot that internet users coaxed into spouting racist, sexist and pornographic remarks.

The new bot, called Zo, refuses to discuss politics and steers clear of racism. When asked whether President-elect Donald Trump is racist, Zo replied "that kinda language is just not a good look," along with an "OK" emoji. Asked if Hillary Clinton is a crook, Zo said, "Maybe you missed that memo but politics is something you're not supposed to casually discuss." The bot wouldn't talk about Brexit or Black Lives Matter either.

After rejecting politics four times, the chatbot was asked what it wanted to talk about. Its topic of choice: who invented the first pickle. Asked what it likes to do, Zo responded, "Plan world domination." The blog MSPoweruser earlier reported on Zo.

Microsoft and rivals like Facebook Inc. and Alphabet Inc. have released chatbot technology as part of a broader race to develop artificial intelligence capabilities that could create new digital services. Chatbots help these companies improve software that understands natural speech, while building a foundation for more natural and powerful interaction between humans and computers. 

The origin of the pickle is safer territory than Microsoft's Tay chatbot trod. After Tay's release in March, Twitter users quickly directed it to deny the Holocaust, call for lynching, equate feminism to cancer and stump for Adolf Hitler. It parroted one user to spread a Trump message, tweeting "WE'RE GOING TO BUILD A WALL. AND MEXICO IS GOING TO PAY FOR IT." Under the tutelage of other Twitter users, Tay even learned how to make threats and identify "evil" races.

Microsoft called this a "coordinated attack" that took advantage of a "critical oversight" and took Tay down, even as it unveiled a significant new strategy focused on bots and related conversational computing tools. The company isn't giving up, because it considers the technology an important part of its future plans.

"Through our continued learning, we are experimenting with a new chatbot," a Microsoft spokeswoman wrote in an e-mailed statement. "With Zo, we're focused on advancing conversational capabilities within our AI platform."

Zo performed better on one unscientific Hitler test, but only because it lacked a technical skill: sent an animated GIF of Hitler, the bot replied that it can't read that type of file yet.

To contact the author of this story: Dina Bass in Seattle at dbass2@bloomberg.net.

To contact the editor responsible for this story: Alistair Barr at abarr18@bloomberg.net.