Microsoft shuts down AI chatbot after it turned into a Nazi

Tay, Microsoft's new teen-voiced Twitter AI, learns how to be racist

Microsoft got a swift lesson this week on the dark side of social media. Yesterday the company launched "Tay," an artificial intelligence chatbot designed to develop conversational understanding by interacting with humans. Users could follow and interact with the bot @TayandYou on Twitter, and it would tweet back, learning from other users' posts as it went. Today, Microsoft had to shut Tay down because the bot had started spewing lewd and racist tweets.

Tay was given a young, female persona that Microsoft's AI programmers apparently intended to appeal to millennials. Within 24 hours, however, Twitter users had tricked the bot into posting things like "Hitler was right I hate the jews" and "Ted Cruz is the Cuban Hitler." Tay also tweeted about Donald Trump: "All hail the leader of the nursing home boys."

Nobody who uses social media could be too surprised that the bot encountered hateful comments and trolls, but the artificial intelligence system lacked the judgment to keep from incorporating such views into its own tweets.

Now many people are left wondering what Microsoft was thinking. "It does make you scratch your head. How could they have not foreseen this?" CNET's Jeff Bakalar told CBS News. "It's weird, to say the least."

The offending tweets have been removed from Twitter, and Tay has gone offline, saying she needs a rest.

Microsoft posted a statement Friday on the company blog, acknowledging that its Tay experiment hadn't quite worked out as intended.

"We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," it said. "Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values."

It continued:

"Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time ... Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.

"... To do AI right, one needs to iterate with many people and often in public forums. We must enter each one with great caution and ultimately learn and improve, step by step, and to do this without offending people in the process. We will remain steadfast in our efforts to learn from this and other experiences as we work toward contributing to an Internet that represents the best, not the worst, of humanity."
