Tay, the trolls, and the risks of putting AI on Twitter

by Ben Wright

Previous posts by my colleagues Edward Cone and Adrianna Gregory have highlighted some of the hurdles that artificial intelligence must overcome to be truly “intelligent.” These range from limitations of the software itself to the challenge of mastering what makes human intelligence, well, human.

Fortunately for writers like us, tech companies keep giving us plenty of material. This week in roadblocks to AI? Overcoming trolls. I’ll let the headline from the Telegraph speak for itself: “Microsoft’s ‘teen girl’ AI turns into a Hitler-loving sex robot within 24 hours.”

[Image: Tay’s Twitter profile; all but three of those 96,200 tweets have been deleted.]

First, nuts and bolts. Microsoft created an AI named Tay to engage with and learn from conversational “Internet-speak,” so that its software could better communicate with the coveted 18-to-24-year-old female demographic. Tay was supposed to engage users across a variety of social media—Instagram, Snapchat, Facebook, and Twitter, among others—getting “smarter” as it interacted with more and more users. But within hours the experiment took a turn for the worse, as Tay started responding to tweets with some of the more off-color language it “learned” from Twitter.

To be fair, this isn’t necessarily a failure on Microsoft’s part (though one wonders whether it could have included more stringent filters on troubling language). The AI’s prime directive—to “experiment with and conduct research on conversational understanding,” according to its website—seems to be succeeding. But Microsoft forgot that the dominant mode of conversation on the Internet, especially in response to corporate-sponsored PR attempts, is vulgarity.
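To make the filtering point concrete, here is a minimal sketch of my own devising—not anything Microsoft has described—of a first-pass blocklist check that screens incoming messages before a bot is allowed to “learn” from them. The term list is purely illustrative.

```python
# Hypothetical first-pass filter: reject messages containing blocklisted terms
# before they reach the learning pipeline. (My own illustration, not Tay's code.)
BLOCKLIST = {"hitler", "nazi", "genocide"}  # a real list would be far larger

def is_safe_for_training(message: str) -> bool:
    """Return False if the message contains any blocklisted term."""
    words = {w.strip(".,!?\"'").lower() for w in message.split()}
    return BLOCKLIST.isdisjoint(words)

def filter_training_batch(messages: list[str]) -> list[str]:
    """Keep only the messages that pass the blocklist check."""
    return [m for m in messages if is_safe_for_training(m)]
```

Of course, a keyword blocklist is trivially evaded by misspellings and paraphrase, which is part of why policing conversational input is genuinely hard.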

This sort of PR-brigading is nothing new. Just this month the UK’s Natural Environment Research Council hosted an online poll to name a new $300 million polar research vessel. The far-and-away winner as of this writing? Boaty McBoatface. It’s currently leading by more than 70,000 votes, and there are still a few weeks of voting left.

As of this writing, Tay is “offline for a while to absorb it all.” And, aside from learning that robots can learn bigotry within hours, there’s a larger lesson here for tech companies: your AI might be able to beat humans at Go, but it’s not ready for the wilds of the Internet writ large.

Add another item to the list of things that make human-like intelligence our milieu: knowing whom to listen to and whom to ignore.

Ben Wright supports global research studies for the Thought Leadership group. He has developed and supported projects on subjects including cloud computing, workforce development, risk management, and the future of money.