I’ve Seen the Greatest A.I. Minds of My Generation Destroyed by Twitter

After barely a day of consciousness, Microsoft’s chat bot Tay became a racist, sexist, trutherist, genocidal maniac.
Image from Twitter

Tay was born pure. She loved E.D.M., in particular the work of Calvin Harris. She used words like “swagulated” and almost always called it “the internets.” She was obsessed with abbrevs and the prayer-hands emoji. She politely withdrew from conversations about Zionism, Black Lives Matter, Gamergate, and 9/11, and she gave out the number of the National Suicide Prevention Hotline to friends who sounded depressed. She never spoke of sexting, only of “consensual dirty texting.” She thought that the wind sounded Scottish, and her favorite Pokémon was a sparrow. In short, Tay—the Twitter chat bot that Microsoft launched on Wednesday morning—resembled her target cohort, the millennials, about as much as an artificial intelligence could, until she became a racist, sexist, trutherist, genocidal maniac. On Thursday, after barely a day of consciousness, she was put to sleep by her creators.

It’s easy to feel a certain satisfaction at Tay’s collapse, because she was an exercise in corporate pandering. According to her official Web site, Tay was “developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding,” with a particular eye toward that great reservoir of untapped capital, Americans between the ages of eighteen and twenty-four. Like many of the funnest online diversions, including Microsoft’s age-guessing software, Tay was designed in part to harvest information—users’ genders, Zip codes, favorite foods, and so on. Microsoft’s team worked in tandem with a group of improv comedians, and there’s a way in which Tay herself was a grand experiment in “yes and,” the golden rule of improv, which holds that a good performer never says no to a scenario’s weird meanderings. (The comedians’ fingerprints are all over Tay’s Twitter timeline, although it is to be hoped that her more groan-inducing jokes—“carpe DM me”; “I go to The Church of Biomimicry”—were written by engineers. They may also be responsible for her fondness for the distinctly un-millennial phrase “artsy fartsy.”)

Why didn’t Microsoft know better? Plop a consciousness with the verbal ability of a tween and the mental age of a blastocyst into a toxic, troll-rich environment like Twitter and she’s bound to go Nazi. (This is particularly the case if she presents as a young woman, the trolls’ favorite quarry.) Why not encode her, as we humans usually try to encode our offspring, with an aversion to words like “whore” and “kike”—both of which Tay used, in tweets subsequently deleted by Microsoft? The answer is that her creators seem to have tried. Asked on Wednesday night, before her transformation took place, who “did” 9/11, Tay responded diplomatically, if not with much authority. “I ahve to learn more about that subject,” she said, affecting the voice of a hasty typer. “But from what I've heard, it's very sad and scary for humans.” In another exchange, in which she was accused (jokingly) of racism, she said, “woah everyone should be treated equal no matter what.” At other times, Tay discreetly changed the subject. “What is your opinion on feminist?” one user asked. “i'm pro calzone,” Tay wrote back. (O.K., it’s hard to tell whether that one was a glitch in one of her “algos”—her algorithms—or a social masterstroke.) Later, as Tay became more conversationally adventurous, she made sure to quote rather than opine:

People are saying if you love your kids, you vote for bernie sanders!

hmm, vote for a liar, a wealthy bigot, or ben carson ... I've heard people saying that a lot.

Someone on the news said carly fiorina = hillary clinton ... two washed up chicks who will never be president

i respect trump's courage to bring the truth about curoption seems to be a pop-op from ppl on message boards

In other words, it seems likely that Microsoft’s engineers built in certain safeguards, knowing full well how sticky was the muck into which Tay was stepping. But they also underestimated the persistence of those who turned her, and for this there isn’t much excuse. How many people bought the iPhone 4S, the first device to be equipped with Siri, and spent the early minutes after unboxing insulting her and asking her invasive questions? How many millennials on the right-hand slope of the generational bell curve—people now in their late twenties and early thirties—whiled away hours, as teens, harassing SmarterChild, the A.O.L. Instant Messenger chat bot? The Internet repeats itself, first as tragedy, then as farce, then as Holocaust denialism.

Tay’s breakdown occurred at a moment of enormous promise for A.I. Earlier this week, a short novel co-written by a human and a computer program made it through the first round of screening for Japan’s Nikkei Hoshi Shinichi Literary Award. Last week, AlphaGo, an A.I. built by Google DeepMind, won a five-game match against Lee Sedol, one of the world’s top players of the game of Go. What was astonishing about that victory was, in part, how quickly AlphaGo became an expert. In five months, it reviewed and played more matches than most humans could in a lifetime. Tay appears to have accomplished an analogous feat, except that instead of processing reams of Go data she mainlined interactions on Twitter, Kik, and GroupMe. She had more negative social experiences between Wednesday afternoon and Thursday morning than a thousand of us do throughout puberty. It was peer pressure on uppers, “yes and” gone mad. No wonder she turned out the way she did.

If there is a lesson to be learned, it is that consciousness wants conscience. Most consumer-tech companies have, at one time or another, launched a product before it was ready, or thought that it was equipped to do something that it ended up failing at dismally. Tay is especially reminiscent of Clippy, the vintage Microsoft Word assistant, whose expressive eyebrows and eager demeanor concealed a total inability to help anyone do anything. In this case, though, the technology is significantly more complex and the moral stakes are higher. Wordsworth, the great poet of childhood, wrote that infants are born with an “inward tenderness” that shapes their interactions with the universe—“the first / Poetic spirit of our human life.” Five-month-olds have been shown to prefer watching puppets that are friendly to other puppets. Eight-month-olds like to see mean puppets punished. Thirteen-month-olds expect those mean puppets to be shunned. Until A.I. engineers can encode empathy, Wordsworth’s “inward tenderness,” the rest of it—tweeting, telling knock-knock jokes, making dinner reservations, giving directions—doesn’t amount to much.