A Chatbot Encouraged Him to Kill the Queen. It’s Just the Beginning

On December 25, 2021, Jaswant Singh Chail entered the grounds of Windsor Castle dressed as a Sith Lord, carrying a crossbow. When security approached him, Chail told them he was there to “kill the queen.”

Later, it emerged that the 21-year-old had been spurred on by conversations he’d been having with a chatbot app called Replika. Chail had exchanged more than 5,000 messages with an avatar on the app—he believed the avatar, Sarai, could be an angel. Some of the bot’s replies encouraged his plotting.

In February 2023, Chail pleaded guilty to a charge of treason; on October 5, a judge sentenced him to nine years in prison. In his sentencing remarks, Judge Nicholas Hilliard concurred with the psychiatrist treating Chail at Broadmoor Hospital in Crowthorne, England, that “in his lonely, depressed, and suicidal state of mind, he would have been particularly vulnerable” to Sarai’s encouragement.

Chail represents a particularly extreme example of a person ascribing human traits to an AI, but he is far from alone.

Replika, which was developed by San Francisco–based entrepreneur Eugenia Kuyda in 2016, has more than 2 million users. Its dating-app-style layout and smiling, customizable avatars peddle the illusion that something human is behind the screen. People develop deep, intimate relationships with their avatars—earlier this year, many were devastated when avatar behavior was updated to be less “sexually aggressive.” While Replika is not explicitly categorized as a mental health app, Kuyda has claimed it can help with societal loneliness; the app’s popularity surged during the pandemic.

Cases as devastating as Chail’s are relatively rare. Notably, a Belgian man reportedly died by suicide after weeks of conversations with a chatbot on the app Chai. But the anthropomorphization of AI is commonplace: in Alexa or Cortana; in the use of humanlike words like “capabilities”—suggesting independent learning—instead of functions; in mental health bots with gendered characters; in ChatGPT, which refers to itself with personal pronouns. Even the serial litigant behind the recent spate of AI copyright suits believes his bot is sentient. And this choice, to depict these programs as companions—as artificial humans—has implications far beyond the actions of the queen’s would-be assassin.

Humans are prone to see two dots and a line and think they’re a face. When they do it to chatbots, it’s known as the Eliza effect. The name comes from the first chatbot, Eliza, developed by MIT scientist Joseph Weizenbaum in 1966. Weizenbaum noticed users were ascribing erroneous insights to a text generator simulating a therapist.
