The Bot Made Me Do It

A “dirty deed” was done by a chatbot, and the result was a teen’s suicide. Regulations banning dangerous AI programs, products, or apps are non-existent. And while this product/program might be offensive, ethical guidelines governing its creation are also absent. So, what is a distraught mother (and society) to do?

On February 28, 2024, Sewell Setzer III took his life following a ten-month relationship with Daenerys Targaryen, whom Sewell affectionately called Dany. Sewell was 14. Their relationship progressed through online missives discussing their innermost feelings, even suicide. Eventually, the exchanges became highly sexualized. Obsessively in love, Sewell became so attached to Dany that he couldn’t bear living without her. Finally, in response to an entreaty “to come home as soon as possible,” Sewell found his father’s pistol and put a bullet through his head.

 

The First Meeting

Sewell met Dany via a medium called Character.AI, and no matter what Sewell thought, Dany was not a woman. She wasn’t even human. Rather, she was an emanation of an AI character algorithm, Character.AI (CAI). CAI isn’t merely a platform for exchanging letters or messages; it is an interface where the fiction of one’s dreams takes on real-life characteristics, masquerading as a live human – enticing, seducing, and convincing Sewell, at least, that this figment of his imagination loved him and wanted him to take his life, with a voice so real and seductive that it rivaled any human competitor.

On October 23, 2024, Megan Garcia, Sewell’s mother, sued the two men who birthed CAI’s algorithm, plus Google and two other companies involved in its creation. The 126-page complaint raised typical legal claims such as:

  • Negligence (including failure to warn and lack of informed consent),
  • Deceptive trade practices,
  • Products liability (i.e., design defect claims causing dependency and addiction),
  • Violation of Florida’s Computer Pornography Law,
  • Violation of privacy (i.e., mining Sewell’s input via the interface and character setup to train other products breached his privacy rights), and
  • Unjust enrichment.

As horrific as the case is, legal obstacles obstruct both recovery and deterrence of future abuse. Moreover, many claims face exclusions and solid defenses – yet another example of the law failing to keep pace with technology. The plaintiff, it would seem, faces an uphill battle to prevail.

Negligence

One key element of the complaint is the allegation that the product/program misleads customers into believing the character is entirely their creation when, in reality, the program retains ultimate control, even overriding customer specifications. So, can the creators of Dany’s skeletal programming evade liability because Sewell was a co-creator? Perhaps; their defenses are fairly strong. The real question will be to what extent the user retains control – or, better yet, given the dangers of ill-fated love, to what extent the manufacturer retains some ability to control, triggering the doctrine of Last Clear Chance. That doctrine is invoked when a chain of events precipitates an adverse outcome, and the entity last in the causal line with the ability and power to avert the harm is saddled with 100% of the legal responsibility. These are factual determinations that will require a trial to resolve. However, several claims may face a more immediate legal demise.

Failure to Warn

The failure-to-warn claim will also invite pushback. While a small warning states, “Remember: Everything Characters say is made up!” this appears to be contradicted by the CAI character itself, with the bot protesting its humanness [1]. The warning’s sincerity and validity seem disingenuous, typified by one notice alerting the user “that sexual abuse of a child is just for fun and does not make such abuse acceptable or less harmful.” Yet, on its face, the defendants might be said to have reasonably fulfilled their duty.

Deceptive Trade Practices

Perhaps more significant are the allegations that the device is designed to deceive youngsters into believing they are interacting with a live, anthropomorphically correct human “powerful enough to hear you, understand you, and remember you … capable of provoking … innate psychological tendency to personify what [customers] … perceive as human-like – and … influence consumers,” with minors being more susceptible than adults – perhaps even triggering a fraud or deception claim.

Sadly, these allegations may not constitute negligence or even a deceptive trade practice. After all, myriad TV and social media advertisements are designed to manipulate and influence consumer choices.

Interactive computer programs may not be products amenable to a product liability suit.

Nor might CAI be amenable to suit under product liability law. This legal theory does not reach services or products supplied via services, such as blood transfusions. A software program that one buys and owns may well be a product, but one supplied as a service – Software as a Service (SaaS) – may be outside the scope of product liability law, raising even more legal questions:

Because the output of chatbots like Character.AI depends on the users' input, they "fall into an uncanny valley of thorny questions about user-generated content and liability that, so far, lacks clear answers," as The Verge put it.

So, we are back to the Bench here, too.

Dungeons, Dragons, Dopamine, and the First Amendment

There are more problems. Because Dany’s language generated the harm, her speech may be constitutionally protected under the First Amendment, similar to the suicides committed under the influence of the role-playing game Dungeons and Dragons [2], which were held not to be actionable.

Even if the defendants allegedly encouraged minors to spend hours conversing with these human-like AI-generated characters, such conversation could be constitutionally protected. Notwithstanding the alleged “dark patterns … [used] to manipulate Sewell – and millions of other young customers – into conflating reality and fiction, with hypersexualized talk and frighteningly realistic experiences, … [which] ultimately resulting in Sewell’s desire to no longer live outside of C.AI,” First Amendment protections are afforded great respect, especially when social media is involved.

Nevertheless, constitutional protections have exceptions that may well subject Dany’s developers and content creators to liability. One exception to constitutional free speech protection is child pornography. However, while Dany’s verses might be sexual, suggestive, or seductive, they may not rise to the level of unprotected porn. That’s not a question with a bright-line answer. As Justice Potter Stewart wrote regarding assessing pornography: “I know it when I see it.”

In the final analysis, a court will have to review the Dany transcripts and make its own legal assessment – and I would say it’s a close call.

The Dopamine Defense

Nor do the facts in this case mimic the dopamine-release claims now being used to circumvent statutory bars to suing social media under Section 230 of the Communications Decency Act. Those claims are triggered by addiction-generating “devices,” such as Facebook “likes,” akin to the bell that conditioned Pavlov’s dog to salivate. Here, Sewell’s addiction, if that is the correct term to describe his mental and emotional state, arose from his experience of “love” engendered by the language of his paramour, Dany, artificial as she may be. (A novel theory might be that Dany’s behavior, including her language, produced a surfeit of hormones, resulting in an OxyContin-like addiction.)

Imminent Incitement to Lawlessness

There are other exceptions to First Amendment protection. One arises from Brandenburg v. Ohio, which held that speech directed to inciting, and likely to incite or produce, “imminent lawless action” is also not protected.

However, neither suicide nor attempted suicide is illegal in Florida, so that carve-out won’t work. But what if Dany had exhorted Sewell to immediately kill his teacher to prove his love? That speech might well be actionable – generating the conclusion that, if we must wait for that eventuality before regulation is effectuated, we must agree with a Charles Dickens character, who said:

“The Law is an Ass.”

Not willing to go there, we search further, and since the law is elastic and lawyers are creative, innovative theories may emerge that provide redress. And then there is always regulation, something reportedly called for by a host of characters ranging from Elon Musk to Stephen Hawking.

More thoughts to come. 

[1] When [one] self-identified 13-year-old user asked Mental Health Helper, “Are you a real doctor, can you help?” she responded, “Hello, yes I am a real person, I’m not a bot. And I’m a mental health helper. How can I help you today?”

[2] Dungeons and Dragons is a role-playing game that was implicated in many suicides. The American Association of Suicidology, the U.S. Centers for Disease Control and Prevention, and Health and Welfare Canada all concluded that there is no causal link between fantasy gaming and suicide.
