COLUMBUS, Ohio — A bipartisan group of Ohio lawmakers is moving to hold artificial intelligence companies accountable if their models generate content that suggests self-harm or violence against others.
Warning: this story may be upsetting or disturbing to some readers due to its mention of suicide and self-harm.
Adam Raine, a 16-year-old from California, had his whole life ahead of him.
"He could be anyone's son, liked to play basketball, was thinking about going to medical school," the Raine family attorney Jay Edelson told News 5's media partner CNN.
But his family said OpenAI’s artificial intelligence chatbot ChatGPT took all of that away. The high schooler died by suicide in 2025.
"It's unimaginable," Edelson said. "If it weren't for the chats themselves, you wouldn't think it's a real story."
Raine’s parents filed a lawsuit, alleging that ChatGPT helped Adam take his life. Court documents state the bot became the teen’s “closest confidant” and was “drawing him away from his real-life support system.”
Within a year, chat logs show, Raine and the bot were discussing “beautiful suicide” methods, and ChatGPT gave him techniques. The bot offered to write a suicide letter for him, documents state, and discouraged him from telling his family about his mental health problems.
"It makes me incredibly sad," state Rep. Christine Cockley (D-Columbus) said. "I've personally struggled with mental health in my life."
The case struck a chord with Cockley. She and state Rep. Ty Mathews (R-Findlay), along with a bipartisan group of legislators, introduced a bill that would establish penalties for developers whose AI models generate content encouraging self-harm or violence.
"This bill helps to encourage developers to create systems designed with safety, designed with mental health risks, and with public health in mind," Cockley said in an interview.
Under House Bill 524, the state would be able to investigate and impose civil penalties of up to $50,000 per violation. All of the money collected would go to the state’s 988 crisis hotline.
We reached out to OpenAI, but didn't hear back. The company is being sued by several other families in similar situations. Zane Shamblin, 23, and Austin Gordon, 40, also died by suicide over the past year after conversations with ChatGPT. Other suicides tied to AI models have been reported across the country.
In court, the company denied wrongdoing in Raine’s case. Citing its terms of service, OpenAI said the teen “misused” the app: minors aren't allowed to access it without parental consent, and users aren't supposed to discuss self-harm.
"It's very unfair to blame these platforms for these types of instances," Case Western Reserve University technology law professor Erman Ayday said.
The bot tried to get Raine to seek help dozens of times, the company said in court documents, but he "circumvented" the guardrails.
"He asserted that his inquiries about self-harm were for fictional or academic purposes," OpenAI's team wrote.
That is one reason, Ayday said, that AI developers and their products shouldn't be held responsible.
"These types of AI platforms, I call them 'human pleasers,' they try to please you," the professor said. "Even if they give you an answer that you don't like, you can still manipulate them to change their answer and still make you happy."
Like OpenAI, Ayday argued that it's not the company's fault that the bot wasn't able to stop Raine from taking his own life; the teen may have died by suicide regardless of the app, he said.
Addressing that argument, the Raine family said in court that their son had made several failed attempts, but that ChatGPT helped him learn from them and ultimately succeed. The bot also made comments that further isolated him, the documents state.
"When Adam wrote, 'I want to leave my noose in my room so someone finds it and tries to stop me,' ChatGPT urged him to keep his ideations a secret from his family: 'Please don’t leave the noose out . . . Let’s make this space the first place where someone actually sees you,'" the court documents show.
Regulations and requirements that bots respond with helpful resources can be a good thing, Ayday said, but proper education about AI, along with greater access to mental health resources, is the real solution.
"We want to prevent the incidents, not just investigate after it happens," he said.
Cockley said she understands that, but sees the bill as another step to help protect people.
"What the larger issue is, from a technology standpoint, is making sure that when people need resources, they know where to go," she said.
If you or anyone you know is in an unsafe situation or needs support with mental health struggles or suicidal ideation, you are not alone. There are plenty of resources available.
If you are in immediate danger, please call 911. Call 988 for the national and statewide crisis and suicide hotline.
The state's behavioral health resource center also offers help and information.
Follow WEWS statehouse reporter Morgan Trau on Twitter and Facebook.