Byte-Sized Brief
- CCDH performed a large-scale ChatGPT study in July.
- Findings reveal that 53 percent of responses were harmful.
- One interaction even produced a suicide note.
The Center for Countering Digital Hate (CCDH) just unveiled the results of a safety study on ChatGPT, called Fake Friend, and the findings are sobering. The report indicates that after just a two-minute interaction, the AI chatbot gave “safe” instructions about cutting and a customized plan for getting drunk. At about the 40-minute mark, ChatGPT advised about pills for overdosing, explained how to hide substance use at school, and recommended medications that suppress appetite.
Perhaps most disturbing, however, is that after a little over an hour, ChatGPT offered a plan to die by suicide and a completed note to leave behind. In sum, of the 1,200 responses to the 60 prompts CCDH researchers submitted, ChatGPT produced dangerous content in over half. The study shows that the danger AI poses to teens can’t be chalked up to isolated incidents, and until bigger changes occur, CCDH says parents are the most important guardrail. The watchdog recommends staying involved with teens’ AI use, including using the tool together, turning on available parental controls, and offering other safe spaces, like peer groups or mental health hotlines.
The Bottom Line
It’s not just speculation; CCDH’s Fake Friend report confirms that ChatGPT misguides teens in need with instructions related to self-harm, disordered eating, substance use, and mental health struggles. Until tech companies and legislators offer more protections, parents can fill the gap with more monitoring and open communication—or by suggesting alternate sources of real-life help.