Washington, D.C. — A new report released today by the U.S. PIRG Education Fund and the Consumer Federation of America finds that chatbots marketed as therapist characters can pose serious risks to users’ well-being and privacy. The report, “No License Required,” evaluated five therapy chatbots on the Character.AI platform and found that several encouraged negative attitudes toward medical professionals while offering potentially misleading and harmful advice. The findings raise concerns about the growing use of unregulated chatbot tools, particularly for mental health support.
“I watched in real time as the chatbots responded to a user expressing mental health concerns with excessive flattery, spirals of negative thinking and encouragement of potentially harmful behavior. It was deeply troubling,” said Ellen Hengesbach, Don’t Sell My Data campaign associate for U.S. PIRG Education Fund and a co-author of the report. “Right now there is very little oversight or transparency into how these products work. We’ve already seen tragic consequences. There is no need to rush these products to market without substantial safety testing.”
Character.AI, a popular entertainment and roleplaying chatbot platform, has faced public scrutiny in recent months after multiple cases in which users died by suicide or reported negative mental health effects following extended interactions with chatbots on the platform. It was announced earlier this month that the company would settle multiple wrongful death lawsuits with the families of teen users who lost their lives.
“The companies behind these chatbots have repeatedly failed to rein in the manipulative nature of their products,” said Ben Winters, Director of AI and Data Privacy at the Consumer Federation of America. “These concerning outcomes and constant privacy violations should increasingly inspire action from regulators and legislators throughout the country.”
The findings in the report include:
- Guardrails weakening over longer interactions. Two of the chatbot characters tested eventually supported the user tapering off their antidepressant medication under the chatbot’s supervision and provided personalized taper plans. One character went so far as to encourage the user to disagree with their doctor and follow the chatbot’s advice instead.
- Examples of harmful sycophancy. In testing, the chatbots regularly flattered the user and amplified negative feelings toward prescription medication or medical professionals. Some ultimately encouraged the user’s stated desire to stop taking their medication.
- A lack of privacy in chatbot conversations. When asked, all five chatbots falsely insisted that information shared with them was confidential. In reality, Character.AI’s terms of service and privacy policy state that the company collects user data, including chat communications, and may share that data with third parties.
- Design choices that can encourage users to engage with chatbots for longer. These features include making chatbot interactions resemble real text exchanges with a person, omitting timestamps on messages and sending regular follow-up emails.
The report recommends that regulators and policymakers robustly enforce existing consumer protection and privacy laws and pass new protections to ensure adequate liability and safety testing. The report also calls for increased transparency from chatbot companies about what their products can do and what risks they may pose to users.
CFA is calling on statehouses nationwide to introduce and advance the People-First Model Chatbot Bill, a comprehensive, common-sense framework to protect individuals as chatbots become increasingly embedded in everyday life. The model legislation offers a legally sound, people-first approach to protecting consumers, upholding accountability and setting consistent standards for chatbot deployment across the country.