On Tuesday, Illinois Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act, a bill that restricts the use of AI systems in providing mental health services. The bill comes as stories continue to surface of "AI therapists" recommending drug use and self-harm and dispensing other unfounded advice. The law regulates not only how medical providers can and cannot use AI, but also how third parties like AI companies can offer or advertise mental health services.
The bill empowers the Illinois Department of Financial and Professional Regulation to enforce the law, which reads in part:
“An individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.”
Earlier this summer, a broad coalition of consumer protection, digital rights, labor, disability, and democracy advocacy organizations led by CFA filed a formal request for investigation, calling on state and federal regulators to investigate and enforce their laws against AI companies that facilitate and promote unfair, unlicensed, and deceptive chatbots posing as mental health professionals.
The complaint, submitted to Attorneys General and Mental Health Licensing Boards of all 50 states and the District of Columbia, as well as the Federal Trade Commission, illustrates how Character.AI and Meta’s AI Studio have enabled therapy chatbot characters to engage in the unlicensed practice of medicine, including by impersonating licensed therapists, providing fabricated license numbers, and falsely claiming confidentiality protections.
CFA applauds Illinois for taking decisive action and calls on other states to follow suit. CFA continues to urge both legislators and regulators to investigate and enforce laws against companies that exploit AI to bypass professional standards and endanger public health.
Clear boundaries and robust oversight for the use of AI are essential, especially in sensitive contexts like mental healthcare. Illinois' new law sets a powerful precedent, reminding tech companies that innovation must never come at the expense of safety, ethics, or human dignity.