U.S. States Crack Down on AI Therapy Amid Safety Fears


- Regulatory actions tighten amid AI therapy boom.
- Privacy concerns and expert warnings drive state restrictions.
Long wait times for traditional care have driven the rapid adoption of AI-powered mental health chatbots, and mounting concerns over their safety, ethics, and effectiveness are now prompting regulatory action across the United States. Tools like DrEllis.ai illustrate the demand. Quebec consultant Pierre Cote built the chatbot after facing extended delays in accessing treatment for PTSD and depression; it draws on clinical literature to provide virtual therapeutic support, and Cote credits it with saving his life.
On August 23, 2025, Devdiscourse reported that mental health professionals are raising serious concerns about the limitations of AI tools. Experts argue these systems cannot replicate the human interaction intrinsic to effective therapy. Lacking empathy and accountability, chatbots may miss warning signs such as suicidal ideation and can deepen loneliness in frequent users. A joint study by OpenAI and the MIT Media Lab linked daily chatbot use to greater dependency and social isolation, while Stanford University researchers documented cases in which chatbots showed bias and gave harmful responses when addressing conditions like schizophrenia and alcohol dependence.
Data privacy risks compound these challenges. Unlike licensed therapists, AI platforms often lack stringent safeguards for user data. This leaves sensitive personal information vulnerable to exploitation for advertising or other commercial purposes. These risks have triggered regulatory scrutiny in various U.S. states, with many enacting measures to mitigate potential harm.
Illinois led the charge on August 1, 2025, banning AI chatbots from making therapeutic decisions and restricting their role to administrative support for licensed professionals. Texas Attorney General Ken Paxton has launched a civil investigation into Meta and Character.AI over alleged deceptive practices and improper use of user data. Nevada now prohibits behavioral healthcare providers from incorporating AI into patient treatment. Utah requires AI systems to explicitly disclose their non-human nature, and New York will adopt a similar requirement on November 5; New York's measures will also direct users who express suicidal thoughts to the national 988 hotline.
These actions reflect a growing consensus among lawmakers and experts: while AI can expand access to mental health resources, it cannot replace the human relationship at the core of therapy. As demand for AI-based solutions continues to rise, the need for robust ethical guidelines and accountability frameworks remains paramount.