Tyler Cowen has a link to this post: What's up with the States (Dean Ball, Hyperdimensional, Aug. 21, 2025):
This newsletter first started picking up readers because of my writing about AI regulations in state governments. I have a background in state and local policy, having spent the plurality of my career working at think tanks that specialized in such issues. In 2024, I was one of the early people to raise concerns about what I perceived to be a torrent of state-based AI laws, including, most notably, California’s now-failed SB 1047.
The torrent of state bills has only grown since then, with more than 1,000 having been introduced this year. Most of the lawmakers who draft such statutes are not acting with malice; instead, they feel motivated by an urge “to do something about AI.”
This may not feel like a great reason to pass “a law about AI,” (generally, laws should solve specific problems, not scratch the itches of random state legislators), but states are sovereign entities. In the absence of a federal legislative framework that preempts this sovereign power, this torrent will continue. [...]
But for now, I want to give you a sense of what is actually happening in state AI regulation by examining two categories of recent laws (“AI in mental health” and “frontier AI transparency”) that either have passed or seem likely to pass.
Cowen featured a passage about Nevada's mental health law; AI in mental health is a category that interests me. Here's the passage Tyler quotes:
Several states have banned (see also “regulated,” “put guardrails on” for the polite phraseology) the use of AI for mental health services. Nevada, for example, passed a law (AB 406) that bans schools from “[using] artificial intelligence to perform the functions and duties of a school counselor, school psychologist, or school social worker,” though it indicates that such human employees are free to use AI in the performance of their work provided that they comply with school policies for the use of AI. Some school districts, no doubt, will end up making policies that effectively ban any AI use at all by those employees. If the law stopped here, I’d be fine with it; not supportive, not hopeful about the likely outcomes, but fine nonetheless.
But the Nevada law, and a similar law passed in Illinois, goes further than that. They also impose regulations on AI developers, stating that it is illegal for them to explicitly or implicitly claim of their models that (quoting from the Nevada law):
(a) The artificial intelligence system is capable of providing professional mental or behavioral health care;
(b) A user of the artificial intelligence system may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care; or
(c) The artificial intelligence system, or any component, feature, avatar or embodiment of the artificial intelligence system is a provider of mental or behavioral health care, a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor or any other term commonly used to refer to a provider of professional mental health or behavioral health care.
First there is the fact that the law uses an extremely broad definition of AI that covers a huge swath of modern software. This means that it may become trickier to market older machine learning-based systems that have been used in the provision of mental healthcare, for instance in the detection of psychological stress, dementia, intoxication, epilepsy, intellectual disability, or substance abuse (all conditions explicitly included in Nevada’s statutory definition of mental health).
But there is something deeper here, too. Nevada AB 406, and its similar companion in Illinois, deal with AI in mental healthcare by simply pretending it does not exist. “Sure, AI may be a useful tool for organizing information,” these legislators seem to be saying, “but only a human could ever do mental healthcare.”
And then there are hundreds of thousands, if not millions, of Americans who use chatbots for something that resembles mental healthcare every day. Should those people be using language models in this way? If they cannot afford a therapist, is it better that they talk to a low-cost chatbot, or no one at all? Up to what point of mental distress? What should or could the developers of language models do to ensure that their products do the right thing in mental health-related contexts? What is the right thing to do?
The State of Nevada would prefer not to think about such issues. Instead, they want to deny that they are issues in the first place and instead insist that school employees and occupationally licensed human professionals are the only parties capable of providing mental healthcare services (I wonder what interest groups drove the passage of this law?).
Yikes!
I understand that impulse: “only a human could ever do mental healthcare.” Not so long ago I might have had the same, or a similar, impulse. But I’ve been rethinking it.
For one thing, as Ball notes, lots of Americans are already using chatbots in this way, whether or not their particular chatbot was designed for such use. A recent study reported that therapy/companionship is currently the top use of generative AI. That makes sense given the almost uncanny linguistic facility of state-of-the-art chatbots. It is easy to treat one as a companion, and in doing so it is natural to talk about how you’re feeling and about your desires, hopes, and dreams. As the chatbot responds sympathetically, you inch toward therapeutic territory; when you start asking for advice, as you naturally will, you are well into it. Yes, you’re treating it as a friend, and that’s what one does with a friend, which is to say, we make therapeutic use of our friends. Humans have always done that. But it’s only relatively recently in human history that we’ve had mental health therapists (late 19th century), and so it’s only recently that we could explicitly acknowledge the therapeutic value of friendship.
Moreover, last year I wrote an article for 3 Quarks Daily on AI and mental health: Melancholy and Growth: Toward a Mindcraft for an Emerging World. I’ve written a number of blog posts to accompany that article and have packaged the article and some of those posts as a working paper: Melancholy, Growth, and Mindcraft: A Working Paper. The implication of that paper is that some kind of therapeutic role is a natural use for AI and that, indeed, people are likely to end up living in a world where human activity is embedded in a matrix of AIs serving various purposes, including mental health. Creating that technology is going to be tricky and difficult. It will require a lot of careful experimentation. Laws like Nevada’s will stop that work before it has even started.
We need to tread carefully here.
https://www.nytimes.com/2025/08/18/opinion/chat-gpt-mental-health-suicide.html?searchResultPosition=1
About a daughter who killed herself while using ChatGPT for therapy.
Yes, it's an issue.
The issue comes down to the financial resources dedicated to developing actual human resources for healthcare. If the medical industry weren't controlled by the insurance industry (and now the federal government) in stripping away human-focused development, would the use of AI even be so widely discussed? Who can imagine what that young woman was thinking? "Here I am talking to a machine because nobody will find a way to listen and help." Unfortunately, humans talking with humans is extremely difficult; resorting to machines to take care of the most vulnerable is a business model of healthcare, a way of "reducing" the stakes to a "bottom line."
I'm not so sure. If there's any therapeutic use at all for chatbots, that use will be there even if we have human therapists for all.
Humans need to be at the front of healthcare, even if augmented by a very limited and supervised use of chatbots. This is very financially driven. The difference between healthcare in the US in 1975 and 2025 is startling and very telling, and the difference caused by corporate models of running hospitals and the effect of the insurance industry is stunning. There was another article in the NYT that speaks to the very issue of money as an incentive to promote chatbots. Unless you have experience working in a large organization for an extended period during these years, with firsthand witness to both the subtle and the dramatic changes caused by the corporatization taught to all the MBAs, you can't appreciate the deleterious effect.
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html
This example of a teen who killed himself is very chilling. And the chatbot writing that you show in your columns reveals a lot of what is easily called purple prose. Whatever kernel of truth there is in the observations, and there is some, for sure, the overall effect is cringeworthy. Also, the conversations meant that a third party was introduced into this particular teen's life; what's the old saying, "Two's company; three's a crowd"? The kid's mind was caught between the rock of the chatbot and the hard place of his attempts to reach out to people. Tragic.
Healthcare systems in the US would be more than happy to avoid having to provide benefits to an employee and to avoid the issue of workplace injury. It comes down to money. And I know the issue of violence against healthcare workers is more serious than the public is ever allowed to know.