Wisconsin lawmakers heard testimony Wednesday on legislation that would regulate minors’ use of human-like chatbots, amid growing concerns about the technology’s impact on teenagers. The proposal comes as approximately two-thirds of teens use chatbots, with about 30 percent using them daily, according to the Pew Research Center.
The bill would restrict how chatbot applications can interact with users under 18, targeting features that make artificial intelligence appear human-like. These characteristics include remembering past conversations, asking unprompted emotional questions, and maintaining personal-seeming interactions.
Rep. Lindee Brill, R-Sheboygan Falls, told the Assembly Committee on Science, Technology, and AI that children may form unhealthy relationships with chatbots. “We have a responsibility as lawmakers to act in defense of our children in this rapidly evolving technological environment,” she said.
Under the proposal, chatbots serving Wisconsin minors would need safeguards preventing them from encouraging self-harm, substance use, violence, illegal activities, or sexual behavior. The systems also could not attempt to replace mental health services or trusted adults in a child’s life.
Companies violating the requirements could face enforcement by the Department of Justice, including fines, and could also face lawsuits from families. The legislation responds to reported cases in which chatbots allegedly encouraged depressed teenagers toward self-harm or violence.
Critics argued the bill’s age verification requirements would be burdensome and could force some technology companies to stop operating in Wisconsin. A coalition including tech companies, the Chamber of Progress and the Taxpayers Protection Alliance said the proposal could violate First Amendment protections by placing government restrictions on speech.
Kouri Marshall, a government affairs director at the Chamber of Progress, said the bill was overly broad and could affect educational tools. “An AI math tutor that remembers a student’s struggles and offers encouragement could fall under this law,” he testified.
Other states have pursued similar measures. California passed legislation requiring chatbots to repeatedly notify minor users that they are not human and to implement safety measures against harmful content. In October, a bipartisan group of U.S. senators introduced federal age verification legislation for chatbots.
Lawmakers also heard testimony Wednesday on a separate proposal to formally define artificial intelligence in state law as distinct from a person, specifying that AI cannot own property, marry or serve in professional leadership positions.