New York Clamps Down on AI as Chatbots Must Disclose Identity and Protect Minors in Tech Safety Overhaul

Published on May 10, 2025
Source: Wikipedia/Metropolitan Transportation Authority, CC BY 2.0, via Wikimedia Commons

New York has introduced new laws to regulate artificial intelligence, focusing on chatbots and deepfakes. Governor Kathy Hochul and state lawmakers included measures in the state budget requiring companies that operate AI chatbots to clearly inform users that they are interacting with non-human entities. Companies must display a disclaimer at the start of an AI interaction and every three hours thereafter, as reported by Gothamist.

According to WSKG, the new rules also require tech companies to direct users expressing suicidal thoughts to mental health hotlines, aiming to reduce risks associated with emotional dependence on AI chatbots. These measures are part of a broader effort to implement the New York AI Child Safety Act and align with a newly funded statewide suicide prevention network. Fines collected from noncompliant companies will help support the initiative.

In addition, the law criminalizes the creation of AI-generated sexual content using the likeness of minors. This covers deepfakes, such as those in a recent case involving altered images of women from Long Island. Lawmakers, including Assemblymember Jake Blumencranz, supported the legislation to address the unauthorized use of minors' images.

The law applies to AI companion chatbots, such as those offered on platforms like Character.AI, and is intended to promote user awareness and safety.