Key point: Businesses operating companion chatbots in California or New York are subject to new legal obligations, including providing notices to users and ensuring protocols are in place to prevent self-harm.

On January 1, 2026, California’s companion chatbot law (SB 243), signed by Governor Gavin Newsom on October 13, 2025, took effect. The law requires companion chatbot operators to implement “critical, reasonable, and attainable” safeguards around the use of and interaction with “companion chatbots,” with a focus on protecting minors. SB 243 follows New York’s AI Companion Models statute, N.Y. Gen. Business Law § 1700, et seq., a similar companion chatbot law that took effect on November 5, 2025.

This article provides an overview of the California and New York laws and compares the obligations of companion chatbot operators under each.

I. Background 

With AI pervading nearly every aspect of our technological lives, chatbots are perhaps the most readily available form of AI-powered technology. Chatbots can serve a variety of use cases such as customer service or health care. “Companion chatbots” are designed to engage in human-like interactions based on learned end-user habits and preferences. These chatbots are designed to become a user’s “friend” or even significant other and to create an emotional attachment between the user and the chatbot. There is increasing concern among various stakeholders (the media, government, and users alike) about how individuals interact with companion chatbots and the potential overarching societal consequences.

Although other states (e.g., Colorado, Maine, Texas, and Utah) have enacted statutes requiring chatbots to disclose they are AI under certain circumstances, New York was the first state to impose additional obligations on operators of companion chatbots. Recognizing the increased use of companion chatbots for emotional or mental health support, Governor Kathy Hochul signed the New York law in an effort to ensure usage does not unintentionally put at risk the mental health and safety of individuals, especially minors.

While the New York law was created with the protection of America’s youth in mind, California’s law codifies added protection for minors based on increasing concern related to their seemingly unfettered use of companion chatbots.

As reported by NPR, the digital safety nonprofit Common Sense Media published a July 2025 report finding that 72% of teens have used AI companion chatbots at least once, and more than half use them a few times a month. Further, one in three teens uses AI companion chatbots for social interaction and relationships, such as “role-playing, romantic interactions, emotional support, friendship, or conversation practice,” and the same share reported having felt “uncomfortable” with something an AI companion chatbot has said or done.

With the support of advocates, including Megan Garcia — mother of 14-year-old Sewell Setzer, who tragically ended his life after forming an emotional relationship with an AI companion chatbot — Senator Steve Padilla (D-CA) drafted SB 243 to include protections further regulating companion chatbots as the technology continues to develop.

II. Comparison Summary of New York and California AI Companion Chatbot Statutes 

The comparison below covers the definitions, legal obligations, and enforcement mechanisms outlined in the New York and California laws.

(New York law effective November 5, 2025; California law effective January 1, 2026.)

Definition of Operator

New York: “Operator”: Any person, partnership, association, firm, or business entity, or any member, affiliate, subsidiary, or beneficial owner of any partnership, association, firm, or business entity who operates for or provides an AI companion to a user.

California: “Operator”: A person who makes a companion chatbot platform available to a user in the state.

Definition of Companion Chatbot

New York: “AI companion”: A system using artificial intelligence (AI), generative AI, and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user by:

  1. Retaining information on prior interactions or user sessions and user preferences to personalize the interaction and facilitate ongoing engagement with the AI companion;
  2. Asking unprompted or unsolicited emotion-based questions that go beyond a direct response to a user prompt; and
  3. Sustaining an ongoing dialogue concerning matters personal to the user.

Exceptions:

  1. Any system used by a business entity solely for customer service or strictly to provide users with information about the entity’s available commercial services or products, customer service account information, or other information strictly related to its customer service;
  2. Any system primarily designed and marketed to provide efficiency improvements, research, or technical assistance; or
  3. Any system used by a business entity solely for internal purposes or employee productivity.

California: “Companion chatbot”: An AI system with a natural language interface that provides adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions.

Exceptions:

  1. Chatbots providing customer service assistance;
  2. Chatbots used for a business’s operational purposes (e.g., productivity analysis, internal research); or
  3. Chatbots featured in video games that are limited to game-related interactions, and stand-alone consumer electronic devices that function as speakers and voice-command interfaces or act as voice-activated virtual assistants, so long as they do not sustain a relationship across multiple interactions or generate outputs likely to elicit emotional responses in the user.

User Notification Obligations

New York: An operator must provide a clear and conspicuous notification, stated verbally or in writing, at the beginning of any AI companion interaction informing the user that they are not communicating with a human. The notification need not be provided more than once per day, but must be repeated at least every three hours during continuing AI companion interactions.

California: An operator must provide a clear and conspicuous notice indicating that the companion chatbot is artificially generated and not human if a reasonable person interacting with the companion chatbot would be misled into believing they are interacting with a human.

Minor-Specific User Notification Obligations

New York: N/A

California: The operator must make the following notifications to minors:

  1. Disclose to the minor that they are interacting with AI;
  2. Provide to the minor, by default, at least every three hours a clear and conspicuous notice reminding the minor to take a break and that the companion chatbot is AI and not human;
  3. Institute reasonable measures to prevent its companion chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct; and
  4. Disclose on the application, the browser, or any other format a user can use to access the companion chatbot platform that companion chatbots may not be suitable for some minors.

Transparency Protocol Obligations

New York: The operator must maintain a protocol to take reasonable efforts to detect and address suicidal ideation or expressions of self-harm expressed by a user to the AI companion, including, but not limited to:

  a) Detection of user expressions of suicidal ideation or self-harm; and
  b) A notification to the user that refers them to crisis service providers.

California: The operator must maintain the following protocols:

  1. A protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user; and
  2. A protocol for referring a user who expresses suicidal ideation, suicide, or self-harm to crisis service providers.

The operator must publish details of these protocols on its website.

Reporting Obligations

New York: N/A

California (beginning July 1, 2027): SB 243 requires operators to report the following information annually to California’s Office of Suicide Prevention (OSP), which will post the data on its website:

  1. The number of times the operator issued a crisis service provider referral notification in the preceding calendar year;
  2. The protocols in place to detect, remove, and respond to instances of suicidal ideation by users (operators must use evidence-based methods for measuring suicidal ideation); and
  3. The protocols in place to prohibit a companion chatbot response about suicidal ideation or actions with the user.

The annual report must not include any identifiers or personal information about users.

Enforcement Mechanisms

New York: Enforcement by the New York attorney general. Whenever the attorney general believes, from evidence satisfactory to them, that an operator has engaged in or is about to engage in any act or practice the statute declares unlawful, the attorney general may bring an action to:

  1. Enjoin the operator from continuing such unlawful acts or practices;
  2. Seek civil penalties of up to $15,000 per day for a violation; and
  3. Seek such other remedies as the court may deem appropriate.

California: Enforcement by individuals. An individual has a private right of action to hold a noncompliant and negligent operator accountable for injury resulting from a violation of the statute and may seek:

  1. Injunctive relief;
  2. Damages in an amount equal to the greater of actual damages or $1,000 per violation; and
  3. Reasonable attorneys’ fees and costs.

III. Potential Implications

Businesses should first evaluate their chatbot offerings and determine whether they fall under the definition of “operator” and are thus governed by the relevant companion chatbot statutes. If so, businesses should take the following steps to comply with their obligations and avoid regulatory enforcement and/or individual lawsuits:

  1. Implement relevant transparency protocols and/or amend existing transparency protocols, as necessary; 
  2. Ensure notices indicating the companion chatbot is artificially generated are clear and conspicuous; 
  3. Create measures to test and confirm transparency protocols and/or notices are working properly; and 
  4. Establish procedures to modify/supplement transparency protocols and/or notices to ensure continued compliance. 

If making companion chatbots available to users in California, businesses should also take the following additional measures:

  1. Determine if the business is tracking whether a user is a minor, and if so, ensure notices are consistent with minor-specific obligations; 
  2. Develop a mechanism to track OSP referrals; and 
  3. Build out an annual OSP reporting policy.
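One way to prepare for the OSP referral-tracking and annual-reporting steps above is to aggregate crisis-referral counts by calendar year without ever storing user identifiers, consistent with the statute’s bar on including identifiers or personal information in the report. The sketch below is a hypothetical illustration only; the class and method names are our assumptions, and any real implementation should be designed with counsel.

```python
from collections import defaultdict
from datetime import date

class ReferralLog:
    """Hypothetical aggregate counter for crisis-referral notifications.

    Stores only per-year counts -- no user identifiers or personal
    information -- so the annual figure reported to the OSP contains
    nothing traceable to an individual user.
    """

    def __init__(self) -> None:
        self._counts: dict[int, int] = defaultdict(int)

    def record_referral(self, when: date) -> None:
        # Increment the count for the calendar year of the referral.
        self._counts[when.year] += 1

    def annual_total(self, year: int) -> int:
        # The figure for item 1 of the annual OSP report:
        # referral notifications issued in the given calendar year.
        return self._counts[year]
```

Keeping the log aggregate-only from the outset avoids having to scrub personal information from the data before filing the report.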