Key point: With a private right of action and ambiguous and undefined terms, businesses deploying consumer-facing interactive AI will want to ensure they are not unintentionally triggering the bill’s provisions.
On March 11, 2026, the Washington legislature passed HB 2225, making Washington the second state this session to pass a bill specifically aimed at regulating artificial intelligence (AI) companions. The bill is now with Governor Bob Ferguson for consideration. He has 20 days from receipt of the bill to either sign or veto it. If the governor takes no action within that timeframe, the bill will become law without his signature and will go into effect on January 1, 2027. Because the bill was filed at Ferguson’s request, he will presumably sign it.
Earlier this session, we wrote about Oregon’s SB 1546, another consumer-facing interactive AI bill focused on AI companions that includes a private right of action and statutory damages. Washington’s bill imposes similar requirements on businesses that deploy AI companion chatbots but arguably has an even broader applicability standard. The Washington bill also includes a private right of action, modeled on the private right of action in Washington’s My Health My Data Act (MHMD), but, unlike Oregon’s bill, it does not include statutory damages.
Below, we provide an overview of the Washington bill.
Applicability
Washington’s HB 2225 applies to operators, defined as any person, partnership, corporation, or entity that makes available or controls access to an AI companion chatbot for Washington users. Under the bill, a “user” is a natural person who interacts with an AI companion chatbot for personal use and who is not an operator, developer, or agent thereof; as a result, AI used for an internal business purpose (e.g., where the end user is an employee) or in a business-to-business capacity does not fall under the purview of the bill.
The bill defines AI companion chatbot as “an artificial intelligence system with a natural language interface that provides adaptive, human-like responses to user inputs, including by exhibiting anthropomorphic features, and is able to sustain a relationship across multiple interactions.” Excluded from the scope of this definition are chatbots that are:
- Used only for a business’ operational purposes, productivity and analysis related to source information, internal research, technical assistance, or customer service, if such a bot does not sustain a relationship across multiple interactions and generate outputs that are likely to elicit emotional responses in the user;
- A feature of a video game, gaming system, or application that is limited to replies related to the video game, gaming system, or application and that cannot discuss topics related to mental health, self-harm, or sexually explicit conduct, or maintain a dialogue on other topics unrelated to the video game, gaming system, or application;
- A stand-alone consumer electronic device that functions as a speaker and voice command interface, acts as a voice-activated virtual assistant, and does not sustain a relationship across multiple interactions or generate outputs that are likely to elicit emotional responses in the user; and
- Narrowly tailored educational tools used in school or instructional settings that are designed solely to support specific, curriculum-aligned learning objectives and do not provide open-ended conversational companionship.
The Washington bill raises concerns similar to those identified in our analysis of Oregon’s bill: missing or overbroad definitions may unintentionally broaden the scope of applicability. While Washington lawmakers certainly intended for the bill to apply to companion chatbots, the bill’s broad definition of AI companion chatbot could arguably cover many commonly used chatbots so long as they recognize users between sessions. Businesses may be able to point to the “only for a business’ operational purposes, productivity and analysis related to source information, internal research, technical assistance, or customer service” exclusion to exempt their chatbot offerings from the scope of the bill. However, that exclusion is inapplicable if a chatbot sustains a relationship across multiple interactions and generates outputs that are likely to elicit emotional responses in the user.
The exclusion’s first requirement, sustaining a relationship across multiple interactions, is simply a restatement of one of the elements of the AI companion chatbot definition. The second requirement, eliciting emotional responses, is undefined and potentially vague: the bill does not specify what counts as an emotional response, who will measure it, or how it will be measured in the context of compliance.
Ultimately, businesses that deploy consumer-facing interactive AI should carefully review how AI offerings function and whether they could unintentionally trigger Washington’s bill, especially given the bill’s private right of action enforcement mechanism, which we discuss below.
Requirements
Transparency Requirements
An operator must provide users with a clear and conspicuous disclosure that the AI companion chatbot is artificially generated at the beginning of the interaction and at least every three hours of continued interaction.
The bill creates further disclosure requirements for minors: if the operator knows a user is a minor (any person under 18 years of age) or the AI companion chatbot is directed to minors, the operator must provide the reminder that the AI companion chatbot is artificially generated every hour. In addition to this proactive disclosure, the operator must prevent the AI companion chatbot from claiming it is human when asked by the user or from otherwise generating any output that refutes or conflicts with the disclosure.
The bill requires additional safeguards related to minors, including implementing reasonable measures to prevent the AI companion chatbot from generating or producing sexually explicit content or suggestive dialogue with minors and prohibiting the use of manipulative engagement techniques that cause the AI companion chatbot to engage in or prolong an emotional relationship with the user. The bill lists a number of activities it considers “manipulative engagement techniques,” such as mimicking romantic partnership or building romantic bonds, or soliciting gift-giving, in-app purchases, or other expenditures framed as necessary to maintain the relationship with the AI companion.
Requirements for Detection, Response, and Reporting Protocols
As under Oregon’s bill, an operator must implement and maintain a protocol for detecting and addressing users’ suicidal ideation or expressions of self-harm before making an AI companion chatbot available to users in Washington. The protocol must:
- Include reasonable methods for identifying expressions of suicidal ideation or self-harm, including eating disorders;
- Provide automated or human-mediated responses that refer users to appropriate crisis resources, including a suicide hotline or crisis text line; and
- Implement reasonable measures to prevent the generation of content encouraging or describing how to commit self-harm.
The operator must disclose the details of this protocol on its website and within any mobile or web-based application through which the AI companion chatbot is made available, including the safeguards the operator uses to detect and respond to expressions of suicidal ideation or self-harm and the number of crisis referral notifications issued to users in the preceding calendar year.
Private Right of Action
The enforcement provision in Washington’s bill mirrors that of Washington’s MHMD, providing a private right of action. However, unlike Oregon’s bill, the Washington bill does not include statutory damages. When the Washington legislature passed MHMD, it was widely expected that the law would lead to many lawsuits; however, those lawsuits have yet to materialize.
Effective Date
If the bill becomes law, it will go into effect January 1, 2027.