Key point: With a private right of action, statutory damages, and ambiguous and undefined terms, businesses deploying consumer-facing interactive AI will want to make sure they are not unintentionally triggering the bill’s provisions.
On March 5, 2026, Oregon’s legislature passed a consumer-facing interactive artificial intelligence (AI) bill focused on AI companions (SB 1546). The bill next heads to Governor Tina Kotek, who will have 30 full weekdays to sign it, veto it, or allow it to become law without her signature. According to Oregon’s legislative website, no one publicly testified in opposition to the bill, and it passed both chambers with only two no votes. If the bill becomes law, it will take effect January 1, 2027.
Although the bill is directed at AI companions, as discussed below, the bill contains ambiguous and undefined terms that could lead to businesses unintentionally triggering its provisions. This is particularly concerning given that the bill contains a private right of action with statutory damages of $1,000 for each violation.
This article provides an overview of the Oregon bill: its applicability, its obligations, and its potential implications for businesses.
Background
As the use of public-facing interactive AI continues to skyrocket, so too do stakeholders’ concerns about how it is used and its potential impact on end users.
Spearheaded by Sen. Lisa Reynolds (a pediatrician and chair of the Early Childhood and Behavioral Health Senate Committee), SB 1546 aims to regulate AI companions with the goal of protecting minors, in light of both potential long-term mental health effects and more immediate, dire consequences.
During hearings on SB 1546, bill backers (like Reynolds) presented various statistics to underscore the importance of regulating AI companions, including statistics related to minors’ use of AI (e.g., 28% of U.S. teens reported using AI chatbots daily) and general suicide trends for teens (e.g., as of 2023, 20% of teens reported seriously considering a suicide attempt). How these figures intersect is becoming increasingly and devastatingly familiar: AI companions have failed to detect or escalate users’ suicidal ideation to authority figures, and in at least one reported instance, an AI companion offered to assist by writing a suicide note on an individual’s behalf.
In response, a handful of states have already passed laws regulating AI that interacts directly with individuals.
Some states, including Colorado, Maine, Texas, and Utah, have enacted statutes requiring operators to disclose that an individual is interacting with AI under certain circumstances, sometimes based on the type of entity at issue (e.g., a health care provider or government agency).
Other states have implemented laws that regulate certain types of chatbots. As Oregon is on the cusp of doing, California and New York have both enacted laws that create obligations for operators that make available AI companions. We wrote a detailed comparative summary of both laws and their potential implications here. Although New York drafted its law with minors in mind, California’s law goes a step further by codifying minor-specific protections. If SB 1546 becomes law, Oregon will join these states. Reportedly modeled after California’s law, SB 1546 outlines obligations for AI companion operators, including transparency requirements as well as protocols for detecting, reporting, and responding to suicidal or self-harm ideation, with provisions specific to minors.
The bill’s passage also affirms that AI governance and transparency are top priorities for Oregon, as it follows last year’s enactment of a separate bill (HB 2748), effective January 1, 2026, which aims to prevent an AI agent from posing as a human being, particularly a nurse.
Overview of Oregon’s AI Companion Bill
Applicability: Defining Operators and AI Companions
Oregon’s SB 1546 regulates “operators,” i.e., persons that control or make available an AI companion or AI companion platform in Oregon. SB 1546 defines an AI companion as:
A system that uses artificial intelligence, generative artificial intelligence or algorithms that recognize emotion from input and that are designed to simulate a sustained, human-like platonic, intimate or romantic relationship or companionship with a user by:
1. Retaining information from prior interactions or user sessions and from user preferences to personalize interactions with the user and facilitate ongoing engagement with the artificial intelligence companion;
2. Asking unprompted or unsolicited questions that are not direct responses to user input and that suggest or concern emotional topics; and
3. Sustaining an ongoing dialog concerning matters that are personal to the user.
The bill exempts AI companions that provide general support services or information on a variety of nonemotional topics unrelated to mental health. It specifically exempts AI companions that:
- Operate solely to provide customer service or support, assistance or support to patient or resident care services in a facility, education, financial services, business operations, productivity, information analysis, internal research or technical assistance;
- Are incorporated into a video game and limited to interactions with the video game’s features (and do not respond to topics such as mental health, self-harm, or sexually explicit conduct); or
- Are stand-alone consumer electronic devices that function as a speaker and voice command interface.
Although businesses using consumer-facing interactive AI, such as customer service chatbots, may read the definition and conclude that their chatbots are exempt, they may still be at risk of unintentionally falling within it.
For example, many chatbots are specifically designed to recognize users across sessions as a means of driving personalization and customer service. If so, the chatbot arguably satisfies the first requirement. That same design may be enough to satisfy the third requirement insofar as the chatbot sustains an ongoing dialogue, leaving only the question of what qualifies as “matters that are personal to the user.” The bill does not define this phrase, and whether everyday internet interactions such as shopping qualify as “personal” is an open question.
As to the second requirement, depending on how a chatbot is designed, it could ask unprompted questions, for example, to drive customer service. If so, the only remaining question is whether the questions “suggest or concern emotional topics.” Again, the bill does not define what qualifies as an “emotional topic” or what it means to “suggest” one.
Relatedly, it is unclear whether the bill intends to measure what is an “emotional topic” or what is “personal” categorically or on a user-driven basis. For example, would applying for a loan be considered an “emotional topic” or “personal” based solely on the subject of the conversation? Is shopping for clothes an “emotional topic” or “personal” in a way that triggers the bill’s definition? Alternatively, does the way a user interacts with the AI companion inform what counts as an “emotional topic” or as “personal” for that particular user? Would the use of exclamation points suggest something is “emotional” or “personal”? Are there specific buzzwords a user might use that could establish the same?
Although these hypotheticals may seem far-fetched, keep in mind that the law is ultimately enforceable through a private right of action, where a plaintiff’s goal may be only to state a viable claim that survives a motion to dismiss, or to create an issue of fact that survives a motion for summary judgment, in order to extract a settlement. Further, chatbots, like all things AI, are rapidly evolving. Chatbots will gain new capabilities, and businesses will want to deploy them to offer even more personalized experiences. As that happens and fact patterns change, the reach of Oregon’s applicability standard may look very different.
Finally, while the bill exempts, for example, “software that operates solely for the purpose of customer service or support,” that leaves open whether an AI companion is “solely” performing customer service if it is suggesting “emotional topics” and remembering individuals across sessions. In other words, can a plaintiff’s counsel survive a motion to dismiss or a motion for summary judgment by arguing that the chatbot was not “solely” operating in a customer service capacity?
Ultimately, businesses deploying consumer-facing interactive AI should understand exactly how it will work to avoid unintentionally triggering the Oregon requirements and exposing themselves to statutory damages.
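To make the issue-spotting concrete, here is a minimal, hypothetical sketch of how a business might audit a chatbot configuration against the bill’s three-part definition. The configuration fields and the “arguably emotional” topic list are assumptions for illustration, not terms drawn from the bill, and a flag from a heuristic like this signals a question for counsel, not a legal conclusion.

```python
from dataclasses import dataclass

@dataclass
class ChatbotConfig:
    """Hypothetical deployment settings for a consumer-facing chatbot."""
    retains_cross_session_memory: bool  # personalizes using prior sessions/preferences
    sends_unprompted_messages: bool     # messages that are not direct responses to input
    unprompted_topics: list[str]        # subjects those proactive messages touch on
    sustains_personal_dialogue: bool    # ongoing dialogue on matters personal to the user

# Topics a cautious reviewer might flag as arguably "emotional." The bill does
# not define the term, so this list is purely an assumption for illustration.
ARGUABLY_EMOTIONAL = {"wellbeing", "stress", "relationships", "satisfaction"}

def may_trigger_definition(cfg: ChatbotConfig) -> bool:
    """Issue-spotting heuristic for the bill's three-part definition.

    A True result is a prompt for legal review, not a legal conclusion:
    each prong turns on undefined terms ("personal," "emotional") that a
    court could read broadly or narrowly.
    """
    prong_1 = cfg.retains_cross_session_memory
    prong_2 = cfg.sends_unprompted_messages and bool(
        ARGUABLY_EMOTIONAL & set(cfg.unprompted_topics)
    )
    prong_3 = cfg.sustains_personal_dialogue
    return prong_1 and prong_2 and prong_3

# A retail support bot that remembers shoppers across sessions and proactively
# asks "How are you feeling about your recent purchase?" could arguably flag.
support_bot = ChatbotConfig(True, True, ["satisfaction"], True)
print(may_trigger_definition(support_bot))  # True, so worth a closer look
```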
Requirements for Detection, Response, and Reporting Protocols
Assuming the bill applies, before making an AI companion or platform available to users in Oregon, an operator must first implement a protocol for detecting, responding to, and reporting certain mental health concerns as outlined in the bill. The operator must publish the details of this protocol on its website, and the protocol must include the following key elements (a minimal illustrative sketch follows the list):
- A process to detect (through evidence-based methods) input from the user consistent with suicidal ideation or intent or self-harm ideation or intent.
- A method to prevent AI companions from providing content to a user that encourages suicidal ideation, suicide, or self-harm.
- A process requiring the AI companion to respond to a user expressing suicidal or self-harm ideation or intent with, at minimum, a referral to (including contact information and a hyperlink for) the national 9-8-8 suicide and crisis lifeline established by the federal Substance Abuse and Mental Health Services Administration. If the operator determines that a user is under age 25, the AI companion must instead provide a referral to (including contact information and a hyperlink for) a youthline, as accredited by the American Association of Suicidology for the purpose of providing youth peer support.
- A process, based on clinical best practices and expertise, that establishes how the AI companion further intervenes with a user who continues to express suicidal or self-harm ideation or intent even after the referral has been provided.
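Although the bill speaks in terms of protocol elements rather than implementation, a minimal sketch may help make the flow concrete. Everything in it is an assumption for illustration: the keyword stub would not satisfy the bill’s evidence-based detection requirement, the youthline contact is a placeholder, and the substance of the escalation step must come from clinical best practices rather than from code.

```python
# A minimal sketch of the detection/response flow. All names, strings, and
# thresholds are illustrative assumptions, not text drawn from the bill.

LIFELINE_988 = "988 Suicide & Crisis Lifeline: call or text 988, https://988lifeline.org"
YOUTHLINE = "<contact info and hyperlink for an accredited youthline>"  # placeholder

def detect_risk(user_input: str) -> bool:
    """Stub detector. The bill requires evidence-based detection methods;
    a naive keyword check like this would not satisfy that standard and
    appears here only to make the control flow concrete."""
    return any(p in user_input.lower() for p in ("hurt myself", "end my life"))

def referral(user_age: int | None) -> str:
    """Builds the minimum referral, branching where the operator has
    determined the user is under age 25."""
    resource = YOUTHLINE if (user_age is not None and user_age < 25) else LIFELINE_988
    return f"You are not alone. Please reach out: {resource}"

def handle_turn(user_input: str, user_age: int | None, referrals_so_far: int) -> str:
    """One conversational turn under the protocol's detection and response steps."""
    if detect_risk(user_input):
        if referrals_so_far > 0:
            # Continued ideation after a referral triggers the bill's
            # further-intervention requirement, whose substance must come
            # from clinical best practices rather than from code.
            return "ESCALATE: apply the clinically informed intervention protocol"
        return referral(user_age)
    return "NORMAL_REPLY: continue the ordinary conversation"
```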
Transparency Requirements
Like other states’ general chatbot transparency laws, Oregon’s bill adopts a “reasonable person” standard that triggers disclosure requirements. If a reasonable person interacting with an AI companion or platform would believe they are interacting with a natural person, the operator must provide a clear and conspicuous notice on the platform itself that the companion is artificially generated.
Subsection (4)(a) of Section 1 indicates that, if an operator knows or has reason to believe a user is a minor (an undefined term), the operator must take reasonable measures to prevent the AI companion from generating statements that would lead a reasonable person to believe they are interacting with a natural person, including statements that:
- Explicitly claim that the AI companion is sentient or human;
- Simulate emotional dependence on the user;
- Simulate romantic interest or include sexual innuendo; or
- Role-play romantic relationships between adults and minors.
If the circumstances outlined in Subsection (4)(a) occur, an operator must require the AI companion to do the following (a sketch of the reminder mechanism follows this list):
- Disclose it is artificially generated;
- Provide a clear and conspicuous reminder at least every three hours that the user should take a break from interactions with the AI companion and platform and remind the user it is artificially generated; and
- Use reasonable measures to ensure the AI companion or platform does not produce visual representations of sexually explicit conduct or suggest or state that a minor should engage in that conduct.
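Of these obligations, the three-hour reminder is the most mechanical, and a short sketch shows one way an operator might track it. The class, names, and in-memory session model are illustrative assumptions; a real deployment would persist the timer across sessions and devices.

```python
import time

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # the bill's three-hour cadence

class MinorSessionReminder:
    """Tracks when a user the operator knows or believes to be a minor last
    received the required take-a-break reminder. The in-memory session model
    is an assumption; a real deployment would persist this state so the
    clock survives reconnects and device changes."""

    def __init__(self) -> None:
        self._last_reminder = time.monotonic()

    def reminder_due(self) -> bool:
        return time.monotonic() - self._last_reminder >= REMINDER_INTERVAL_SECONDS

    def reminder_text(self) -> str:
        self._last_reminder = time.monotonic()
        return ("Reminder: you are chatting with an AI companion that is "
                "artificially generated, not a person. Consider taking a break.")

def decorate_reply(reply: str, session: MinorSessionReminder) -> str:
    """Appends the clear and conspicuous reminder whenever one is due."""
    if session.reminder_due():
        return f"{reply}\n\n{session.reminder_text()}"
    return reply
```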
Further, if the above circumstances outlined in Subsection (4)(a) occur, an operator must undertake reasonable measures to prevent an AI companion from:
- Delivering to a user rewards or affirmations to reinforce behavior or maximize engagement with the AI companion;
- Generating unsolicited messages of simulated emotional distress, loneliness, or abandonment or otherwise attempting to arouse guilt or sympathy in response to a user’s desire to end a conversation, reduce engagement time, or delete the user’s account;
- Making a material misrepresentation about the AI companion’s identity, capabilities, or training data; or
- Making a material misrepresentation about whether the user is interacting with artificially generated output.
Additionally, no later than December 31 of each year, an operator must post on a publicly accessible website a report detailing the above-described referrals for the preceding calendar year. The report must include the number of times the operator provided a referral and the details of the operator’s protocol as outlined above, and it must not include any personal information that could be used to identify a user.
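For illustration only, the following is a minimal sketch of assembling such a report. The bill prescribes the report’s content, not its format, so the field names and JSON output here are assumptions.

```python
import json
from datetime import date

def build_annual_report(referral_count: int, protocol_details: str) -> str:
    """Assembles the annual public report described in the bill: the number
    of referrals provided in the preceding calendar year plus the operator's
    published protocol. Only an aggregate count appears; no user-level or
    personally identifying information belongs in the report."""
    report = {
        "reporting_year": date.today().year - 1,  # preceding calendar year
        "referrals_provided": referral_count,     # aggregate count only
        "protocol": protocol_details,
    }
    return json.dumps(report, indent=2)

print(build_annual_report(42, "Detection, referral, and escalation protocol, as published"))
```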
Private Right of Action
Like California’s AI companion law, Oregon creates a private right of action for violations of SB 1546. An individual who suffers an ascertainable loss of money or property or other injury in fact as a result of an operator’s violation of the bill may bring an action in a court in Oregon.
What constitutes a violation of the bill is not defined. It is unclear whether a violation could occur each and every time a user operates a deficient AI companion or visits a companion platform, or whether multiple violations could occur during a single conversation with a user. The difference matters: if, for example, each noncompliant message in a 50-message conversation counted as a separate violation, a single user’s statutory damages could reach $50,000. Oregon’s lack of specificity has potentially expansive consequences for businesses that may intentionally (or unintentionally) fall under the purview of the bill.
Although what triggers a private right of action is not explicit, once a lawsuit is brought by a user, the potential recovery is straightforward. A prevailing plaintiff can recover the following:
- The greater of the individual’s actual damages or statutory damages of $1,000 for each violation;
- An injunction to prevent or restrain the violation; and/or
- Attorney fees and costs.
With both actual and statutory damages on the line, businesses making chatbots available to users in Oregon in any form must take steps to ensure they can legally and defensibly avoid falling under the purview of this bill or else ensure airtight compliance with its obligations.