Key point: With a private right of action and ambiguous and undefined terms, businesses deploying consumer-facing interactive AI will want to ensure they are not unintentionally triggering the bill’s provisions.

On March 11, 2026, the Washington legislature passed HB 2225, making Washington the second state this session to pass a bill specifically aimed at regulating artificial intelligence (AI) companions. The bill is now with Governor Bob Ferguson for consideration. He has 20 days from receipt of the bill to either sign or veto it. If the governor takes no action within that timeframe, the bill will become law without his signature and will go into effect on January 1, 2027. The bill was filed at Ferguson's request, so presumably he will sign it.

Earlier this session, we wrote about Oregon's SB 1546, another consumer-facing interactive AI bill focused on AI companions with a private right of action and statutory damages. Washington's bill imposes similar requirements on businesses that deploy AI companion chatbots but arguably has an even broader applicability standard. The Washington bill also includes a private right of action, modeled on the private right of action in Washington's My Health My Data Act (MHMD), but unlike the Oregon bill it does not include statutory damages.

In the article below, we provide an overview of the Washington bill.

Key point: Last week, the Washington and New York legislatures each passed two bills; chatbot bills advanced in Georgia, Hawaii, and Tennessee; the Hawaii House passed a pricing bill while Colorado and Massachusetts committees advanced pricing bills; health care-related AI bills advanced in Missouri and Vermont; and New Hampshire advanced a deceptive AI bill.

Below is the ninth update on the status of proposed state AI legislation in 2026. These posts track state AI bills that can directly or indirectly affect private-sector AI developers and deployers. These posts do not track AI bills that focus on government use of AI; insurance; workgroups; education; legal settings; name, image, and likeness; deepfakes; CSAM and sexual material; and election interference. As always, the contents provided below are time-sensitive and subject to change.

Key point: If enacted, the bill will require GenAI systems to provide a conspicuous warning that GenAI outputs may be inaccurate.

On March 9, 2026, the New York legislature passed A 3411, which requires generative artificial intelligence (GenAI) systems to notify users that the system’s outputs may be inaccurate. The bill will next move to Governor Kathy Hochul for consideration. If it becomes law, the bill will go into effect 90 days from enactment. The bill is short (it contains only 30 lines of text) but has broad implications.

In the article below, we provide background on the bill, an overview of its requirements, and potential implications should it become law.

Key points: California Attorney General Rob Bonta announced a sweep concerning so-called "surveillance pricing" or "algorithmic pricing." The AG highlights potential CCPA privacy violations tied to the use of individualized pricing models, based on a lack of transparency and failure to comply with the CCPA's "purpose limitation" principle. Other regulators are likely to follow suit, so now is the time to assess and mitigate potential compliance and enforcement risks.

On January 27, 2026, California Attorney General (AG) Rob Bonta announced an investigative sweep focused on businesses that use consumer data to individualize prices for their goods or services. Bonta framed the issue as follows:

"Consumers have the right to understand how their personal information is being used, including whether companies are using their data to set the prices that Californians pay, whether that be for groceries, travel, or household goods. We need to know whether businesses are charging people different prices for the same good or service — and if they're complying with the law."

The California Department of Justice (DOJ) is issuing written inquiries to businesses with substantial online operations in the retail, grocery, and hotel industries that leverage individualized pricing. It is requesting certain information on this issue, including details about:

  • Companies’ use of consumer personal information to set prices.
  • Policies and public disclosures regarding personalized pricing.
  • Any pricing experiments undertaken by companies.
  • Measures companies are taking to comply with algorithmic pricing, competition, and civil rights laws.

This post summarizes the basis for the California DOJ’s investigatory sweep, how it intends to apply California Consumer Privacy Act (CCPA) requirements, and how businesses can prepare for and mitigate the risk of these inquiries and potential enforcement actions.

Key point: In a significant win for electronic communication providers that utilize artificial intelligence (AI) as part of their core functions, the Northern District of Illinois held that a defendant's AI transcription and analytics service operated in the ordinary course of its electronic communications business and therefore did not violate the Electronic Communications Privacy Act (ECPA). The ruling may provide a powerful defense to federal and state law wiretap claims targeting AI call technologies.

Key point: The Kentucky attorney general files a lawsuit against an artificial intelligence chatbot company, eight days after the Kentucky Consumer Data Protection Act went into effect.

On January 8, the Kentucky attorney general (AG) announced its first lawsuit under the Kentucky Consumer Data Protection Act (KCDPA), filed against an artificial intelligence (AI) chatbot company. The complaint alleges that the defendant violated the KCDPA through unfair, false, misleading, or deceptive acts and practices, and through unfair collection and exploitation of children's data. Among other claims, the complaint also states claims under the state's consumer protection law and data breach law.

The complaint is the latest in a growing trend of states regulating AI chatbots, including companion chatbots. As we recently discussed, New York and California passed laws last year specifically regulating companion chatbots. Lawmakers in other states have already proposed numerous bills this year. This comes notwithstanding the recent executive order, which seeks to preempt “onerous” state AI laws. As we foreshadowed in our analysis of that order, the instant complaint also reinforces the difficulty in defining what constitutes a state AI law, as the complaint is brought under existing state laws that are not specifically written to cover AI.

In the article below, we provide a summary of the allegations in the complaint.

Key point: Businesses operating companion chatbots in California or New York are subject to new legal obligations, including providing notices to users and ensuring protocols are in place to prevent self-harm.

On January 1, 2026, California's companion chatbot law (SB 243) took effect after being signed into law on October 13, 2025, by Governor Gavin Newsom. The law imposes certain obligations on companion chatbot operators to implement "critical, reasonable, and attainable" safeguards surrounding the use of and interaction with "companion chatbots," with a focus on protecting minors. SB 243 follows New York's AI Companion Models statute, N.Y. Gen. Business Law § 1700, et seq., a similar companion chatbot law that went into effect November 5, 2025.

Key point: New York becomes the second state — after California — to enact an AI frontier model law, while the governor’s veto of the New York Health Information Privacy Act will be a welcome result for organizations that criticized the bill as unworkable.

In the last two weeks, New York Governor Kathy Hochul took action on numerous bills the New York legislature passed before it closed in June. Among those actions, Hochul signed four AI-related bills — including a bill regulating AI frontier models — and vetoed a controversial health data privacy bill. We discuss each of those bills in the article below.

In addition to these bills, earlier this year, New York lawmakers enacted three other AI-related laws — the Algorithmic Pricing Disclosure Act, a companion chatbot law, and a law regulating the use of algorithmic pricing by landlords.