Key point: Kentucky attorney general files a lawsuit against an artificial intelligence chatbot company, eight days after the Kentucky Consumer Data Protection Act went into effect.

On January 8, the Kentucky attorney general (AG) announced the office's first lawsuit for violations of the Kentucky Consumer Data Protection Act (KCDPA), filed against an artificial intelligence (AI) chatbot company. The complaint alleges that the defendant violated the KCDPA through unfair, false, misleading, or deceptive acts and practices, and through the unfair collection and exploitation of children's data. The complaint also asserts claims under the state's consumer protection law and data breach law, among others.

The complaint is the latest in a growing trend of states regulating AI chatbots, including companion chatbots. As we recently discussed, New York and California passed laws last year specifically regulating companion chatbots, and lawmakers in other states have already proposed numerous bills this year. This comes notwithstanding the recent executive order, which seeks to preempt "onerous" state AI laws. As we foreshadowed in our analysis of that order, the Kentucky complaint also underscores the difficulty of defining what constitutes a state AI law: it is brought under existing state laws that were not specifically written to cover AI.

In the article below, we provide a summary of the allegations in the complaint.

Key point: Businesses operating companion chatbots in California or New York are subject to new legal obligations, including providing notices to users and ensuring protocols are in place to prevent self-harm.

On January 1, 2026, California's companion chatbot law (SB 243) took effect, after being signed into law by Governor Gavin Newsom on October 13, 2025. The law requires companion chatbot operators to implement "critical, reasonable, and attainable" safeguards around the use of, and interaction with, "companion chatbots," with a focus on protecting minors. SB 243 follows New York's AI Companion Models statute, N.Y. Gen. Business Law § 1700, et seq., a similar companion chatbot law that went into effect November 5, 2025.

Key point: New York becomes the second state — after California — to enact an AI frontier model law, while the governor’s veto of the New York Health Information Privacy Act will be a welcome result for organizations that criticized the bill as unworkable.

In the last two weeks, New York Governor Kathy Hochul took action on numerous bills the New York legislature passed before it closed in June. Among those actions, Hochul signed four AI-related bills — including a bill regulating AI frontier models — and vetoed a controversial health data privacy bill. We discuss each of those bills in the article below.

In addition to these bills, earlier this year, New York lawmakers enacted three other AI-related laws — the Algorithmic Pricing Disclosure Act, a companion chatbot law, and a law regulating the use of algorithmic pricing by landlords.

Key point: Although the executive order seeks to bring regulatory certainty to the development and deployment of AI in the U.S., at least in the short term it is unlikely to alleviate compliance burdens for businesses and may only create more uncertainty.

On December 11, 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The purpose of the executive order is twofold.

First, the order seeks to create a legal structure to stop states from enacting new AI laws and from enforcing existing ones. According to the order, the goal of the U.S. must be "[t]o win" a "race with adversaries for supremacy" in AI, and to do so, "United States AI companies must be free to innovate without cumbersome regulation." The order therefore seeks to prevent "a patchwork of 50 different state regulatory regimes that makes compliance more challenging, particularly for start-ups." Importantly, the order itself does not attempt to preempt state AI laws. Rather, as discussed below, it merely creates a structure for the federal government to try to preempt some of them.

Second, the order states that the Trump administration will work with Congress to enact a “minimally burdensome national standard” that preempts state law and “ensure[s] that the United States wins the AI race, as we must.”

The order follows two prior attempts in Congress to pass a moratorium on states enacting AI laws. Most recently, an attempt to include a moratorium in the National Defense Authorization Act of 2026 failed, creating the impetus for the president to sign the order.

Although the executive order seeks to streamline and reduce AI regulation, it leaves open many questions, including the scope of laws that will be challenged and the likelihood, if not certainty, that states will challenge the order's legality. It also remains to be seen whether the order slows the passage of new state AI laws and the enforcement of existing ones. Indeed, it could ultimately have the unintended consequence of producing even more state AI laws. In the article below, we discuss the scope of the order, the state AI laws that could be targeted by the administration, how states have reacted to the order, and takeaways for businesses trying to comply with existing and forthcoming state AI laws.

Key point: California’s expansion of its antitrust law — targeting algorithmic pricing and lowering the bar for litigation — signals a major shift in how companies must approach algorithmic pricing tools and compliance.

On October 6, 2025, Governor Gavin Newsom signed into law two significant amendments to California's Cartwright Act: AB 325 and SB 763. The amendments are the most significant updates to the law in recent years. AB 325 addresses algorithmic price-fixing by prohibiting the use or distribution of pricing algorithms among two or more entities to coordinate prices or commercial terms. SB 763 substantially increases corporate and individual criminal fines for violations. Both laws take effect on January 1, 2026.

Key point: California enacts first-in-the-nation law focused on regulating frontier artificial intelligence models.

On September 29, 2025, California Governor Gavin Newsom signed SB 53 — the Transparency in Frontier Artificial Intelligence Act (TFAIA) — into law. As explained in the Senate floor analysis, the law "requires large artificial intelligence (AI) developers . . . to publish safety frameworks, disclose specified transparency reports, and report critical safety incidents to the Office of Emergency Services (OES)." The law also "creates enhanced whistleblower protections for employees reporting AI safety violations and establishes a consortium to design a framework for 'CalCompute,' a public cloud platform to expand safe and equitable AI research." The law was hailed by both Newsom and its primary sponsor, Senator Scott Wiener, as striking a proper balance between fostering innovation and placing sensible guardrails on frontier AI models.

Key point: The California legislature closed its 2025 legislative session by passing 14 privacy and AI-related bills.

The California legislature closed for the year by passing numerous privacy and AI-related bills. The bills next head to Governor Gavin Newsom, who will have 30 days to sign, approve without signing, or veto each one. Gubernatorial review remains a significant hurdle, as Newsom vetoed multiple privacy and AI bills last year. Below, we identify which bills passed and which failed, and provide a summary of each bill that passed.