
Marlaina advises clients on a broad range of privacy and data protection matters, drawing on experience in marketing technology. She provides strategic counsel on consumer data use and regulatory obligations under both U.S. state privacy laws and international data privacy laws, such as the GDPR.

Key point: If enacted, the bill will require GenAI systems to provide a conspicuous warning that GenAI outputs may be inaccurate.

On March 9, 2026, the New York legislature passed A 3411, which requires generative artificial intelligence (GenAI) systems to notify users that the system’s outputs may be inaccurate. The bill will next move to Governor Kathy Hochul for consideration. If it becomes law, the bill will go into effect 90 days from enactment. The bill is short (it contains only 30 lines of text) but has broad implications.

In the article below, we provide background on the bill, an overview of its requirements, and potential implications should it become law.

Key point: With a private right of action, statutory damages, and ambiguous and undefined terms, businesses deploying consumer-facing interactive AI will want to make sure they are not unintentionally triggering the bill’s provisions.

On March 5, 2026, Oregon’s legislature passed a consumer-facing interactive artificial intelligence (AI) bill focused on AI companions (SB 1546). The bill will next head to Governor Tina Kotek, who will have 30 business days to sign it, veto it, or allow it to become law without her signature. According to Oregon’s legislative website, no one publicly testified in opposition to the bill, and it passed both chambers with only two “no” votes. If the bill becomes law, it will go into effect January 1, 2027.

Although the bill is directed at AI companions, as discussed below, the bill contains ambiguous and undefined terms that could lead to businesses unintentionally triggering its provisions. This is particularly concerning given that the bill contains a private right of action with statutory damages of $1,000 for each violation.

The following article provides an overview of the Oregon bill, its applicability, obligations, and potential implications for businesses.

Key point: The California attorney general announced a $2.75 million fine against a company for CCPA violations arising from its failure to honor requests to opt out of the sale or sharing of personal information across all devices and services associated with consumer accounts.

On February 11, 2026, the California attorney general (AG) announced a settlement with a multiplatform entertainment company, resolving alleged California Consumer Privacy Act (CCPA) violations based on gaps in the company’s opt-out procedures. This is the second public CCPA enforcement settlement arising from the California Department of Justice’s 2024 investigative sweep of streaming services. It is also the largest CCPA settlement amount to date, roughly five times the amount of the first enforcement action and more than $1 million above the prior largest settlement obtained by the AG. These actions reflect an escalating enforcement trajectory as the AG and the California Privacy Protection Agency develop a body of precedent that increasingly functions as operational compliance guidance for businesses. Notably, every CCPA enforcement action to date has involved, in some way, the right to opt out, and demonstrates that the AG’s expectations for what constitutes compliant opt-out implementation are becoming both more granular and more demanding with each successive action.

Key point: The Connecticut Office of the Attorney General issued the third annual enforcement report under the Connecticut Data Privacy Act, focusing on the office’s privacy and security efforts, consumer complaints, data breaches, and enforcement priorities.

The Connecticut Office of the Attorney General (OAG) issued its 2025 enforcement report under the Connecticut Data Privacy Act (CTDPA) last week. This is the third report since the CTDPA went into effect in July 2023. The report provides an update on (1) privacy-related consumer complaints, (2) data breach notice review and enforcement, and (3) enforcement efforts and priorities. Importantly, the OAG emphasized that protecting “kids online remains a topmost priority” and that it would continue to pursue investigations and enforcement actions focused on companies that offer online services, products, or features to consumers under 18.

In the report, the OAG also outlined recent amendments to the CTDPA, which will take effect on July 1, 2026. For more information regarding these amendments, see the recording of our webinar on 2025 Key Updates on State Privacy and AI Laws.

This article summarizes the OAG’s report and the positions the OAG takes on various issues. While the report reflects the OAG’s strong pro-consumer stance and its expansive view of the CTDPA and its provisions, this article takes no position on the merits of those views.

Key point: The Kentucky attorney general filed a lawsuit against an artificial intelligence chatbot company just eight days after the Kentucky Consumer Data Protection Act went into effect.

On January 8, the Kentucky attorney general (AG) announced the state’s first lawsuit under the Kentucky Consumer Data Protection Act (KCDPA), filed against an artificial intelligence (AI) chatbot company. The complaint alleges that the defendant violated the KCDPA through unfair, false, misleading, or deceptive acts and practices, and through the unfair collection and exploitation of children’s data. The complaint also asserts claims under the state’s consumer protection law and data breach law, among others.

The complaint is the latest in a growing trend of states regulating AI chatbots, including companion chatbots. As we recently discussed, New York and California passed laws last year specifically regulating companion chatbots. Lawmakers in other states have already proposed numerous bills this year. This comes notwithstanding the recent executive order, which seeks to preempt “onerous” state AI laws. As we foreshadowed in our analysis of that order, the instant complaint also reinforces the difficulty in defining what constitutes a state AI law, as the complaint is brought under existing state laws that are not specifically written to cover AI.

In the article below, we provide a summary of the allegations in the complaint.

Key point: Businesses subject to the CCPA now must conduct risk assessments for certain types of processing activities and, starting in 2028, must certify to California regulators that they completed the assessments.

The California Consumer Privacy Act’s (CCPA) new regulations went into effect on January 1, 2026. Although the new regulations bring many changes for businesses subject to the CCPA, one of the biggest is a new requirement to conduct risk assessments for processing activities that present “significant risk to consumers’ privacy.” This can encompass many common data processing activities, such as the use of third-party cookies and tracking technologies, the processing of sensitive personal information (e.g., biometric data), and the use of AI for certain employment-related activities. Like the CCPA generally, the risk assessment requirement applies to consumer, employee, and business-to-business personal information.

Importantly, by April 1, 2028, businesses subject to the CCPA must file a certification with the California Privacy Protection Agency (CalPrivacy) attesting, under penalty of perjury, that they conducted the required risk assessments. The certification must be signed by a member of the business’s executive management team.

In the article below, we provide an overview of this new risk assessment requirement.

Key point: The California AG’s fifth CCPA enforcement action focuses on the CCPA’s right to opt out of sales and shares and on its children’s privacy provisions. With respect to the right to opt out, the action should prompt businesses to reevaluate their procedures, particularly the treatment of account holders and mobile apps.

On October 30, 2025, the California attorney general (AG) announced a settlement with a streaming services provider[1] over violations of the California Consumer Privacy Act (CCPA). Pursuant to the proposed final judgment and permanent injunction, the company will pay a $530,000 fine and implement several injunctive relief requirements. According to the press release, the settlement arose from a 2024 investigative sweep of streaming services.

The complaint alleges two CCPA violations: (1) failure to provide easy-to-execute methods for consumers to opt out of the selling and sharing of their personal information; and (2) failure to provide sufficient privacy protections for children. Because these are distinct issues, we address them in two separate articles. This first article provides a brief background of the enforcement action, an analysis of the alleged opt-out violations, and a summary of the injunctive relief requirements. The next article will analyze the children’s privacy violations.

Key point: A federal district court judge rejected the claim that the disclosure law violates the First Amendment.

On October 8, 2025, a judge for the U.S. District Court for the Southern District of New York granted the New York attorney general’s (AG) motion to dismiss a lawsuit filed by a retail trade association claiming that New York’s Algorithmic Pricing Disclosure Act violates the First Amendment. Below, we provide a brief history and summary of the law and analysis of the court’s decision.