Key point: Businesses subject to the CCPA now must conduct risk assessments for certain types of processing activities and, starting in 2028, must certify to California regulators that they completed the assessments.

The California Consumer Privacy Act’s (CCPA) new regulations went into effect on January 1, 2026. Although the new regulations bring many changes for businesses subject to the CCPA, one of the biggest is a new requirement to conduct risk assessments for processing activities that present “significant risk to consumers’ privacy.” This can encompass many common data processing activities, such as the use of third-party cookies and tracking technologies, the processing of sensitive personal information (e.g., biometric data), and the use of AI for certain employment-related activities. Like the CCPA generally, the risk assessment requirement applies to consumer, employee, and commercial personal information.

Importantly, by April 1, 2028, businesses subject to the CCPA must file a certification with the California Privacy Protection Agency (CalPrivacy) attesting — under penalty of perjury — that they conducted the required risk assessments. The certification must be signed by a member of the business’s executive management team.

In the article below, we provide an overview of this new risk assessment requirement.

In today’s rapidly evolving digital and threat landscape, financial institutions can feel as though they are at war, facing increasing pressure to balance innovation, data privacy, and regulatory demands. AI is accelerating that complexity, reshaping how organizations manage sensitive information and comply with a rapidly shifting legal environment.

Key point: A federal court blocks the Texas App Store Accountability Act, which was set to take effect on January 1, 2026, on constitutional grounds.

A Texas federal district court today granted a preliminary injunction enjoining the Texas App Store Accountability Act, finding that the law likely violates the First Amendment and is unconstitutionally vague. In October, an internet trade association sued the state of Texas over the act, and this month the case was consolidated with another case asserting similar claims. The law was scheduled to take effect on January 1, 2026, and would have imposed obligations on both app stores and developers providing mobile applications to Texas users. Texas cannot implement or enforce the act while the litigation is ongoing.

Key point: New York becomes the second state — after California — to enact an AI frontier model law, while the governor’s veto of the New York Health Information Privacy Act will be a welcome result for organizations that criticized the bill as unworkable.

In the last two weeks, New York Governor Kathy Hochul took action on numerous bills the New York legislature passed before its session ended in June. Among those actions, Hochul signed four AI-related bills — including a bill regulating AI frontier models — and vetoed a controversial health data privacy bill. We discuss each of those bills in the article below.

In addition to these bills, earlier this year, New York lawmakers enacted three other AI-related laws — the Algorithmic Pricing Disclosure Act, a companion chatbot law, and a law regulating the use of algorithmic pricing by landlords.

On December 10, attorneys from Troutman Pepper Locke’s Privacy + Cyber + AI team hosted a webinar providing an overview of existing state AI laws and regulations. To aid understanding, the webinar used a use-case-based approach, breaking the laws down across topics, including consumer-facing applications, pricing algorithms, employee-specific

Key point: Courts are split over whether use of the Meta Pixel to share URLs of videos users watch qualifies as disclosure of PII under the VPPA, even when they apply the same “ordinary person” test to nearly identical allegations.

Earlier this year, the Second Circuit joined the Third and Ninth Circuits in adopting an “ordinary person” standard to determine whether a defendant’s disclosure of information constitutes disclosure of personally identifiable information (PII) prohibited by the Video Privacy Protection Act (VPPA). Although this standard initially appeared more restrictive — and thus more favorable to defendants — than the “reasonable foreseeability” standard the First Circuit adopted in 2016, recent decisions by courts within the Second and Ninth Circuits have instead revealed a split in how district courts apply this test to nearly identical allegations, resulting in different outcomes on motions to dismiss.

In this episode of our special 12 Days of Regulatory Insights podcast series, Ashley Taylor, co-leader of Troutman Pepper Locke’s State AG team, sits down with Privacy and Cyber chair Ron Raether to discuss how state attorneys general (AGs) are shaping the regulatory landscape for social media and the broader ad tech ecosystem.

Key point: Although the executive order seeks to bring regulatory certainty to the development and deployment of AI in the U.S., at least in the short term it is unlikely to alleviate compliance burdens for businesses and may only create more uncertainty.

On December 11, 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The purpose of the executive order is twofold.

First, the order seeks to create a legal structure to stop states from enacting new AI laws and from enforcing existing ones. According to the order, the goal of the U.S. must be “[t]o win” a “race with adversaries for supremacy” in AI. To do so, however, “United States AI companies must be free to innovate without cumbersome regulation.” Therefore, the order seeks to prevent “a patchwork of 50 different state regulatory regimes that makes compliance more challenging, particularly for start-ups.” Importantly, the order itself does not attempt to preempt state AI laws. Rather, as discussed below, it merely creates a structure for the federal government to try to preempt some of them.

Second, the order states that the Trump administration will work with Congress to enact a “minimally burdensome national standard” that preempts state law and “ensure[s] that the United States wins the AI race, as we must.”

The order follows two prior attempts in Congress to pass a moratorium on states enacting AI laws. Most recently, an attempt to include a moratorium in the National Defense Authorization Act of 2026 failed, creating the impetus for the president to sign the order.

Although the executive order seeks to streamline and reduce AI regulation, it leaves open many questions, including the scope of laws that will be challenged and the likelihood, if not certainty, that states will challenge the order’s legality. It also remains to be seen whether the order slows the passage of new state AI laws and the enforcement of existing ones. Indeed, it could ultimately have the unintended consequence of spurring even more state AI laws. In the article below, we discuss the scope of the order, the state AI laws the administration could target, how states have reacted to the order, and takeaways for businesses trying to comply with existing and forthcoming state AI laws.

In this post: (1) Selection of law in a choice-of-law forum can defeat privacy claims; (2) The Arizona Court of Appeals shuts down “spy pixel” litigation; (3) Multiple decisions provide guidelines as to when claims are likely to be dismissed for lack of standing; (4) Consent rises or falls on implementation, but plaintiffs cannot avoid the issue; and (5) Courts in the Third and Ninth Circuits disagree on whether simultaneous messages are intercepted while in transit.

Welcome to our monthly update on how courts across the nation have handled privacy litigation involving website tools such as cookies, pixels, session replay, and similar technologies. In this post, we cover decisions from October and November 2025.