Key point: Businesses subject to the CCPA now must conduct risk assessments for certain types of processing activities and, starting in 2028, must certify to California regulators that they completed the assessments.

The California Consumer Privacy Act’s (CCPA) new regulations went into effect on January 1, 2026. Although the new regulations bring many changes for businesses subject to the CCPA, one of the biggest changes is a new requirement to conduct risk assessments for processing activities that present “significant risk to consumers’ privacy.” This can encompass many common types of data processing activities, such as the use of third-party cookies and tracking technologies, processing of sensitive personal information (e.g., biometric data), and the use of AI for certain employment-related activities. Like the CCPA itself, the risk assessment requirement applies to consumer, employee, and commercial personal information.

Importantly, on April 1, 2028, businesses subject to the CCPA must file a certification with the California Privacy Protection Agency (CalPrivacy) attesting — under penalty of perjury — that they conducted the required risk assessments. The certification must be signed by a member of the business’s executive management team.

In the below article, we provide an overview of this new risk assessment requirement.

Key point: A federal court blocks the Texas App Store Accountability Act, which was set to take effect on January 1, 2026, on constitutional grounds.

A Texas federal district court today granted a preliminary injunction enjoining the Texas App Store Accountability Act, finding that the law likely violates the First Amendment and is unconstitutionally vague. In October, an internet trade association sued the state of Texas over the act, and this month the case was consolidated with another case asserting similar claims. The law was scheduled to take effect January 1, 2026, and would have imposed obligations on both app stores and developers providing mobile applications to Texas users. Texas will be unable to implement or enforce the act while the litigation is ongoing.

Key point: New York becomes the second state — after California — to enact an AI frontier model law, while the governor’s veto of the New York Health Information Privacy Act will be a welcome result for organizations that criticized the bill as unworkable.

In the last two weeks, New York Governor Kathy Hochul took action on numerous bills the New York legislature passed before it closed in June. Among those actions, Hochul signed four AI-related bills — including a bill regulating AI frontier models — and vetoed a controversial health data privacy bill. We discuss each of those bills in the article below.

In addition to these bills, earlier this year, New York lawmakers enacted three other AI-related laws — the Algorithmic Pricing Disclosure Act, a companion chatbot law, and a law regulating the use of algorithmic pricing by landlords.

In this episode of our special 12 Days of Regulatory Insights podcast series, Ashley Taylor, co-leader of Troutman Pepper Locke’s State AG team, sits down with Privacy and Cyber chair Ron Raether to discuss how state attorneys general (AGs) are shaping the regulatory landscape for social media and the broader ad tech ecosystem.

Key point: Although the executive order seeks to bring regulatory certainty to the development and deployment of AI in the U.S., at least in the short term it is unlikely to alleviate compliance burdens for businesses and may only create more uncertainty.

On December 11, 2025, President Donald Trump signed an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence.” The purpose of the executive order is twofold.

First, the order seeks to create a legal structure to stop states from enacting new state AI laws and from enforcing existing ones. According to the order, the goal of the U.S. must be “[t]o win” a “race with adversaries for supremacy” in AI. However, to do so, “United States AI companies must be free to innovate without cumbersome regulation.” Therefore, the order seeks to prevent “a patchwork of 50 different state regulatory regimes that makes compliance more challenging, particularly for start-ups.” Importantly, the order itself does not attempt to preempt state AI laws. Rather, as discussed below, it just creates a structure for the federal government to try to preempt some of them.

Second, the order states that the Trump administration will work with Congress to enact a “minimally burdensome national standard” that preempts state law and “ensure[s] that the United States wins the AI race, as we must.”

The order follows two prior attempts in Congress to pass a moratorium on states enacting AI laws. Most recently, an attempt to include a moratorium in the National Defense Authorization Act of 2026 failed, creating the impetus for the president to sign the order.

Although the executive order seeks to streamline and reduce AI regulation, it leaves open many questions, including the scope of laws that will be challenged and the likelihood, if not certainty, that states will challenge the order’s legality. It also remains to be seen whether the order slows the passage of new state AI laws and the enforcement of existing ones. Indeed, it could ultimately have the unintended consequence of prompting even more state AI laws. In the article below, we discuss the scope of the order, the state AI laws that could be targeted by the administration, how states have reacted to the order, and takeaways for businesses that are trying to comply with existing and forthcoming state AI laws.

Key point: Starting August 1, 2026, registered data brokers will need to access California’s new one-stop-shop deletion platform to process deletion requests or risk significant fines.

Last month, the California Office of Administrative Law (OAL) approved the California Privacy Protection Agency’s (CalPrivacy) regulations further implementing the Delete Act (SB 362). Effective January 1, 2026, the Delete Act makes several changes to California’s data broker law, including charging CalPrivacy with creating a new one-stop shop for California residents to request that all registered data brokers delete their personal information. California residents can begin registering on January 1, 2026, and data brokers must process requests starting August 1, 2026. Failure to comply can result in a $200 fine “for each deletion request for each day the data broker fails to delete information.”

In the below article, we provide a brief background on the Delete Act and summarize the new regulations.

Key point: This is the eighth fine CalPrivacy has issued against an entity for failing to register as a data broker and comes just days after CalPrivacy announced a new Data Broker Enforcement Strike Force and only months before fines will significantly increase under the California Delete Act.

On December 3, 2025, the California Privacy Protection Agency (CalPrivacy) announced its latest fine against an entity for failing to register as a data broker under California’s Delete Act. This is the eighth time CalPrivacy has fined an entity for failing to register as a data broker, with four fines issued in 2024 and four in 2025.

The $56,600 fine comes just days after CalPrivacy announced the formation of a Data Broker Enforcement Strike Force, portending even more (and significantly higher) fines against data brokers and unregistered data brokers. This is particularly notable given that the agency’s data broker regulations adopt a broader definition of what constitutes a data broker, one that may encompass entities that do not traditionally consider themselves data brokers.

In the below article, we provide a brief overview of the enforcement action. We also discuss the broader context of data broker regulation in California, including the increased risks and requirements on data brokers in 2026.

Key point: The court held that NetChoice’s complaint adequately states constitutional claims against Maryland’s Age-Appropriate Design Code Act and allowed NetChoice’s lawsuit to continue, but did not rule on the merits of the claims or enjoin the law.

On November 24, 2025, Maryland District Court Judge Richard Bennett denied Maryland’s motion to dismiss a complaint filed by NetChoice challenging Maryland’s Age-Appropriate Design Code Act, commonly referred to as the Maryland Kids Code. NetChoice’s complaint alleges that the Kids Code violates the First Amendment and is preempted by federal law. The decision finds only that NetChoice’s complaint states plausible claims. The court did not rule on the merits of the claims and did not enjoin the law. In the below article, we provide a brief overview of the Kids Code and the decision.

Key point: Oklahoma recently updated its breach notification statute for the first time since enactment, aligning with broader state trends and underscoring lawmakers’ ongoing review of data breach notification laws.

Effective January 1, 2026, Oklahoma’s Senate Bill 626 substantially revises the state’s data breach notification statute by expanding the definition of personal information, introducing a regulatory notice requirement, and updating safe-harbor exemptions. The amendments are the first changes to the law since it was enacted in 2008 and are consistent with trends in other states in recent years. For example, California adopted similar amendments set to take effect on January 1, 2026.

The below article provides an overview of the amendments.

Key point: The most recent CCPA enforcement action focuses on the CCPA’s right to opt out of sales and shares and the treatment of minors’ data.

On November 21, 2025, the California Attorney General (AG) announced its latest enforcement action for violations of the California Consumer Privacy Act (CCPA). The complaint alleges that a gaming app developer failed to provide a CCPA-compliant opt-out link or setting within any of its 21 apps or on its website. The complaint also alleges that six of the developer’s apps sold the personal information of consumers between the ages of 13 and 16 without obtaining consent. Pursuant to the final judgment and permanent injunction, the developer agreed to pay $1.4 million, implement corrective measures, and maintain a compliance program.

The settlement is the ninth CCPA public enforcement action, including six by the AG and three by the California Privacy Protection Agency. Seven of the nine enforcement actions are from this year, showing a notable increase in enforcement activity. As with prior enforcement actions, this settlement reinforces that businesses should be auditing their current practices and procedures to ensure that they are compliant and remain compliant.

In the below article, we provide a summary of the violations and penalties.