Esther is an associate in the firm’s Privacy + Cyber practice. She received her J.D. from the University of California, Irvine School of Law, where she served as a representative of the Asian Pacific American Women Lawyers Alliance and the Orange County Korean American Bar Association, and as pro bono chair of the Asian Pacific American Law Students Association.

Key point: Businesses operating generative artificial intelligence systems in Utah and Washington may be subject to new legal obligations, such as including provenance data in content created or altered using generative artificial intelligence.

In March, Utah Governor Spencer Cox signed the Digital Content Provenance Standards Act (HB 276), and Washington Governor Bob Ferguson signed HB 1170, which regulates AI-modified content. Both laws impose provenance-data obligations on covered providers that create, code, or otherwise produce a generative artificial intelligence (GenAI) system that has more than 1 million monthly users and is publicly accessible within the geographic boundaries of each state. The Utah and Washington laws largely align with the California AI Transparency Act (CAITA) and AB 853, which obligate creators of GenAI systems, large online platforms, GenAI hosting platforms, and capture device manufacturers to fulfill certain provenance data requirements. The article below provides an overview of the California, Utah, and Washington laws and compares the obligations of covered providers under each.

In Part 1 of this series, we outlined the basics of the California Consumer Privacy Act’s (CCPA) new cybersecurity audit requirement: who is covered, when audits are required, and the key obligations to keep in mind. In Part 2, we explored the mechanics and explained what the California Privacy Protection Agency (CalPrivacy) expects the cybersecurity audit to look like in practice, including what must be evaluated, who may conduct the audit, how thorough it must be, and what goes into the audit report.

In Part 1 of this series, we walked through the basics of the California Consumer Privacy Act’s (CCPA) new cybersecurity audit requirement: which businesses are covered, when audits are required, and the high-level obligations to have on your radar.

This five-part series provides an introductory roadmap to the California Consumer Privacy Act’s (CCPA) new cybersecurity audit requirement and the California Privacy Protection Agency’s (CalPrivacy) implementing regulations.

Key point: California enacts first-in-the-nation law focused on regulating frontier artificial intelligence models.

On September 29, 2025, California Governor Gavin Newsom signed SB 53 — the Transparency in Frontier Artificial Intelligence Act (TFAIA) — into law. As explained in the Senate floor analysis, the law “requires large artificial intelligence (AI) developers . . . to publish safety frameworks, disclose specified transparency reports, and report critical safety incidents to the Office of Emergency Services (OES).” The law also “creates enhanced whistleblower protections for employees reporting AI safety violations and establishes a consortium to design a framework for ‘CalCompute,’ a public cloud platform to expand safe and equitable AI research.” Both Newsom and the law’s primary sponsor, Senator Scott Wiener, hailed it as striking a proper balance between fostering innovation and placing sensible guardrails on frontier AI models.