EDITOR’S NOTE: This is Part One of a two-part series.
There is a commonly held belief that law and regulation are unable to keep up with the pace of technology. The belief is so prevalent that it has been given a colloquial name: “the pacing problem.”
Although the term was coined nearly 20 years ago, the recent proliferation of generative artificial intelligence has renewed questions about whether laws can be enacted that will be adequate or relevant for such complex emerging technologies.
It’s a fair question. ChatGPT was publicly released at the end of November 2022, yet relatively little AI regulation has been passed at the state or federal level. Even legislation that seemed nearly certain to pass has fallen short, as we saw last month when Gov. Glenn Youngkin (R) of Virginia vetoed HB 2094.
Despite the many uncertainties that remain, there are clear parallels between how data privacy laws took shape five years ago and how AI legislation is developing today. We can therefore look to the evolution of data privacy laws to gain a sense of what to expect from future AI regulation, and organizations can use that perspective to make informed decisions about AI approaches that are adaptable and can “keep pace” with future compliance requirements.
AI regulation today is similar to the first ripples of the data privacy wave
Right now AI regulation is relatively sparse, but that is not surprising when compared with where data privacy regulation stood five years ago. At the start of 2020, 14 states had introduced new privacy legislation, but only three states had enacted new privacy laws: two very narrow statutes and the omnibus California Consumer Privacy Act, or “CCPA.” In 2021, Colorado and Virginia also enacted new laws that were similar to, but not as expansive as, the CCPA. In 2022, two more states enacted laws similar to those in Colorado and Virginia. Then, in 2023, seven states enacted laws, and in 2024 eight more followed suit, all of which broadly aligned with one another.
Over the years in which these states passed new data privacy laws, consensus progressively developed around the scope and approach of regulatory requirements. In 2025, the bills active in 14 states largely align with this consensus, as do bills in four states that have passed through one legislative chamber.
Applying what we know about the evolution of data privacy laws to the current state of AI regulation, something similar is likely to occur: many states will introduce bills that initially fail or pass only in narrow form. As a handful of states push comprehensive legislation through, however, a consensus will begin to form that gives more and more states the momentum to enact legislation.
This process is already starting to occur with AI. In 2024, eighteen states proposed broad new AI regulations, and three of those states enacted narrower bills. As of this writing, more than 20 states have introduced AI bills. Several of those bills have already failed and the rest are far from enactment; Colorado has passed the only omnibus-style law (discussed below), and until Gov. Youngkin’s veto, Virginia was expected to be the second state to enact similar legislation, which would have created momentum for more laws to follow.
There are, however, some notable differences between the development of data privacy laws and AI regulations.
First, the types of data privacy laws enacted in the past five years were fundamentally new in many ways with few analogues in existing law, whereas today many states are trying to integrate AI regulation into existing legal frameworks. These existing laws were even cited by Gov. Youngkin in his veto of the Virginia bill, when he said that “there are many laws currently in place that protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more.”
Many states are likely to continue passing amendments or narrow laws as a short-term approach to addressing AI issues until better understanding and consensus can be developed. For instance, Utah (despite its short legislative session) passed multiple bills on disclosing the use of generative AI services and chatbots in professions requiring a state-granted license. Currently, most states’ bills and laws can be grouped into distinct categories:
- Consumer protections when AI is used for profiling and automated decisions.
- Use of AI for hiring and in employment contexts.
- Deceptive media or “deepfakes,” which are further sub-categorized by specific types of individuals (for example, public figures, or minors) as well as activities (for example, election-related, or sexually explicit).
- Formation of AI task forces or groups devoted to understanding AI impacts.
Second, in contrast to data privacy laws, which developed more or less organically, AI policymakers have tried to proactively organize themselves nationwide to develop a more harmonized approach to AI regulation. However, one of the most active groups, the Multistate AI Policymaker Working Group (a bipartisan assembly of more than 200 state lawmakers from more than 45 states, convened by the Future of Privacy Forum to better understand emerging technologies and related policy issues), has been left in limbo after the Forum, under pressure, withdrew its support for the Working Group. As a result, the momentum the states had been building toward a consistent nationwide approach to regulating AI has effectively stalled.