Back to the Future, Part II: How data privacy laws can teach us what to expect with AI legislation

EDITOR’S NOTE: This is Part Two of a two-part series. You can read Part One here.

“Early adopters” are likely to set the tone for future legislation

The scope and approach of AI regulation is still largely up in the air, but as with data privacy, the first few major laws to be passed will almost assuredly be used as reference points for additional legislation.

Looking back again to the first 10 data privacy laws enacted by U.S. states, each successive law bore growing similarities to its predecessors in Colorado and Virginia. The CCPA may have come first, but because it took clear inspiration from the European Union’s General Data Protection Regulation – which was viewed as too onerous for businesses – other states were not comfortable following suit.

Once the Colorado and Virginia laws were enacted, other states had a model for introducing legislation that would have enough support to pass (albeit with variations unique to each state), and a wave of new laws quickly followed. Some provisions in the Colorado and Virginia laws were relatively unfamiliar at the time, such as regulations on the use of “dark patterns” or requirements to provide universal opt-out mechanisms for consent, but many of these aspects became part and parcel of each state law that followed. Many states have even continued to update their laws to incorporate regulations inspired by other states. Minnesota, for example, passed its Consumer Data Privacy Act only last year but has since introduced revisions that bear a strong resemblance to Washington state’s My Health My Data Act.

Similarly, in AI regulation, the combination of amending existing laws and passing new, narrowly tailored laws is creating a sort of “feedback” loop. Rather than face the challenge of trying to assemble broad (and often tenuous) support for passing a comprehensive law, states are quickly pushing through narrower AI legislation that expands on new and existing laws.

As an example of the expansion of existing laws, the Board of the California Privacy Protection Agency met this month to lurch closer toward finally completing a rulemaking package on automated decision-making technology, cybersecurity, audits, and risk assessments, which had been mandated by the California Privacy Rights Act of 2020. (The Privacy Rights Act amended the California Consumer Privacy Act.)

This rulemaking process has been so protracted that much of the original scope of the Privacy Rights Act has been overtaken by developments in AI, resulting in a rulemaking package that looks considerably different from what had been expected.

When it comes to AI-specific regulation, Colorado’s Artificial Intelligence Act – which will take effect February 1, 2026 – is the first omnibus-style state law to be enacted, and it managed to pass despite facing many of the same criticisms that doomed Virginia’s AI legislation. Other states are already modeling some of their proposed legislation after the Colorado law, with many bills regulating “high-risk systems” and preventing “algorithmic discrimination” – that is, ensuring that AI system results do not create differential treatment when used to make “consequential decisions” that have a material effect on employment, financing, health services, housing, or insurance, among other things.

The AI bills also follow Colorado in differentiating between Developers (creators) and Deployers (users) of AI systems, creating separate duties for each. However, as with data privacy laws, the states are not completely in agreement on all aspects of regulation or how far they should go. One notable area is the debate about whether Developers or Deployers should bear primary responsibility for monitoring for algorithmic harms. Colorado’s AI Act requires Developers to maintain accountability for known or foreseeable risks within their AI system, including a requirement to report to both the state Attorney General and any known Deployers within 90 days of discovering or being made aware that algorithmic discrimination is occurring. Virginia’s bill also had reporting requirements, but they were not as extensive as those in the Colorado legislation. At this time, most bills lean toward putting the onus for accountability on Deployers of AI systems; however, nearly half of the proposed bills simultaneously or separately impose accountability on Developers.

Although it is likely that other states will eventually advance more comprehensive bills similar to the Colorado and Virginia legislation, a key difference between the development of AI regulation and that of data privacy is the sheer complexity and number of considerations involved in AI. Data privacy laws consolidated around the same fundamental principles and issues, such as governance, notice, consent, individuals’ rights, third-party management and data sharing, data security, and retention. But when it comes to AI, even the type of systems in scope is not consistent. Some bills regulate higher-risk systems used for automated decision-making (similar to the Colorado and Virginia legislation), others target all AI systems broadly, and still others narrowly target only generative AI systems.

Nevertheless, states are still likely to look to Colorado and Utah (along with California and Virginia, which all tend to be at the forefront of tech policy developments) for inspiration and consensus, even if the newer legislation is not comprehensive. For instance, Utah’s 2024 Artificial Intelligence Policy Act is narrower than Colorado’s AI Act. However, it has a unique section establishing an “AI Learning Laboratory Program” that effectively creates a sandbox testing environment: participants interested in using AI can submit an application to the state to live-test their AI technology, and in turn be granted certain safe harbors and temporary exemptions from regulatory penalties. Utah is striving to strike a balance between encouraging innovation, fostering ongoing dialogue between businesses and policymakers, and ensuring reasonable consumer protections. It is likely that other states will be monitoring the success of this program very closely.

Ultimately, many states have bills in committee that cover distinct issues (some of which have already been discussed here), but there is also wide variation in accountability requirements, including governance structures and documentation for AI programs; risk or impact assessments before deployment of AI systems and periodically thereafter; notice of the use of AI; and reporting when adverse impacts of AI occur.

A comprehensive approach to AI regulation is unlikely

Despite calls for federal regulation, Congress is unlikely to pass any comprehensive legislation, instead passing laws that are much narrower or sector-specific. Over the past five years, every single omnibus federal data privacy bill has withered on the vine, and the same is already happening with AI.

Congress has introduced more than 100 bills relating to AI, but with the exception of a few outliers, it is unlikely that many of these will become law. As at the state level, the issues regulated by these federal bills run the gamut, with some focusing on transparency and accountability, and others on consumer protection; some targeting specific industries (marketing, genetics, healthcare, education); some focused on national defense; and others more broadly on research and innovation practices. In addition to the difficulty of reconciling such a wide range of issues into a single law or set of laws, many of the same issues that hindered a federal data privacy law apply to a prospective AI law: which agency would have enforcement authority, and whether existing state laws would be preempted.

On an international level, although many countries modeled their data privacy laws on the European Union’s GDPR, the same has not occurred with its AI Act. The reason is presumably that the AI Act focuses on preventing AI risks and harms, whereas most jurisdictions (broadly speaking) are prioritizing innovation. Gov. Youngkin expressed this sentiment in his veto message, stating that the “role of government in safeguarding AI practices should be one that enables and empowers innovators to create and grow, not one that stifles progress and places onerous burdens on our Commonwealth’s many business owners.” Similar to developments in the U.S. states, many countries are either amending their existing laws or adopting frameworks for AI governance rather than passing comprehensive new legislation. The amendments cover a wide range of issues, including the expansion of laws dealing with consumer protections, cybersecurity and national defense, banking and finance, data privacy, health care (biometrics and genetics), and intellectual property. Another similarity is that many of these countries have established task forces and working groups to define national strategy, principles, ethics, and guidance while working toward codifying their approaches to AI governance.

What can we learn?

Although AI is novel and uniquely complex in many ways, developments in AI are likely to follow much of what has already been seen in data privacy. Initial developments to date may seem too slow, but the pace of new laws and regulations is likely to pick up speed in the coming year or so.

Additionally, many of the core principles of data privacy are already being adapted for AI purposes, including accountability and oversight, impact assessments, transparency and notice, choice and consent, options to challenge decisions and exercise rights, and protection from harm. There are still differences and uncertainties, but organizations can be doing much more than they may realize to create AI governance programs, policies, processes, and frameworks that are well positioned to “keep pace” with whatever the future may bring.


The Constangy Cyber Advisor posts regular updates on legislative developments, data privacy, and information security trends. Our blog posts are informed through the Constangy Cyber Team's experience managing thousands of data breaches, providing robust compliance advisory services, and consultation on complex data privacy and security litigation. 
