EDITOR’S NOTE: This is part three of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series highlights key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
As with other products and technologies, we can expect to see (and in fact already do see) varying approaches to governance for artificial intelligence systems. Currently, AI oversight is addressed piecemeal within independent federal, state, and international frameworks – for instance, in the regulation of autonomous vehicle development, or in laws governing automated decision-making. So how can we expect regulatory frameworks to develop for AI as an independently regulated field?
The European Union has taken significant steps toward adopting its first Regulation dedicated to AI governance. In April 2021, the European Commission published its proposal for the EU AI Act. On December 6, 2022, the Council adopted its common position on the proposed Act with changes, and on June 14, 2023, the Parliament adopted its own negotiating position with further amendments. Negotiation of the Regulation’s final form is expected to continue through the remainder of the year, so the final scope of its application remains to be seen. Some of the remaining points of negotiation highlight the challenges that regulators around the world will face as they continue to develop their regulatory frameworks. These include defining “artificial intelligence system” durably enough to survive evolving technologies, yet narrowly enough to avoid over-regulating ordinary software applications. Regulators must also determine with certainty which types of AI systems and practices should be considered high-risk and therefore subject to the strictest regulation or outright prohibition.
The EU is not alone. Around the world, AI policymakers and regulators are engaged in these same discussions. The Organisation for Economic Co-operation and Development (OECD) AI Policy Observatory now tracks AI policy initiatives across 69 countries and territories, as well as the EU. In May 2023, the G7 called for open international discourse on AI governance and regulatory interoperability, establishing the Hiroshima AI Process as part of its “common vision and goal of trustworthy AI.” In a recent comparison of the approaches to AI governance taken by Canada (whose Artificial Intelligence and Data Act passed its second reading in the House of Commons as part of Bill C-27 in April 2023) and the United Kingdom (which released its “Pro-Innovation” White Paper earlier this year), OECD.AI found that industry standards will play an important role in shaping policy efforts.
U.S. companies are by now familiar with the overlap between industry standards and sectoral regulation. For example, there are currently 13 enacted U.S. state privacy laws of general application, in addition to sector-specific federal laws and regulations such as the Health Insurance Portability and Accountability Act (HIPAA), industry standards such as the Payment Card Industry Data Security Standard (PCI DSS), and the many boards, associations, forums, and bodies formed to standardize and advance industry practices.
The U.S. released its Blueprint for an AI Bill of Rights in October 2022 and has continued to emphasize the role industry will play as the field develops. On July 26, 2023, Anthropic, Google, Microsoft, and OpenAI announced the formation of the Frontier Model Forum, an industry body focused on the safe and responsible development of frontier AI models. Consistent with other industry-focused regulatory efforts, also on July 26, the same day it published its final rule on cybersecurity risk management, strategy, governance, and incident disclosure, the Securities and Exchange Commission announced a proposed rule governing the use of predictive data analytics by registered broker-dealers and investment advisers. Other federal regulators are moving toward enforcement in line with their delegated authority, with an eye toward the actual and potential impact of AI systems.
As comprehensive regulatory frameworks directed at AI evolve, so too will questions about AI’s place within existing frameworks and forthcoming regulations, from development through end use. The Constangy Cyber Team assists businesses of all sizes and industries with implementing the updates to their privacy and compliance programs needed to address these complex and evolving developments. If you would like additional information on how to prepare your organization, please contact us at cyber@constangy.com.
The Constangy Cyber Advisor posts regular updates on legislative developments, data privacy, and information security trends. Our blog posts are informed by the Constangy Cyber Team's experience managing thousands of data breaches, providing robust compliance advisory services, and consulting on complex data privacy and security litigation.