This summer, Nevada Gov. Joe Lombardo (R) signed the Consumer Health Data Privacy Act into law. The Act, which will take effect March 31, 2024, provides protections for consumer health data collected and maintained by regulated entities.
Boards of Directors for public companies across the country are likely taking stock of their companies’ cybersecurity practices and strategies after the Securities and Exchange Commission’s adoption of the Cybersecurity Incident Disclosure Rule on July 26. Although the SEC removed the requirement for corporate boards to include members with cybersecurity expertise, it still intends for the Rule to result in greater transparency into companies’ cybersecurity governance and to aid investor understanding. The Rule gives companies additional reasons to determine who, if anyone, on their Boards can help oversee cybersecurity governance.
As a former Special Agent for the Federal Bureau of Investigation who investigated cybercrimes involving children, I know from experience that the topic of increasing online protections for minors provoked intense debates among law enforcement, social services, parents, and the civil rights community.
Often the discussions focused on how to preserve the positive impact of the internet while addressing its negative aspects, such as the facilitation of cyberbullying, narcotics trafficking, and various forms of exploitation. While others continue the discussion, Texas has stepped beyond the debate and enacted a new regulatory regime intended to shield certain materials from being viewed by minors and to limit the collection and use of their data.
This year has proven to be an active one for state privacy legislation. In addition to its Consumer Data Privacy Act, Montana has now passed a Genetic Information Privacy Act.
On July 31, the California Privacy Protection Agency’s Enforcement Division announced that it would review the privacy practices of connected vehicle manufacturers and connected vehicle technologies. Connected vehicles contain features that collect information about owners and riders, including location sharing, web-based entertainment, cameras, and smartphone integrations.
EDITOR’S NOTE: This is part three of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
As with all other products and technologies, we can expect to see (and in fact already do see) the emergence of varying approaches to governance for artificial intelligence systems. Currently, AI oversight may be addressed within independent federal, state, and international frameworks – for instance, in the regulation of autonomous vehicle development or in laws applicable to automated decision-making. So, how can we expect regulatory frameworks to develop for AI as an independently regulated field?
On July 26, the Securities and Exchange Commission adopted a new rule regarding cybersecurity risk management, strategy, governance, and incident disclosure. The “Cybersecurity Incident Disclosure Rule” will be applicable to public companies subject to the reporting requirements of the Securities Exchange Act of 1934. It is premised on the belief that investors will benefit from more timely and consistent disclosure about material cybersecurity incidents, and follows interpretive guidance the SEC issued in 2011 and 2018. The Final Rule will take effect 30 days after being published in the Federal Register – likely by September 1.
EDITOR’S NOTE: This is part two of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
Recent developments in Artificial Intelligence have opened the door to exciting possibilities for innovation. From helping doctors communicate better with their patients to drafting a travel itinerary as you explore new locales (best to verify that all the recommendations are still open!), AI is beginning to demonstrate that it can positively affect our lives.
However, these exciting possibilities also allow malicious actors to abuse these systems and introduce new or “improved” cyber threats.
On July 10, 2023, the European Commission (“EC”) adopted its adequacy decision for the EU-U.S. Data Privacy Framework (“EU-U.S. DPF”).
EDITOR’S NOTE: This is part one of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
Artificial Intelligence is not a new concept or endeavor. In October 1950, Alan Turing published “Computing Machinery and Intelligence,” proposing the question: Can machines think? Since then, the concept has been studied at length, with an immediately recognizable example being IBM Watson, which memorably defeated Jeopardy! champions Ken Jennings and Brad Rutter in 2011. AI has been captured and fictionalized in movies, video games, and books. Even if we are not aware of it, AI underlies many technical tools that we use every day.
The Constangy Cyber Advisor posts regular updates on legislative developments, data privacy, and information security trends. Our blog posts are informed by the Constangy Cyber Team's experience managing thousands of data breaches, providing robust compliance advisory services, and consulting on complex data privacy and security litigation.
Contributors
- Suzie Allen
- John Babione
- Bert Bender
- Jason Cherry
- Christopher R. Deubert
- Maria Efaplomatidis
- Sebastian Fischer
- Laura Funk
- Lauren Godfrey
- Amir Goodarzi
- Taren N. Greenidge
- Chasity Henry
- Julie Hess
- Sean Hoar
- Donna Maddux
- David McMillan
- Ashley L. Orler
- Todd Rowe
- Melissa J. Sachs
- Allen Sattler
- Matthew Toldero
- Alyssa Watzman
- Aubrey Weaver
- Xuan Zhou