In early August, the National Institute of Standards and Technology released the initial public draft of its Cybersecurity Framework 2.0. The draft is a long-awaited update to a framework that’s been in place for almost 10 years: The Framework for Improving Critical Infrastructure Cybersecurity, first released in 2014 and updated in 2018.
Boards of Directors for public companies across the country are likely to be taking stock of their companies’ cybersecurity practices and strategies after the Securities and Exchange Commission’s adoption of the Cybersecurity Incident Disclosure Rule on July 26. Although the SEC removed the requirement for corporate boards to include members with cybersecurity expertise, it still intends for the Rule to result in greater transparency of companies’ cybersecurity governance and to aid investor understanding. The Rule presents additional reasons for companies to determine who, if anyone, on their Boards can help with oversight of cybersecurity governance.
As a former Special Agent for the Federal Bureau of Investigation who investigated cybercrimes involving children, I know from experience that the topic of increasing online protections for minors provokes intense debate among law enforcement, social services, parents, and civil rights communities.
Often the discussions focused on how to preserve the positive impact of the internet while addressing the negative aspects, such as the facilitation of cyberbullying, narcotics trafficking, and various forms of exploitation. While others continue the discussion, Texas has stepped beyond the debate and enacted a new regulatory regime intended to shield certain materials from being viewed by minors and to limit the collection and use of their data.
This year has proven to be an active one for state privacy legislation. In addition to Montana’s Consumer Data Privacy Act, the state has now passed a Genetic Information Privacy Act.
On July 31, the California Privacy Protection Agency’s Enforcement Division announced that it would be reviewing the privacy practices of connected vehicle manufacturers and related technologies. Connected vehicles contain features that collect information about owners and riders, including location sharing, web-based entertainment, cameras, and smartphone integrations.
EDITOR’S NOTE: This is part three of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
As with all other products and technologies, we can expect to see (and in fact already do see) the emergence of varying approaches to governance for artificial intelligence systems. Currently, AI oversight may be addressed within independent federal, state, and international frameworks – for instance, within the regulation of autonomous vehicle development, or laws applicable to automated decision-making. So, how can we expect regulatory frameworks to develop for AI as an independently regulated field?
EDITOR’S NOTE: This is part two of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
Recent developments in Artificial Intelligence have opened the door to exciting possibilities for innovation. From helping doctors communicate better with their patients to drafting a travel itinerary as you explore new locales (best to verify that all the recommendations are still open!), AI is beginning to demonstrate that it can positively affect our lives.
However, these exciting possibilities also allow malicious actors to abuse the systems and introduce new or “improved” cyber threats.
On July 10, 2023, the European Commission (“EC”) adopted its adequacy decision for the EU-U.S. Data Privacy Framework (“EU-U.S. DPF”).
EDITOR’S NOTE: This is part one of “Cyber AI Chronicles” – written by lawyers and named by ChatGPT. This series will highlight key legal, privacy, and technical issues associated with the continued development, regulation, and application of artificial intelligence.
Artificial Intelligence is not a new concept or endeavor. In October 1950, Alan Turing published “Computing Machinery and Intelligence,” proposing the question: Can machines think? Since then, the concept has been studied at length, with an immediately recognizable example being IBM Watson, which memorably defeated Jeopardy! champions Ken Jennings and Brad Rutter in 2011. AI has been captured and fictionalized in movies, video games, and books. Even if we are not aware of it, AI underlies many technical tools that we use every day.
The national impact of ransomware is expanding. Following a dip in the recorded number of ransomware attacks in 2022, there have been multiple nationwide events with devastating effects in 2023. Given the damage across private and public enterprises, the federal government has sought to provide additional information and resources to assist businesses that are preparing to defend against an attack, as well as those that have already experienced one.
The Constangy Cyber Advisor posts regular updates on legislative developments, data privacy, and information security trends. Our blog posts are informed by the Constangy Cyber Team's experience managing thousands of data breaches, providing robust compliance advisory services, and consulting on complex data privacy and security litigation.
Contributors
- Suzie Allen
- John Babione
- Bert Bender
- Jason Cherry
- Christopher R. Deubert
- Maria Efaplomatidis
- Sebastian Fischer
- Laura Funk
- Lauren Godfrey
- Amir Goodarzi
- Taren N. Greenidge
- Chasity Henry
- Julie Hess
- Sean Hoar
- Donna Maddux
- David McMillan
- Ashley L. Orler
- Todd Rowe
- Melissa J. Sachs
- Allen Sattler
- Matthew Toldero
- Alyssa Watzman
- Aubrey Weaver
- Xuan Zhou