On Wednesday, June 21, 2023, Senate Majority Leader Chuck Schumer (D-NY) unveiled the five policy objectives underpinning his ongoing work on a legislative framework to regulate Artificial Intelligence (AI)—the Security, Accountability, Foundations, Explain (SAFE) Innovation Framework for AI—at a Center for Strategic and International Studies (CSIS) event. A summary of the framework is available here.
The culmination of months-long discussions with more than 100 stakeholders, Leader Schumer’s push for an AI framework was first announced in April (see prior alert here). The effort, which is expected to span multiple congressional committees, is centered on four guardrails: “Who,” “Where,” “How” and “Protect.”
In the latest development, Leader Schumer has noted that the SAFE Innovation Framework, which will use the CHIPS and Science Act (P.L. 117–167) process as a model, will adhere to the following policy objectives:
- Innovation: Strike an appropriate balance between collaboration and competition among companies developing AI, while also addressing the appropriate amount of federal taxes and spending and the appropriate balance between private and open AI systems.
- Security: Establish guardrails to ensure that foreign adversaries cannot use United States advances in AI for illicit and harmful purposes, while also taking measures to prevent job loss or misdistribution of income, including by engaging workers, businesses, educators and researchers.
- Accountability: Establish guardrails regulating how AI is developed, audited and deployed, and make clear that certain practices should be prohibited.
- Foundations: Ensure AI technologies align with American foundations such as human liberty, civil rights and justice.
- Explainability: Ensure transparency for how AI systems operate, with companies taking a leading role, while also guarding against threats to intellectual property (IP).
Leader Schumer announced that he will be leading the bipartisan charge on AI regulation, along with Sens. Mike Rounds (R-SD), Todd Young (R-IN) and Martin Heinrich (D-NM). As part of this effort, Commerce Chair Maria Cantwell (D-WA), Homeland Security and Governmental Affairs Chair Gary Peters (D-MI), Intelligence Chair Mark Warner (D-VA), Judiciary Chair Dick Durbin (D-IL) and Antitrust Subcommittee Chair Amy Klobuchar (D-MN) were all asked to contact their Ranking Members to commence bipartisan efforts. In terms of timeline, Leader Schumer projected that the framework would take “months.”
As part of the push, Leader Schumer has also unveiled a slate of AI-focused, Member-only briefings this summer. On June 13, 2023, lawmakers convened for the first briefing, after which the Majority Leader noted the sense of urgency for lawmakers to stay proactive on the issue. While he pointed to societal benefits from AI such as medical advances and fusion energy, Leader Schumer also highlighted challenges associated with the technology, including the difficulty of “explainability,” calling for increased cooperation between legislators, developers, researchers, academics and advocates. Next month, the second and third briefings will focus on the trajectory of AI in the near future, as well as the resulting implications for U.S. national security.
Leader Schumer has indicated that he will convene a series of “AI Insight Forums” in the fall with AI developers and executives, scientists, national security experts and others. The goal is to establish a formal information gathering process that is more efficient than traditional congressional hearings and better aligned with the rapid timeline of AI innovation and advancement. Following these forums, committees of jurisdiction will still need to propose legislation informed by the preceding discussions.
As discussions convened by this bipartisan Senate group unfold, the Akin cross-practice AI team continues to keep clients apprised of key developments, as well as other forthcoming congressional, administrative, private-stakeholder and international initiatives on AI.