President Biden Issues Long-Awaited Artificial Intelligence (AI) Executive Order
Key Points
- The Biden-Harris Administration has issued its long-awaited executive order (EO) regulating artificial intelligence (AI). The order issues directives to over twenty federal agencies, with implementation deadlines ranging from 30 to 365 days, the longest of which falls just ahead of the 2024 election.
- The order relies on guidance from the National Institute of Standards and Technology (NIST) and imposes requirements for the federal government’s use, evaluation and procurement of AI software and systems.
- While the EO is primarily directed toward U.S. federal agencies, it also includes a number of requirements for private companies, including that developers of “dual-use foundation models” submit reports to the Commerce Department outlining their training and testing procedures.
Background
On October 30, 2023, the Biden-Harris Administration issued its long-awaited AI EO, which issues directives to over twenty federal agencies, with implementation deadlines ranging from 30 to 365 days, the longest of which falls just ahead of the 2024 election. The EO builds on the White House’s 2022 Blueprint for an AI Bill of Rights, which consisted of non-binding guidelines for the design, use and deployment of AI systems in the public and private sectors and addressed concerns raised by civil society groups (see our prior alert).
The EO precedes the United Kingdom’s (UK) AI Safety Summit and arrives on the heels of Senate Majority Leader Chuck Schumer’s (D-NY) second “AI Insight Forum,” where the Majority Leader outlined the need for $32 billion in annual federal investment to enable the U.S. to lead in safe AI innovation. Alongside the EO, the Group of Seven (G7) leaders also unveiled international principles and a code of conduct for companies developing advanced AI systems.
In addition to its agency-specific directives, the EO creates a White House AI Council to coordinate the federal government’s AI activities more broadly, chaired by the White House Deputy Chief of Staff for Policy and composed of representatives from each agency.
Agencies’ implementation of the directives outlined in the EO will occur alongside continued legislative efforts. President Biden is slated to meet with a group of bipartisan lawmakers, including Leader Schumer, at the White House on October 31, 2023, to discuss the path forward on AI legislation. The next two Senate AI Insight Forums are slated for November 1, focusing on AI in the workforce and the risks of training AI on biased data in high-impact fields. The series is expected to continue through the end of the year, with additional installments focused on national security, elections and democracy, intellectual property (IP) and other topics, and with the ultimate goal of finalizing a legislative framework for AI regulation.
EO Overview
The sprawling EO consists of thirteen sections, which include requirements for industry, federal agencies and various White House offices:
- Section 1: Purpose.
- Section 2: Policy and Principles.
- Section 3: Definitions.
- Section 4: Ensuring the Safety and Security of AI Technology.
- Section 5: Promoting Innovation and Competition.
- Section 6: Supporting Workers.
- Section 7: Advancing Equity and Civil Rights.
- Section 8: Protecting Consumers, Patients, Passengers, and Students.
- Section 9: Protecting Privacy.
- Section 10: Advancing Federal Government Use of AI.
- Section 11: Strengthening American Leadership Abroad.
- Section 12: Implementation.
- Section 13: General Provisions.
Requirements for Industry
While the EO is primarily directed toward U.S. federal agencies, its implementation will impose a number of requirements on private companies. Developers of “dual-use foundation models” must submit regular reports to the U.S. Department of Commerce (Commerce) outlining their training and testing procedures and how they plan to protect their technology, including the models’ performance in relevant red-team testing and the related safety measures the company has taken.
Entities that acquire, develop, or possess a large-scale computing cluster must also report to Commerce on certain factors, including the existence and location of such clusters and the amount of total computing power available in each.
To address the use of U.S. Infrastructure as a Service (IaaS) products by foreign malicious cyber actors, the order also requires Commerce to prescribe reporting requirements for IaaS providers to ensure that foreign resellers of such products verify the identity of any foreign person that obtains an IaaS account.
Requirements for Agencies
At a high level, the EO directs federal agencies to adhere to eight principles when governing the development and use of AI:
- Ensuring AI Safety and Security: Ensuring robust, reliable, repeatable and standardized testing and evaluations of AI systems. The Administration will also help develop effective labeling and content provenance mechanisms.
- Promoting Responsible Innovation, Competition and Collaboration: Leveraging investments in AI-related education, training, development, research and capacity; addressing certain IP questions; and promoting competition by providing small developers access to technical assistance and encouraging the Federal Trade Commission (FTC) to exercise its authorities.
- Supporting American Workers: Adapting job training and education, and developing principles and best practices to address job displacement; labor standards; workplace equity, health and safety; and data collection.
- Advancing Equity and Civil Rights: Ensuring that AI complies with all federal laws; promoting robust oversight and engagement with affected communities; and regulating to protect against unlawful discrimination and abuse, including through increased coordination between the U.S. Department of Justice (DOJ) and federal civil rights offices.
- Standing Up for Consumers, Patients and Students: Enforcing existing consumer protection laws and enacting appropriate safeguards against fraud, unintended bias, privacy infringements and other harms, including by advancing the responsible use of AI in healthcare, and specifically the use of the technology in drug-development processes.
- Protecting Americans’ Privacy and Civil Liberties: Ensuring that the collection, use and retention of data is lawful, secure and promotes privacy, including by directing federal agencies to use privacy-enhancing technologies (PETs) where beneficial. In signing the order, President Biden also reiterated his calls for Congress to pass bipartisan data privacy legislation.
- Ensuring Responsible and Effective Government Use of AI: Working to attract, retain and develop public-service-oriented AI professionals and ensuring that the government modernizes information technology infrastructure.
- Advancing American Leadership Abroad: Engaging with international partners in developing a framework to manage AI’s risks, while advancing American leadership.
Agency-specific directives, along with their accompanying deadlines where applicable, are outlined at a high level below.
U.S. Department of Commerce (Commerce)
- When implementing the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022 (P.L. 117-167), promote competition by increasing the availability of resources to small businesses, among other things.
- 90 Days:
- Impose reporting obligations on companies developing or intending to develop potential “dual-use foundation models,” which are defined to include “an AI model that is trained on broad data, generally uses self-supervision, contains at least tens of billions of parameters, is applicable across a wide range of contexts and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.” Specifically, such companies must report on (1) activities related to training, developing, or producing dual-use foundation models; (2) the ownership and possession of the model weights of any dual-use foundation models and the measures taken to protect them; and (3) the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST, as well as related safety measures the company has taken.
- Require companies, individuals, or other entities that acquire, develop, or possess a potential large-scale computing cluster to report on certain factors, including the existence and location of these clusters and the amount of total computing power available in each.
- Propose regulations requiring U.S. IaaS providers to report when foreign persons transact with them to train large AI models that could be used for malicious cyber-related activities (a “training run”). The rules must also bar U.S. IaaS providers from providing such products to foreign resellers unless the reseller submits to the provider a report detailing each instance in which a foreign person transacts with the reseller to use the U.S. IaaS product to conduct a training run.
- 180 Days:
- Through NIST, initiate an effort to engage with industry and relevant stakeholders to develop and refine, for possible use by synthetic nucleic acid sequence providers, (1) specifications for effective nucleic acid synthesis procurement screening; (2) best practices, including security and access controls, for managing sequence-of-concern databases to support such screening; (3) technical implementation guides for effective screening; and (4) conformity-assessment best practices and mechanisms.
- 240 Days: Provide a report to the Office of Management and Budget (OMB) and the White House identifying practices for authenticating content and tracking its provenance; labeling and detecting synthetic content; and preventing generative AI from producing child sexual abuse material (CSAM), among other things.
- Within 180 days of the report, develop guidance regarding the existing tools and practices for digital content authentication and synthetic content detection measures. OMB must then issue guidance to agencies for labeling and authenticating official U.S. government digital content that they produce or publish. The Federal Acquisition Regulatory Council is also directed to consider amending the Federal Acquisition Regulation (FAR) to reflect the guidance.
- 270 Days:
- Through NIST, develop best practices for deploying safe and trustworthy AI systems, including by developing a companion resource to the AI Risk Management Framework for generative AI, creating a secure software development framework for generative AI and dual-use foundation models and issuing guidance for evaluating and auditing AI capabilities.
- Through NIST, establish guidelines to enable developers of AI, particularly of dual-use foundation models, to conduct AI red-teaming tests.
- Solicit stakeholder input on potential risks and benefits of dual-use foundation models and submit a report to the President with related recommendations.
- Establish a plan for global engagement on promoting and developing AI standards. Within 180 days of establishing the plan, Commerce must submit a report to the President with related recommendations.
- 365 Days: Create guidelines for agencies to evaluate the efficacy of differential-privacy-guarantee protections, including for AI.
U.S. Department of Homeland Security (DHS)
- 90 Days: The head of each agency with relevant regulatory authority over critical infrastructure and the heads of relevant sector risk management agencies (SRMAs) must provide to DHS an assessment of potential risks related to the use of AI in critical-infrastructure sectors.
- 120 Days: Publish informational resources to better attract and retain experts in AI and other critical technologies.
- 180 Days:
- Initiate policy changes to clarify and modernize immigration pathways for experts in AI and other critical and emerging technologies and consider beginning a rulemaking to modernize the H-1B program.
- Incorporate the AI Risk Management Framework (NIST AI 100-1), as well as other appropriate security guidance, into safety and security guidelines for use by critical infrastructure owners and operators. DHS must then coordinate with agency heads to take steps toward having the federal government mandate such guidelines.
- Complete an operational pilot project to test, evaluate and deploy AI capabilities to help remediate vulnerabilities in critical U.S. government software, systems and networks.
- Evaluate the potential for AI to be misused to enable the development or production of chemical, biological, radiological, and nuclear (“CBRN”) threats, and develop a framework to conduct structured evaluation and stress testing of nucleic acid synthesis procurement screening.
- Develop a program to help developers mitigate AI-related IP risks.
- 270 Days: Develop a plan for multilateral engagements to further the adoption of the newly developed AI safety and security guidelines for critical infrastructure owners and operators. Within 180 days of establishing the plan, DHS must submit a report to the President on needed actions to reduce cross-border risks to critical U.S. infrastructure.
- Establish an Artificial Intelligence Safety and Security Board, an advisory committee composed of AI experts from the private sector, academia and government.
- Lead collaboration with international partners to mitigate the risk of critical infrastructure disruptions resulting from incorporation of AI into critical infrastructure systems or malicious use of AI.
U.S. Department of Defense (DOD)
- 120 Days: Enter into a contract with the National Academies of Sciences, Engineering and Medicine to conduct a study examining concerns and opportunities at the intersection of AI and synthetic biology.
- 180 Days:
- Complete an operational pilot project to test, evaluate and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical U.S. government software, systems and networks.
- Submit a report to the White House outlining recommendations to address gaps in AI talent for national defense.
U.S. Department of Energy (DOE)
- 120 Days: Consistent with available appropriations, establish a pilot program to enhance training programs for scientists, with the goal of training 500 new researchers by 2025.
- 180 Days: Among other things, establish an office to coordinate development of AI and other emerging technologies across DOE programs and the National Laboratories.
- 270 Days: Develop and, to the extent permitted by available appropriations, implement a plan for developing DOE’s AI model evaluation tools and AI testbeds.
U.S. Department of the Treasury
- 150 Days: Submit a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
U.S. Department of State
- Lead efforts to expand engagements with international partners and establish a strong international framework for managing the risks and harnessing the benefits of AI.
- 90 Days: Alongside DHS, streamline visa applications and appointments for immigrants who plan to work on AI or other critical technologies.
- 120 Days:
- Consider initiating a rulemaking to establish new criteria to designate countries and skills on the Exchange Visitor Skills List as it relates to the two-year foreign residence requirement for certain J-1 nonimmigrants.
- Consider implementing a domestic visa renewal program for highly skilled talent in emerging technologies, as well as a program to identify and attract top talent at universities, research institutions and in the private sector overseas.
- 365 Days: Publish an AI in Global Development Playbook that incorporates the AI Risk Management Framework’s guidelines, and develop a Global AI Research Agenda.
U.S. Patent and Trademark Office (USPTO)
- 120 Days: Publish guidance for both patent examiners and applicants on how to address the use of AI.
- 270 Days:
- Issue additional guidance to patent examiners and applicants to address other issues at the intersection of AI and IP.
- Consult with the U.S. Copyright Office and issue recommendations to the President on potential executive actions at the intersection of copyright and AI.
U.S. Copyright Office
- 270 Days (or 180 days after the Copyright Office publishes its AI study, whichever is later): Recommend additional executive actions the White House can take to address issues related to copyright protections for AI-generated work and the use of copyrighted work to train AI algorithms.
U.S. Department of Labor (DOL)
- 45 Days: Solicit information from the private sector on where immigrants with advanced skills in science and technology are most needed.
- 180 Days:
- Publish best practices that employers can use to mitigate AI’s potential harms to employees’ well-being. Agency heads must consider encouraging the adoption of such best practices.
- Submit to the President a report analyzing the abilities of federal agencies to support workers displaced by the adoption of AI.
- 365 Days: Publish guidance for federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems.
U.S. Office of Personnel Management (OPM)
- 60 Days: Conduct an evidence-based review on the need for hiring and workplace flexibility.
- 90 Days: Coordinate a pooled-hiring action informed by subject-matter experts and using skills-based assessments to support the recruitment of AI talent across agencies.
- 120 Days: Issue guidance on existing pay flexibilities or incentive pay programs for AI, AI-enabling, and other key technical positions.
- 180 Days:
- Develop guidelines on the use of generative AI by the federal workforce.
- Establish guidance and policy on government-wide hiring of AI, data and technology talent.
- Establish an interagency working group to facilitate hiring of people with AI and other technical skills.
- Review competencies for civil engineers and other related positions and make recommendations for ensuring AI expertise and credentials in such positions.
- 365 Days: Implement new Executive Core Qualifications (ECQs) in the Senior Executive Service (SES) assessment process.
Executive Office of the President
- 45 Days:
- The Office of Science and Technology Policy (OSTP) and OMB are directed to identify priority mission areas for increased government AI talent.
- The Assistant to the President and Deputy Chief of Staff for Policy must convene an AI and Technology Talent Task Force to further the hiring of AI and AI-enabling talent across the federal government.
- 60 Days: OMB is directed to convene and chair an interagency council to coordinate the development and use of AI in agencies’ programs and operations. The council must include key agency heads, the Director of National Intelligence and representatives of other agencies as identified by the chair.
- 150 Days: OMB is directed to issue guidance to agencies to strengthen the effective and appropriate use of AI.
- Within 60 days of the issuance of the guidance, OMB must develop a mechanism for federal agencies to assess their ability to adopt AI into their programs.
- Within 90 days of the issuance of the guidance, Commerce must develop practices to support implementation of the minimum risk-management practices outlined in the guidance.
- Within 180 days of the issuance of the guidance, OMB must develop a mechanism to ensure that agency contracts for the acquisition of AI systems align with the guidance.
- 180 Days:
- OSTP is directed to establish a framework, incorporating existing guidance, to encourage providers of synthetic nucleic acid sequences to implement comprehensive procurement screening mechanisms.
- The President’s Council of Advisors on Science and Technology (PCAST) is directed to publish a report on the potential role of AI and issues that may hinder the effective use of the technology in research.
- 270 Days: Oversee an interagency process to develop and submit to the President a National Security Memorandum on AI, which will, among other things, outline actions for federal agencies to address the national security risks and potential benefits posed by AI.
U.S. Department of Justice (DOJ)
- 90 Days: The Civil Rights Division must convene the heads of federal civil rights offices to discuss their efforts to prevent discrimination in the use of automated systems and increase collaboration.
- 180 Days: The interagency working group must identify and share best practices for recruiting and hiring law enforcement professionals with relevant technical skills, and for training law enforcement professionals on the responsible application of AI.
- 270 Days: The Attorney General must consider such best practices and, if needed, develop additional recommendations for State, local, Tribal and territorial law enforcement agencies and criminal justice agencies.
- 365 Days:
- Submit to the President a report that addresses the use of AI in the criminal justice system.
- If needed, reassess the existing capacity to investigate law enforcement deprivation of rights resulting from the use of AI.
U.S. Department of Health and Human Services (HHS)
- The EO directs HHS to broadly identify and prioritize grantmaking and other awards to support responsible AI development and use.
- 90 Days: Create an HHS AI Task Force that must, within 365 days of its creation, develop a strategic plan on the responsible deployment and use of AI and AI-enabled technologies, including with respect to generative AI.
- 180 Days:
- Publish a plan addressing the use of automated or algorithmic systems by States and localities in implementing public benefits and services.
- Direct HHS offices to develop a strategy to determine whether AI-enabled technologies in the healthcare space maintain appropriate levels of quality.
- Consider ways to advance compliance with federal nondiscrimination laws by health and human service providers that receive federal funding.
- 365 Days:
- Establish an AI safety program that, in partnership with voluntary federally listed Patient Safety Organizations, creates a framework for approaches to identifying and capturing clinical errors resulting from AI deployed in healthcare settings, among other things.
- Develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes.
Federal Trade Commission (FTC)
- Consider exercising its rulemaking authority to help ensure fair competition in the AI marketplace and protect consumers.
Federal Communications Commission (FCC)
- Examine how AI may improve telecom network resiliency and spectrum efficiency and aid the federal government’s fight against unwanted robocalls and robotexts.
National Science Foundation (NSF)
- 90 Days: Launch a pilot program implementing the National AI Research Resource (NAIRR), consistent with past recommendations of the NAIRR Task Force. NSF must identify computational, data, software and training resources appropriate for inclusion in the NAIRR pilot.
- 120 Days: Fund the creation of a Research Coordination Network (RCN) to further privacy research.
- 150 Days: Fund and launch at least one NSF Regional Innovation Engine that prioritizes AI work.
- 240 Days: Coordinate with federal agencies on potential opportunities to leverage PETs.
- 540 Days: Establish at least four new National AI Research Institutes.
General Services Administration (GSA)
- 30 Days: The Technology Modernization Board must consider prioritizing Technology Modernization Fund (TMF) funding for AI projects, particularly generative AI, for at least one year.
- 90 Days: Develop and issue a framework for prioritizing critical and emerging technologies offerings in the Federal Risk and Authorization Management Program (FedRAMP) authorization process, which would apply for at least two years.
- 180 Days: Work to facilitate access to acquisition solutions for certain types of AI services and products.
U.S. Department of Education
- 365 Days: Develop guidance on ensuring responsible and nondiscriminatory uses of AI in education.
U.S. Department of Housing and Urban Development (HUD)
- 180 Days: Alongside the Consumer Financial Protection Bureau (CFPB), issue guidance on how fair lending and fair housing laws apply to prevent discrimination by AI in digital advertisements for credit and housing, as well as guidance on the use of tenant-screening systems.
U.S. Department of Transportation (DOT)
- 30 Days: Direct the Nontraditional and Emerging Transportation Technology (NETT) Council to examine the need for guidance regarding the use of AI in transportation.
- 90 Days: Direct appropriate Federal Advisory Committees to provide recommendations on the safe use of AI in transportation.
- 180 Days: Direct the Advanced Research Projects Agency-Infrastructure (ARPA-I) to examine the challenges and opportunities of AI and prioritize the allocation of grants to those opportunities.
U.S. Department of Agriculture (USDA)
- 180 Days: Issue guidance to State, local, Tribal and territorial public-benefits administrators on the use of automated or algorithmic systems in implementing benefits or providing customer support for such programs.
U.S. Department of Veterans Affairs (VA)
- 365 Days: Host two three-month nationwide AI Tech Sprint competitions.
U.S. Small Business Administration (SBA)
- Work to support small businesses in innovating and commercializing AI, as well as in responsibly adopting and deploying it, including by assessing the extent to which the eligibility criteria of existing programs cover small businesses’ expenses related to the adoption of AI.
Conclusion
The Akin cross-practice AI team continues to advise clients on navigating the evolving AI regulatory landscape. The team will closely track implementation of the EO’s directives and the resulting opportunities for industry engagement, as well as parallel congressional efforts to regulate AI, and will keep clients apprised of key developments.