After considerable anticipation and speculation, President Biden recently issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
While the EO focuses primarily on federal agency directives, its impact on the private sector will be profound. Forthcoming requirements for federal contractors, along with standards and guidance for employers, landlords, health care providers, the criminal justice system, and education, will ensure its effects are felt throughout American society and across economic sectors.
The EO builds on prior, less formal efforts to build protections into AI (including voluntary private-industry commitments), but it is the federal government’s first major step in regulating this new, shifting, and headline-grabbing frontier. To further its stated purpose of promoting safe, responsible development and use of AI—and recognizing the “extraordinary potential for both promise and peril” that this technology brings—the EO charges myriad federal agencies with producing guidance (including to the private sector) and taking other protective actions. EOs are published directives to the federal government; although they are not legislation, they have the force of law. A Fact Sheet summarizes the extensive EO.
The American public may know AI as an amorphous “something” that—while created by people and empowered to perform such person-like tasks as generating playlists, writing fan fiction, and making life-changing decisions about employment, finances, and access to health care—is not human. In different hands, AI can be a toy or a tool, a colleague or a boss, a doctor or a personal shopper. Although AI is a growing facet of society’s experience, it is fair to say we are only beginning to appreciate the range of its potential risks and benefits. As the EO makes clear, central to regulating AI is understanding (or working to understand) its scope, depth, and breadth.
The EO broadly defines AI to include any “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” Recognizing that ultimately “AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built,” the EO directs certain actions to promote safe, secure, and trustworthy AI development, and it sets forth eight overarching principles to guide these actions.
The guiding principles
Per the EO, the following principles should, as appropriate and consistent with law, guide the use and development of AI:
- Safety and security
- Responsible innovation, competition, and collaboration
- Commitment to the American workforce
- Dedication to advancing equity and civil rights
- Protecting Americans who integrate AI into their daily lives
- Safeguarding privacy and civil liberties
- Managing risks from the Federal government’s own use of AI
- Federal leadership in global social, economic, and technical progress, recognizing the “disruptive innovation and change” that AI brings
Actions in accordance with the principles
The EO directs federal agencies to promote safe and secure AI and AI systems according to these principles, and, where feasible, to account for the views of stakeholders such as industry, academia, labor unions, civil society, and international partners:
1. Creating and deploying AI safety and security standards.
“[R]obust, reliable, repeatable, and standardized evaluations of AI systems,” together with mechanisms to test, understand, and mitigate risk, are foundational to balancing AI-related risks and benefits. To promote safe and secure AI and AI systems:
- Certain developers will be required to share critical information—including the results of safety testing—with the federal government.
- Designated executive agencies will set standards for pre-release red-team testing, apply them to critical infrastructure sectors, assess and address AI-related threats to critical infrastructure, protect against AI-enabled fraud and deception by developing guidance for content authentication and watermarking, and establish an AI Safety and Security Board.
- Agencies funding life science projects will establish standards for biological synthesis screening, to safeguard against dangerous AI-engineered biologics.
2. Protecting personal privacy.
The EO calls on the Director of the Office of Management and Budget (OMB), in collaboration with the Federal Privacy Council and the Interagency Council on Statistical Policy, to develop privacy-protective techniques and technology. It also directs the evaluation of federal agency standards and procedures for procuring, processing, storing, and disseminating commercially available personally identifiable information.
Recognizing the potential for AI to extract, identify, and exploit personal data, the EO calls for:
- Bipartisan federal privacy legislation.
- Federal support for developing and implementing AI-related privacy tools.
- Robust privacy research and technology.
- Evaluation of federal agencies’ collection and use of commercially available data (and strengthening privacy guidance to address AI risk).
- Standards for federal agencies to evaluate privacy protection efforts.
3. Advancing equity and civil rights.
The EO directs federal agencies to combat “algorithmic discrimination” in housing, banking, employment, criminal justice, and more. The Department of Justice (DOJ) and federal civil rights offices are encouraged to exercise their existing authority to combat discrimination and bias. To promote responsible use of AI for equity and civil rights, the EO directs:
- Anti-discrimination guidance for landlords, federal contractors, and federal benefit programs.
- Technical assistance, training, and DOJ/civil rights coordination to address algorithmic discrimination.
- Best practices for AI use in the criminal justice system—for example, in surveillance, forensics, and sentencing.
4. Protection for patients, students, and consumers.
The EO directs the Department of Health and Human Services to conduct a study and issue recommendations on implementing AI in the health and human services sector. Other federal agencies, such as the Federal Trade Commission and the Consumer Financial Protection Bureau, are called upon to enact safeguards against AI-related harms such as discrimination, privacy violations, fraud, bias, and safety risks, with an emphasis on critical sectors (for example, education, housing, financial services, law, transportation, and health care).
To prevent AI-enabled consumer fraud and exploitation while promoting high-quality, economical, and widely available goods and services, the EO calls for:
- AI-enabled educational tools.
- Responsible use of AI in health care and drug development.
5. Workplace safeguards.
The EO reflects the long-standing concern that new technology can displace workers, treating displacement as a key policy issue and encouraging risk mitigation. To advance productivity while protecting the American workforce from bias, inappropriate surveillance, and job displacement, the EO directs:
- Reporting and study, at the federal level, of potential AI-related labor impacts and disruptions.
- Guidance for employers to promote responsible use of AI in workplace health, safety, equity, labor standards, and data collection.
6. Bolstering innovation and competition.
The EO strives to advance the US as a global AI leader through:
- Resources, data, and grants from the National AI Research Resource.
- Technical assistance and resources for small developers and entrepreneurs.
- Easing the path for an inflow of AI talent (“study, stay, and work”).
- Clarifying issues at the intersection of AI and intellectual property, including inventorship, patent eligibility, copyright protection for AI-generated works, and treatment of copyrighted works in AI training.
7. Advancing American leadership.
The EO seeks to capitalize on the global challenges and opportunities in AI by engaging with global partners in:
- Collaborative efforts.
- Developing and deploying standards.
- Promoting safe and responsible use.
8. Ensuring responsible and effective government use.
A theme throughout the EO is the risk attendant to irresponsible development and use of AI. To ensure that the federal government itself employs AI in service to its constituents, the EO directs:
- Clear guidance for federal agencies.
- Fast and efficient contracting to speed AI-related products and services into agencies’ hands.
- Accelerated acquisition of AI talent.
Determining the extent to which the EO affects an organization will require careful assessment. An organization may need to evaluate its third-party vendors’ use of AI along with its own. To help shape the resulting regulation, organizations should closely monitor activity from the various agencies tasked with action items. The OMB is one such agency: it has already released a draft policy on the federal government’s use of AI, focused on strengthening AI governance, advancing responsible AI innovation, and managing risks through the adoption of mandatory safeguards. This process will be the first of many comment opportunities on the U.S. approach to governing AI systems; interested parties have until December 5, 2023, to submit comments to the agency.