What Is the AI Executive Order (EO 14110)? 

AI Executive Order, also known as EO 14110, was issued by the U.S. government on October 30, 2023. It outlines a comprehensive framework for the development, deployment, and governance of artificial intelligence (AI) technologies within federal agencies and across the nation. It aims to promote AI innovation while ensuring that AI systems are developed and used in a manner that is ethical, secure, and respects privacy and civil rights.

This executive order establishes guidelines for federal agencies to follow in their use of AI technologies. It emphasizes the importance of transparency, accountability, and public engagement in the deployment of AI solutions. The order also directs agencies to consider the impact of AI on workers and to strive for advancements that benefit all Americans, ensuring equitable outcomes across different segments of society. 

You can see the full text of Executive Order 14110 here.

In this article:

The Motivation Behind the AI Executive Order 

The motivation behind the AI Executive Order 14110 stems from a need to balance rapid technological innovation with ethical considerations and public welfare. As AI technologies become increasingly integral to national security, economic growth, and societal functions, there is a need for a structured approach to govern their development and use. This ensures that advancements in AI contribute positively to society without compromising individual rights or safety.

Furthermore, the executive order acknowledges the competitive global landscape of AI technology. It aims to position the United States as a leader in ethical AI development and usage, setting a global standard for responsible innovation. By doing so, it seeks not only to protect American interests but also to encourage international cooperation in managing the challenges and opportunities presented by AI technologies. 

Understanding the Sections of EO 14110 

Section 1: Purpose 

The Executive Order on AI, signed by President Biden, underscores the transformative potential of artificial intelligence, but also highlights the risks associated with AI, such as its potential to amplify societal issues like fraud, bias, and disinformation. The order emphasizes the need for a collaborative approach involving various sectors to leverage AI’s benefits while minimizing its risks.

Section 2: Policy and Principles 

Section 2 of EO 14110 sets forth eight guiding principles that shape the policy framework for AI governance and development:

  1. Artificial intelligence must be safe and secure: This principle emphasizes robust and standardized evaluations of AI systems, policies to test and mitigate risks, and mechanisms to label AI-generated content. The goal is to ensure AI systems are reliable, secure, and compliant with federal laws and minimize risks.
  2. Promoting responsible innovation, competition, and collaboration: The U.S. aims to lead in AI by investing in education, training, research, and capacity-building. This principle supports a competitive AI ecosystem, encourages fair market practices, and addresses IP challenges to protect inventors and creators.
  3. Supporting American workers: AI should create new job opportunities and improve job quality without undermining workers’ rights or causing harmful labor disruptions. The administration seeks to adapt education and training to support a diverse workforce and ensure AI benefits all workers.
  4. Advancing equity and civil rights: AI must not perpetuate discrimination or bias. This principle builds on existing frameworks to ensure AI promotes equity, civil rights, and justice, holding developers accountable to prevent unlawful discrimination and abuse.
  5. Protecting consumer interests: AI applications must comply with consumer protection laws, safeguarding against fraud, bias, discrimination, and privacy infringements. This principle is crucial in fields like healthcare, financial services, education, housing, and transportation.
  6. Protecting privacy and civil liberties: AI’s ability to extract and act on personal data necessitates robust privacy measures. The government will ensure lawful, secure, and privacy-respecting data handling practices.
  7. Government AI use: Enhance federal AI capabilities by attracting and training AI professionals, modernizing IT infrastructure, and ensuring safe, rights-respecting AI adoption.
  8. Global leadership: The U.S. will engage internationally to develop shared AI standards and promote responsible AI practices globally, aiming to balance innovation with ethical considerations.

Section 3: Definitions 

The definitions section of Executive Order 14110 aims to establish a shared language and framework for implementing and complying with the directives of EO 14110. They ensure that stakeholders have a common understanding of the terms critical to AI governance and innovation. The primary definitions include:

  • Artificial Intelligence (AI): AI is defined as a machine-based system that can make predictions, recommendations, or decisions for given human-defined objectives, influencing real or virtual environments. These systems use both machine and human inputs to perceive environments, create models through automated analysis, and use these models to formulate options for information or actions.
  • AI Model: An AI model refers to a component of an information system that uses computational, statistical, or machine-learning techniques to produce outputs from given inputs.
  • AI System: An AI system encompasses any data system, software, hardware, application, tool, or utility that operates partially or wholly using AI.
  • AI Red-Teaming: AI red-teaming involves structured testing efforts to identify flaws and vulnerabilities in AI systems. This is often done in controlled environments and involves collaboration with AI developers. The goal is to find harmful or discriminatory outputs, unforeseen behaviors, or potential misuse risks.
  • Generative AI: Generative AI models emulate the structure and characteristics of input data to generate derived synthetic content, such as images, videos, audio, text, and other digital content.
  • Dual-Use Foundation Model: This refers to an AI model trained on broad data, generally using self-supervision, and containing at least tens of billions of parameters, which exhibits high performance in tasks that pose serious risks to security, national economic security, or public health and safety. Examples include models that lower the barrier for non-experts to design or use CBRN weapons, enable offensive cyber operations, or evade human control.
  • Synthetic content: Images, videos, audio, text, or other information that was significantly modified or generated by any type of algorithm, including AI.

Section 4: Ensuring the Safety and Security of AI Technology 

Section 4 of the executive order mandates the formulation of guidelines, standards, and best practices aimed at enhancing AI’s reliability and safety across multiple domains. Among its key directives, it requires cooperation among various government departments to establish secure development practices for AI technologies, particularly those with potential “dual-use applications”—in other words, AI systems that could be used both for good and harm—and could have an implication on national security and critical infrastructure.

The directive emphasizes development of robust testing environments, or “testbeds,” for AI technologies. This is the primary approach for preemptively identifying and mitigating risks associated with AI deployment. Furthermore, this section introduces a requirement for regular reporting by entities involved in developing or deploying significant AI models or computing resources. Such measures aim to ensure transparency and accountability in the development process, aiming to safeguard against misuse or unintended consequences.

Section 5: Promoting Innovation and Competition 

Section 5 emphasizes the U.S. government’s commitment to enhancing the nation’s competitiveness in the global AI landscape. It outlines strategies to attract and retain AI talent by streamlining visa processes and modernizing immigration pathways for professionals in the AI field.

This section also advocates for fostering an ecosystem of innovation and competition within the AI sector. It calls for the implementation of development programs and guidance on managing intellectual property risks, aiming to stimulate creativity while safeguarding inventors’ rights. By addressing anti-competitive practices and bolstering support for small businesses, it seeks to create an environment where innovation can flourish and contribute to economic growth. 

Section 6: Supporting Workers 

In response to the evolving AI landscape, Section 6 of the executive order focuses on supporting American workers as AI technologies integrate into various sectors. It mandates a comprehensive report by the chair of the Council of Economic Advisers to assess AI’s impact on the labor market. This initiative aims to understand how AI may transform job opportunities, necessitating adjustments in workforce development and education to prepare for future demands.

Additionally, this section tasks the Secretary of Labor with evaluating and enhancing federal support for workers potentially displaced by AI advancements. This includes examining existing programs and proposing new legislative solutions to support workers at risk. The secretary is also directed to develop guidelines for employers on deploying AI in ways that enhance job quality and ensure fair labor practices.

Section 7: Advancing Equity and Civil Rights 

Section 7 of the Executive Order outlines measures for evaluating and mitigating potential biases and discrimination facilitated by AI technologies. Agencies are instructed to scrutinize how AI applications might inadvertently perpetuate discrimination or hurt civil rights, particularly in critical areas such as federal program administration and benefits delivery. 

This section also emphasizes the importance of integrating civil rights considerations into the development and deployment phases of AI systems. It mandates comprehensive reports on potential civil rights abuses and the need for government guidance on preventing discrimination, with a “dedication to rectifying historical injustices” in every sphere where AI is applied. 

Section 8: Protecting Consumers, Patients, Passengers, and Students 

Section 8 of the Executive Order outlines a multi-faceted approach focusing on consumers, patients, passengers, and students. It empowers regulatory agencies to enforce consumer protection laws and ensure AI applications do not undermine financial stability or privacy of individuals.

In the healthcare domain, the order mandates the Department of Health and Human Services to develop comprehensive strategies for AI oversight. This includes ensuring equitable access to AI-enhanced healthcare services while maintaining strict privacy standards. For transportation and education, it calls for assessing AI’s regulatory needs and creating resources that promote responsible use of AI technologies.

Section 9: Protecting Privacy

Section 9 of the Executive Order focuses on enhancing privacy protections for AI systems. It outlines specific directives to mitigate privacy risks, in light of AI’s capacity for personal data collection and analysis. The section mandates the Director of the Office of Management and Budget (OMB) to identify and evaluate how federal agencies handle commercially available information, particularly personal data obtained from data brokers.

Furthermore, this section tasks the Secretary of Commerce, through the Director of the National Institute of Standards and Technology (NIST), with developing guidelines for evaluating privacy-enhancing technologies (PETs), including differential privacy. The goal is to advance research and implementation of PETs across federal agencies. 

Section 10: Advancing Federal Government Use of AI 

To enhance the federal government’s use of AI, Section 10 establishes an interagency AI Council, which aims to unify efforts across federal agencies, excluding national security systems. Agencies are required to appoint chief AI officers responsible for overseeing AI initiatives, ensuring compliance with risk management practices, and fostering public engagement.

Furthermore, this section outlines a framework for managing AI technologies within federal operations. It includes developing a system for tracking AI adoption across agencies and creating tools to support risk management in line with established guidelines. The OMB director is tasked with issuing annual reporting instructions for agency AI applications to promote transparency and accountability. Additionally, the General Services Administration is directed to facilitate access to commercial AI capabilities for federal agencies.

Section 11: Strengthening American Leadership Abroad 

Section 11 of the Executive Order emphasizes a proactive approach to international engagement and standard-setting in AI. It outlines a strategy for enhancing collaboration with allies and partners, fostering global consensus on responsible AI development and deployment practices. 

The Secretary of State, along with national security and economic advisers and the director of the Office of Science and Technology Policy (OSTP), is tasked with leading these efforts outside military and intelligence applications. Their goal is to cultivate international partnerships that promote shared principles for AI governance, aiming to establish a unified framework that balances innovation with ethical considerations.

Section 12: Implementation 

Section 12 establishes the White House Artificial Intelligence Council to coordinate and oversee the implementation of policies related to artificial intelligence as outlined in the executive order. This council is tasked with ensuring that federal agencies effectively develop and apply AI technologies in accordance with the order’s guidelines.

The creation of this council signifies a commitment to a structured, whole-of-government approach to AI governance. By facilitating collaboration among various departments and agencies, the council aims to harmonize efforts, share best practices, and address challenges in a unified manner.

Section 13: General Provisions 

Section 13 of the Executive Order on AI serves as a clarificatory provision, setting the boundaries for the order’s application and authority. It states that nothing in the order overrides or conflicts with pre-existing mandates. It also emphasizes that the executive order depends on available appropriations, indicating that its directives are subject to budgetary constraints and allocations. 

Lastly, the section clarifies that the order does not create any right or benefit, substantive or procedural by any party against the United States (in other words, the EO cannot be grounds for a lawsuit against the US government).

Advantages and Limitations of the AI Executive Order 

The AI Executive Order introduces several important measures to regulate artificial intelligence, promoting safety, security, and ethical practices while encouraging innovation and public trust. However, it also has limitations. Here are some notable advantages and limitations raised by experts and political commentators.

Advantages of the Executive Order

  • Establishment of new standards for AI safety and security. This includes the requirement for companies to conduct red-teaming exercises—where independent experts identify vulnerabilities in AI systems—and share the results with the government before public release.
  • Emphasizes the importance of privacy protection, advocating for comprehensive strategies to ensure equitable access to AI-enhanced services in healthcare, transportation, and education. 
  • Integrates guidelines for watermarking AI-generated content, which can help combat the spread of deep fakes and disinformation, fostering a safer digital environment.

Limitations of the Executive Order

  • Lacks robust enforcement mechanisms for the standards it sets. While it mandates the sharing of red-teaming results, it does not specify what actions the government can take if an AI model is deemed dangerous.
  • Guidelines for watermarking AI-generated content are not enforceable, making it difficult to prevent the dissemination of unmarked deep fakes. The Executive Order was enacted several months before a US general election, and there are serious concerns that misinformation can impact public opinion and democratic processes.
  • Lack of regulation over high-performance graphics processing units (GPUs), which are essential for AI development. Without controlling access to these powerful hardware components, efforts to ensure the safe and responsible development of AI are incomplete.
The Cloud Native Experts
"The Cloud Native Experts" at Aqua Security specialize in cloud technology and cybersecurity. They focus on advancing cloud-native applications, offering insights into containers, Kubernetes, and cloud infrastructure. Their work revolves around enhancing security in cloud environments and developing solutions to new challenges.