For critical infrastructure markets such as the industrial, communications, transportation, education, and financial sectors, what AI is and is not, what it is used for, and how it is risk-classified have great bearing on the degree of regulatory burden that may be imposed. On March 13, 2024, the European Parliament passed the Artificial Intelligence Act (“AI Act”).[1] The AI Act will likely reach many US companies that currently conduct business in EU markets, whether by virtue of market presence or by doing business with EU citizens. Considerable focus has been placed on the AI Act’s risk-based regulatory regime and associated classifications. Yet another important issue has not received as much attention: the AI Act’s definition of an “AI system,” despite its being inextricably linked to the AI Act’s classification system. Though the United States has not yet adopted a national AI regulatory scheme like the AI Act, one will likely arise, and in the interim various state laws will be enacted that draw on the principles of the EU AI Act. Thus, the AI Act can be instructive to those in sectors dealing with critical infrastructure in preparing for eventual regulation.
EU AI Act Background. The AI Act lays down a comprehensive, risk-management-based regulatory regime intended to safeguard EU citizens from the potential adverse effects of AI. Under the AI Act, artificial intelligence systems are governed by a risk classification system, with risks categorized as “unacceptable risk,” “high risk,” “limited risk,” or “minimal risk.” The classification system is constructed around a mixture of types of use within sectors in some cases and generally identified product categories in others. Many commentators argue that most AI would fall into the minimal or limited risk categories, which carry minimal regulatory impositions. This may not be an accurate assessment. A relatively broad swath of potential market applications could be characterized as AI systems and could fall into a high-risk category, inviting extensive regulatory oversight absent an exemption.
High-Risk Classification. Because critical infrastructure systems are so fundamental to society, the use of AI in these systems naturally invites enhanced scrutiny. In Article 6, the AI Act largely, but not exclusively, defines what constitutes a high-risk AI system. Article 6(1) covers AI systems that serve a safety purpose, either as standalone products or as components of other products, that are regulated under the EU’s harmonization laws. Article 6(2), on the other hand, takes a more generalized approach, designating the types of uses and sectors listed in Annex III. Generally speaking, high-risk AI systems include (a) uses in education, banking and finance, public and private services, and critical infrastructure, and (b) goods subject to the EU’s product safety laws, including vehicles, toys, medical devices, aviation, elevators, watercraft, and farming equipment. With respect to critical infrastructure, the term is generally accepted to mean support systems, functions, and services that are essential to sustaining and maintaining a civil society. Within the United States, critical infrastructure and key resources (CIKR) are defined by 16 key economic and governmental sectors.[2] Similarly, the EU takes an expansive, functional, sector-based approach to identifying critical infrastructure, which includes utilities and energy, water, transportation, food, waste water, public services administration, communications and digital infrastructure, and space under Directive (EU) 2022/2557 and Regulation (EU) 2023/2450 (together, the “EU Critical Infrastructure Laws”).
From a harmonization perspective, the AI Act adopts, in Article 3(62), the same critical infrastructure definition as the EU Critical Infrastructure Laws. However, the AI Act begins to fragment its treatment of critical infrastructure in its approach to AI risk classification. Annex III(2) of the AI Act states that high-risk AI systems are:
“AI systems intended to be used as safety components in the management and operation of critical digital infrastructure (emphasis added), road traffic, or in the supply of water, gas, heating or electricity.”
Curiously, this is a subset of the EU’s critical infrastructure sectors, and a new, undefined term, “critical digital infrastructure,” is introduced. Recital 55 of the AI Act provides guidance, stating that the term is intended to mean management and operational systems within the critical infrastructure sectors and subsectors delineated in Annex I to Directive (EU) 2022/2557. Presumptively, though not expressly stated in the AI Act, any AI system that serves any operational or management control function would be classified as high-risk, whether or not it has an intended safety function. Likewise, any safety system used in critical infrastructure would be a covered high-risk system, though such a system would by necessity be part of an operational system.
Critical infrastructure is further addressed, in a roundabout way, in Annex III(5) of the AI Act. High-risk AI systems under Point 5 include systems that control “[a]ccess to and enjoyment of essential private services and essential public services and benefits.” Thus, systems such as account management, billing, credit, and provisioning systems used within critical infrastructure service sectors would be covered as well.
Finally, as noted generally above, other sectors that fall within the EU Critical Infrastructure Laws’ definition may be assigned a high-risk classification under Article 6(1) by reference to and incorporation of the list of Union harmonization legislation in Annex I, which is not limited to use in safety systems. Consequently, there are a number of overlapping and complementary provisions that may implicate a wide range of systems used in the operation, management, safety, provision of, and access to critical infrastructure functions, and may involve other systems and products of a categorical nature.
Taking a Step Back – Smart or Artificially Intelligent? Though exposure to high-risk AI system classification should be a key focus for critical infrastructure operators, there is a more fundamental question: what is AI? The AI Act’s “AI system” definition, coupled with other ambiguities, could result in a legacy smart system being caught in the AI Act’s wide definitional net. Critical infrastructure systems are complex and leverage advanced technologies to operate, control, and monitor critical functions. These technologies include real-time or near-real-time sensing technologies leveraging network connectivity, and they often employ predictive models to make decisions and take actions ranging from notification to executing system functions. These systems are found in everything from trading systems to rail systems and have existed for decades. Though “smart,” the question is whether they do, or should, constitute AI systems.
The AI Act broadly defines an “AI system” in Article 3(1) as:
“…a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
This definition deviates from the EU Commission’s (“Commission”) originally proposed approach and instead follows an older formulation,[3] adopting in substantive form the OECD’s 2019 definition.[4] This is indicative of the ongoing difficulty in establishing a consensus around what artificial intelligence is and is not.[5] For instance, the AI Act’s definition includes adaptiveness as an AI system trait but stops short of making it a necessary trait (i.e., a system may or may not exhibit it). Instead, the definition reduces to: (i) a machine-based system, (ii) designed with autonomy, (iii) that pursues some kind of objective, (iv) based upon inputs received, (v) that can influence any environment. Notably, this five-part test arguably captures systems that conventionally may or may not be considered artificial intelligence. The only readily distinguishing “AI-like” trait in the definition is “autonomy.” But this does little to clarify the issue. In its plain English meaning, the Oxford English Dictionary defines “autonomy” as, “with reference to a thing: the fact or quality of being unrelated to anything else, self-containedness; independence from external influence or control, self-sufficiency.”[6] An alternative definition of the word employs the notion of irreducibility, of a system having its own laws or methods.[7] In any case, the ordinary meaning of “autonomy” does little to inform what it means in relation to a machine and, as a consequence, contributes little to ferreting out what an AI system is.
What can be drawn from the definition is some level of independence from external influence or control, and this becomes the focal point of distinction. Many conventional software-controlled systems utilize statistical algorithms and methods such as regression analysis, Monte Carlo simulation, and factor analysis to interpret data inputs, draw inferences, and take actions without human intervention. This type of software is found in everything from factory production-line robots to comprehensive system management platforms that make near-real-time decisions in areas such as energy production and transmission switching, financial trading, and logistics. Presumably, though, a self-executing system that is controlled or constrained by Boolean logic, even if it uses various statistically derived functional inputs, would not be considered autonomous because it is the product of outside control or influence.
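To make that distinction concrete, the following is a minimal, hypothetical sketch; the scenario, function names, and thresholds are illustrative assumptions, not drawn from the AI Act or any actual control system. It shows a self-executing controller whose input is a statistically derived forecast but whose every possible action is prescribed in advance by human-authored Boolean rules:

```python
# Hypothetical sketch: a self-executing controller constrained by human-authored Boolean rules.
# The regression-based forecast is a statistically derived input, but every possible action
# is prescribed in advance by the rule logic below; the system cannot act outside it.
import numpy as np

def forecast_load(history_mw: list[float]) -> float:
    """Simple linear-regression (least-squares trend) forecast of the next reading."""
    t = np.arange(len(history_mw))
    slope, intercept = np.polyfit(t, history_mw, 1)
    return slope * len(history_mw) + intercept

def grid_action(history_mw: list[float], capacity_mw: float) -> str:
    predicted = forecast_load(history_mw)
    # Human-defined Boolean constraints: the only reachable outcomes are the three below.
    if predicted > 0.95 * capacity_mw:
        return "shed_load"
    elif predicted > 0.80 * capacity_mw:
        return "dispatch_reserve"
    return "normal_operation"

print(grid_action([710, 733, 762, 790, 815], capacity_mw=900))  # -> "dispatch_reserve"
```

On the reading suggested above, such a system arguably lacks the independence from external control that the “autonomy” trait implies, because its decision space is fixed a priori by its programmers.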
One Point of Clarity Arbitrarily Drawn. Where the AI Act begins to draw a quantitative line is in Chapter V, which addresses general-purpose AI models because they may be deemed to present systemic risks. Article 3(63) defines a “general-purpose AI model” as:
“… an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are released on the market.”
and defines “systemic risk” under Article 3(65) as:
“a risk that is specific to the high-impact capabilities of general purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”
Under Article 51(1)(a) and (b), a general-purpose AI model presents systemic risk if it has high-impact capabilities, evaluated on the basis of appropriate technical tools, methodologies, and benchmarks, or if the Commission so determines in accordance with the criteria in Annex XIII. Article 51(2) goes further, presuming that a general-purpose model trained using cumulative computation greater than 10²⁵ floating-point operations (FLOPs) has high-impact capabilities under Article 51(1)(a). Presumably, this is aimed at newer large language models (LLMs) like GPT-4, though it is reported that the Commission settled on the 10²⁵ FLOP threshold without a clear picture of the cumulative computational level at which these models are trained and will continue to be trained.[8] Notably, modern artificial intelligence systems are vastly more complex than their predecessors but do not stray far from long-understood principles, taking advantage of computational scale.[9] What is certain is that the use of LLMs in areas such as critical infrastructure and other identified high-risk areas will be subject to significant scrutiny and regulatory oversight.
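For a rough sense of scale, a commonly used rule of thumb estimates cumulative training compute as roughly six floating-point operations per model parameter per training token. The sketch below applies that approximation to purely hypothetical parameter and token counts (they are not reported figures for GPT-4 or any other model) to show how a large model could cross the 10²⁵ threshold:

```python
# Illustrative back-of-the-envelope check against the AI Act's 10^25 FLOP presumption.
# Uses the common approximation: training compute ~ 6 x parameters x training tokens.
# The parameter and token counts below are hypothetical, not figures for any real model.

THRESHOLD_FLOPS = 1e25  # Article 51(2) cumulative training-compute presumption

def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Rough estimate of cumulative training compute (floating-point operations)."""
    return 6 * num_parameters * num_training_tokens

# Hypothetical model: 1 trillion parameters trained on 2 trillion tokens.
flops = estimated_training_flops(1e12, 2e12)  # ~1.2e25 FLOPs
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Presumed high-impact under Art. 51(2):", flops > THRESHOLD_FLOPS)
```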
The High-Risk Safe Harbor. Outside the realm of general-purpose AI, though, where ambiguity reigns in high-risk classification, the AI Act attempts to provide some relief from definitional uncertainty through Article 6(3). Point 3 removes an AI system from high-risk classification if the system “… does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making.” This is consistent with the Boolean-based distinction presumed above. Point 3 goes on to provide a test for derogation (exclusion) where a system meets any of the following conditions:
“(a) the AI system is intended to perform a narrow procedural task;
(b) the AI system is intended to improve the result of a previously completed human activity;
(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review; or
(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.”
Though the provision wraps around itself by articulating a test that effectively describes a non-AI system, it provides an articulable basis for excluding systems from a high-risk classification. This becomes a key provision for critical infrastructure system providers if they choose to demur to AI system treatment and can firmly establish that the Article 6(3) safe harbor is met. However, this path is not free from classification risk or regulation. Article 6(4) of the AI Act requires that a provider of a purported non-high-risk Annex III system both register it under Article 49(2) and document its non-high-risk assessment and make that assessment available to authorities upon request. Finally, the AI Act inserts another reservation in Article 6(5), requiring the Commission, after consulting the European Artificial Intelligence Board (AI Board), to provide guidelines within 18 months that furnish practical examples and use cases distinguishing between high-risk and non-high-risk systems. This mandate is presumably, but not exclusively, intended not only to provide guidance to the market but also to enable the Commission under Article 6(6) to amend and add to the high-risk exclusion conditions under its delegated authority under Article 97(2).
Analysis. The AI Act is unmistakably a dynamic regulatory framework intended to adapt to evolving technological risks and ongoing advancements that may be better understood in the future. For US companies that serve critical markets and may fall within the reach of the AI Act, analyzing which technologies in their portfolios may constitute “AI systems” involves determining (a) whether a system meets the AI Act’s five-part trait test, (b) whether the system’s use falls within a high-risk category, and (c) if so, whether the exclusionary conditions are satisfied. This analysis should be documented, and great attention should be paid to the technical nature and substance of the system, with an emphasis on determining the degree of freedom from constraint the system possesses.
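As one illustrative way to structure and document that three-step review (the field names and decision flow below are hypothetical scaffolding, not terms or procedures prescribed by the AI Act), the analysis can be captured as a simple, auditable checklist:

```python
# Illustrative scaffolding for documenting the three-step portfolio review described above.
# Field names and the decision flow are hypothetical; they are not prescribed by the AI Act.
from dataclasses import dataclass, field

@dataclass
class SystemAssessment:
    system_name: str
    meets_five_part_trait_test: bool   # (a) machine-based, autonomous, objective-driven, input-inferring, environment-influencing
    falls_in_high_risk_category: bool  # (b) Annex III area or Annex I harmonized product
    article_6_3_conditions_met: bool   # (c) safe-harbor conditions documented
    notes: list[str] = field(default_factory=list)

    def preliminary_classification(self) -> str:
        if not self.meets_five_part_trait_test:
            return "arguably not an AI system (document constraint analysis)"
        if not self.falls_in_high_risk_category:
            return "AI system, not high-risk on its face"
        if self.article_6_3_conditions_met:
            return "Annex III area, Article 6(3) derogation position (register and document)"
        return "high-risk AI system (full compliance obligations)"

review = SystemAssessment(
    system_name="substation load-management platform",
    meets_five_part_trait_test=True,
    falls_in_high_risk_category=True,
    article_6_3_conditions_met=True,
    notes=["decisions constrained to prescribed switching actions"],
)
print(review.preliminary_classification())
```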
As a threshold matter, a system determination would focus on indicia of decisional autonomy sufficient to rise to the level of an AI system. The AI Act hints at these indicia in Article 6(3)’s exclusionary conditions. Two of the conditions, subparagraphs (3)(b) and (c), are tethered to concepts of human input and control. Recital 12 of the AI Act appears to embrace this concept, stating that:
“[This Regulation] … should be based on key characteristics of AI systems that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.”
This suggests that a system that makes decisions based upon inputs using legacy or traditional statistical decision-making methods is not an AI system if there is an adequate level of human control in the process. Arguably, on this point, a system that engages in inference but is controlled by Boolean logic that constrains the automated system’s decisions to a prescribed and knowable set of actions would not be an AI system, because the system is not operating free of a priori programmed human logic. In an analogous way, the exclusionary condition in Article 6(3)(a) arrives at the same conclusion by referencing systems that perform a narrow procedural task. This notion of narrowness and proceduralism fits into the general framework of a conventionally programmed system.
Takeaways. US companies whose advanced systems play important functions in critical infrastructure, banking and finance, education, and other product-safety-regulated markets and that employ advanced statistical algorithms should begin the process of analyzing and documenting these systems through the lens of the AI Act as well as the EU’s harmonizing legislation for product safety, certification, and market surveillance.[10] If sufficient human logic or process is present to constrain a system’s decisions or actions that influence a real-world or virtual environment, that constraint should be carefully documented so it can be shown that the system is not an AI system and, as a secondary position, that the system is not a high-risk AI system by availing itself of the exclusionary safe-harbor conditions. Where significant market exposure exists, it may be prudent to pursue registration as a non-high-risk system, rather than taking a self-determined position that a smart system is by definition not an AI system, in order to avoid the specter of regulatory enforcement actions and exposure to significant fines.
[1] European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD)) (link: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.html).
[2] See, Presidential Policy Directive/PPD-21 – Critical Infrastructure Security and Resilience (Feb. 12, 2013)(link: https://www.cisa.gov/sites/default/files/2023-01/ppd-21-critical-infrastructure-and-resilience-508_0.pdf), and U.S. DHS National Infrastructure Protection Plan (2013)(link: https://www.cisa.gov/sites/default/files/2022-11/national-infrastructure-protection-plan-2013-508.pdf).
[3] See, European Commission, Proposal for a Regulation of The European Parliament and of The Council, Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) And Amending Certain Union Legislative Acts, COM (2021) 206 final (21.4.2021, p. 39). Earlier versions of the proposed AI Act took a more technological approach, referencing specific technologies:
“‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”
[4] Organization for Economic Cooperation and Development (OECD), Recommendation of the Council on Artificial Intelligence, Part I, p. 1 (21.5.2019, amended 2.5.2024); see also ISO/IEC 22989:2022. NIST’s Artificial Intelligence Risk Management Framework 1.0 (NIST AI RMF) also defaults to the OECD definition. NIST AI RMF at 1.
[5] Section 5002(3) of the National Artificial Intelligence Initiative Act of 2020 (Pub. L. 116–283), 15 U.S.C. § 9401(3), adopts a different definition of artificial intelligence:
“‘artificial intelligence’ means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. Artificial intelligence systems use machine and human-based inputs to— (A) perceive real and virtual environments; (B) abstract such perceptions into models through analysis in an automated manner; and (C) use model inference to formulate options for information or action.”
[6] “Autonomy, N., Sense 1.d.” Oxford English Dictionary, Oxford UP, March 2024, https://doi.org/10.1093/OED/8284327557.
[7] “The condition of a subject or discipline (e.g. biology) of having its own laws, principles, and methodology which are not simply deducible from or reducible to those of a more fundamental subject (e.g. physics).” “Autonomy, N., Sense 4.” Oxford English Dictionary, Oxford UP, March 2024, https://doi.org/10.1093/OED/6331580041.
[8] Lomas, Natasha, EU says Incoming Rules for General Purpose AIs can Evolve over Time, TechCrunch (Dec. 11, 2023)(link: https://techcrunch.com/2023/12/11/eu-ai-act-gpai-rules-evolve/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAABKouerrVGVozvN6g25oNZmn77eR-Cc-5fQB732HZLva1Dn8QrRp6oEJ7MJf4dJ_aaBIgmbol95c_oQc7bOSKrC8uPUHTcH8WZngU9jo8WS3z-kWv7v2-UhQrjdFninQoXggaek4yoATbGwqTVynQt0le50-8UuGhkSNz3phEV55), quoting an unidentified EU Commission representative: “There are no official sources that will say ChatGPT or Gemini or Chinese models are at this level of FLOPs,” the official said during the press briefing. “On the basis of the information we have and with this 10^25 that we have chosen we have chosen a number that could really capture, a little bit, the frontier models that we have. Whether this is capturing GPT-4 or Gemini or others we are not here now to assert — because also, in our framework, it is the companies that would have to come and self assess what the amount of FLOPs or the computing capacity they have used. But, of course, if you read the scientific literature, many will point to these numbers as being very much the most advanced models at the moment. We will see what the companies will assess because they’re the best placed to make this assessment.”
[9] LLMs are neural-network-based systems that use hyperdimensional vector spaces, dot products, and matrix multiplication with recursively parameterized weighting to create conceptual proximity through abstract spatial proximity. In some respects, the approach is rooted in a distant cousin in which inference is drawn from fitted data using linear regression. The development of transformers is a key advance in AI because they add encoders and decoders that essentially create sequencing of a series of data as associative representations through spatial number proximity. The more encoders and decoders, the greater the “attention,” meaning the system can keep track of distant but coupled relationships, for example in a long conversation or article.
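By way of illustration, the following is a minimal sketch of the scaled dot-product attention computation at the core of transformer models, using toy values; it shows how “conceptual proximity” is derived from dot products between token vectors:

```python
# Minimal sketch of scaled dot-product attention, the core operation in transformer LLMs.
# Shapes and values are illustrative only.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted mix of V rows; weights come from query-key similarity."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise similarity via dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # blend values by attention weights

# Three token vectors in a 4-dimensional embedding space (toy numbers).
rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((3, 4))
print(scaled_dot_product_attention(Q, K, V))
```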
For an introductory explanation of LLMs, see Lee, T. and Trott, S., A jargon-free explanation of how AI large language models work, Ars Technica (Jul. 31, 2023)(link: https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/).
[10] See, Regulation (EU) 2019/1020 of The European Parliament and of The Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011, Decision No. 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC; and Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and repealing Regulation (EEC) No 339/93.