Criminal negligence and acceptable risk in the EU’s AI Act: casting light, leaving shadows


Leonardo Romano
The problem of risk-acceptability for high-risk AI systems

Regulators and policymakers in Europe are currently grappling with a ‘post-modernity shock’, marked by the challenge of balancing competing goals and interests associated with the vast and diverse array of technologies that go under the ambiguous label of Artificial Intelligence (AI). In particular, the pressing need to ensure Europe reaps the economic and social benefits that AI can bring clashes with the necessity to establish criminal liability when self-learning algorithms, behaving in unexpected ways, cause harm to individuals and society.

When determining who is to be held liable for production activities, the applicable legal framework is generally that of negligence offences. In the AI context, this raises the broader issue of how much risk from (dangerous but socially useful) intelligent products European society is willing to accept. For that purpose, the conceptual tool that comes into play is the ‘area of permitted or acceptable risk’ (erlaubtes Risiko). This legal concept, long debated in criminal law doctrine and now receiving renewed attention in relation to AI technologies, introduces a ‘margin of tolerance’; within this margin, operators cannot be held criminally liable, based on generic negligence, for harmful events that occur despite compliance with codified precautionary norms.

Here, the problem of balancing social utility with the protection of the legal interests threatened by AI raises crucial questions about the extent of this area of acceptable risk and the identification, on a legal basis, of objective rules of diligence, with specific regard to the position of AI providers. Identifying the boundaries of this risk and defining what constitutes acceptable behaviour for AI systems are essential if AI technologies are to develop and benefit society while legal certainty is preserved and potential harms are guarded against.

We should accept the risk – but how much risk, exactly?

Although it does not have any direct effect on domestic criminal matters, the risk-based approach adopted by the recently approved EU Artificial Intelligence Act (‘AI Act’) could still exert an ‘indirect’ influence on domestic legislation by delineating acceptable risk areas through a ‘tempered’ precautionary approach.

Specifically, Article 9 imposes a requirement on providers of ‘high-risk AI systems’ to adopt ‘appropriate and targeted risk management measures’ designed to reduce ‘known and foreseeable risks’ (when they cannot be eliminated tout court) ‘as far as technically feasible’. This is intended to ensure that ‘residual risk’ – the marginal risk remaining after precautions have been put in place – is kept at a level that is ‘judged to be acceptable’.

While the AI Act grants discretion to those responsible for its implementation to determine the ‘acceptability threshold’, it does not establish clear standards or provide guidance on how to make such a complex judgment. On a narrow, literal reading, the vague ‘as far as technically feasible’ parameter seems to impose a potentially indeterminate burden of precaution on providers, who are required to implement every risk management measure that is possible as a matter of engineering, regardless of cost-benefit considerations.
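To make the structure of this judgment concrete, the following minimal sketch (purely illustrative; the function, figures and threshold variable are assumptions, not anything the Act prescribes) renders the Article 9 logic as read here: risks are reduced as far as technically feasible, and the resulting residual risk would then have to be measured against an acceptability threshold that the Act itself never fixes.

```python
# Hypothetical sketch of the Article 9 logic as read in the text above: risks are
# reduced 'as far as technically feasible', then compared with an acceptability
# threshold the Act never supplies. All names and figures are illustrative.

ACCEPTABILITY_THRESHOLD = None  # the AI Act leaves this parameter undefined


def residual_risk(initial_risk: float, mitigations: list[float]) -> float:
    """Apply every technically feasible mitigation, regardless of its cost."""
    risk = initial_risk
    for reduction in mitigations:       # each value: fraction of risk removed
        risk *= (1.0 - reduction)
    return risk


risk_after = residual_risk(initial_risk=1.0, mitigations=[0.5, 0.3, 0.1])

# The decisive comparison cannot be made: the Act asks whether the residual
# risk is 'judged to be acceptable' but offers no standard for that judgment.
is_acceptable = (
    risk_after <= ACCEPTABILITY_THRESHOLD
    if ACCEPTABILITY_THRESHOLD is not None
    else None
)
print(risk_after, is_acceptable)  # 0.315 None
```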

One thing is certain: risk management must have limits – even for high-risk products. Clarifying how much risk management is enough to ensure an acceptable level of residual risk is thus essential for the allocation of criminal liability among the humans behind the machine.

(Un)timeliness and technical (un)feasibility of AI standards

One way of judging whether residual risks have been reduced to an acceptable level might be by reference to harmonised technical standards issued by European standardisation bodies. Despite their voluntary nature, objective and measurable standards carry significant weight as a way of demonstrating compliance with the essential requirements set out in Article 9. Providers who choose to adhere to harmonised standards benefit from the associated ‘presumption of conformity’: compliance with the standards is presumed to ensure an acceptable level of residual risk.

Developing such standards, however, might pose significant challenges due to two inherent limitations: timeliness and technical feasibility. Firstly, standardisation for AI applications in Europe is still in its infancy, and there is a clear discrepancy between the rapid deployment of AI-based products and services and the slower development of corresponding standards. Even once these standards are fully developed, their review by the Commission and subsequent publication in the Official Journal can take years. Secondly, crafting effective standards for AI technologies is technically challenging due to their complexity. Standardisation works well in areas where risks are well known and there is substantial evidence from past events; less so when dealing with untested systems or those that can change behaviour autonomously. For instance, technical standards developed for a given AI technology – say, self-driving cars – imposing the use of high-quality, diverse datasets to train algorithms and test for bias may not adequately address the risks posed by a particular application in a particular context, such as the unpredictable conditions of a crowded city with complex traffic patterns.
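A minimal sketch may illustrate the gap. Assuming a purely hypothetical procedural check of the kind a harmonised standard might require (all names, fields and thresholds below are invented for illustration), a training dataset can satisfy every documented quota while the context-specific risk – behaviour in a crowded city with complex traffic patterns – is never exercised at all.

```python
# Hypothetical sketch: a dataset-level 'standard compliance' check of the kind a
# harmonised standard might require, contrasted with the context-specific risks
# it never tests. All names, fields and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class TrainingDataset:
    size: int                 # number of labelled driving scenes
    share_night: float        # fraction of scenes recorded at night
    share_pedestrian: float   # fraction of scenes containing pedestrians


def meets_procedural_standard(data: TrainingDataset) -> bool:
    """Box-ticking check: dataset size and diversity quotas only."""
    return (
        data.size >= 100_000
        and data.share_night >= 0.2
        and data.share_pedestrian >= 0.2
    )


dataset = TrainingDataset(size=250_000, share_night=0.3, share_pedestrian=0.4)

# The procedural check passes...
assert meets_procedural_standard(dataset)

# ...yet it says nothing about how the system behaves in a specific deployment
# context, such as dense, unpredictable urban traffic: that residual risk
# remains untouched by the standard itself.
```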

Ultimately, the struggle to create effective technical standards points to an uneasy reality: it is unlikely that anyone will be able to develop standards that provide sufficient confidence in the safety and trustworthiness of AI applications. While risk management systems, monitoring guidelines, and documentation requirements are essential, they may be inherently inadequate to prevent specific harmful events. In fact, these standards may merely serve to make residual risks appear ‘acceptable’, potentially giving a false sense of security and reducing the goal of trustworthy AI to a box-ticking exercise focused on compliance with general procedural standards.

Therefore, if harmful events were to occur despite adherence to industry-specific technical standards, a critical question arises: will compliance with such weak, ineffective rules of diligence – once they are introduced – be sufficient to exclude the criminal liability of the provider (specific negligence), or will the judge still need to assess whether the provider’s conduct aligns with that of a reasonable agent (generic negligence)?

The limits of the limit: from acceptable risk to the ‘reasonable algorithm’ standard

The AI Act’s reference to a risk-acceptability threshold for high-risk AI systems does not mean that operators can rely exclusively on written technical standards, nor does it imply that negligence is solely specific. In fact, the inherent limits of these standards open the door to the assessment of generic negligence in the light of the reasonable agent standard. Specifically, where the causes leading to the failure of the precautionary rule were known or recognisable by a reasonable provider, the limit of permitted risk cannot operate and the agent’s behaviour will be judged based on the homo eiusdem condicionis et professionis standard. While this standard offers a compelling template for a more nuanced approach to liability attribution, the challenge here is to adapt its distinctly human parameters of recognisability and avoidability to cases of ‘algorithmic negligence’.

The first problem concerns the object of the foreseeability judgment. The reasonable provider parameter cannot simply be modelled on the recognisability of production ‘defects’, because AI failures are usually the result of a mixture of limitations (rather than defects) of the learning process: AI systems may fully comply with legal specifications yet still fail to act within the boundaries set by their design and training. Instead, according to several scholars, a more appropriate solution might be to assess the provider’s diligent behaviour against a ‘reasonable algorithm’ standard, which refers to the diligent behaviour expected from a machina eiusdem fabricationis et condicionis. The idea of a standard of care referring directly to the artificial system does not imply that algorithms are recognised as having legal personality, but only that the assessment of algorithmic negligence becomes a precondition for the provider’s criminal liability for having produced and placed on the market an unsafe product. To date, however, legal doctrine has not attempted to address the practical question of how to establish the diligent behaviour expected from a reasonable algorithm.

A second complex issue is identifying the moment when the provider could reasonably be expected to recognise instances of algorithmic negligence. Given the relational hyper-complexity of AI systems, identifying the point in time at which the failure manifests itself (whether during information acquisition, information analysis, decision selection, or action implementation) will focus the judgment on a specific human-machine interaction, thus allowing negligent liability to be distributed along the supply chain.

In conclusion, although it does not have any direct effect on domestic criminal matters, the AI Act could still exert an ‘indirect’ influence on domestic legislation. A failure to implement the precautionary rules set out for high-risk AI systems might constitute negligent behaviour. At the same time, however, the inherent limits of technical standards in the AI domain make it necessary to assess generic negligence in the light of the reasonable person standard. This will raise questions about the applicability of the ‘reasonableness’ standard to AI systems, allowing the reasonable provider parameter to be evaluated by reference to an intelligent product that acts with diligence.
