On June 14, 2023 the European Parliament adopted its version of the draft Artificial Intelligence Act («AI Act»). While final approval is expected by the end of the year, some legal gaps remain in relation to the banking sector that do not seem, as yet, to have been addressed during the legislative process. Indeed, on the assumption that banks may also be considered providers of «high-risk AI systems», the draft regulation essentially proposes integrating its requirements into existing financial regulation, while also entrusting the financial supervisory authorities with supervising the implementation of the AI Act in this field. As also noted by the ECB, this poses several critical issues, both regarding the exact definition of the rules that intermediaries will have to follow in order to set up “adequate” structures to contain the risks linked with AI systems, and regarding the role of the supervisory authorities.
What is AI?
The concept of AI has long been a subject of debate, even within the scientific community. There are essentially two distinct perspectives on AI. The first is the “anthropocentric approach”, which seeks to draw parallels between machines and humans. The second is the “rationalist approach”, which focuses on the machine’s capacity to replicate logical reasoning. A detailed exploration of this topic is beyond the scope of this contribution, but it is worth noting that, even from a legal perspective, the definition of AI poses significant challenges for scholars and practitioners.
Indeed, an initial definition of AI can be found in the Recommendation of the Council on Artificial Intelligence published by the Organisation for Economic Co-operation and Development (OECD) on May 22, 2019, according to which: «An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy». Thus, even the draft AI Act approved by the European Parliament highlights the need to provide a notion of AI that is «closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, harmonization and wide acceptance» (see recital 6). It has been observed, moreover, that the defining characteristic of AI systems lies in their potential integration within decision-making processes, both in the private and public sectors. This development marks a new chapter in the evolving relationship between law and technology: initially the focus was primarily on safeguarding personal data, as exemplified by the GDPR. Notably, both the GDPR and the proposed AI Act share a common objective of promoting greater accountability among market players.
In this new AI landscape, the supervised entities themselves play a crucial role as the first line of supervision.
Regulating the banking sector in the EU
The European legislator has shown a similar perspective in banking law, as evidenced by its early attention to this area in Directive 77/780/EEC. This directive aimed to coordinate the laws, regulations, and administrative provisions relating to the establishment and operation of credit institutions, thereby promoting competition within Member States’ banking markets. Financial regulation, in brief, encompasses a set of rules primarily designed to discourage the assumption of excessive risks by the individual intermediary or group and, by extension, to safeguard the stability of the entire system.
In line with these principles, the European legislator has taken further steps in this direction, exemplified by Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 («CRD»). Among its provisions, the CRD requires banks to establish robust governance arrangements, entailing a clear and well-defined organizational structure with transparent and consistent lines of accountability. By promoting such governance frameworks, the directive aims to ensure that effective processes are in place for informed decision-making, risk management, and overall operational efficiency within the banking sector. These measures contribute to fostering stability, trust, and sound practices, ultimately benefiting both the institutions themselves and the broader financial landscape.
High-risk AI systems in the banking sector?
Banks can also act as providers of «high-risk AI systems». The risks associated with the use of AI pertain to health, safety, and fundamental rights, and two concerns in particular are emerging. On the one hand, inadequately designed AI systems may entail risks of discrimination, perpetuating historical patterns of discrimination (e.g. on the basis of racial or ethnic origin, disability, age or sexual orientation) or giving rise to new forms of discriminatory effects. On the other hand, there are serious risks of opacity, in the sense that the technical formula underlying the algorithm (and the decisions it takes) may sometimes be completely incomprehensible to humans.
On these assumptions, the AI Act proposal includes several provisions that reference – even indirectly – the CRD (see Articles 9 and 17). The objective is to prevent potential overlaps with other EU legislation. The proposal therefore intends to incorporate the requirements for managing and controlling high-risk AI systems into the existing framework of financial regulation, which already consists of a complex set of rules. To this end, the AI Act proposal also envisages that the financial supervisory authority – and «where appropriate the European Central Bank» (see recital 80 of the AI Act proposal) – will be the competent authority to verify, from time to time, whether a banking institution complies with the requirements of the European AI regulation.
Critical issues
From our point of view, the AI Act proposal raises two main critical issues, neither of which seems to have been explicitly considered by the European Parliament in its first reading.
First, as also noted by the European Central Bank in its Opinion of 29 December 2021, «because of the novelty and complexity of AI, and the high-level standards of the proposed regulation, further guidance is necessary to clarify supervisory expectations with regard to the obligations in relation to internal governance» (see point 1.3). The significance of this issue should not be overlooked: when sector regulations employ general clauses or vaguely defined concepts, they can inadvertently result in excessive regulatory power and disproportionate sanctions, and they may contribute to a climate of uncertainty, making it difficult to distinguish between lawful and unlawful practices. These concerns underline the need for clear, well-defined, and proportionate regulatory frameworks that strike a balance between fostering compliance and providing legal certainty. Moreover, as highlighted in the Ethics Guidelines for Trustworthy AI, «in a context of rapid technological change, [it is] essential that trust remains the cement of societies, communities, economies and sustainable development»; from this perspective, it becomes necessary to preserve the trust that market players can and should place in AI systems and in the accompanying legal framework, which must be appropriately detailed and specified.
Second, it has been rightly emphasized that the primary objective of the proposed regulation is not to maintain the financial stability of banks, but rather to safeguard individual interests and rights from the potential threats posed by an unregulated AI market. In light of the above, and given that Council Regulation (EU) No 1024/2013 of 15 October 2013 confers on the ECB only specific tasks concerning the prudential supervision of credit institutions, the competent authorities should primarily be the national supervisory bodies (and not the ECB, as might be deduced from recital 80 of the AI Act proposal). This clearly poses a problem of expertise for regulators, who will apparently have to deal increasingly with AI.
AI and suptech
The banking and financial services sector, in particular, has proven to be a fertile environment for the advancement of technology-driven products and services (including the emergence of credit scoring systems, which are classified as «high-risk» under the AI Act proposal). This trend has been unfolding for several years, highlighting the industry’s proactive approach to embracing technological innovation (fintech), also in order to enhance regulatory compliance (regtech) and supervisory functions (suptech). Regarding the latter, a report published by the Financial Stability Board on 9 October 2020 discusses the use of supervisory and regulatory technology by authorities and regulated institutions. The report points out that the Banco de España, the Spanish supervisory authority, has successfully experimented with a natural language processing (NLP) system – which allows for the analysis of written or spoken information in a given language – to process Spanish banks’ Disclosures of Non-Financial Information (DNFIs) from 2014 to 2019. This is a noteworthy experiment, as inspections in this area are normally conducted only on a random basis, which does not allow for full and effective supervision.

Another example can be found in the Italian legal system, where the Bank of Italy recently published its regulation on the processing of personal data in the management of complaints, which governs how the authority itself uses machine learning systems to analyse complaints. One passage in particular should be noted, as it seems to us very much in line with the content of the ethical guidelines referred to above, as well as with the principles expressed by the AI Act proposal (even after the latest parliamentary amendments): «This activity [of automated analysis of complaints] does not therefore imply any form of profiling or prediction of the behaviour of natural persons (…) nor do the results of the analyses have an immediate and direct impact on the decisions referred to the Bank of Italy». This regulation therefore not only recognises that some uses of AI pose unacceptable risks (“profiling or prediction of the behaviour of natural persons”) but also affirms the principle of human oversight: the machine only performs the first stage (i.e., the examination and grouping of complaints into clusters), while the outcome of the procedure is up to humans, as the sketch below illustrates.
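As a purely illustrative aid, the following minimal Python sketch (using the open-source scikit-learn library) shows what such a two-stage division of labour can look like: the machine merely groups free-text complaints into clusters, while the decision on each case is left to a human case handler. The complaint texts, the number of clusters and all other parameters are invented for illustration and do not reflect the Bank of Italy’s actual system.

```python
# Illustrative sketch of the two-stage pattern described above: the machine
# clusters complaints, a human reviews each cluster and decides.
# This is NOT the Bank of Italy's actual system; the texts and the
# cluster count are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

complaints = [
    "Unexpected fees charged on my current account",
    "My mortgage application was rejected without explanation",
    "Fees were applied twice to the same account",
    "The bank denied my loan and gave no reason",
]

# Stage 1 (machine): vectorise the free-text complaints and group them.
vectors = TfidfVectorizer(stop_words="english").fit_transform(complaints)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Stage 2 (human): the clusters are only a triage aid; a case handler
# reads each group and takes the actual decision.
for cluster_id in range(kmeans.n_clusters):
    members = [c for c, label in zip(complaints, kmeans.labels_) if label == cluster_id]
    print(f"Cluster {cluster_id} -> forwarded to a human case handler:")
    for text in members:
        print("  -", text)
```

The design point is that the clustering output feeds a human workflow rather than producing a decision by itself, which is precisely the human-oversight principle affirmed by the regulation.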
In conclusion, it is evident that the regulation of AI is still in its early stages and further developments can be expected, including as regards the governance of banks as potential providers of high-risk AI systems (such as in the case of automated credit scoring). In our view, it is to be hoped, first, that the AI Act will provide further guidance on the requirements relating to internal governance, as also noted by the ECB. Secondly, while we trust in the growing expertise of the supervisory authorities, initiatives in the area of so-called suptech deserve particular attention, as they can contribute to affirming the principles of transparency and human oversight even while awaiting EU legislative intervention.
Valeriana Forlenza is a Ph.D. candidate in economic law at the University of Pisa, Italy, and a lawyer practising corporate law. In 2023 she was a visiting researcher at the DCU Law and Tech Research Cluster. Her research project focuses, among other things, on the impact of artificial intelligence on bank governance, the topic she is investigating during her visit to Dublin City University.