On 12 July 2024, Regulation (EU) 2024/1689, known as the “Artificial Intelligence Act”, was published in the Official Journal of the European Union. It marks the end of a long legislative process that began with the publication of the EU White Paper on AI in 2020 and the first text proposal in April 2021. Publication also starts the clock for the AI Act’s application: the Regulation enters into force 20 days after this date and provides for application in several stages (6, 12, 18, 24 and 36 months after entry into force). Implementing and complying with this new regulation will become a serious concern for companies in the coming months and will affect the daily professional lives of thousands of AI engineers across the EU and beyond.
The complexity of the AI Act seems proportionate to its importance in the AI sphere. The 144-page text adopts a risk-based approach, laying down different obligations for different categories of AI systems. Some systems are prohibited outright (unacceptable risk), others are subject to a comprehensive set of compliance requirements (high-risk systems or general-purpose AI models), while certain AI systems are only subject to transparency obligations. A system falling into none of these categories is not directly regulated by the AI Act. Understanding how compliance with the AI Act affects the development process of an AI system, or its broader impact on a company, therefore requires a basic understanding of these categories and the related obligations.
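For the engineers in the audience, this tiered logic can be caricatured as a simple decision cascade. The sketch below is an illustrative simplification, not legal advice: the flags (`prohibited_practice`, `high_risk_use`, `interacts_with_humans`) are hypothetical stand-ins, since the actual legal tests turn on the Regulation’s detailed definitions and annexes, not boolean checks.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk - banned outright"
    HIGH_RISK = "comprehensive compliance requirements"
    TRANSPARENCY = "transparency obligations only"
    UNREGULATED = "not directly regulated by the AI Act"

def classify(prohibited_practice: bool,
             high_risk_use: bool,
             interacts_with_humans: bool) -> RiskTier:
    # The tiers are evaluated from strictest to most permissive:
    # a system matching a prohibited practice is banned regardless
    # of any other property it may have.
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if high_risk_use:
        return RiskTier.HIGH_RISK
    if interacts_with_humans:
        return RiskTier.TRANSPARENCY
    return RiskTier.UNREGULATED
```

For example, a hypothetical chatbot that matches no prohibited practice and no high-risk use case but does interact with humans would land in the transparency tier. Real classification requires a legal analysis against the Act’s text.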
At Telecom Paris, a public engineering school under the authority of the French Ministry of Economy whose purpose is to train digital engineers, law courses are fully integrated into the students’ curriculum. Indeed, the digital sector is subject to a growing number of complex regulations, starting with the GDPR, the DSA and the DMA, and now the AI Act. Future engineers need to be able to navigate this complex regulatory ecosystem because they are the ones most directly affected by it: they will develop the technologies that must comply with the regulations, as well as the technical features that meet the regulations’ objectives (think of the privacy-by-design principle under the GDPR, for example). Telecom Paris has therefore developed the “AI Act Game” (designed by Ass. Prof. Thomas Le Goff), a gamified educational tool that introduces learners with little or no legal background to the AI Act’s main provisions in an engaging and accessible way.
Initially developed for Telecom Paris’ engineering students as part of both its initial and executive education programs, the game aims to simplify the complexities of the AI Act and provide a foundational level of understanding. After repeated use in teaching contexts, and in light of the feedback we received, we believe it can benefit not only engineers but also law students, legal professionals and any non-legal audience willing to learn about the AI Act in less than 30 minutes. It can also be useful for any company wishing to ensure AI literacy among its employees (an obligation for all AI providers within the first 6 months after the entry into force of the AI Act, under its Article 4). That is why we decided to make it public, so that anyone can use it, whether for autonomous learning or for teaching purposes.
What’s in it?
The AI Act Game is not a true “game” yet: for the moment it is interactive content only, as it was developed in-house by a law professor with limited game-design skills! Nevertheless, it is structured to provide an immersive learning experience in just 30 minutes. Users take on the role of a team leader responsible for developing an AI system. This scenario-based approach helps learners understand the practical applications and implications of the AI Act. Depending on the chosen AI use case, users navigate different pathways corresponding to the various AI system categories defined by the AI Act, ensuring comprehensive coverage of the Act’s obligations. Throughout the game, a virtual robot assistant provides key information about the AI Act, including how it is structured, the authorities responsible for its implementation, and potential sanctions. While the game does not delve deeply into every obligation, it covers the essential aspects and offers a good introduction to the AI Act, making it an excellent tool for initial familiarization.
How to use it?
The AI Act Game can be used in several ways.
On the one hand, individuals can play the game on their own to gain a basic understanding of the AI Act. The interactive format keeps users engaged and helps them absorb the material effectively. There are no prerequisites in terms of background!
On the other hand, teachers can incorporate the game into their curriculum to run interactive classes on the AI Act. The lecturer can divide the class into four groups and ask each group to dive into one of the four use cases proposed in the game. After 15 to 30 minutes, the class reconvenes and each group explains to the others the provisions applicable to the category it studied. By the end of the discussion, the whole class has learned about the four categories of the AI Act. The content can also be used like “teaching slides” to guide a class in a more interactive way. The game has proven to be an excellent tool for kickstarting discussions and deepening students’ understanding through interactive learning.
Future developments of the AI Act game
The current version of the AI Act Game is just the beginning. Our vision for the future includes making the game more interactive, by allowing users to engage directly with the virtual assistant, and incorporating quizzes to test learners’ understanding. Our goal is to make the content available on a dedicated website, making it easily accessible to a broader audience and allowing us to grant certifications, just like a real e-learning course or MOOC. Future versions will include links to additional resources, giving users the opportunity to explore specific topics in greater detail. By the end of the year, we aim to launch a more professional and ambitious learning platform, offering an even richer experience.
Dr Thomas Le Goff is Assistant Professor of Law & Technology at Télécom Paris – Institut Polytechnique de Paris, where he teaches and writes about the regulation of digital technologies, data, cybersecurity and AI. His research focuses on the links between AI and sustainability, from a legal and public policy perspective. Prior to becoming an academic, he worked as an in-house legal counsel at Electricité de France (EDF), within the IP, digital and data legal department, where he was in charge of data protection and digital regulation expertise.
Dr Thomas Le Goff graduated from University Paris Cité (PhD in law and Master’s), University of Exeter (LLM) and University of Rennes 1 (LLB and Master’s). He wrote his PhD thesis on AI regulation in the energy sector, including an in-depth analysis of the AI Act and first proposals to incorporate environmental sustainability in AI regulation.