Put into the FRYER!: Simplifying AI Act Assessments with Homomorphic Encryption

DCU Law and Tech regularly publishes blog posts on topics in Law and Technology, written by a variety of authors.

Loya Haughton
Erasmus Mundus Scholar

In this article, we will demystify the Fundamental Rights Impact Assessment and explore its crucial connection to the European Union’s (EU’s) groundbreaking Artificial Intelligence Act. Then, get ready to meet FRYER, the cutting-edge tool that will revolutionise how we conduct these assessments by harnessing the power of homomorphic encryption to enhance privacy and simplicity.

What is the Artificial Intelligence Act?

In April 2021, the European Commission published a proposal for the ‘world’s first comprehensive AI law’. In August 2024, the Act entered into force, beginning a gradual, roughly three-year phase-in of its provisions. More information about this timeline is available here: Timeline of Developments | EU Artificial Intelligence Act.

From Article 1, the Act states its goal of balancing responsible innovation, through the uptake of human-centric and trustworthy AI, with the protection of health, safety and fundamental rights. It does this by taking a risk-based approach, dividing AI systems into the following categories:

  • minimal risk (like spam filters), which is unregulated under this legislation;
  • limited risk (such as GPT-style chatbots), whose deployers are subject to transparency obligations;
  • unacceptable risk, which must not be deployed, such as tools that distort behaviour and impair informed decision-making; and finally,
  • high risk (the focus of the majority of the Act), which includes products used as safety components; deployers of these systems are mandated to comply with more stringent obligations.

It is important to note that the minimal and limited risk labels do not exist verbatim in the Act. Rather, they are inferred categories that have become the industry standard for describing the Act’s risk levels.

How is the Fundamental Rights Impact Assessment related?

One of the obligations for deployers of high-risk AI systems is the Fundamental Rights Impact Assessment (FRIA). This assessment is conducted by deployers of high-risk AI systems that are ‘bodies governed by public law or private operators providing public services, and operators providing high-risk systems’. Conducted in alignment with a template issued by the AI Office, the assessment covers the description of the deployer’s processes, the temporal aspects of use, the affected persons, and human oversight and complaint measures. The result of the assessment is reported to the Market Surveillance Authority before the solution is deployed, or again when the solution has been significantly changed.

However, the FRIA is not a novel concept, and the literature has shown that these assessments can be burdensome, particularly for small and medium-sized enterprises, for the following reasons:

  • the Charter of Fundamental Rights contains over fifty fundamental rights, and since the template has not yet been released, there are legitimate concerns about whether a single template can cover both the wide variety of tools that fall under the high-risk category and the plethora of fundamental rights;
  • there are palpable resource constraints: not only time and money, but also finding someone knowledgeable enough to complete the assessment; and
  • where a deployer outsources the assessment, there may also be privacy concerns, since proprietary content of the AI solution may be disclosed, in addition to possible personal data. Though a non-disclosure agreement (NDA) may be considered a viable solution, NDAs truly only ‘gain teeth’ after they have been broken and the breach has been discovered.

Introducing the FRYER

This is why I propose FRYER, whose name is derived from the acronym FRIA. FRYER is a novel homomorphic-encryption-powered tool that facilitates the Fundamental Rights Impact Assessment on an end-to-end basis.

Homomorphic encryption can be likened to having loose grapes in translucent Tupperware. If I asked a colleague to tell me how many grapes I had in this container, the colleague would open the lid and count them. But what if I protest? This is my lunch, and I do not want the container to be opened. Then they could keep the Tupperware closed and look for hints on the outside: maybe there is a label stating the number of grapes? My colleague could also peek through the translucent portion of the Tupperware. Likewise, if I had two containers of grapes, my colleague could use the same method to tell me how many grapes I have in all.

Let us say that the closed container represents encrypted data (the grapes), so opening the container represents decrypting it. Homomorphic encryption works just like the analogy: mathematical operations can be performed on encrypted data without needing to decrypt it, which provides better access control because fewer parties require the decryption key. This is efficient, since encryption and decryption take roughly the same amount of time, and computations on ciphertexts still yield a correct result upon decryption. Additionally, as homomorphic encryption has been around for about fifty years, the technique has been tested and proven in a variety of contexts, from data analysis to electronic voting systems. Finally, unlike pseudonymisation and anonymisation techniques, which can negatively impact the depth of analysis possible, homomorphic encryption preserves the data but limits access.
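
To make the grape analogy concrete, here is a minimal Python sketch of Paillier encryption, one well-known additively homomorphic scheme. Neither the Act nor FRYER prescribes a particular scheme, so Paillier is used purely as an illustration, and the tiny primes below exist only to keep the example readable; real deployments use keys of 2048 bits or more.

```python
import math
import random

# --- Toy Paillier keypair ----------------------------------------------
# Illustrative primes only; real keys use primes of ~1024+ bits.
p, q = 101, 103
n = p * q                      # public modulus
n2 = n * n
g = n + 1                      # standard generator choice
lam = math.lcm(p - 1, q - 1)   # private exponent (lambda)
mu = pow(lam, -1, n)           # private mu = lambda^-1 mod n (valid for g = n+1)

def encrypt(m: int) -> int:
    """Encrypt an integer m < n under the public key (n, g)."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt a ciphertext c with the private key (lam, mu)."""
    ell = (pow(c, lam, n2) - 1) // n
    return (ell * mu) % n

# Two 'closed containers' of grapes.
c1, c2 = encrypt(12), encrypt(30)

# Anyone can total the grapes without opening either container:
# multiplying Paillier ciphertexts adds the underlying plaintexts.
c_sum = (c1 * c2) % n2

assert decrypt(c_sum) == 42    # 12 + 30, visible only to the key holder
```

The colleague who computes c_sum never holds the private key, which is precisely the access-control benefit described above: the counting happens on closed containers.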

In the next section, the Hello Barbie toy will be used as a case study for applying the FRYER.

The Hello Barbie Case Study

Mattel’s first AI-controlled Barbie, ‘Hello Barbie’, was a voice-interactive toy that, when connected to Wi-Fi, listened to audio and then queried a remote cloud server to provide a voice response. It was marketed to young girls. However, despite the manufacturer’s promise to avoid collecting personal data, toy reviews (e.g., on Buzzfeed) revealed that the toy was processing personal data from children, such as their names.

Based on its categorization as a toy and its target age group, Hello Barbie is likely a high-risk AI system, both because of the potential risks to children’s privacy and data security and because it falls within the areas listed in the AI Act’s Annexes.

Placing Hello Barbie into the FRYER

FRYER is designed to streamline the Fundamental Rights Impact Assessment (FRIA) process, making it more efficient and accessible. Here’s how it works:

Launch FRYER: As a cloud-deployed tool, FRYER is easily accessible from anywhere, ensuring seamless integration into your workflow. Concerns about the safety of the cloud platform are addressed by the homomorphic encryption it employs.

Select the Template: Choose from customizable templates for the FRIA, including those issued by the AI Office (as described in the AI Act). The ability to use various templates increases the longevity of this tool. Each template is also associated with a specific risk-analysis algorithm, such as Inverardi et al.’s approach.
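
As an illustration of how such a template registry might be wired up, here is a hypothetical Python sketch. The template name, fields and scoring function are all invented for this example; the fields simply mirror the FRIA content described earlier (processes, period of use, affected persons, oversight measures).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FriaTemplate:
    name: str
    issuer: str                               # e.g. "AI Office" or "custom"
    fields: list[str]                         # sections the deployer completes
    risk_algorithm: Callable[[dict], float]   # scores the completed template

def average_score_risk(answers: dict) -> float:
    """Placeholder risk algorithm: average of per-question scores in [0, 1]."""
    scores = [v for v in answers.values() if isinstance(v, (int, float))]
    return sum(scores) / len(scores) if scores else 0.0

TEMPLATE_REGISTRY = {
    "ai-office-default": FriaTemplate(
        name="ai-office-default",
        issuer="AI Office",
        fields=["process description", "period of use",
                "affected persons", "oversight and complaint measures"],
        risk_algorithm=average_score_risk,
    ),
}
```

Pairing each template with its own scoring callable is what lets FRYER swap in a different published risk-analysis method without changing the rest of the pipeline.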

Complete the Template: Upload necessary attachments, such as voice-log transcripts or third-party documentation like whistleblower letters. Though stakeholder involvement in the FRIA is not clearly defined in the AI Act, Mattel has received whistleblower letters, so the ability to upload stakeholder documents is a relevant feature to highlight in our case study. These documents can be analysed according to the company’s requirements, such as searching for personal data in transcripts.
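
As a sketch of what ‘searching for personal data in transcripts’ could look like, the following Python snippet flags a few simple patterns. The patterns and the sample transcript are hypothetical, and a production tool would rely on a proper named-entity-recognition model rather than regular expressions.

```python
import re

# Hypothetical patterns a deployer might search for in voice-log transcripts.
PII_PATTERNS = {
    "self-introduction": re.compile(r"\bmy name is\s+\w+", re.IGNORECASE),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def flag_personal_data(transcript: str) -> list[tuple[str, str]]:
    """Return (category, matched text) pairs for likely personal data."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(transcript):
            hits.append((label, match.group(0)))
    return hits

print(flag_personal_data("Hi Barbie! My name is Ava and I live in Cork."))
# -> [('self-introduction', 'My name is Ava')]
```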

Encrypt the Data: The completed template and associated documents are homomorphically encrypted. While keys can be downloaded as needed, this is rarely necessary.

Risk Analysis and Dashboard: The risk-analysis algorithm calculates the risks, which are then displayed on a dashboard. This visualises the overall risk score and allows personnel to be assigned to human oversight measures. The risk analysis can also be downloaded.
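
Reusing the toy Paillier helpers from the earlier sketch, here is one way per-right risk scores could be totalled for the dashboard without the cloud backend ever seeing an individual rating. The rights and scores below are invented for illustration.

```python
# Reuses encrypt, decrypt and n2 from the toy Paillier sketch above.
# Per-right risk scores (0-100) entered by the assessor, encrypted
# before they ever reach the cloud; values invented for illustration.
encrypted_scores = {
    "privacy": encrypt(80),
    "rights of the child": encrypt(90),
    "non-discrimination": encrypt(35),
}

# The dashboard backend multiplies ciphertexts, which adds the hidden
# scores, so no individual rating is ever exposed in the clear.
total = 1
for c in encrypted_scores.values():
    total = (total * c) % n2

# Only the key holder decrypts, and only the aggregate is revealed.
print(decrypt(total))  # -> 205
```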

FRYER simplifies the complex process of conducting a Fundamental Rights Impact Assessment. By leveraging homomorphic encryption, it ensures that sensitive data remains secure throughout the process. This innovative tool not only saves time but also enhances the accuracy and reliability of the assessments. It excels even in scenarios where the storage platform cannot be trusted, such as insecure cloud environments, where NDAs are unsuitable, or where multiple parties are involved. Therefore, FRYER offers an end-to-end solution, covering data acquisition, processing, and presentation. Its customizable templates and user-friendly dashboards ensure it grows with its clients’ needs.

In closing, with FRYER, organizations can confidently navigate the requirements of the EU’s AI Act, ensuring compliance and protecting fundamental rights. FRYER is not just a tool; it’s a transformative solution that brings unparalleled security and efficiency to the world of Fundamental Rights Impact Assessments.

Loya Haughton is a 2022 Erasmus Mundus Scholar, from the Computing stream’s Cybersecurity track of the European Master in Law, Data and Artificial Intelligence (EMILDAI). As a member of the programme’s inaugural cohort, Loya served as EMILDAI’s first representative to the Erasmus Mundus Association and as the EMILDAI Student and Alumni Association’s first president.

Loya, a proud Jamaican, holds a Bachelor of Science in Computer Science with a minor in Management Studies, graduating proxime accessit (runner-up to the valedictorian). She is also a Microsoft Certified Data Analyst with over five years’ experience in Financial Technology, focusing on privacy-preserving Change Management and Information Technology Auditing. Loya’s dedication to promoting financial inclusion for the unbanked remains the cornerstone of her professional and academic endeavours.
