The Intersection of AI and Legislation: A Historic Congressional Hearing in the US

DCU Law and Tech regularly publishes blog posts on topics in law and technology, written by a variety of authors.

Deborah Djon

In an era where artificial intelligence (AI) is rapidly evolving, the need for its regulation has never been more critical. This was the central theme of a congressional hearing in the United States on May 16, 2023, featuring Sam Altman, CEO of OpenAI, and other leading professionals in the field. At the hearing, Altman voiced his concerns about the potential for opinion manipulation, especially in the context of the upcoming election. The hearing was the first of its kind, with industry leaders proactively approaching legislators to advocate for regulatory measures for their own technology. It raised awareness of the advantages and dangers of artificial intelligence and outlined the benefits and potential pitfalls of regulation.

The Impact of AI on Our Lives

Artificial Intelligence has been influencing our lives for many years. Everyday applications such as personal voice assistants, Face ID, personalized advertisements, and social media content recommendations are all powered by AI. However, the recent surge in generative AI technologies, such as Large Language Models like ChatGPT or text-to-image models like Midjourney and DALL·E, has brought AI to the forefront of public discourse.

These models have enhanced human lives, businesses, and society through benefits like increased productivity, improved decision-making, and advancements in medical diagnosis. Yet this technology also presents challenges, such as a lack of transparency. AI models are often referred to as “black boxes” because of their complexity and the difficulty of interpreting their outputs. Furthermore, the large amounts of data required to train these models raise privacy and copyright concerns.

AI introduces new professions like the Prompt Engineer, who crafts text prompts for AI-generated content to ensure its relevance in various scenarios. However, it also has the potential to cause job losses through increased automation. At the recent hearing, concerns were raised about the potential malicious use of AI for misinformation and opinion manipulation. This could be done using deepfakes, as demonstrated by the recording Senator Richard Blumenthal used to open the hearing: it was created using ChatGPT and voice-cloning software.

Although AI models can be beneficial, they can also inadvertently introduce biases and errors with significant real-world consequences. A notable example is Sarah Wysocki, a teacher from a DC school who was dismissed based on an unfavorable evaluation from an AI-based teaching assessment system. This system incorporated various metrics, including classroom observations, but the most influential was the “value-added” measure. This measure aimed to quantify a teacher’s direct impact on their students’ standardized test scores. Although this metric is often touted as the best predictor of student success, it is susceptible to numerous external factors beyond the classroom’s control, such as learning disabilities or poverty. Therefore, it is not recommended to rely solely on this measure for decisions as critical as teacher dismissal. In Wysocki’s case, the negative score contradicted the positive feedback she received from students, parents, and fellow teachers.

Wysocki found a new position shortly after her dismissal; nevertheless, her case illustrates the potential dangers of AI. Several experts have even called for a halt in the development of AI technologies. However, Sam Altman and his fellow testifiers opposed this suggestion at the congressional hearing. Instead, they called for regulations developed jointly with leading industry experts.

The Proposals

The testifiers at the hearing proposed several points. Altman emphasized the desirability of regulation. Proper regulation not only provides incentives for AI application creators but also ensures safeguards for users. AI systems increasingly intersect with legal and ethical realms as they become more autonomous and complex. A case in point is the tragic incident this year in which a Belgian man died by suicide following a recommendation from a chatbot he had interacted with. This case raised questions about the ethics of the chatbot and its legal responsibility.

The hearing proposed a new approach to the regulation of technological innovation, in which industry partners and legislators jointly create legislation. This ensures that the regulations take into account the technical specifications and limitations of the technologies. Altman suggested that this approach could prove more effective than the hands-off treatment of social media, where earlier regulation and safeguards might have prevented incidents like the storming of the Capitol or mitigated social media's negative impact on mental health.

Regulation created jointly with the industry could also preserve the competitiveness of AI models. Altman proposed a legal agency that would oversee AI activities and have the power to issue and revoke licenses for AI model creation. This approach could serve as a model for regulating future technological advancements.

Implications for EU Legislation

While there is no federal legislation for AI in the U.S., the recent congressional hearing has several implications for the ongoing development of EU AI regulations. The Artificial Intelligence Act, proposed by the EU in 2021, aims to establish a common legal and regulatory framework for artificial intelligence. It categorizes applications based on their risk and proposes stricter regulations for higher-risk applications.

Recent changes to the proposal prohibit the use of AI technology for biometric surveillance and require generative AI systems like ChatGPT to disclose when content was created with artificial intelligence. However, many executives from EU companies oppose the draft, arguing that the legislation could hinder the competitiveness of EU companies.

The proposed regulations for foundation model providers, such as OpenAI with ChatGPT, could disadvantage EU-based companies aiming to build on top of such foundation models. These companies would first need to provide regulators with specific details on a model’s capabilities, training procedure, and potential risk mitigation. Foundation model providers may not be able or willing to adhere to EU requirements, potentially cutting EU companies off from cutting-edge non-EU models.

Furthermore, Altman’s suggestion for a U.S. or international organization to license the most potent AI systems may have repercussions regarding how AI is governed in the EU. If the U.S. is the first to create such an agency, it might set a precedent for other regions, such as the EU, to create comparable regulatory bodies.


Conclusion

The recent congressional hearing on AI regulation, featuring industry leaders such as Sam Altman, CEO of OpenAI, highlighted the critical need for effective regulation in the rapidly evolving field of artificial intelligence. The hearing underscored the profound impact of AI on our lives, from everyday applications to more complex systems that raise ethical and legal questions.

The testifiers at the hearing emphasized the importance of regulation to provide incentives for AI application creators and safeguards for users. They proposed a joint approach to legislation, in which industry partners and legislators work together to create regulations that take into account the technical specifications and limitations of AI technologies.

However, the potential dangers of AI must also be acknowledged, from job losses due to automation to the misuse of AI for misinformation and opinion manipulation.

In conclusion, while AI offers immense benefits, it also presents significant challenges that require careful consideration and proactive regulation. The congressional hearing marked a crucial step towards this goal, advocating for a collaborative approach to legislation that involves both legislators and industry experts. Such an approach helps ensure that as the field of AI continues to advance, it does so responsibly and ethically, maximizing benefits while minimizing potential risks.

Deborah Djon has an MSc in Computing with a specialization in Artificial Intelligence from Dublin City University. Her academic journey began with an industry-sponsored undergraduate degree at Nokia Germany. Now Deborah’s passion lies in leveraging Artificial Intelligence to address business challenges and understanding the societal implications of this transformative technology.
