Age of Empires: The Cyberage Last Warnings for Military AI Governance

DCU Law and Tech regularly publishes blog posts on topics in Law and Technology, written by a variety of authors.

Oscar Josafat Leyva Ferzuli

From war to peace, from peace to war?

On August 6 and 9, 1945, the United States dropped two atomic bombs on the Japanese cities of Hiroshima and Nagasaki. The death toll approached 200,000, with most casualties occurring on the day of the attacks and many more succumbing in the following months and years due to the devastating effects of radiation. These bombings not only ended the Second World War but also marked the dawn of a new era, one defined by the destructive potential of technology.

In the aftermath, the international community moved swiftly to prevent such devastation from recurring. The creation of the United Nations laid the foundation for a global order based on international law and diplomacy, aiming to avoid another global catastrophe.

However, the subsequent Cold War between the United States and the Soviet Union quickly shifted the world into another form of conflict, an arms race fuelled by ideology and power. Proxy wars became commonplace, and while direct military clashes between superpowers were avoided, the world existed under a cloak of artificial global peace, an illusion that dulled awareness while nuclear stockpiles silently expanded. The Cuban Missile Crisis of 1962 brought the world to the brink, underscoring the fragility of this tense equilibrium.

Efforts to ban nuclear weapons gained traction in the decades that followed, yet it was only in 2017 that a significant milestone was achieved with the adoption of the Treaty on the Prohibition of Nuclear Weapons. Despite its symbolic weight, the treaty's effectiveness remains limited: major nuclear powers, including the United States, Russia, China, the United Kingdom, and France, refused to sign it. Today, nine states are believed to possess nuclear arsenals.

Eighty years later, Hiroshima and Nagasaki continue to serve as stark warnings, but the global order is again under strain. With Russia's invasion of Ukraine and growing tensions in the Middle East and South Asia, war has returned to the global stage. Traditional alliances are fraying, and international law appears increasingly ineffectual. The cloak has proven to be but a transparent veil.

War: An insidious engine of technological development

Historically, war has served as a powerful engine for technological innovation. The Industrial Revolution and the development of nuclear energy were both deeply intertwined with military ambitions. Today, the new arms race is centred not on nuclear weapons but on digital supremacy, and Artificial Intelligence (AI) has emerged as the next strategic frontier.

The United States, China, and Russia are at the forefront of this race, investing heavily in AI technologies. With American companies dominating much of the global digital infrastructure, the U.S. government has embraced large-scale development of AI, often prioritising innovation, speed and capability over legal safeguards.

In contrast, the European Union (EU) has positioned itself as a leader in AI governance. The EU AI Act, its landmark legislative framework, aims to promote trustworthy and human-centric AI. However, the Act explicitly excludes military applications.

Yet the EU is not absent from the defence-tech domain. Through recent initiatives such as the Strategic Compass and ReArm Europe, it seeks to regain strategic autonomy and integrate AI into its military resources.

This dual approach reveals a critical gap: there are currently no binding international rules governing the use of AI in warfare.

Meanwhile, a growing ecosystem of companies is shaping the future of military AI. From traditional defence contractors to Silicon Valley tech giants and specialized start-ups, the private sector plays a central role in developing AI-powered capabilities for the battlefield. In the tech world, names like Microsoft, IBM, Alphabet, NVIDIA, OpenAI, Meta and Anthropic, alongside defence-focused firms such as Palantir Technologies and Anduril Industries, are at the forefront of this transformation.

As AI systems grow more advanced, so too does the autonomy and lethality of the weapons they control. The convergence of increasingly capable AI with autonomous military platforms raises profound ethical, legal, and strategic concerns. Without robust oversight and regulation, the battlefield risks becoming a domain where machines, not humans, determine the outcomes of life and death.

A thin line between automated precision and destruction

There is growing consensus that military AI must never operate without meaningful human oversight. The principle of keeping a “human in the loop” is essential, not only to preserve moral responsibility, but also to ensure strategic stability.

The current regulatory gap is deeply troubling as AI has the potential to redefine warfare entirely. On one hand, it offers opportunities to reduce human casualties by removing soldiers from direct combat, increasing precision in targeting, and improving operational efficiency through real-time data analysis and predictive modelling.

On the other hand, these same capabilities can be turned toward unprecedented destruction. Fully autonomous systems can be deployed at scale, at speed, and with minimal human oversight, raising the risk of dehumanized killing, accidental escalation, and unlawful strikes. The inherent flaws of commercial AI systems, such as bias, hallucinations, and lack of explainability, are exponentially more dangerous in military contexts, where mistakes cost lives and misjudgements could provoke full-scale conflict.

The gravest concern arises when considering the potential convergence of AI with weapons of mass destruction. If AI systems are granted access to or influence over nuclear, biological, or chemical arsenals, the consequences of failure or misuse could be existential.

In the absence of enforceable international regulations, humanity stands at a dangerous crossroads. Once again, we are tempted to pursue technological supremacy before building ethical guardrails. The lessons of the nuclear age should not be ignored: unchecked innovation in the name of power has led us to the brink before.

Have we learned our lesson?

Examples of successful international cooperation in arms control do exist. The global prohibition of biological and chemical weapons stands as a significant achievement in multilateral diplomacy. The Biological Weapons Convention and the Chemical Weapons Convention have been signed and ratified by the vast majority of countries worldwide, establishing legal norms against the development, stockpiling, and use of such weapons. These treaties demonstrate that the most destructive technologies can be regulated when political will and international consensus align.

Regulation of AI in military contexts must be precise, robust, and purpose-built. Unlike traditional arms control frameworks, governing autonomous systems requires specific standards tailored to each phase of development, from the system’s intended objective and the data used for training, to validation, testing, and eventual deployment. Crucially, automated weapons systems must never operate outside the bounds of meaningful human control. Without such safeguards, the risk of unintended escalation, unlawful strikes, or catastrophic malfunctions increases dramatically.

But in the current geopolitical climate, marked by great-power rivalry and declining trust in multilateral institutions, who should lead the governance of AI in the defence sector?

While International Humanitarian Law and the Geneva Conventions impose obligations on conduct in armed conflict, these frameworks were not designed with autonomous systems in mind. Existing arms control treaties, such as the Arms Trade Treaty, also fall short of addressing the nuances of autonomous warfare.

The stakes could not be higher. Last time, secrecy trumped openness and deterrence took precedence over dialogue. Will this time be any different? Will we fail again to regulate emerging instruments with potential for mass devastation? And if we do, what will the death toll be? Will there be another chance?

Suggested citation:

Oscar Josafat Leyva Ferzuli, ‘Age of Empires: The Cyberage Last Warnings for Military AI Governance’ (Comparative Digital Law Blog, 21 July 2025) <https://lawandtech.ie/age-of-empires-the-cyberage-last-warnings-for-military-ai-governance>.

About the author:

Oscar Josafat Leyva Ferzuli is currently completing a Master’s degree in Law, Data and Artificial Intelligence at Dublin City University. He has been selected as a PhD researcher at the School of Law and Government, where his research will focus on AI regulation, digital sovereignty, and the future of the EU defence strategy.
