Generative artificial intelligence (AI) – including ChatGPT 4 – is perceived (and even feared) by some as ‘a threat to humanity’. Whether or not this is actually true, numerous risks to fundamental rights, safety and human autonomy arising from the lightning-fast development of AI are already having an impact on all sectors of the economy and society.
Take AI in defence – whether it’s Palantir’s ChatGPT-like AI platform for military decision-making, Clearview’s facial recognition systems to identify enemies, or autonomous drones deliberately used as lethal weapons systems, the entire military sector has become increasingly reliant on AI. And when AI is deployed in defence, the stakes are high and the EU has some serious catching up to do.
Substantial investments in military AI (around USD 6 billion in 2021), set against rising global defence expenditure (USD 2 trillion in 2020), reflect the defence industry’s burgeoning love affair with the technology. Beyond weaponry, AI is instrumental for various Intelligence, Surveillance and Reconnaissance (ISR) tasks at the strategic, operational and tactical levels, as well as for automated reasoning, logistics, training, and much, much more.
Taken together, these capabilities enable what experts call ‘Information Superiority’ – in short, gaining a strategic advantage over other nations’ defences through data and intelligence, with far-reaching geopolitical implications.
Indeed, AI-enabled technology supplied by Ukraine’s partners is one of its core assets against Russia. Unmanned aerial vehicles (supplied by the US, Norway, Luxembourg and the UK) and autonomous underwater drones (provided by the Netherlands) are tasked with preventing Russian attacks. Unmanned ground vehicles (courtesy of Germany) and mobile autonomous intelligence centres improve geospatial intelligence as well as data processing on the ground. AI-enabled acoustic monitoring solutions can also detect incoming missiles.
These examples show that AI can move the needle in conflict, intelligence and deterrence. Thus, the use of AI in and for defence has become a game changer for geopolitics and warfare. Yet while Europe’s military AI industry is thriving, its political leadership has decided to turn a blind eye to its uses and the associated risks.
The devil is in the (AI Act’s) detail…
The proposed Regulation on Artificial Intelligence (the AI Act), which will soon enter trilogue negotiations, promotes AI uses that are ethical and respect fundamental rights, yet – almost in passing – excludes military AI uses from its scope.
This leaves Member States ample leeway in governing the most critical uses of AI in warfare. It is concerning given the EU’s investment of almost EUR 8 billion in AI and other advanced technologies for 2021-2027, made possible through the European Defence Fund – and given that the EU doesn’t prohibit the use of autonomous weapons, despite resolutions passed by the European Parliament in 2014, 2018 and 2021.
Although military AI is excluded, the AI Act will nonetheless have an impact on European defence. Many non-adversarial AI systems are not exclusively developed or used for defence, but are instead dual-use by nature, meaning they can be used for various civilian and military purposes (for example, a pattern recognition algorithm can be developed to detect cancer cells or to identify and select targets in a military operation).
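To make the dual-use point concrete, here is a minimal, purely illustrative sketch in Python (hypothetical code, not drawn from any real system): the classifier architecture and training step below are entirely domain-agnostic, and only the dataset determines whether the model learns to flag cancerous cells or military targets.

```python
# Illustrative sketch of an inherently dual-use pattern recogniser.
# The model and training step contain nothing domain-specific: the same
# code trains on histopathology images (healthy vs cancerous) or on
# reconnaissance imagery (no target vs target). Only the data differs.

import torch
import torch.nn as nn

class PatternClassifier(nn.Module):
    """Generic binary image classifier -- nothing in it is domain-specific."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 2)  # class 0 vs class 1

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_step(model, images, labels, optimiser, loss_fn):
    """One identical training step, regardless of what the labels mean."""
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
    return loss.item()

model = PatternClassifier()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
dummy_images = torch.randn(8, 3, 64, 64)    # stand-in batch of images
dummy_labels = torch.randint(0, 2, (8,))    # stand-in binary labels
print(train_step(model, dummy_images, dummy_labels, optimiser, loss_fn))
```

The asymmetry lies entirely in the training data and the deployment context rather than in the software itself, which is precisely what makes dual-use systems so hard to regulate at the product level.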
In dual-use cases, the AI Act would apply, requiring systems to comply with its provisions for high-risk AI. However, applying regulatory requirements, including human-centricity and oversight, may often not be feasible for systems operating autonomously or in a classified environment. Additionally, most defence organisations don’t closely follow civilian digital policy developments, and thus may be underprepared for the AI Act once it enters into force.
On a political level, governments are increasingly engaging with the critical questions around military AI. The Dutch and South Korean governments co-hosted a summit on Responsible AI in the Military Domain (REAIM) in February 2023, bringing together representatives from over 50 governments to endorse a joint call to action, aiming to place ‘the responsible use of AI higher on the political agenda’. The Defence Departments of Canada, Australia, the US and the UK have already established guidelines for the responsible use of AI. NATO adopted its own AI Strategy in 2021 and has since established a dedicated Data and Artificial Intelligence Review Board (DARB) to ensure lawful and responsible AI development through a certification standard.
NATO’s AI Strategy may face implementation hurdles, however, and beyond France’s public AI defence strategy, there is no legal and ethical framework – national or EU-wide – for the military uses of AI. Consequently, Member States may adopt different approaches, leading to gaps in regulation and oversight.
Time for the EU to step up
This is why the EU should step up and develop a framework for both dual-use and military AI applications – specifically, a Europe-wide approach to the responsible use of AI in defence, based on the AI Act’s risk tiering. This would direct defence institutions and industry to develop, procure and use AI responsibly, based on shared values.
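As a purely illustrative sketch of what such risk tiering could look like in practice: the tier names below follow the AI Act’s risk-based approach, but the mapping of defence applications to tiers is a hypothetical assumption for illustration, not a legal classification.

```python
# Hypothetical sketch of the AI Act's risk tiers extended to defence uses.
# Tier names follow the Act's risk-based approach; the example mapping of
# defence applications to tiers is an assumption, not an official list.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict conformity assessment, human oversight, logging"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical mapping of defence and dual-use applications to tiers:
DEFENCE_RISK_MAP = {
    "fully autonomous lethal targeting": RiskTier.UNACCEPTABLE,
    "target identification support":     RiskTier.HIGH,
    "logistics and maintenance planning": RiskTier.LIMITED,
    "training simulations":              RiskTier.MINIMAL,
}

for use, tier in DEFENCE_RISK_MAP.items():
    print(f"{use}: {tier.name} -> {tier.value}")
```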
Although defence is not an EU competence under the Treaties, the EU has found ways around this in its response to Russia’s invasion of Ukraine. An EU framework would constitute the first meaningful whole-of-government approach to governing AI risks across all institutions. Ultimately, establishing a unified framework for responsible AI in defence would signal the EU’s global leadership ambitions in shaping the future of values-based AI governance by mitigating the most severe risks in both military and civilian contexts.
In sum, Europe cannot afford to overlook the significant implications of AI in defence. Current EU legislation covers AI defence applications only partly (in the case of dual-use AI) or not at all (again, military AI is excluded from the AI Act). This leaves political responsibility and risk management in the hands of Member States or, in the worst-case scenario, of the defence industry alone.
The EU’s much-touted risk-based approach to AI is only worth its salt if it also effectively governs military systems – perhaps the most critical sector where AI is concerned. Otherwise, the real risks remain unaddressed and the full potential benefits of responsible AI remain untapped.