CEPS Task Force

Artificial Intelligence and Cybersecurity

Technology, Governance and Policy Challenges


Download the final report here.

Artificial Intelligence is showing enormous promise for improving our daily lives. Countless applications in many sectors of the economy are already being developed, and more can be expected over the long term. As this new world emerges, we need to seize the opportunity to decide how AI can help us promote a better society and a more sustainable future. Indeed, AI developments, as with any powerful general-purpose, dual-use technology, bring not only extensive possibilities but also challenges to match, since people can use AI to achieve both honourable and malevolent goals.

Cybersecurity is a case in point. AI in the form of machine learning and deep learning will make an escalation of cyber-attacks easier, allowing for faster, better-targeted and more destructive attacks. At the same time, AI could improve cybersecurity and defence measures, allowing for greater system robustness, resilience and responsiveness. However, the application of AI in cybersecurity raises security as well as ethical concerns. For instance, while AI systems can exceed human performance in launching aggressive counter-cyber operations, they could also fail in ways that a human never would. If this is the case, should ‘kill switches’ be incorporated into such systems? Furthermore, using AI for cybersecurity increases the need for better information sharing and for the collection of real-time threat data.

In parallel, cybersecurity for AI would need to be developed to make systems safe and secure. How can autonomous and intelligent systems be protected from malicious attacks? What are the implications of the vulnerabilities of AI-enabled systems to manipulation, such as data poisoning and adversarial examples? How should the search for undiscovered exploits and well-known vulnerabilities be framed in the AI domain?

This Task Force will bring attention to the market, technical, ethical and governance challenges posed by the intersection of AI and cybersecurity, focusing in particular on EU policy but also looking at developments in other parts of the world. It will be composed of academics, industry players from various sectors, European institutions and agencies, and civil society. It will discuss issues such as: the state and evolution of the application of AI in cybersecurity; the role that AI could play in the dynamics between cyber attackers and defenders; the increasing need for sharing information on threats and how to deal with the vulnerabilities of AI-enabled systems; options for policy experimentation; and possible EU policy measures to ease the adoption of AI in cybersecurity in Europe.

Members of the Scientific Board:

  • Joanna Bryson, Reader (tenured Associate Professor), University of Bath
  • Jean-Marc Rickli, Head of Global Risk and Resilience, Geneva Centre for Security Policy (GCSP)
  • Marc Ph. Stoecklin, Principal Research Scientist and Manager of the Cognitive Cybersecurity Intelligence (CCSI) group, IBM T.J. Watson Research Center
  • Mariarosaria Taddeo, Research Fellow (Assistant Professor), Oxford Internet Institute, University of Oxford

Task Force Leader: Lorenzo Pupillo, Associate Senior Research Fellow and Head of Cybersecurity@CEPS Initiative


  • Stefano Fantin, Legal and Policy Researcher, Centre for IT and IP Law (CiTiP), KU Leuven
  • Afonso Ferreira, Research Director, CNRS, Toulouse Institute for Computer Sciences (IRIT)
  • Carolina Polito, Research Assistant, CEPS


1st meeting – 10 September 2019: What is the state of the interplay between AI & cybersecurity? Stocktaking with experts from various backgrounds. Presentations by the private sector and the European Commission. – Draft agenda

2nd meeting – 29 October 2019: AI for Cybersecurity – Draft agenda:
– AI empowerments of different actors.
– System robustness, resilience and response: technological, ethical and governance issues

3rd meeting – 5 December 2019: Cybersecurity for AI – Draft agenda:
– AI and better information and real time threat data sharing
– AI and safety (data poisoning, adversarial examples)
– AI misuses vs. malicious uses of AI
– AI and the search for undiscovered exploits and vulnerabilities

4th meeting – 22 January 2020: Presentation of the Task Force's evaluation of the AI HLEG Trustworthy AI Assessment List (pilot version) to members of the European Commission – Draft agenda

For further questions, please do not hesitate to contact Lorenzo Pupillo by email at: lorenzo.pupillo@ceps.eu.

Lorenzo Pupillo

Associate Senior Research Fellow and Head of the Cybersecurity@CEPS Initiative

+32 (0)2 229 39 68

Carolina Polito

Associate Research Assistant