Agile Rabbit: How Safe is Artificial Intelligence?

Tue 26 Mar 2024







Whether it’s a deepfake of Taylor Swift or Rishi Sunak’s summit at Bletchley Park, news about Artificial Intelligence seems inescapable. And the field is advancing at an unprecedented rate. With a group of world experts, we’re getting behind the headlines to look at what’s really happening in AI safety, particularly the role that law and politics play in providing regulation and reassurance.

The AI landscape is complex, ranging from positive impacts, such as advances in healthcare, to negative ones that affect real-life opportunities and services. So what are the risks of AI, as well as its transformative opportunities?

Come and join us to discuss, make comments, and ask questions.


This event is in partnership with the University of Exeter’s Institute for Data Science and Artificial Intelligence and the Alan Turing Institute.


Sabina Leonelli is a professor in philosophy and history of science at the University of Exeter, where she co-directs the Centre for the Study of the Life Sciences (Egenis). She gained her PhD at the Vrije Universiteit Amsterdam, following an MSc in history and philosophy of science at the London School of Economics and a BSc (hons) in history, philosophy and social studies of science at University College London.

Her research focuses on the methods and assumptions involved in the use of big data for discovery; the challenges involved in the extraction of knowledge from digital infrastructures, and the implications of choices in data curation for the outputs and uses of science and technology; the role of the open science movement within current landscapes of knowledge production, including concerns around inequality; and the status and history of experimental organisms as scientific models and data sources. She has published widely across a variety of disciplines, including philosophy, history, social studies of science, data science and biology, and is active in science policy, particularly as an adviser on Open Science implementation for the European Commission and on the steering boards of various research data infrastructures.


Atoosa is a Chancellor’s Fellow at the University of Edinburgh’s Department of Philosophy and the Futures Institute. She previously held the positions of Visiting Research Scientist at Google DeepMind in London and Postdoctoral Fellow at the Humanizing Machine Intelligence Grand Challenge at the Australian National University. Her research and teaching focus on the ethics and philosophy of AI and computing, the roles of mathematics in empirical sciences and normative inquiry, and the modelling of morality.

Atoosa holds a Ph.D. in Philosophy of Science and Technology from the University of Toronto and a Ph.D. in Mathematics from the Ecole Polytechnique de Montreal. At Montreal, she was part of the Group for Research in Decision Analysis (GERAD) that works on developing mathematical models and computational algorithms for large-scale, data-driven decision problems.


As Programme Director of AI: Futures and Responsibility, Seán’s research interests include emerging technologies, global risk, science and technology policy, and horizon-scanning and foresight. He focuses specifically on the impacts of artificial intelligence on societies. He is the founding Executive Director of the University of Cambridge’s Centre for the Study of Existential Risk and has developed the centre’s research vision in collaboration with its other founders.

Seán led research programmes on emerging technologies and AI at the Future of Humanity Institute (Oxford) from 2011 to 2015, and in 2015 co-developed both the Strategic AI Research Centre (a Cambridge-Oxford collaboration) and the Leverhulme Centre for the Future of Intelligence (a Cambridge-Oxford-Imperial-Berkeley collaboration). Prior to Cambridge, Seán established the FHI-Amlin Collaboration on Systemic Risk. He has a PhD in genomics from Trinity College Dublin.


Markus is the Head of Policy at GovAI. He leads his team’s research on how governments, AI companies, and other stakeholders can best ensure the safe and beneficial development of transformative AI systems. In his work, Markus focuses on the impacts and governance of the most capable AI systems available today, and how society can prepare for even more capable AI systems in the future. His research addresses whether regulations should be imposed on frontier AI systems.

Markus is an Adjunct Fellow at the Center for a New American Security and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was seconded to the UK Cabinet Office as a Senior AI Policy Specialist, advising on the UK’s regulatory approach to AI. Markus’s research has been published in Science, Nature Machine Intelligence, the International Joint Conference on AI, and the Journal of Artificial Intelligence Research, among other venues.


Rebecca Kesby is a freelance journalist, broadcaster and live host. She joined the BBC’s local radio in 1996 and has been at the BBC World Service since 2000.

Evidence-based, accurate, fair and responsible journalism is front and centre in Rebecca’s work. She likes digging into the detail and challenging the discourse with rigour, context and substance. She has reported from Africa, China and the US, and is also an audio and visual filmmaker.


Agile Rabbit is on an adventure to make creative events and experiences that engage everyone. The topics that matter to us are the natural and scientific world, global affairs, and their relationship to art and culture.
