Agenda

09:30 - 10:00: Registration, welcome coffee & tea

10:00 - 10:30: Opening Ceremonies

10:30 - 11:00: Fazl Barez, “What does it mean to understand, in the age of AGI?”

11:00 - 11:30: Victoria Krakovna, “Evaluating Scheming Propensity with Realistic Honeypots” (not broadcast)

11:30 - 11:45: Coffee break 1

11:45 - 12:15: Gary Marcus, “LLMs are not the way to alignment”

12:15 - 13:00: AI Safety Panel, “AI Safety in the age of AI reasoning”

Panelists: Gary Marcus (New York University), Victoria Krakovna (Google DeepMind, Future of Life Institute), Andrei Krutikov (Noeon Research), Fazl Barez (University of Oxford).

Moderator: Blaine Rogers (Noeon Research)

13:00 - 14:00: Lunch

14:00 - 14:30: Sara Bernardini, “Designing Safe, Risk-Aware Autonomous Systems”

14:30 - 15:00: Alessio Lomuscio, “Robustness Verification of Machine Learning Systems”

15:00 - 15:15: Coffee break 2

15:15 - 16:00: AIGI Panel, “Latest AI Governance Developments in the US, China & EU”

Panelists: Luise Eder, Robert Trager, Nicholas Caputo & Miro Pluckebaum

Moderator: Lisa Klaassen

16:00 - 16:30: Markus Anderljung, “Reflections on Frontier AI Regulation”

16:30 - 16:45: Coffee break 3

16:45 - 17:15: Seán Ó hÉigeartaigh, “Prospects for West-China cooperation on AI safety”

17:15 - 17:45: Oliver Sourbut, “Risk Modelling—and Safety Engineering?—for Loss of Control”

17:45 - 18:15: Closing Ceremonies

Talk titles TBA

Agenda subject to change

Poster Sessions (Concurrent with talks)

10:30 - 13:00: Poster Session 1

14:00 - 16:30: Poster Session 2