MATS Autumn 2026 Fellowship

The autumn cohort brings together fellows from around the world for an intensive, ten-week research program hosted in Berkeley, California, and London, UK. Fellows work closely with experienced mentors, who are leading researchers in AI alignment, transparency, and security, to develop high-impact research projects. For the first time, we are also launching new biosecurity and field-building tracks.

The program combines structured learning with independent inquiry and provides the following support:

  • Mentorship from world-class researchers

  • Dedicated research management to support scoping and execution

  • Seminars, workshops, and guest lectures across AI safety disciplines

  • A collaborative research environment, with comfortable office space and daily interaction

  • A $12,500 stipend, a $20,000 compute budget, free housing, catered meals, covered travel, and J-1 visa sponsorship where required

  • An extension phase in which fellows can continue their research for a further 6 to 12 months with ongoing funding ($7,680 monthly stipend, $2,000/week compute budget), mentorship, and community support; over the past two cohorts, around 80% of applicants were admitted

The program concludes with a Research Symposium, where fellows present their work to an audience of researchers, practitioners, and community members. The Autumn 2026 cohort runs from 28th September to 4th December 2026 in Berkeley and London. Applications open in early May 2026 and close on 7th June 2026.


The MATS Program

MATS is a research fellowship designed to train and support researchers working on AI alignment, interpretability, governance, and security. Fellows collaborate with world-class mentors, receive dedicated research management support, and join a vibrant community in Berkeley and London focused on advancing safe and reliable AI. The program provides the structure, resources, and mentorship needed to produce impactful research and launch long-term careers in AI safety.

Impact

We have trained 527 fellows to date, with around 80% of pre-2025 alumni now working in AI safety at organisations such as Anthropic, Google DeepMind, OpenAI, and the UK AI Security Institute. Fellows have co-authored 160+ papers (7,800+ citations), and around 10% of alumni have co-founded 30+ AI safety startups, including Apollo Research and Timaeus.

MATS alumni have gone on to contribute to leading AI labs, nonprofits, and academic institutions. Many continue their research independently or through MATS extension programs. Together, their work spans interpretability, model evaluations, governance, safety benchmarks, agent behaviour analysis, theory, and more, helping strengthen global capacity for AI safety research.