The Secure Program Synthesis Fellowship
About Apart Research
Apart Research is an AI safety nonprofit building the global talent pipeline for the AI safety ecosystem. Our remote hackathons and fellowships have become a primary on-ramp into empirical AI safety. In just the last two years, we've engaged 7,000+ participants across 50+ global research Sprints (hackathons), and we have published 30+ peer-reviewed AI safety papers, including two oral spotlights at ICLR 2025 (top 1.8% of accepted papers), through our fellowships.
Our alumni now work at Anthropic, Google DeepMind, the UK AI Security Institute, and dozens of other labs, government bodies, and AI safety orgs.
We're at an inflection point. Our hackathons currently draw around 400 signups each, and we want to 5x that by the end of 2026 while launching events faster and more often. Achieving this requires investing in the engineering and operational backbone needed to run more and larger events at higher quality.
We are looking for an AI engineer excited to make high-impact events happen (ideally with real events experience), building the infrastructure and tooling that make the manual work disappear.
Diversity and inclusion
We're committed to building a diverse and inclusive team. We particularly encourage applications from women, people of colour, and neurodivergent applicants. Reasonable adjustments are available at any stage of the process. Contact careers@apartresearch.com if you need any.
Specification Elicitation
Develop tools and workflows for extracting formal specifications from ambiguous, distributed, or implicit sources (e.g., documentation, legacy systems, human stakeholders). Projects may include structured editors, GUIs, or pipelines that translate informal requirements into formal representations (e.g., Lean), building on approaches like “SpecIDE.”
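To make this concrete, here is a minimal, hypothetical sketch of what such a formal representation might look like in Lean 4. It states the informal requirement "a sort function returns its input in non-decreasing order, without adding or dropping elements" as a formal specification (the names `Sorted` and `SortSpec` are illustrative; `List.Perm` is assumed from Lean's standard library/Mathlib):

```lean
-- "The output is in non-decreasing order" as an inductive predicate.
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

-- A candidate `sort` meets the spec if its output is sorted and is a
-- permutation of the input (no elements added or dropped).
def SortSpec (sort : List Nat → List Nat) : Prop :=
  ∀ xs : List Nat, Sorted (sort xs) ∧ (sort xs).Perm xs
```

A specification-elicitation tool in this area would aim to produce artifacts like `SortSpec` from informal sources such as documentation or stakeholder interviews.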
Specification Validation
Design methods to verify that extracted specifications are correct and complete. This includes techniques for testing, cross-checking, or formally validating whether a specification accurately captures intended system behavior.
Spec-Driven Development & Evaluation
Explore workflows where specifications generate multiple candidate implementations, which are then evaluated against each other. Projects may involve building infrastructure to compare robustness, correctness, or performance across implementations derived from the same spec.
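As a rough illustration of this workflow, the following is a minimal Python sketch of differential evaluation: two hypothetical candidate implementations derived from the same informal spec ("return the k largest elements in descending order") are checked against an executable version of the spec and against each other. All names here are illustrative, not part of any existing pipeline:

```python
import random

# Two hypothetical candidate implementations of the same informal spec:
# "return the k largest elements of xs in descending order".
def candidate_a(xs, k):
    return sorted(xs, reverse=True)[:k]

def candidate_b(xs, k):
    out, pool = [], list(xs)
    for _ in range(min(k, len(pool))):
        m = max(pool)
        out.append(m)
        pool.remove(m)
    return out

def check_spec(impl, xs, k):
    """Executable spec: output is descending, has length min(k, len(xs)),
    and every element comes from the input."""
    ys = impl(list(xs), k)
    return (ys == sorted(ys, reverse=True)
            and len(ys) == min(k, len(xs))
            and all(y in xs for y in ys))

def differential_test(impls, trials=200):
    """Run random inputs through all candidates; flag any spec violation
    or disagreement between implementations."""
    for _ in range(trials):
        xs = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
        k = random.randint(0, 12)
        outputs = [impl(list(xs), k) for impl in impls]
        for impl in impls:
            assert check_spec(impl, xs, k), (impl.__name__, xs, k)
        assert all(o == outputs[0] for o in outputs), (xs, k, outputs)
    return True
```

Fellowship projects in this area would build far richer versions of this loop, e.g. comparing robustness and performance, not just input-output agreement, across implementations generated from one spec.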
Adversarial Robustness for FM & QA Tools
Investigate failure modes in formal methods pipelines, LLM-assisted tooling, and QA systems. Projects may focus on adversarial inputs, robustness guarantees, or identifying weaknesses in automated reasoning and verification systems.
Details
📅 Duration: 4 months (June - September 2026)
🏢 Research Teams: Expert Mentor + Apart Research Project Manager + Mentees
⏱️ Time Commitment: 8-30 hours a week per person
📍 Locations: Remote
📋 Format: Two research stages: Workshop paper (or equivalent) → Demo day/Conference paper
🤝 Support:
Research guidance
Compute
APIs
Community
Demo day/Conference Travel funding
Prizes
Structure
Gross world lines of code (LoC) are skyrocketing due to AI. By default, we have no way of knowing whether the code we're vibecoding does what we think it does. Secure program synthesis is the intervention: applying advanced software correctness techniques to all the code coming out of AIs.
This fellowship offers part-time research opportunities on mentor-led projects at the intersection of formal methods, AI systems, and security. Participants work in small teams to tackle challenging, underspecified problems in specification, validation, and adversarial robustness.
For background on the framing, see How to Solve Secure Program Synthesis, The Scalable Formal Oversight Research Program, and Lies, Damned Lies, and Proofs.
A joint initiative of Apart Research and Atlas Computing.
Key Dates
April 28th - May 9th 2026: Mentor Applications open
May 12th 2026: Mentors Announced
May 13th 2026: Participant Applications open
May 26th 2026: Participant Applications Close
June 9th 2026: Participants Announced
June 15th 2026: Projects Begin
July/August 2026: Mid-Project Presentations & Milestone Submissions
August/September 2026: Final submissions
September/October 2026: Demo day