GSAIS

Guaranteed Safe AI Summit

About the March 2025 Summit

The Guaranteed Safe AI (GSAI) Summit was a two-day gathering held in San Francisco on March 8-9, 2025. Leading researchers, funders, and industry experts came together to identify bottlenecks in the guaranteed safe AI paradigm and to advance research, collaboration, and progress in the field. The agenda included multiple breakout sessions for structured discussions, unstructured time for collaboration, and presentations from Yoshua Bengio, Stuart Russell, Luke Ong, Max Tegmark, Dawn Song, Steve Omohundro, Clark Barrett, davidad, and Sanjit Seshia.

Given attendee interest, we are looking into organizing a subsequent event in the fall.

The Guaranteed Safe AI Framework

Guaranteed Safe AI (GSAI) is an emerging framework that provides quantifiable guarantees of AI safety.

As AI systems become more powerful and autonomous, traditional empirical safety assessment methods like red-teaming and evals become increasingly insufficient to mitigate risks from misalignment or misuse.

Approaches that follow the GSAI framework aim to provide the level of quantitative safety guarantees we've come to expect from other engineered systems. This is achieved with three core components: an auditable, separable world model; a safety specification describing which portions of the state space are "safe" and "unsafe"; and a verifier, which produces an auditable proof certificate that the output satisfies the safety specification in the world model.
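To make this decomposition concrete, the sketch below models the three components as minimal Python interfaces. It is an illustration only, not an implementation from the GSAI paper; every name in it (WorldModel, SafetySpec, Verifier, ProofCertificate, gated_execute) is hypothetical.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Protocol


class WorldModel(Protocol):
    """Auditable, separable model of how the environment evolves."""

    def step(self, state: object, action: object) -> object:
        """Predict the next state given the current state and a proposed action."""
        ...


class SafetySpec(Protocol):
    """Describes which portions of the state space are 'safe' and 'unsafe'."""

    def is_safe(self, state: object) -> bool:
        ...


@dataclass
class ProofCertificate:
    """Auditable evidence that an output satisfies the spec in the world model."""

    claim: str
    steps: list[str]


class Verifier(Protocol):
    """Checks a proposed output against the spec, relative to the world model."""

    def check(
        self, model: WorldModel, spec: SafetySpec, action: object
    ) -> ProofCertificate | None:
        """Return a proof certificate if the action is provably safe, else None."""
        ...


def gated_execute(
    model: WorldModel,
    spec: SafetySpec,
    verifier: Verifier,
    action: object,
    actuate,
) -> ProofCertificate:
    """Actuate an output only if it comes with a valid proof certificate."""
    cert = verifier.check(model, spec, action)
    if cert is None:
        raise RuntimeError("no safety proof found; action blocked")
    actuate(action)
    return cert  # the certificate can be retained and audited later
```

The design point the sketch tries to capture is that the actuator runs only when a certificate exists, so safety rests on the verifier and the specification rather than on trusting the system that proposed the action.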

For more on GSAI, take a look at the seminal paper Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems. And if you'd like to join or support this research effort, we also have a public email list.

Organizers

Evan Miyazono

Evan is the founder & CEO of Atlas Computing, an R&D nonprofit prototyping AI-powered tools to generate formal specifications. Evan previously led initiatives at Protocol Labs, an innovation network driving breakthroughs in computing to push humanity forward.

Nora Ammann

Nora focuses on developing quantitative safety guarantees for AI systems within the Safeguarded AI programme at the UK's Advanced Research and Invention Agency (ARIA). She also contributes to research on Flexible Hardware-Enabled Guarantees (flexHEG) for AI governance. She previously co-founded and led PIBBSS, a research initiative fostering interdisciplinary AI safety research.

Ben Goldhaber

Ben is the Projects Lead at the Future of Life Foundation. Previously, he was a cofounder and board member of the Quantified Uncertainty Research Institute and the director of FAR AI, an AI safety research lab that incubates and accelerates research agendas that are too resource-intensive for academia but not yet ready for commercialisation by industry.

Sponsors

The Guaranteed Safe AI Summit was made possible through the generous support of The Beneficial AI Foundation, and was kindly hosted by FutureHouse.