Introducing The Option Space
Exploring rapid solutions for AI geopolitics, competition and cooperation
While leading AI companies are promising AGI timelines measured in years, not decades, international AI governance still operates on diplomatic and academic timescales. International AI summits take place annually, and most ideas take months or years to germinate.
Anyone who's had a research project stretch from three months into a full year knows this particular mix of regret and resignation — watching your findings slowly drift toward irrelevance as the AI industry accelerates past you. The irony is sharp: sometimes, the longer you take to get it right, the harder it is to stay relevant.
When crises hit, the ideas that are lying around are the ones that get picked up. Given the pace of AI development, we may face governance challenges sooner than expected.
Megapapers, extensive reports and deep analysis are critical, but alongside them we also need quicker ideation, faster feedback loops and a greater volume of public work that informs our collective intuitions about how best to manage AI, geopolitics, cooperation and competition.
This is why we are setting up The Option Space – an exploratory and experimental platform for researchers to share initial ideas about AI, geopolitics and international coordination. We're aiming for that sweet spot: not quite an arXiv paper, but quite a bit more than a tweet. The goal is to get more ideas into circulation so that when critical moments arrive, decision-makers can draw from a broader menu of options.
The blog will feature historical case studies of successful coordination, current challenges in AI geopolitics, proposals for managing transformative technological change, and much more.
Upcoming work for September and October will attempt to answer:
When have warning shots led to international agreements?
How have technology companies demonstrated the security and safety of their code to governments?
What lessons can we draw from agreements like the Open Skies Treaty?
Good ideas need time to mature, but crises don't wait. We're betting that rapid iteration will generate useful ideas, so that we have the right solutions when they're needed most.
Do you have ideas relating to AI geopolitics and coordination? Send us your pitch.
The Option Space is maintained by the Safe AI Forum (SAIF), a US 501(c)(3) nonprofit facilitating international cooperation on extreme AI risks. Views expressed represent individual authors' perspectives, not official SAIF positions.

