When Have Warning Shots Led to International Agreements?
Crises alone do not always lead to cooperation. What conditions might matter for international AI governance?
Summary
If a rogue AI system shut down electrical grids across multiple countries for a week before being contained, would it be enough to demonstrate the potential for catastrophic risk? Could it lead to coordination between countries to reduce AI risk?
Many AI governance experts expect that warning shots—events demonstrating extreme future AI risks—could lead to policy change by making abstract risks seem concrete and salient. The reasoning goes: once policymakers witness real AI-caused harm or capability demonstrations that clearly suggest future dangers, the urgency of the shared threat will overcome political barriers and drive coordinated action.
This post looks at a broad set of historical cases including Chernobyl, ozone layer depletion, and COVID-19 to test whether warning shots tend to lead to international agreements. It finds that warning shots seem more likely to translate into international agreements when specific conditions are already in place:
Pre-existing institutional capacity: Established bodies with technical expertise and diplomatic legitimacy to convene nations and draft treaties quickly (like the IAEA for nuclear).
Clear attribution: Unambiguous evidence of causation.
Transnational harm: Damage crossing borders and affecting multiple nations directly, creating genuinely shared stakes.
Political alignment: Aligned incentives that overcome typical barriers to international cooperation (e.g., geopolitical competition).
Available solutions: Ready-made technical or policy responses that can be quickly deployed.
If AI warning shots follow historical patterns, they may not readily lead to international agreements. For AI governance, a robust strategy would be to start building institutional and political readiness before major crises happen — while learning from potential smaller warning shots along the way.
Table of Contents
Introduction
The idea of a “warning shot”—an event serious enough to bring attention to future risks, yet not globally catastrophic—is widely referenced1 in AI governance discourse.
An argument for the importance of a warning shot usually goes something like this: AI risk is sufficiently abstract, speculative, and counterintuitive that many policymakers and the public will struggle to take it seriously until something concrete happens. Unlike more familiar risks (e.g., AI bias or discrimination), the prospect of AI systems causing large-scale extreme harm can seem too “sci-fi” or remote to warrant urgent action. A clear instance of AI-caused harm could therefore be necessary to make the risks visceral and credible enough to generate the urgency and legitimacy necessary for international action. Such demonstrations of risk have worked in the past. For example, the 1986 Chernobyl disaster produced binding international treaties within months.
However, there is considerable ambiguity regarding both the characteristics of a warning shot, and the specific mechanism by which it would lead to desirable policy changes. Recent AI incidents have not led to international action despite evidence that systems strategically underperform when their abilities are being evaluated and cheat to achieve their objectives, as well as reports of sycophantic responses leading to harmful user behaviour. So, what conditions would have a chance of leading to action? Large-scale physical harm that crossed multiple borders? A terrorist using an AI-assisted bioweapon? We need greater clarity regarding both the precedents for a warning shot leading to action, and the mechanism by which this would occur.
In this post, the term "warning shot" is used broadly to include any event that makes extreme future risks salient to policymakers and the public—whether through actual harm (like Chernobyl), communication of potential harm (like the ozone hole), or capability demonstration (ChatGPT arguably fits this criterion).
We focus specifically on when such events lead to international agreements2, and what conditions may need to be met for an AI warning shot to spur similar action. While some international agreements like the Antarctic Treaty System and Outer Space Treaty were negotiated before major disasters, this might be more challenging for international AI agreements due to domestic competition, rising securitisation, and policy polarisation.
This post sets aside the question of whether AI warning shots will occur in the first place. Given the speed at which key AI capabilities are increasing, evidence of concerning behaviour in current systems, and the potential for misuse (e.g., in potentially aiding the creation of biological weapons), a warning shot seems increasingly likely to occur at some point.
Why Don't Crises Always Lead to Policy Changes?
A useful framework for understanding this is political scientist John Kingdon's Multiple Streams Framework.3 It conceptualises policy change as taking place when three conditions align:
A problem becomes undeniable - through shocking statistics, dramatic events, or clear policy failures.
Solutions are ready to go - experts have already developed workable proposals that seem feasible and affordable.
Politics are favourable - public mood has shifted, new leaders are in power, or interest groups line up in support.
When all three align, they create brief “policy windows” where major change becomes possible. But these windows require “policy entrepreneurs” — people or groups ready to push their solutions when the moment is right. A 2019 essay has already applied this framework to AI governance, emphasising that policy windows are rare and fragile. Even when such a window opens, policy change depends on policy entrepreneurs who can shape decision-makers' interpretation of ambiguous information through framing and other tactics.
We can apply this framework to analyse the “ChatGPT moment”. The release of ChatGPT in November 2022 opened a “policy window” by making AI capabilities suddenly salient to policymakers and the public through rapid adoption (100 million users in two months). The window allowed various policy entrepreneurs to push for domestic and international change — thousands of researchers signed open letters about AI risks, amendments were made to the EU AI Act, and new AI Safety Institutes were set up in multiple countries.
However, the impact of the “ChatGPT moment” was also limited in some ways — it did not lead to binding international agreements (e.g., the UK's Bletchley Park summit produced only non-binding declarations). Nonetheless, the release of ChatGPT, serving as a type of early warning shot, did create political space for AI governance discussions and institution-building that continues to influence policy, with many current AI governance initiatives (e.g., national AI strategies and ongoing international discussions) tracing their origins back to the “ChatGPT moment.”
To understand what conditions enable warning shots to generate international agreements, we turn to historical cases where such cooperation did and did not emerge.
Learning from Historical Disasters and Crises
How does a warning shot actually lead to international action? Successful international cooperation seems to depend on five key conditions being in place before the moment of crisis/widespread attention occurs. When they are missing, even severe incidents may produce suboptimal responses.
The case studies below were chosen because they are well-documented, discrete, relatively frequently referenced in AI governance discussions, and vary in terms of how much international cooperation followed. Given the lack of consensus on which precedents, if any, are the most relevant for AI, this post covers a broad cross-section of domains such as public health, nuclear and environmental policy. There are several other possible case studies for warning shots that were not included but could be the subject of future work.
Based on the historical cases analysed below, an event seems most likely to lead to international cooperation (in the form of agreements/treaties) when it meets the five conditions outlined in the summary above: pre-existing institutional capacity, clear attribution, transnational harm, political alignment, and available solutions.
Abbreviations used in the case studies:
TMI — Three Mile Island
IAEA — International Atomic Energy Agency
UNEP — United Nations Environment Programme
OECD — Organisation for Economic Co-operation and Development
Three Mile Island (TMI)
Summary: TMI generated some international attention, but no agreements because there was no transnational harm and the incident only revealed domestic regulatory gaps rather than international coordination failures. However, it did inadvertently create governance models that became templates for international cooperation after Chernobyl.
On 28 March 1979, a cooling malfunction, compounded by mechanical failures and operator errors, led to a partial meltdown at the Three Mile Island (TMI) Unit 2 nuclear plant in Pennsylvania. While radioactive release was negligible and no immediate deaths occurred, the accident's domestic impact was immense.4
It demonstrated how the perception of risk, amplified by intense media coverage, conflicting official statements, and subsequent revelations of safety violations and cover-ups, can drive significant policy change. Around 140,000 people voluntarily evacuated, fuelled by fears and the coincidental release of the film “The China Syndrome” just 12 days prior, which depicted a similar fictional event and alleged cover-up.
The consequences for the US nuclear industry—and people's perception of it—were severe. Public support became split on the issue, many planned reactors were cancelled between 1980 and 1984, and no new commercial reactors were approved until 2012. The report of the Kemeny Commission, established by President Carter, prompted major reforms at the Nuclear Regulatory Commission (NRC), focusing on operator training, emergency planning, and human factors engineering. The industry itself formed the Institute of Nuclear Power Operations (INPO) to enhance safety standards. These changes, while improving safety, also increased regulatory costs, leaving a lasting mark on the US nuclear landscape driven more by perceived vulnerability than direct radiological harm.
How Did the Rest of the World React?
The TMI accident garnered significant international attention, with the US Department of State sharing NRC reports with numerous countries, including those like Ireland and Venezuela considering nuclear power plants. Reactions varied: France and the Netherlands showed high interest, sending observers; Japan experienced “intense” public reaction, though officials remained calm and focused; Germany planned safety meetings and saw anti-nuclear demonstrations; and in Sweden, where nuclear power was already contentious, TMI made it an even more salient political topic. Other nations, from Bermuda to the Philippines and Cuba, also registered concern.
However, responses were largely limited to information gathering, observation, and domestic policy reviews or debates, rather than advocacy for new international agreements. “Lessons from TMI were taken on board by nuclear safety authorities outside the US,” leading to worldwide changes in emergency planning and operator training, but not binding treaties.
Foreign interest in the Kemeny Commission's report did not translate into collective international action, suggesting TMI was viewed more as an event to learn from than as a shared global crisis demanding immediate, coordinated governance, in contrast to the later Chernobyl disaster.
Following TMI, the International Atomic Energy Agency (IAEA) focused on strengthening existing programmes, not creating new treaties. The IAEA reassessed its Nuclear Safety Standards (NUSS) Programme (a suite of codes and safety guides), though the programme remained voluntary.
TMI also led to the launch of the IAEA’s Operational Safety Review Team (OSART) system in 1982. OSART missions, conducted at a host nation’s request, provided in-depth reviews of operational safety at specific plants, aiming to improve safety and promote best practices internationally through information exchange, not to enforce compliance. While not leading to treaties, these IAEA initiatives strengthened mechanisms for international cooperation that proved valuable after Chernobyl.
How TMI Inadvertently Led to Future International Cooperation
Despite some international attention, TMI did not generate immediate international agreements. The accident revealed regulatory failures America could fix internally through expanded NRC rules, not gaps in international law requiring treaties. Unlike Chernobyl, which, for example, would expose the absence of any obligation to warn neighbours about radioactive releases, TMI pointed to no missing international framework.
While TMI opened a policy window within the US, it did not immediately lead to international cooperation—without transnational harm, the problem was seen as domestic, solutions weren't internationally relevant, and political incentives for cooperation were absent. A parallel could perhaps be drawn to the 1962 Cuban Missile Crisis: its first responses were narrowly bilateral—the Moscow-Washington hotline (June 1963) and the Limited Test-Ban Treaty (August 1963)—yet those measures became the scaffolding for later multilateral accords such as the 1968 NPT (Non-Proliferation Treaty) and the 1972 SALT I agreement (Strategic Arms Limitation Talks), showing how a single near-catastrophe seems to have bootstrapped international governance capacity in stages.
TMI inadvertently created a governance model that would prove integral when a transnational disaster (Chernobyl) required a framework for international cooperation. The accident led to the establishment of the US Institute of Nuclear Power Operations (INPO) in December 1979, funded by the US nuclear industry itself. The INPO developed operational standards, conducted rigorous plant inspections, and established comprehensive performance tracking systems. Rather than relying solely on government oversight, INPO created peer pressure mechanisms through detailed evaluations and shared performance metrics among all member plants. This voluntary but professionally enforced system represented a new form of governance that was neither pure self-regulation nor traditional government regulation.5
This model proved so effective that when Chernobyl happened seven years later, it became the template for international nuclear governance. The 1986 accident led to the formation of the World Association of Nuclear Operators (WANO), which drew direct inspiration from INPO's approach, implementing similar peer evaluations, performance data collection and analysis, standard-setting, and conferences. Unlike the binding treaties that also emerged from Chernobyl, WANO represented the internationalisation of INPO’s voluntary-but-rigorous approach.
This progression (from TMI catalysing INPO, to INPO inspiring WANO's creation after Chernobyl) seems to show how warning shots can sometimes build upon each other. TMI did not lead to international agreements, but it did create governance models that would lie dormant at the international level until Chernobyl provided the crisis necessary to globalise them.
The Chernobyl Disaster
Summary: Chernobyl generated international agreements within months because there was clear transnational harm, and the IAEA already existed as a trusted institution with relevant technical expertise that could convene countries to draft treaties quickly. Without this pre-existing institutional capacity, achieving binding conventions would have likely taken far longer.
The explosion and subsequent meltdown at Chernobyl's reactor 4 on 26 April 1986 resulted in the release of substantial quantities of radioactive material. Importantly, radioactive fallout spread extensively across Europe, with contamination detected from Belarus and Ukraine to Scandinavia and Germany. This created immediate, tangible cross-border harms such as environmental contamination, agricultural losses, and mass evacuations (approximately 116,000 people initially, with a further 210,000 resettled after 1990).
The Soviet Union’s initial attempts to conceal the disaster seemed to have further compounded its effect as an international warning shot. Swedish nuclear workers at the Forsmark plant detected abnormally high radiation levels on 28 April, two days after the explosion. Only after Swedish pressure did Moscow acknowledge that an accident had occurred, and even then Soviet leadership minimised the severity, claiming the situation was under control. The full extent of the damage wasn’t disclosed until 25 August, three months later, in a meeting with the UN and IAEA. Chernobyl’s rapid translation into international agreements occurred alongside several conditions:
Chernobyl provided an undeniable, large-scale demonstration of the concrete global risks inherent in nuclear accidents, with its transnational radioactive fallout making future catastrophic possibilities far more tangible.
This concurrently created a situation where the severity and widespread impact elevated international concern to a point that extreme measures — the speedy negotiation of binding international conventions on Early Notification and Assistance — became politically feasible.
As a warning shot6, Chernobyl dramatically and clearly pointed to the problem of inadequate international nuclear accident response. It created an opportunity where ready-made solutions (IAEA’s existing capacity to draft treaties and provide expertise) met favourable political conditions (international pressure and Soviet glasnost7). This allowed the IAEA leadership under Director General Blix and concerned member states to swiftly enact new international law.
It was arguably this combination that explains Chernobyl’s impact in driving swift international legal and institutional change in nuclear safety, distinguishing it from other serious incidents like the 2011 Fukushima accident, which also rated INES-7 (the maximum on the International Nuclear and Radiological Event Scale) but produced only a non-binding Action Plan.
“11th Meeting of Representatives under Early Notification and Assistance Conventions, Vienna, June 2022” by Dean Calma/IAEA, CC BY 2.0
Could This Have Happened Without the IAEA?
Probably not, or at least much less efficiently. Without the IAEA, states would likely have needed to create new forums from scratch, likely taking years rather than months. This is because the first challenge would have been convening hostile parties during the Cold War. The Soviet Union had just caused the disaster—would they attend a Western-convened conference? Would Western states trust a Soviet-convened one? The IAEA’s status as a UN-affiliated, non-aligned agency gave it unique convening power no single state possessed. Crucially, the USSR was already an IAEA member with established relationships and procedural buy-in, making cooperation through this existing channel far more likely than through any newly created forum.
History shows institution-building takes years. The Vienna Convention on the Law of Treaties took 20 years from UN proposal (1949) to entry into force (1969); the UN Convention on the Law of the Sea took nine years just to negotiate (1973-1982). While these were admittedly more complex treaties than emergency notification protocols, establishing new international institutions from scratch typically requires months—if not years—even for simpler agreements, unless there are exceptional circumstances and streamlined procedures (for example, the Mine Ban Treaty took 14 months despite using innovative fast-track diplomacy).
Technical credibility would be another obstacle. The IAEA had spent decades building an expert network (like its International Nuclear Safety Advisory Group - INSAG, which played a key role in analysing Chernobyl) trusted by both blocs. The IAEA Secretariat had a combination of nuclear technical expertise and legal treaty-drafting capabilities — competencies not typically found together in other international bodies. Without this, states would need to identify experts, verify credentials, ensure all parties accepted their legitimacy, and develop treaty-drafting procedures—all major hurdles in the midst of the Cold War.
Alternative institutions also had their own limitations:
Other UN bodies: the Security Council could be paralysed by vetoes; the General Assembly lacked nuclear expertise and rapid treaty-drafting procedures. The capacity of the IAEA Secretariat and its expert groups to rapidly draft the Chernobyl conventions was not something the broader General Assembly possessed.
Office of the United Nations Disaster Relief Coordinator (UNDRO) (now OCHA): Created 1971, focused on disaster relief/humanitarian assistance, not technical standards or notification protocols.
UNEP: Focused on environmental expertise rather than nuclear operations, and not built for urgent action (e.g., the Montreal Protocol took 14 years from first warnings to agreement).
OECD: Western membership excluded the Soviet Union.
Without the IAEA, the international response would likely have been far more difficult to coordinate. Based on how states typically handle transnational environmental issues without established forums, we might expect some combination of bilateral negotiations between affected states (e.g., USSR-Sweden, USSR-Germany) and potentially regional responses through bodies like the European Community. Any attempt to work through the UN General Assembly would face the typical challenges of that body – the (relative) lack of technical expertise, slower procedures, and tendency toward non-binding resolutions rather than treaties.
Whether the Soviet Union would have participated as readily in alternative forums during the Cold War remains uncertain. What seems clear is that achieving two binding conventions within five months – as actually happened through the IAEA – would have been far less likely through any alternative path.
Ozone Layer Depletion and the Montreal Protocol
Summary: The Montreal Protocol was effective because when the ozone hole was discovered in 1985, nearly everything (institutions, solutions, and political support) needed for action was already in place. Crises don't automatically create cooperation; you need the groundwork laid beforehand.
The 1987 Montreal Protocol on Substances That Deplete the Ozone Layer stands as a landmark achievement in international environmental law, often hailed as the most successful multilateral environmental agreement in history. It is frequently cited as a model for global cooperation, having achieved universal ratification and phased out 98% of ozone-depleting substances (ODS) globally compared to 1990 levels. However, viewing this success as an inevitable response to a clear and present danger would likely be a misinterpretation. The journey from a scientific hypothesis in 1974 to a binding global treaty was a precarious, multi-decade process that could have failed at numerous points.
The problem of ozone depletion emerged in 1974 with the Molina-Rowland hypothesis, which framed the threat in terms of human health. For 10 years, the chemical industry successfully lobbied against regulation until the 1985 discovery of the Antarctic "ozone hole" essentially acted as a warning shot that created public and political urgency. This event was only effective because a mature set of solutions was waiting: the UN Environment Programme (UNEP) had already established the necessary institutional framework; chemical giant DuPont had developed profitable CFC alternatives, turning a key opponent into a supporter of regulation; and the proposed treaty included sophisticated governance tools like a "start-and-strengthen" design, trade sanctions, and a Multilateral Fund for developing nations.
These prepared solutions, however, still required a favourable political climate. In the mid-1980s, this condition was fulfilled. The staunchly anti-regulatory Reagan administration surprisingly championed a strong treaty, a shift driven by key internal policy entrepreneurs like Secretary of State George Shultz, who was convinced by the scientific evidence. This leadership was enabled by the public reversal of the chemical industry; with profitable alternatives secured, its economic self-interest now aligned with a global phase-out, removing the most powerful political blocker. On the international stage, Mostafa Tolba, the Executive Director of UNEP, also acted as a policy entrepreneur, navigating Cold War geopolitics to broker the compromises needed for consensus. The crisis created by the ozone hole allowed these entrepreneurs to connect the now-urgent problem with ready-made solutions, leveraging the aligned political will to create the Montreal Protocol. Its success was not an accident of crisis, but a testament to how prepared actors can act decisively when the problem becomes urgent, solutions are ready, and politics align.
Therefore, the lesson from the ozone case is not to simply and naively wait for a crisis (i.e., a warning shot), but to proactively build the institutional, technological, and political capacity necessary to act decisively when a policy window opens. The ozone layer was saved not simply by the shock of a sudden discovery, but by the slow, deliberate, and often contentious work that preceded it.
The COVID-19 Pandemic
Summary: Previous pandemic warning shots like SARS and Ebola led to calls for better international coordination but not binding agreements. COVID-19 prompted unprecedented cooperation efforts, including an international agreement, but the resulting frameworks have significant limitations due to complex geopolitical dynamics and enforcement challenges.
The 2003 SARS outbreak demonstrated how quickly respiratory diseases could spread globally, leading to unprecedented international collaboration that successfully contained the virus within four months. SARS led to the 2005 revision of the International Health Regulations, which strengthened reporting requirements and WHO authority, but remained within existing frameworks rather than creating new binding treaties.
The 2014 Ebola outbreak in West Africa further exposed gaps in international pandemic preparedness. WHO was criticised for declaring Ebola a Public Health Emergency of International Concern at a very late stage, and experts noted that each outbreak “should have served as a wake-up call to the importance of preparedness” and called for better international coordination. However, these calls did not translate into new binding international agreements.
COVID-19 represented the largest global health crisis since these earlier warning shots. Unlike SARS and Ebola, it prompted unprecedented cooperation efforts, including the WHO Pandemic Agreement (2025), the revised International Health Regulations (2024), and the Access to COVID-19 Tools (ACT) Accelerator (which encompassed a vaccine distribution initiative). WHO member states first agreed to create a global pandemic treaty in December 2021, just as the Omicron variant was spreading globally.
The WHO Pandemic Agreement stands out as the first legally binding global treaty focused on pandemic preparedness. It includes a new Pathogen Access and Benefit Sharing (PABS) system to enable quicker understanding of pathogens (e.g., to help speed up vaccine development), equitable sharing of medical resources, and other coordinated response mechanisms.
However, the treaty does have some serious limitations. CEPI (Coalition for Epidemic Preparedness Innovations) argues that the world is now better prepared “in some critical ways”, but key weaknesses persist. The treaty took over three years to negotiate, has no direct enforcement authority, lacks US participation, and includes unresolved issues around pathogen-sharing. Although the initial urgency of COVID-19 opened a window for policy action, momentum faded amid rising geopolitical tensions and populism.
“Closing session of Intergovernmental Negotiating Body (INB) finalising the pandemic agreement” © WHO/Christopher Black
COVAX, the vaccine distribution initiative under the ACT Accelerator, provides another example of these limitations. It was intended to ensure equitable vaccine distribution globally, but ultimately fell short due to inadequate coordination, supply chain failures, and vaccine nationalism. Despite significant investment in vaccine development, wealthy countries secured disproportionate vaccine supplies, restricting global access and exacerbating health inequities. This seems to have happened not because the problem was unclear, but because political incentives favoured domestic constituencies.
Scientific cooperation, by contrast, was a relative success. Rapid vaccine development, real-time genomic data sharing, and extensive research partnerships highlighted what effective collaboration can look like. But at the political level, many countries focused on national priorities instead of international ones. Geopolitical tensions, notably between the US and China, further complicated collaboration through mutual suspicion and politicised disputes over the virus's origin. The attribution issue seems particularly important here. Unlike Chernobyl, where radioactive signatures clearly traced to a Soviet reactor, COVID-19’s emergence mechanism (i.e., natural spillover versus laboratory accident) became contested. This ambiguity over causation, combined with existing tensions, appears to have been a notable barrier to the kind of rapid, technical cooperation seen after Chernobyl.
It's also worth noting that COVID-19 presented uniquely difficult challenges compared to previous pandemics like SARS and H1N1. The severe, prolonged economic disruptions overwhelmed institutions designed for shorter, less complex crises, exposing gaps in preparedness and governance capacity.
COVID-19 shows that even the most seemingly clear and severe warning shots can result in mixed outcomes due to complex political factors. The existence of relevant institutions (WHO, existing pandemic frameworks) wasn’t sufficient when political complications pulled against cooperation.
Conclusion
These historical cases seem to suggest that AI warning shots could face serious (though not insurmountable) obstacles to generating international agreements. Even severe incidents only lead to coordinated action when specific conditions (outlined in the summary above) align.
This is likely to be especially true in the context of frontier AI systems, where attribution, ambiguity, and geopolitical competition all work against coherent international coordination. Consider attribution: if an AI system causes global financial market disruption or infrastructure failure, determining responsibility becomes complex. Was it model behaviour, deployment choices, adversarial use, or emergent properties? Unlike nuclear fallout that can be traced to its source, AI harms often involve much murkier chains of causation.
Furthermore, the current AI governance landscape lacks the institutional foundations that enabled rapid responses in successful cases. We have no body combining deep technical expertise with diplomatic legitimacy and convening power. Meanwhile, major powers are still very early in the process of building a shared understanding of what AI safety means — let alone how to act on it. A warning shot only matters if it lands in a context that can absorb and act on it. If powerful actors see strategic benefit in unilateralism, or if no trusted international institutions exist to coordinate a response, the political effect may be negligible—or even counterproductive. Even dramatic AI-related harms may not function as warning shots unless the public is already prepared to interpret them as such.
This context suggests that rather than waiting for a single dramatic warning shot to significantly contribute to comprehensive cooperation, international AI governance may emerge through narrow, reactive responses to specific incidents. The most effective path forward may therefore involve learning to work with the kinds of smaller warning shots we already know will emerge over time: publicly jarring capability jumps (such as ChatGPT’s release), demonstrably risky capability advances (e.g., publicising cases of models engaging in deceptive behaviour), or misuse/misalignment failures that grab public and policymaker attention. Each incident should be used to incrementally build response capacity, test coordination mechanisms, and develop “modular” policy solutions that can be rapidly and easily deployed if larger crisis moments create windows for targeted coordination.
However, there are reasons to be cautious about these takeaways. The conditions identified here overlap messily and depend on contingent political dynamics.8 This should be read as a rough map of what might matter, not a predictive model. But what seems clear is that even severe incidents require substantial groundwork to translate into effective action.
Our ability to adequately respond to future AI warning shots will depend on the foundations we lay today.
Acknowledgements
Thank you to Saad Siddiqui, Oliver Guest, Raven Witherspoon, Josh Thorsteinson, Peter Gebauer, and Thomas Van Damme for their extensive and valuable feedback on this post.
Do you have ideas relating to AI geopolitics and coordination? Send us your pitch.
For example, John Sherman, the former Director of Public Engagement at the Center for AI Safety, recently said: “Honestly...I hope this year we get a warning shot. I hope something bad happens, and it's undeniable that AI caused it.” For more evidence of the concept being used in AI safety discourse, see: here, here, here, here, here, here, here, and here.
Though we also note when and how domestic policy changes, other forms of international cooperation, or voluntary initiatives occurred instead.
For a more detailed exploration of TMI’s domestic impact and its lessons for potential AI warning shots, see here.
For an analysis on how the INPO's mechanisms could inform safety/governance approaches within the AI industry, see p. 39 of “Voluntary Initiatives in AI Governance: Lessons from Aviation and Nuclear Power”.
While arguably not a perfect “warning shot” analogue (one could argue that nuclear disasters don’t get much worse than Chernobyl), the disaster's potency in leading to policy change provides insights relevant to understanding how crisis events can concretely lead to international agreements.
Broadly meaning “maximum openness” and the “inadmissibility of hushing up problems” in state activity.
Future research might benefit from looking at a wider range of cases that could be analogous to warning shots — including Acid Rain (1970s-80s), Oil Shock (1973), Cyber attacks on Estonia (2007), and more on the Cuban Missile Crisis (1962).