AI-Driven Strategic Transformation: Preparing to Pivot
Prospects for deep AI-driven transformation of military and economic affairs call for rethinking national strategies. We must prepare to pivot when emerging realities force change.
The Stakes
The emergence of AI-enabled hypercapability will revolutionize how we design and deploy complex systems.1 This transformation brings unprecedented military risks, yet these risks can be averted through coordinated, verifiable deployment of defensive systems.2
We face a choice between two paths:
Continuing to pursue win-lose strategies that bring irreducible existential risks with little potential reward.
Continuing these policies in the near term,3 yet pivoting to lower-risk, win-win outcomes late in the game.
Clear contingency plans could enable a pivot toward safety when mounting risks force change. Those contingency plans are crucial, but are not yet in place.
From Capability to Hypercapability
Economic and military transformations typically unfold gradually, but AI development will be different: Cascading advances will drive fundamental shifts on much shorter timescales, radically expanding our capabilities.4
Prospects for deep transformation hinge on AI-driven expansion of general implementation capacity — the end-to-end ability to design, develop, deploy, and adapt complex systems rapidly and at scale.5 This expansion spans both physical and computational domains, including systems that range from defense and verification frameworks through manufacturing, robotics, renewable energy, and AI applications, to almost everything else. These advances point toward hypercapability: the ability to swiftly develop and scale solutions to an extraordinary range of complex problems, given coherent objectives.6
Strategic uncertainty is one of those problems.
Reliable, Durable Uncertainty
Deep uncertainties in AI development make winner-take-all strategies exceptionally risky. No competitor in an AI arms race can confidently predict:
The scope and pace of AI advances (not just LLMs7).
The novel military capabilities that their rivals might develop.
The outcome of kinetic conflict with adaptive AI on both sides.
This multi-level uncertainty appears robust and durable. While we can’t rely on winning,8 we can rely on uncertainty itself as a growing pressure for seeking alternatives.
The existential risks of win-lose strategies strongly motivate exploring cooperative, win-win approaches — not by advocating a strategic pivot today, but by developing contingency plans for a time when mounting pressures crack the prevailing consensus.
And the pressures of hypercapability will invite solutions based on hypercapability.
Transformative Options
In a hypercapable world, long-standing constraints dissolve. AI will enable rapid design and deployment of both defensive systems and verification frameworks, potentially defusing familiar security dilemmas. Key technological possibilities include:
Defensive stability through coordinated deployment of defensive systems at scale.9
Verification frameworks built on structured transparency relationships.10
Digital infrastructure security through verified supply chains and software.11
These point toward fundamentally different security architectures — arrangements that would otherwise be unthinkable. The benefits of success extend far beyond security: A hypercapable world promises globally shared gains in areas that range from economic abundance to climate remediation.12 As AI opens the door to robust defense and enormous economic growth, national interests will shift: Securing a share of massive benefits becomes more attractive than risking everything for dominance. Neither wealth nor security is a zero-sum game.
Accelerating Strategic Adaptation
AI can assist not just in building systems, but in planning and coordination:
Strategic analysis: AI can explore vast option spaces, helping assess complex interactions and potential force postures.13
Deliberation: AI systems can help diverse actors — from interest groups to nations — recognize shared interests and opportunities.14
Negotiation: AI can help establish shared understanding of win-win options, including details too intricate for unaided negotiation.
Confidence-building: AI can identify risk-reducing paths and help develop both defensive frameworks and verification systems.15
Preparing for Transformation
Today’s preparations shape tomorrow’s choices. By developing clear concepts for desirable outcomes, we can expand the options available when strategic pressure peaks. This preparation need not trigger opposition — contingency planning for transformative AI is already on the table. Three parallel tracks deserve attention:
Technical & Institutional Analysis: Exploring how AI could enable swift deployment of defensive security and verification systems.
Political & Cultural Analysis: Understanding how perceived national interests could shift in response to new risks and opportunities.
Multilateral Bridge-Building: Promoting understanding of win-win possibilities among influential thinkers and analysts shaping strategic perspectives in rival governments.
The technical feasibility of a rapid pivot — late in the game, but before a crisis — makes early preparation valuable without requiring immediate policy changes. Once capabilities enable swift implementation, what matters most is having developed the right concepts and actionable plans in advance.
A Sketch of a Scenario
The discussion above outlines possibilities and incentives in abstract terms, but gives no clear picture of how change could occur. How could seemingly implausible changes unfold through real institutional and political processes?
1. PREPARING THE GROUND
Studies exploring AI-enabled defensive architectures stimulate innovative strategic thinking. As implications come into focus, analysis deepens, drawing lessons from US-Soviet arms control and current military-to-military relations. Track-two dialogues emerge organically as anticipated developments point to unexpected common ground among competitors. AI systems accelerate the development of concrete technical and strategic proposals aligned with diverse interests and institutional cultures.
Recognition grows that AI-driven change could swiftly transform military and economic realities. As detailed contingency plans take shape, this line of thinking penetrates institutions and shapes how emerging developments are understood.
2. TIPPING POINT
Mounting uncertainty forces strategic reassessment, while earlier track-two discussions frame multilateral negotiations. Security establishments embrace roles in developing new defensive systems. Contingency plans evolve into a new strategic consensus. AI systems accelerate analysis of detailed implementation paths, enabling swift action.
3. STRATEGIC TRANSFORMATION
Urgency overcomes institutional friction and bureaucratic routines. Verification frameworks enable confidence-building steps, while expanding implementation capacity enables rapid, coordinated shifts in force postures. As verified defensive systems and durable constraints alter the military balance, shared incentives facilitate further cooperation. The prize is security with prosperity.
4. A NEW EQUILIBRIUM
The transformed strategic landscape combines robust defensive architectures with ongoing technological adaptation. Great powers maintain autonomy while managing the challenges of domestic AI transitions. The equilibrium rests on solid foundations and proves stable; unresolved conflicts no longer pose existential risks.
By “advanced AI”, I mean a comprehensive range of highly effective AI services: “steerable superintelligence” considered as a resource. See “Why intelligence isn’t a thing” and “How to harness powerful AI”.
“The Platform: General Implementation Capacity” explores how AI will expand our ability to design, develop, deploy and adapt complex systems at scale.
“Continuing these policies in the near term”: Proposing to reverse current trends would be unrealistic, though tweaks are possible and desirable. If conflict is not inevitable, actions that would poison relationships look more costly.
Note that expectations for eventual swift progress do not necessarily imply early calendar dates.
Credible timelines for AI-driven transitions in force postures might be measured in years rather than decades, while realistic scenarios might be compressed into months. Fortunately, preparing for longer timelines can motivate most of the work needed for faster scenarios (see “Toward Credible Realism”).
“Coherent objectives” are problematic, of course. Consider “wicked problems”, but note that expanding capabilities could often relax the trade-offs that make problems wicked.
I expect to see LLMs serve as human-facing front ends to a wide range of task-focused machine learning systems (including specialized LLMs). If progress in LLMs were to stall, progress in AI capabilities would continue (and maybe increase?). Here are some samples of recent, diverse AI developments and reviews, following the convention of considering “AI” to be almost any sufficiently impressive system that is trained rather than programmed:
Generalist robotic models: “Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers” (2024)
Diverse tasks in manufacturing: “Large Language Models for Manufacturing” (2024)
Planning complex actions: “Mastering Diverse Domains through World Models” (2024)
Modeling the physical world based on video data: “Sora as an AGI World Model? A Complete Survey on Text-to-Video Generation” (2024)
Agents that act in simulated 3D worlds: “Scaling Instructable Agents Across Many Simulated Worlds” (2024)
Modeling materials in atomic detail: “MatterSim: A Deep Learning Atomistic Model Across Elements, Temperatures and Pressures” (2024)
Protein fold prediction: “AlphaFold” (GitHub)
Protein fold design: “Machine learning for functional protein design” (2024)
Challenging math problems using a joint symbolic/neural system: “Solving Olympiad Geometry Without Human Demonstrations” (2024)
Mathematical discovery: “Mathematical discoveries from program search with large language models” (2024)
Writing code: “A Survey on Large Language Models for Code Generation” (2024)
Accelerating AI development: “How AI Can Automate AI Research and Development” (2024)
Committing to win-lose strategies might make sense if one could be confident of winning an AI arms race. However, the deep technical uncertainty in AI development makes such confidence impossible until late in the game — if at all.
Note that if one power did become confident of winning a race for decisive AI advantage, an adversary might regard this as an existential threat that calls for a preemptive strike. (As a rule of thumb, if a strategy calls for subjugating a nuclear superpower, consider less risky alternatives.)
“Security without Dystopia: Structured Transparency” discusses how structured transparency relationships can enable verification while protecting sensitive information.
Note that massive deployment of defensive systems, together with verification, can both neutralize existing offensive systems and preclude the deployment of new offensive systems. The concept of “defense-dominant technologies” is misleading: It tacitly assumes approximate symmetry in the scale of offensive and defensive deployments, but massive asymmetries will become feasible.
“Breaking Software Bottlenecks” examines how AI could enable development of verifiably secure software and systems at scale.
Verification of supply chains might seem to require verified verification systems, but confidence can be bootstrapped from an imperfect base.
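A toy model suggests why (the numbers are illustrative, not drawn from any analysis above): if each of k independent verification methods overlooks a given violation with probability p, then the probability that every one of them misses it is

$$P(\text{all } k \text{ checks miss}) = p^{k}$$

Even mediocre checks (say, p = 0.3) layered four deep would miss less than 1% of violations (0.3^4 ≈ 0.008), provided their failure modes are substantially independent — and engineering that independence is the central task of a bootstrapped verification architecture.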
In a hypercapable world, constraints on supplies of both energy and materials will be dramatically relaxed, and this can be leveraged to improve environmental quality.
Note that this does not require that individual AI systems be impartial or trustworthy. Diverse systems with different biases can improve exploration and cross-checking.
Recognition of shared interests doesn’t guarantee cooperative action, but makes attractive options visible and actionable. For discussion of how expanded capabilities can align interests toward mutually beneficial outcomes, see “Paretotopian Goal Alignment”.
Note that open exploration of options together with multilateral, AI-enabled analysis would enable massive red-teaming.
I thank the indefatigable Claude Sonnet for assistance in preparing this article.