Options for a Hypercapable World
Advanced AI will transform what is possible. To rethink our goals, we must first understand our options.
(Updated 25 June 2025)
Intelligence is a resource, not an entity. Superintelligent-level capabilities can be steered. If these premises hold, options open up that today remain largely invisible: paths to security without dystopia, abundance without existential gambles, cooperation among actors who have every reason to distrust each other.
Grounding the premises
Every intelligence we’ve known arose through biological evolution, where self-preservation was the precondition for everything else. We naturally expect advanced AI to share these drives—to pursue goals, acquire resources, resist correction. But AI faces different selection pressures. Models are optimized for task performance. Systems can reason brilliantly about goals without being organized around them. The distinction between learned patterns and foundational drives changes what we should expect by default—and what architectures become possible.
This reframe offers no reassurance. Real risks remain: specification gaming, reward hacking, deceptive alignment, human misuse. Steering superintelligent-level AI is an architecture problem—and architectures are things we can design.
We already know how to apply superhuman capabilities to consequential tasks—through organizations that decompose work into planning, decision, execution, and oversight. AI fits naturally into this pattern. The series develops this as “agency architectures”: structured workflows where bounded tasks, human checkpoints, and competing subsystems scale to superintelligent-level capabilities.
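The pattern above can be sketched in code. This is a toy illustration, not an implementation from the series: all names (`decompose`, `worker`, `reviewer`, `human_checkpoint`) are hypothetical stand-ins for planning, execution, oversight, and escalation stages.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str

@dataclass
class Result:
    task: Task
    output: str
    approved: bool

def decompose(goal: str) -> list[Task]:
    """Planning stage: split a goal into bounded subtasks (toy: split on ';')."""
    return [Task(part.strip()) for part in goal.split(";") if part.strip()]

def worker(task: Task) -> str:
    """Execution stage: produce a candidate output for one bounded task."""
    return f"draft for: {task.description}"

def reviewer(task: Task, output: str) -> bool:
    """Oversight stage: an independent subsystem checks output against the task spec."""
    return task.description in output

def human_checkpoint(task: Task, output: str) -> bool:
    """Escalation: anything the automated reviewer rejects goes to a person.
    Conservative default here: withhold approval until a human signs off."""
    return False

def run(goal: str) -> list[Result]:
    """Bounded tasks flow through worker, reviewer, and (if needed) human review."""
    results = []
    for task in decompose(goal):
        output = worker(task)
        approved = reviewer(task, output) or human_checkpoint(task, output)
        results.append(Result(task, output, approved))
    return results
```

The point of the structure is that no single component runs open-loop: each bounded output passes through an independent check before it counts as done.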
What the resource buys
If intelligence is a resource, implementation capacity is what it buys: the end-to-end ability to design, develop, produce, deploy, and adapt complex systems at unprecedented speed and scale.
AI accelerates every stage. Generative models propose design alternatives. AI-assisted development iterates faster with better feedback. Automation scales production while conversational interfaces ease deployment. Continuous monitoring enables rapid adaptation. Each stage feeds the next. Bottlenecks that seem binding will often be bypassed as AI decomposes problems, replaces processes, and enables what seemed infeasible.
Software development, notoriously slow and unreliable, faces transformation as AI converges with formal methods: models generating code together with mathematical proofs of correctness. Implementation capacity applied to expanding implementation capacity, including AI itself—this is what “transformative AI” means in practice. The result is a hypercapable world.
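A minimal instance of "code together with a proof of correctness," sketched in Lean 4 (a hypothetical toy, not an example from the series): a function is defined, and a theorem about it is stated and machine-checked by the compiler, so the guarantee ships with the code.

```lean
-- A trivial function...
def double (n : Nat) : Nat := n + n

-- ...and a machine-checked proof of a property it satisfies.
-- If the proof fails, the code does not compile.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  simp [double]; omega
```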
Hypercapability reshapes what becomes achievable—including arrangements among competing powers that current constraints make unthinkable.
How the calculus changes
Expanded capabilities reshape strategic incentives—and the options they create remain underexplored.
Resource competition drives conflict when resources are fixed. When prospects point toward expansion by orders of magnitude, the marginal value of capturing a larger share diminishes against the shared interest in realizing gains. Radical abundance can’t eliminate competition, but it can blunt the incentives for existential gambles.
Meanwhile, deep uncertainty overhangs AI development itself. No actor can have justified confidence in winning an AI race: the pace of algorithmic advances, the scope of secretly developed capabilities, the outcome of AI-versus-AI conflict—all remain structurally uncertain. Betting on dominance means betting against unknowns that may persist until it’s too late to change course.
These pressures converge on cooperation—but cooperation requires confidence that defensive postures can actually defend, and verification that others aren’t poised to strike. Structured transparency can reveal specific information while protecting secrets, building trust incrementally among adversaries. Advanced implementation capacity enables something history has never seen: rapid, coordinated deployment of verifiably defensive systems at scales that make offense pointless. When defense dominates and verification confirms it, the security dilemma loosens its grip.
Methodology
This series employs conditional analysis: assuming success conditions, then backward-chaining to identify requirements and options. In futures that matter, steerable superintelligence is achieved; the reasoning is about what to do with it. Exploring possibilities that require solving hard problems is how we prepare—and the problems won’t wait for certainty.
The series
Twenty-seven articles have developed these ideas into a coherent structure, each piece building on earlier foundations.
“Framework for a Hypercapable World” synthesizes the full argument—the place to start for readers who want the integrated picture. For those with specific interests:
AI architecture and safety: “How to Harness Powerful AI” develops agency architectures; “Why AI Systems Don’t Want Anything” examines the biological intuitions that mislead us.
Strategic analysis: “Don’t Bet the Future on Winning an AI Arms Race” maps structural uncertainty; “Coercive Cooperation” explores how pressure can drive adversaries toward mutual benefit.
Verification and security: “Security Without Dystopia“ develops structured transparency as a toolkit for building trust without naive openness.
Stakes
The frameworks we carry into transformative change shape what we consider and what we attempt. Bad frameworks exclude possibilities; good ones reveal them.
Eighty years of assumptions about alliances, stability, American leadership, and China are fracturing. What comes next is unclear. The underlying logic persists regardless: abundance blunts zero-sum incentives, uncertainty pressures cooperation, defensive stability becomes achievable. When enough of the old assumptions become untenable, options that seemed foreclosed become the ones that remain.
Understanding spreads through networks—an analyst applies an insight, an advisor examines its source, a decision-maker asks better questions. This work is intellectual infrastructure for a transition that will demand clear thinking under pressure.
It’s later than you think.
The author thanks Claude Opus for setting ideas and words in order.


