Incoherent AI scenarios are a threat
Coherent strategies for a hypercapable world call for coherent scenarios. Incoherence could be lethal.
The prospect of a world deeply transformed by AI calls for scenarios that reflect predictable capabilities and their predictable implications. In thinking through these scenarios we will need to revise deep assumptions about what is possible, and to reconsider what is desirable. We must fight the temptation to imagine changes piecemeal, as if each change would play out in a more or less status quo world.
Strategies based on incoherent scenarios (piecemeal changes in a hypercapable future) would incur unfathomable risks and opportunity costs. If conventional strategies would lead to large, irreducible risks, we must explore alternatives. If AI-enabled capabilities could deliver economic abundance while escaping security dilemmas, we should seek practical paths forward.
This article aims to carve out intellectual space for distinct, coherent scenarios: distinguishing between gradual, piecemeal change and change that eventually becomes swift and pervasive. Let’s call these scenarios “incremental” and “radical”. Incremental scenarios merit attention: Change today is incremental and could plausibly continue that way. But radical scenarios also merit attention: It is plausible that AI development will feed back to accelerate AI itself toward deeply transformative capabilities. Radical futures are a realistic, challenging contingency that calls for preparation.
What I will suggest is conditioned on the eventual emergence of a hypercapable world. Let’s assume that this contingency won’t be taken seriously until late in the game, and that preparations will look like contingency planning, with costly action following — not preceding — perceived urgency.
Prospects for a hypercapable world
Previous posts have outlined a range of crucial AI capabilities and implications; these can be condensed into three key prospects and a key consideration (credible realism) for discussing them, with links to the anchor posts:
Highly capable, steerable AI: Continued AI development on a broad front (more than LLMs) will lead to strong, steerable, high-level AI capabilities with comprehensive applications.1
Greater software implementation capacity: AI will break software development bottlenecks, enabling rapid production of verifiably correct software systems.
Greater physical implementation capacity: AI will accelerate the end-to-end design, development, production, deployment, and adaptation of large-scale sociotechnical systems.
The importance of credible realism: Policy can be oriented toward realistic problems based on credible prospects even before those prospects have entered the Overton window. Credible realism can help align policy with challenges posed by seemingly implausible realities.
Predictable consequences of transformative AI
AI scenario planning must begin with technology drivers. The most predictable consequences of AI are its fundamental capabilities, not how, when, by whom, or for what purposes they'll be applied. Let's consider two levels, basic enablements and their radical implications:
Abundant material wealth and services: Expanded production capacity can enable material abundance, while AI capabilities can translate material abundance into abundant physical and information services.
Abundant renewable energy: Expanded production capacity can accelerate the scaling of wind, solar, and energy storage systems.
Ample material resources: Abundant energy and capital goods can enable the use of abundant, low-concentration ores while reducing environmental harms.
Provably-correct software at scale: AI can enable swift production of software with mathematical proofs of security and correctness, enabling deployment of trustworthy software at scale.
Greater scope for verifiable international agreements: Capabilities leveraging new hardware and software can extend structured transparency for robust monitoring of military capacities and deployments.
Swift transitions in military force postures: Expanded implementation capacity (both design and production) can enable rapid shifts in the architecture of global security.
These capabilities, taken together, sketch a hypercapable world that challenges fundamental assumptions about the future. Prospects for a hypercapable world call for reconsidering options, interests, strategies, and policies. In my view, the task for today is exploration and analysis, seeking good contingency plans. The task for tomorrow will be to recognize those contingencies and act. Today sets the stage for tomorrow.
Incremental vs. radical expectations
The gap between incremental and radical expectations for AI capabilities leads to radically divergent prospects for policy concerns:
AI capabilities
Incremental: Prepare for piecemeal advances creating a range of problems and opportunities.
Radical: Prepare for steeply accelerating capabilities with pervasive, interlocking consequences.
Domestic economies
Incremental: Prepare for shifts in employment across multiple sectors.
Radical: Prepare for a world where human labor is optional.
Climate crisis
Incremental: Seek to mitigate AI’s growing energy consumption and CO2 emissions.
Radical: Prepare to leverage new productive capacity to make renewables dominant and fossil fuels obsolete.
Resource competition
Incremental: Secure long-term control over strategic minerals and fossil fuels.
Radical: Anticipate a world where minerals and fossil fuels lose strategic value.
Geopolitics
Incremental: Work within a status quo of slow change, opacity, and conflict.
Radical: Explore options for changing the game through improved goal alignment, structured transparency, and verifiable cooperative actions.2
National security strategy
Incremental: Race to develop new offensive and retaliatory capabilities, and plan for preemption or warfighting in a world with growing technological uncertainty.
Radical: Explore options for risk reduction and strategic stability through coordinated deployment of overwhelmingly effective, verifiably defensive systems (while assuming that this won’t happen soon).
Incoherent scenarios are the enemy
Strategic thinking about AI’s potential to reshape our world calls for exploring distinct, internally coherent technology scenarios, both incremental and radical, and, with this, for reconsidering objectives in the context of a hypercapable world.3
This approach doesn’t demand priority for one scenario over the other: It allows for disagreement about AI’s trajectory (incremental or radical change) while demanding that transformative AI be considered as a whole, not piecemeal.4 Planning premised on incremental change is inevitable and perhaps wise, yet it would be irresponsible to neglect contingency planning premised on prospects for radical (yet widely anticipated) AI advances. Speculative probabilities (10%? 90%?) are beside the point.
Incremental scenarios present their challenges within a familiar frame of economic competition, military doctrine, and international relations; means and ends remain much the same. Radical scenarios present deeper challenges across the board, but within a context of unprecedented capabilities that could be leveraged to meet unprecedented challenges in unprecedented ways. Both potential means and rational ends are different in a hypercapable world.
Radical scenarios can be explored as contingency plans. This approach doesn’t call for immediate policy changes, expenditures, or public advocacy of particular expectations. Instead, it calls for thought, collaboration, and increasingly concrete contingency planning. Then, when radical developments force change, coherent strategies can be ready for action.
And at the threshold of a hypercapable world, delayed action could be implemented with unprecedented speed.5
What to do?
Strategic thinkers and policy analysts will play a crucial role in shaping discourse around AI’s potential. By exploring potential capabilities and options, and by developing and promoting coherent scenarios, thought leaders can challenge muddled thinking and expand the range of serious discourse.
There is no need to advocate immediate policy changes, or to boost expectations for transformative AI. With recent shifts in opinion, radical contingency planning will encounter little opposition and create more reputational opportunity than risk. Future events can do the work of persuasion, provided that the ground is prepared and seeded.
There is also no need to be an insider: A broader public can contribute to the evolution of ideas and help to shape the climate of opinion. We need to shift the Overton window toward coherent prospects and policies that are better aligned with reality and our shared interests. This is a whole-society task that rewards even partial success.
Notes

1. Regarding steerability, consider how we apply superhuman intelligence to large, consequential tasks today: Institutions (companies, agencies) consider alternative plans, make choices, perform short-term actions (with budgets, reviews, and completion dates), and then revise their plans based on experience. The agency model of AI for large, consequential tasks has the same basic structure: generating alternative plans is a task for competing generative models; choosing among plans is a task for humans with AI advisors; actions are specific tasks (again with budgets, reviews, and completion dates); and plans are again revised based on experience. Most plans (like typical outputs of other generative models) are discarded or revised, and task optimization calls for reducing (not maximizing) resource consumption. Powerful, unitary, willful agents have no natural role in this picture, and have little comparative instrumental value. AI doomers take note. A schematic sketch of this loop appears below.
See “How to harness powerful AI” and “Reframing Superintelligence”.
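As a rough illustration (not a specification from the author or from any existing system), the loop structure described in this note might be sketched as follows. All names and interfaces here (Plan, Task, run_agency_loop, and the toy stand-ins for models and reviewers) are hypothetical:

```python
from dataclasses import dataclass, field
import random

# Hypothetical types, for illustration only: a "plan" is a bounded proposal
# with an explicit resource ceiling, not an open-ended mandate.
@dataclass
class Plan:
    description: str
    budget: float  # to be reduced, not maximized, by task optimization

@dataclass
class Task:
    goal: str
    complete: bool = False
    history: list = field(default_factory=list)

def generate_plans(task, models):
    """Competing generative models each propose a candidate plan."""
    return [model(task) for model in models]

def choose_plan(plans, review):
    """Humans with AI advisors choose one plan; the rest are discarded."""
    return review(plans)

def run_agency_loop(task, models, review, act, revise, max_cycles=10):
    """One task lifecycle: propose, choose, act within budget, revise, repeat."""
    for _ in range(max_cycles):
        if task.complete:
            break
        plans = generate_plans(task, models)
        chosen = choose_plan(plans, review)
        outcome = act(chosen)          # a specific, budgeted, reviewable action
        task = revise(task, outcome)   # plans revised based on experience
    return task

# Toy stand-ins so the loop runs end to end; real counterparts would be
# generative models, human review with AI advisors, and institutional action.
models = [lambda t, i=i: Plan(f"plan-{i} for {t.goal}", budget=random.uniform(1, 10))
          for i in range(3)]
review = lambda plans: min(plans, key=lambda p: p.budget)  # prefer frugal plans
act = lambda plan: f"executed {plan.description}"

def revise(task, outcome):
    task.history.append(outcome)
    task.complete = len(task.history) >= 2  # toy completion criterion
    return task

print(run_agency_loop(Task("deploy a solar microgrid"), models, review, act, revise))
```

The design point is structural: open-ended agency is replaced by bounded, reviewable steps, the "intelligence" lives in the competing proposal generators and advisors rather than in a unitary agent, and optimization pressure pushes budgets down rather than up.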
2. Regarding the practicalities of dealing with actors entrenched in lose-lose strategies, consider “coercive cooperation”: coercing adversaries to recognize and act in line with their actual interests.
3. For example, it would be a mistake to bet the future on dominating a nuclear superpower (with unknown weapons) without seeking options for confidence-building measures in a transition to mutual defensive security.
4. Which is to say, “considered as a whole as best we can with limited knowledge,” which is to say, not ignoring known, major considerations.
5. At the threshold of a hypercapable world, delayed persuasion could also be effective. AI can boost analysis, planning, and communication. The intellectual limitations of today’s LLMs shouldn’t be mistaken for limitations of humans aided by a range of future AI resources.