AI Options, not ‘Optimism’
‘Optimism’ is about odds of success, but odds are for spectators. Participants weigh options, not odds.
Optimism clashes with realism. I focus on futures in which a series of difficult problems is solved, yet I strive to be realistic, and I'm not an optimist, so what's going on?
After grappling with alternative approaches to this topic — concrete situations, conditional probabilities, dependency chains — I declared failure and dropped the project. Here are the resulting fragments after some cleanup:
Common misguided thinking
Here’s an easy and effective way to misunderstand our situation:
“We’re on a path to superintelligence, which may be impossible to control and hence likely to destroy us. Therefore, we can’t assume that powerful AI will help us solve seemingly intractable problems. To assume otherwise would be naïvely optimistic, and with so many critical problems, our odds of success are poor.”
Here’s a better way to think about it:
“We’re on a path to superintelligence, which must be steerable, or nothing else matters. Therefore, in every future that matters we can assume that powerful AI will help us solve seemingly intractable problems. Our options in a hypercapable world are largely unexplored, and our overall odds of success are unknown.”
In this situation, debating odds of success is pointless, exploring options is crucial, and optimism is irrelevant. Practical strategic thought must assume (and seek!) one crucial success: steerable superintelligent-level AI.
Participants don’t think like spectators
The appearance of a clash between realism and optimism has epistemic roots in the difference between science and engineering[1] — the difference between a spectator estimating odds of success and a participant seeking ways to achieve it.
Spectators trace causality forward, whether toward failure or success.
Participants chain backward, seeking preconditions for success.
Spectators imagine chains of events and debate probabilities.
Participants explore chains of actions and weigh alternative goals.
Spectators think like scientists and make estimates.
Participants think like engineers and make proposals.
Here’s the problem: To play the role of a participant well, it’s necessary to consider what ‘success’ means and what it requires, even if this means focusing on futures in which many difficult problems are solved.
Most people choose to be spectators.
Probabilities conditioned on success
To recognize paths to success means exploring potential success-states and their preconditions, backward chaining to find ways forward. Let’s consider this abstractly.
There is no ‘optimism’ or ‘pessimism’ in this approach, only a search for promising options — a task that requires conceptualizing and comparing options, not debating P(doom). If overall success requires solutions to problems X, Y, and Z, then the relevant quantity is not the unconditional P(solve-X) but P(solve-X | solve-Y, solve-Z). In other words, in the possible worlds we’re considering, solutions to X can and must assume solutions to Y and Z. Conditioned on overall success, all preconditions are met, and if solving one problem can help solve another, there is no ‘optimism’ in assuming that it actually does.
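A toy numerical sketch can make the conditioning point concrete. The joint distribution below is invented for illustration (the numbers have no empirical basis); it models three problems whose solutions are strongly correlated, for example because shared AI capabilities would help solve all three:

```python
# Hypothetical joint distribution over problems X, Y, Z (True = solved).
# Illustrative numbers only: mass is concentrated on "all solved" and
# "none solved", so the solutions are strongly correlated.
joint = {
    (True,  True,  True):  0.20,
    (True,  True,  False): 0.02,
    (True,  False, True):  0.02,
    (True,  False, False): 0.01,
    (False, True,  True):  0.05,
    (False, True,  False): 0.05,
    (False, False, True):  0.05,
    (False, False, False): 0.60,
}

def p(event):
    """Probability that predicate `event(x, y, z)` holds."""
    return sum(pr for outcome, pr in joint.items() if event(*outcome))

p_x = p(lambda x, y, z: x)  # unconditional probability of solving X
p_x_given_yz = (p(lambda x, y, z: x and y and z)
                / p(lambda x, y, z: y and z))  # conditioned on Y and Z

print(f"P(solve-X)                    = {p_x:.2f}")
print(f"P(solve-X | solve-Y, solve-Z) = {p_x_given_yz:.2f}")
```

With these (invented) numbers, P(solve-X) is 0.25 while P(solve-X | solve-Y, solve-Z) is 0.80: a spectator quoting the unconditional number and a participant reasoning within success-conditioned worlds are answering different questions.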
Now consider solve-SI ≈ “successfully manage emerging superintelligence”. Given well-informed policies, and for many values of X, P(solve-X | solve-SI, policy) will be substantial.
The logic of dependency and enablement
When SI creates a problem, SI-level solutions may be necessary, could be sufficient, and may call for strategic differential technology development. This creates chains of risks, mitigations, enablements, requirements, dependencies, policy challenges, and even some favorable near-inevitabilities.
Here are some connections in rough outline, using ‘~requires’ (meaning ‘requires, with caveats’) to abbreviate what would otherwise be extensive discussions of alternatives and challenges. Each point is linked to a previous post:
Agency architectures: SI could enable uncontrolled agents → structured workflows can steer SI-level capabilities → ~requires enablements that scale workflows to SI-level capabilities.
Software security: SI could exploit any software vulnerability → reliable security ~requires formally verified systems → achieving this ~requires scalable AI-based formal verification.
Verification frameworks: SI could enable strong deception → trust structures ~require structured transparency → effective trust frameworks ~require secure computational foundations.
Strategic stability: SI accelerates arms races → stability ~requires defensive dominance → avoiding instability ~requires enabling rapid defensive deployment.
Resource conflicts: SI could intensify competition → blunting resource competition may be necessary for stability → ~requires recognition of prospects for SI-enabled abundance.
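The ‘~requires’ chains above are, structurally, a dependency graph, and backward chaining over such a graph is mechanical. The sketch below uses a hypothetical graph whose node names loosely mirror the list above (they are shorthand labels, not a real model of the problem):

```python
# Toy requirement graph. Each edge reads "goal ~requires precondition".
# Node names are illustrative shorthand for the chains discussed above.
requires = {
    "steer SI-level agents":      ["scalable structured workflows"],
    "reliable software security": ["formally verified systems"],
    "formally verified systems":  ["scalable AI-based verification"],
    "trust frameworks":           ["structured transparency",
                                   "secure computational foundations"],
    "strategic stability":        ["defensive dominance"],
    "defensive dominance":        ["rapid defensive deployment"],
}

def preconditions(goal, seen=None):
    """Backward-chain from a goal to the full set of its preconditions."""
    seen = set() if seen is None else seen
    for pre in requires.get(goal, []):
        if pre not in seen:
            seen.add(pre)
            preconditions(pre, seen)  # transitive requirements
    return seen

print(preconditions("reliable software security"))
```

Backward chaining from “reliable software security” surfaces both the direct requirement (formally verified systems) and the transitive one (scalable AI-based verification), which is the participant’s move: start from the success-state and enumerate what it presupposes.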
The key consideration is that the “~required” conditions are substantially enabled by steerable superintelligence, which is itself a requirement for the futures that matter.
When realism seems unrealistic
Consider strategic planning in a world that must solve multiple unprecedented problems to avoid disaster. In this world:
1. Successful outcomes require multiple unprecedented successes.
2. Scenarios with multiple unprecedented successes seem unrealistic.
3. Therefore, scenarios that make success possible get little attention.
This stylized picture resembles today’s situation, and (3) is a threat to your life. How can we change the conversation?
[1] This is a deep topic, discussed in “The Clashing Concerns of Engineering and Science” (pdf), Chapter 8 of Radical Abundance. It turns out that the epistemic structures of science and engineering are like duals in a category-theoretic sense. A failure to appreciate this has created substantial intellectual friction.


