Why intelligence isn’t a thing
Intelligence has two separable facets, and even superintelligent-level AI need not take the form of a willful being.
Contrary to the usual AI story, intelligence is a resource, not an entity, and what we call “intelligence” splits into two quite different properties. Misunderstandings of the very nature of intelligence have obscured prospects for advanced AI. Fortunately, the key points are very simple.
“Intelligence” has two different meanings
We call children intelligent because of what they can learn, not what they can do, and we call adults intelligent because of what they can do, not what they can learn. A child might have few skills, yet learn quickly, while an adult might show great expertise, yet learn nothing new (a consequence of anterograde amnesia).
In normal humans, action and learning are always combined, but AI is different: in AI development, learning and doing are typically separate. Yet a vast safety literature assumes that advanced AI systems will be deployed as self-modifying entities, and then warns us of what might go wrong.
I blame ingrained anthropomorphism: Confuse machines with humans, get AI wrong.
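To make the learning/doing split concrete, here is a minimal Python sketch (all names are hypothetical stand-ins, not any real framework's API): learning produces updated parameters under direct control, while a deployed copy acts with its parameters frozen.

```python
# Hypothetical sketch: in AI practice, learning and doing are separate phases.

class Model:
    def __init__(self, params):
        self.params = dict(params)

    def act(self, task):
        # "Doing": apply current skills; nothing here modifies self.params.
        return f"result of {task!r} using skill level {self.params['skill']}"

def train(model, dataset):
    # "Learning": produce an updated model from curated data.
    new_params = dict(model.params)
    for _example in dataset:
        new_params["skill"] += 1  # stand-in for gradient updates
    return Model(new_params)

# Learning happens here, offline and under direct control...
trained = train(Model({"skill": 0}), dataset=["ex1", "ex2", "ex3"])

# ...while deployed copies only act; their parameters never change in use.
deployed = Model(trained.params)
print(deployed.act("translate a sentence"))
```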
Intelligence isn’t an entity
Intelligence isn’t an entity like a person; it’s a capacity, a property of intelligent systems, a resource for solving problems.
Some AI systems solve mathematical problems,1 while others design protein molecules.2 LLMs can solve a range of problems that involve writing text in response to a prompt. Models can delegate tasks to other models, and vision models can be mixed and matched with vehicle controllers and language models. Each model can be copied and run millions of times on thousands of machines, and each model can be updated (trained, tested, and deployed) with aggregated and filtered data from everywhere.
All this adds up to what I’d call intelligence, but I don’t see a distinct, intelligent entity. The state of the art is a pool of resources, not a thing, and I see no sign that AI systems will condense into a single entity.
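As a toy illustration of that claim (hypothetical names throughout, not a real system), here is a pool of task-specific resources that can be composed, copied, and updated, with no agent-like entity anywhere in the picture:

```python
# Hypothetical sketch: intelligence as a pool of composable resources.
from typing import Callable, Dict

pool: Dict[str, Callable[[str], str]] = {
    "summarize": lambda text: text[:40] + "...",      # stand-in for a language model
    "caption": lambda image: f"a photo of {image}",   # stand-in for a vision model
}

def run(task: str, payload: str) -> str:
    # Dispatch a task to whichever resource handles it.
    return pool[task](payload)

# Composition: one model's output becomes another model's input.
caption = run("caption", "a busy street")
print(run("summarize", caption + " with many pedestrians and vehicles"))

# Copying and updating are operations *on* the pool, not actions *by* an agent.
pool["summarize_v2"] = lambda text: text[:20] + "..."
```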
Yet people speak of “The AI” as if intelligence implies an autonomous being, an entity like a person. Again, I blame ingrained anthropomorphism.
Understanding what AI can become
We can train AI systems on filtered, aggregated data and control the content of learning — train and test, then deploy. Real AI systems are more stable and easier to manage than “the AIs” of folklore.
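A minimal sketch of that train-test-deploy control point (names and threshold are hypothetical): learning is gated by evaluation, so the only behavior that reaches deployment is behavior that passed the tests.

```python
# Hypothetical sketch of "train and test, then deploy".

def curate(raw_data):
    # Control the content of learning by filtering the training data.
    return [x for x in raw_data if "unwanted" not in x]

def train(data):
    return {"data_seen": tuple(data)}  # stand-in for an actual training run

def evaluate(model) -> float:
    return 0.97 if model["data_seen"] else 0.0  # stand-in for a test suite

def deploy(model):
    print("deployed a frozen snapshot:", model)

candidate = train(curate(["good example", "unwanted example", "good example"]))
if evaluate(candidate) >= 0.95:  # the gate: only tested models are deployed
    deploy(candidate)
```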
We can apply AI systems to a boundless range of tasks without creating individual AI entities that act as omnicompetent agents. Again, real AI systems can be more stable and easier to manage than “the AIs” we’ve been expecting.
Note that “self improvement” only seems exotic if it’s something “the AI” is supposed to do to its “self”. In the real world, AI research and development is just a bunch of tasks, and more and more of these tasks are being performed by AI.
The main takeaway:
Because even superintelligent-level AI can take the form of a manageable pool of resources, the key question is what we should do with AI, not what “it” will do with us. Intelligence isn’t a thing.3
1. Recent examples include AlphaGeometry (“Solving olympiad geometry without human demonstrations”, January 2024) and FunSearch (“Mathematical discoveries from program search with large language models”, December 2023).
2. Reviewed in “From sequence to function through structure: Deep learning for protein design”, 2023.
3. Powerful AI agents are possible, of course, and will pose challenges for security, but their value is surprisingly small because alternative ways to organize intelligence are equally capable and more tractable (see “The Open Agency Model”, 2023). I’ll return to this soon for a deeper dive.