There’s a little trick you can play on yourself in a city.
Stand at a street corner and stare at the world like it’s the only world that exists. The traffic light becomes a moral system. The sidewalk becomes a politics. The next ten meters of pavement become destiny.
Then pull out a map.
Suddenly the street corner is not a universe. It’s a pixel.
And if you zoom out far enough, the city itself becomes a dot on a continent, and the continent a smudge on a sphere, and the sphere a faint blue argument against despair.
This is the third post in our series about Universe = AGI.
In the last two posts we talked about two commitments.
Today we talk about the next pillar—the one that turns “a rule that updates state” into something that can host intelligence:
Locality and Scale.
Because intelligence is not just about thinking. It’s about where thinking lives.
If you are building a world for learning, the first ethical choice is about fairness:
Will the world let causes stay near their effects?
In a world without locality—where anything can influence anything instantly—nothing is stable enough to learn. There are no reliable mechanisms, only coincidences. Agents don’t become scientists; they become gamblers.
Locality is the opposite: it makes the world honest.
It means that if something happens “over there,” you don’t get the benefit of it “over here” unless something actually carried that influence across the boundary.
That boundary is where knowledge is born.
And yet: a world that is only local can be cruel.
Imagine a planet where the only interaction is “touch your nearest neighbor.”
You can still compute. You can still build patterns. But you’ve made communication, coordination, and long‑range structure unbearably expensive: when touch is the only channel, influence moves one neighbor per moment, so anything a thousand neighbors away is at least a thousand moments away. Everything becomes slow diffusion. Culture becomes a rumor that takes a lifetime to cross town.
Nature didn’t choose that world.
Nature chose multiple scales: atoms, cells, organisms, ecosystems.
Scale is how complexity becomes manageable.
And if we want a Universe‑Machine that can grow intelligence, we need the same gift: a way for the world to be local and to support long‑range structure without cheating.
Engineers already know the secret name of scale: hierarchy.
We use it everywhere: functions inside modules, files inside directories, subnets inside networks.
Scale is what lets systems grow without becoming impossible to understand.
And intelligence, at its core, is a compression engine: it trades detail it can ignore for structure it can reuse.
So one of our guiding questions becomes:
Can we build a world where abstraction is not merely a mental trick, but a native part of the world’s geometry?
When most people hear “space,” they picture Euclidean distance: meters, coordinates, geometry class.
But for a computational world, there is another kind of distance that matters more:
How much do two places share a common story?
Think about phone numbers.
Two numbers that start with the same country code and area code are “near” in an administrative sense. The prefix is shared structure. It’s a kind of locality, even though it isn’t physical.
Or think about a family tree.
Two cousins are “near” because they share an ancestor not too far back. The deeper the shared ancestor, the closer the relation.
This idea—“distance equals how far back until we share a common prefix”—is one of the simplest ways to make scale real.
It creates a world where zooming is not a metaphor. It’s an operation.
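To make that concrete, here is a minimal sketch in Python (the function names `prefix_distance` and `zoom` are ours, purely for illustration, not part of any existing design). An address is a tuple of nested labels, coarsest first; distance is how many levels you must climb before two addresses agree; zooming out is just truncating the address.

```python
def prefix_distance(a, b):
    """Distance = how far up the hierarchy until the two addresses share a prefix.

    Addresses are tuples of labels, coarsest level first,
    e.g. ("earth", "europe", "paris", "rue-x", "no-12").
    """
    depth = max(len(a), len(b))
    shared = 0
    for x, y in zip(a, b):
        if x != y:
            break
        shared += 1
    return depth - shared  # 0 means "the same place at full resolution"


def zoom(address, level):
    """Zooming out is an operation: keep only the first `level` labels."""
    return address[:level]


home     = ("earth", "europe", "paris", "rue-x", "no-12")
neighbor = ("earth", "europe", "paris", "rue-x", "no-14")
tokyo    = ("earth", "asia", "tokyo", "ginza", "no-3")

print(prefix_distance(home, neighbor))  # 1 -> near: they differ only at the finest level
print(prefix_distance(home, tokyo))     # 4 -> far: they part ways just below "earth"
print(zoom(home, 3))                    # ("earth", "europe", "paris") -- the city-scale view
```

Under this metric your next‑door neighbor is distance 1 away and another continent is distance 4, no matter how many meters separate them.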
Once a world has multiple scales, something profound happens:
Agents don’t need to invent abstraction from scratch. The world already has it.
They can build models that start coarse (“what happens in this region?”) and refine only when needed (“what happens in this corner?”).
This is how human intelligence works. We don’t track every atom. We choose a scale appropriate to the task.
You can keep local rules at the micro scale, while still allowing information to move at higher scales through structured channels—like roads, postal codes, and backbone networks.
Not instant telepathy. Not global broadcasts. Just architecture.
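Here is a toy sketch of what “just architecture” can look like, in Python; every name and constant below is an assumption made up for illustration. Cells update only from their immediate neighbors, while regions trade coarse summaries with adjacent regions on a slower clock. Nothing is broadcast globally, yet influence crosses the world far faster than cell‑by‑cell diffusion alone.

```python
# Toy two-scale world (all names and constants are illustrative assumptions).
# Micro scale: a cell interacts only with its immediate neighbors.
# Macro scale: a region exchanges a coarse summary with *adjacent* regions only,
# on a slower clock. No global broadcasts -- every hop is still local at its scale.

CELLS_PER_REGION = 16
NUM_REGIONS = 4

def micro_step(cells):
    """Local rule: each interior cell relaxes toward its two neighbors."""
    out = cells[:]
    for i in range(1, len(cells) - 1):
        out[i] = 0.5 * cells[i] + 0.25 * (cells[i - 1] + cells[i + 1])
    return out

def macro_step(cells):
    """Structured channel: each region is nudged toward its adjacent regions' averages."""
    n = CELLS_PER_REGION
    avg = [sum(cells[r * n:(r + 1) * n]) / n for r in range(NUM_REGIONS)]
    out = cells[:]
    for r in range(NUM_REGIONS):
        neighbors = [avg[q] for q in (r - 1, r + 1) if 0 <= q < NUM_REGIONS]
        nudge = 0.1 * (sum(neighbors) / len(neighbors) - avg[r])
        for i in range(r * n, (r + 1) * n):
            out[i] += nudge
    return out

world = [0.0] * (CELLS_PER_REGION * NUM_REGIONS)
world[0] = 1.0  # a disturbance in the far-left neighborhood

for t in range(1, 41):
    world = micro_step(world)
    if t % 10 == 0:  # the macro channel runs on a slower clock
        world = macro_step(world)

n = CELLS_PER_REGION
print([round(sum(world[r * n:(r + 1) * n]) / n, 5) for r in range(NUM_REGIONS)])
# The far-right region's average is already nonzero after ~30 steps, even though
# cell-by-cell diffusion alone would need 48+ steps just to touch its nearest cell.
# Every hop -- cell to cell, region to region -- was local and inspectable.
```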
Tools are compressed actions.
When you have scale, you can have gadgets that operate locally but compose into larger mechanisms. You can have “macros” that mean something at the city scale while still being made of neighborhood‑scale parts.
This is the beginning of culture: reusable artifacts.
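A tiny sketch of that idea, with made‑up names: a “tool” is a named composition of neighborhood‑scale moves, and once it has a name it can be reused, shared, and built upon like any other artifact.

```python
# Illustrative only: a "tool" as a compressed action -- one name, many local steps.

def step_east(pos):
    return (pos[0] + 1, pos[1])

def step_north(pos):
    return (pos[0], pos[1] + 1)

def macro(*steps):
    """Compose neighborhood-scale steps into one reusable, larger-scale action."""
    def run(pos):
        for step in steps:
            pos = step(pos)
        return pos
    return run

cross_the_block = macro(step_east, step_east, step_north)  # a reusable artifact
print(cross_the_block((0, 0)))  # (2, 1): one call at the block scale, three local moves underneath
```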
Scale lets you ask the kind of questions that make real debugging possible: where an influence came from, which boundary it crossed, and at what level of detail you need to look.
If you can’t ask those questions, you can’t trust your Universe.
Locality and scale are not only design choices. They’re values.
A world with locality respects its inhabitants.
It doesn’t punish you with invisible global consequences. It lets you build understanding where you stand, and then carry that understanding outward. It gives you a chance to learn, to craft, to cooperate, to make meaning that survives.
Scale, in turn, is compassion for finite minds.
It says: you don’t have to hold the whole universe in your head to live wisely inside it.
Locality is how we prevent hidden channels.
If “everything talks to everything,” you can’t tell whether an apparent law is real or whether information is leaking through the back door.
Scale is how we keep complexity from becoming mythology.
It gives us a disciplined way to say what we mean by “near,” what counts as a mechanism, and how influence is allowed to propagate.
Rigor is how we keep the world honest enough for intelligence to be earned.
Once you have a world with locality and scale, you’re ready for the next question:
How do agents act in a world like this, and how do their actions become tools that outlive them?
In the next post we’ll talk about actions, instruments, and culture—how a Universe‑Machine can make “doing” as fundamental as “knowing,” and why that’s the bridge from intelligence to civilization.