Huly Labs: Why We’re Building a Universe for AGI

Jan 1, 2026

It started with a discomfort that’s hard to explain unless you’ve built systems for a living.

We didn’t want to “add intelligence.” We wanted to earn it.

And the more we looked at the way intelligence actually happens—babies learning physics by knocking cups off tables, scientists building instruments, teams building tools and culture—the more a simple idea kept returning:

Intelligence is not a thing floating in space. It is something that grows inside a world.

So at Huly Labs we’ve chosen a goal that is both romantic and brutally technical:

Universe = AGI.
We want to build a Universe‑Machine where intelligence can arise, learn, build tools, and keep growing.

This series of posts is our attempt to explain why that goal makes sense, what it means in practice, and how we stay grounded while reaching for something enormous.

Why a Universe at all?

Most AGI work tries to scale up a mind—bigger models, more data, more compute.

That approach has produced remarkable results. But it also bumps into a wall: real intelligence is not just pattern recognition. It is agency under uncertainty, where the world pushes back. It’s prediction plus action plus memory plus social reality.

If you want general intelligence, you eventually face questions like:

  - What counts as an action, and what pushes back when you’re wrong?
  - What should the world remember, and for how long?
  - How do agents build on each other’s tools and discoveries?

Our answer is not “we know.” Our answer is “we can build a world where these questions have concrete meaning.”

The kind of Universe we mean (and what we don’t mean)

When we say “Universe,” we do not mean we’ve discovered real physics, or that we’re simulating reality.

We’re borrowing physics language as a clean engineering vocabulary: a law that updates the world, step by step.

What we do mean is:

  - a world with a definite state and a law that updates it, step by step;
  - agents and observers who live inside that world and never see all of it;
  - rules simple enough to audit, replay, and build on.

Think less “perfect simulation” and more “a minimal world that can host open‑ended learning.”
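
To make that operational, here is a minimal sketch in Python of what we mean by “a law that updates the world, step by step.” The names (World, step) and the toy XOR rule are ours, purely for illustration: a world is nothing but a state plus a pure function applied in a loop.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class World:
    tick: int
    cells: tuple[int, ...]  # the entire state; nothing is hidden from the law

def step(w: World) -> World:
    """One application of the law: each cell becomes the XOR of its two
    neighbors (a toy rule; any pure function of the state would do)."""
    n = len(w.cells)
    cells = tuple(w.cells[(i - 1) % n] ^ w.cells[(i + 1) % n] for i in range(n))
    return World(w.tick + 1, cells)

w = World(0, (0, 1, 0, 0, 1, 1, 0, 0))
for _ in range(5):
    w = step(w)  # the only way the world ever changes
print(w)
```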

The core design principle: no magic

Here is a principle we treat as sacred:

If something looks random, it should be random for a reason.

In most simulations, “randomness” is an oracle you call. That’s fine for games, but it’s a problem for a Universe‑Machine. If you can inject arbitrary entropy at will, you can fake anything. You can also break causality, reproducibility, and long‑term structure.

So we aim for something stricter:

The deepest layer of the Universe should be reversible—able to run forward and backward.

This isn’t mystical. It’s a software engineering stance:

  - no information is silently destroyed, so every state has a traceable cause;
  - any run can be replayed exactly, forward or backward, so results are reproducible;
  - there is no side door for entropy, so long‑term structure can’t be faked.
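
As an existence proof that this stance is cheap to adopt, here is a minimal sketch of a reversible law using the classic second‑order trick: keep two consecutive states and define next = f(current) XOR previous. The mixing rule f below is an arbitrary toy of ours; the pattern is what matters, because it makes the update a bijection no matter what f is.

```python
def f(x: int) -> int:
    # any function of the current state works; here, an 8-bit left rotation
    return ((x << 1) | (x >> 7)) & 0xFF

def forward(prev: int, cur: int) -> tuple[int, int]:
    # (prev, cur) -> (cur, next); invertible regardless of f
    return cur, f(cur) ^ prev

def backward(cur: int, nxt: int) -> tuple[int, int]:
    # exact inverse of forward: recovers the previous state
    return f(cur) ^ nxt, cur

state = (0b00010011, 0b10100001)
trace = [state]
for _ in range(10):
    state = forward(*state)
    trace.append(state)

# Running the law backward recovers every earlier state exactly.
for expected in reversed(trace[:-1]):
    state = backward(*state)
    assert state == expected
```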

Once the foundation is reversible, you can still get “stochastic” behavior at the observed level—because an observer never sees all the microscopic details. Which brings us to one of the first big ideas we proved to ourselves:

Determinism can look like probability

If a world is deterministic but contains hidden degrees of freedom—think of them as a reservoir of bits—then an agent who can’t see that reservoir experiences a probabilistic world.

That’s not philosophy. It’s a construction.
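
Here is a minimal sketch of that construction, under toy assumptions of our own: the micro‑law is a deterministic, invertible update of a 64‑bit reservoir (an xorshift step, which is a bijection on nonzero states), and the observer is only allowed to see one bit of it. Nothing random is ever called, yet the observed bit behaves like a fair coin.

```python
from collections import Counter

MASK = 0xFFFFFFFFFFFFFFFF

def micro_law(s: int) -> int:
    # xorshift64: a deterministic, invertible update of the hidden reservoir
    s ^= (s << 13) & MASK
    s ^= s >> 7
    s ^= (s << 17) & MASK
    return s

def observe(s: int) -> int:
    # the agent's entire view of the world: a single coarse bit
    return (s >> 63) & 1

state = 0x9E3779B97F4A7C15  # any nonzero start works
counts = Counter()
for _ in range(100_000):
    state = micro_law(state)
    counts[observe(state)] += 1

print(counts)  # close to 50/50: deterministic below, Bernoulli above
```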

It means “physics kernels” that look like Markov processes can be implemented as deterministic micro‑laws, without summoning randomness from outside the Universe.

This matters for AGI because it connects three things that usually live in separate boxes: determinism (what the micro‑law does), probability (what a bounded observer experiences), and information (which bits are hidden from whom).

Why geometry shows up (without the heavy math)

If you build a Universe‑Machine, you quickly discover that the hard part isn’t “computation.”

The hard part is structure: how state is organized, which symmetries the laws respect, and what can act on what.

This is where modern geometry becomes surprisingly practical. The friendly version of the idea is:

Don’t start from coordinates. Start from what can act on what, and let “space” be what those actions imply.

We push this further by imposing a constraint we love because it makes everything more honest:

Act as if Hilbert spaces were never discovered.

Hilbert spaces are a powerful mathematical language for continuous systems (they’re everywhere in quantum theory), but they can also become a kind of fog machine: everything fits, nothing is forced.

So we try to rebuild the story using finite objects and constructive checks first—machines, graphs, kernels, symmetries—before we allow ourselves any analytic comfort.
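
In that finite, constructive spirit, here is a small sketch of “space from actions.” Everything below is a toy of our own making: states are arrangements of four items, the only primitives are two generator actions, and “space” is whatever the actions carve out; points are reachable states, distance is the fewest actions connecting them.

```python
from collections import deque

# Generators: two ways to act on a 4-element arrangement. No coordinates.
def swap(s):   return (s[1], s[0]) + s[2:]
def rotate(s): return s[1:] + s[:1]

GENERATORS = [swap, rotate]

def explore(start):
    """Breadth-first walk of the action graph: returns every reachable
    state with its distance (shortest word length) from the start."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for g in GENERATORS:
            t = g(s)
            if t not in dist:
                dist[t] = dist[s] + 1
                queue.append(t)
    return dist

space = explore((0, 1, 2, 3))
print(len(space))           # 24: the two actions generate all permutations
print(max(space.values()))  # the "diameter" of this little universe
```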

What we’re building (the plan, in one paragraph)

We’re building a world that can grow four things together:

  1. Models (what the world tends to do),
  2. Actions (how agents can intervene),
  3. Agents (policies that choose actions),
  4. Culture (shared tools and persistent artifacts).

The goal is not to “solve intelligence” in a vacuum. The goal is to create a substrate where intelligence can be earned—where better compression, better prediction, and better tools are naturally rewarded.
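
To fix ideas, here is a hypothetical interface sketch (our names, not an existing API) of how those four pieces relate. The point is the shape of the dependencies, not the signatures: agents choose actions, actions intervene on state, models say what the world tends to do, and culture is whatever outlives a single agent.

```python
from typing import Protocol

class Model(Protocol):
    def predict(self, state): ...                 # what the world tends to do

class Action(Protocol):
    def apply(self, state): ...                   # how an agent can intervene

class Agent(Protocol):
    def choose(self, observation) -> Action: ...  # a policy over actions

class Culture(Protocol):
    def store(self, artifact) -> None: ...        # tools that persist
    def retrieve(self, query): ...                # ...and can be found again
```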

What success looks like (in human terms)

In the end, we want to be able to point at something concrete and say: “agents in this world invented a tool we never gave them, kept it, and taught it to others.”

Not because we scripted it. Because the world made it possible.

The lab’s promise: Love and Rigor

“Universe = AGI” is a big sentence. Big sentences are dangerous. They can become marketing. They can become religion. They can become an excuse to stop doing the hard work.

So we hold ourselves to two commitments:

Love

We’re doing this because we want a future where intelligence amplifies humanity rather than replacing it; where learning systems are not opaque gods but understandable worlds we can live with. We want to build something that is beautiful, that invites participation, that respects the people who will inherit it.

Rigor

We do not treat inspiration as evidence. We turn ideas into small, testable statements. We build internal “proof‑by‑construction” whenever we can. We insist on reproducibility. When something fails, we keep the failure and learn from it.

This is the culture we want for the lab—and, eventually, for the Universe‑Machine itself.

What to expect next

In the next posts we’ll make the vision concrete without drowning you in symbols: how a reversible foundation is built, how hidden state turns determinism into probability, why we start from actions rather than coordinates, and how models, actions, agents, and culture grow together.

If you’ve ever felt that the world is the missing piece in AGI—and that building the world is an engineering problem—we’re building for you.