Posts


Jan. 10, 2026

Purpose and Value: Steering a Universe Toward Wisdom

There’s a mistake that shows up in almost every first attempt at “alignment”:

You pick a value. You encode it. You optimize it.

And then the system becomes brilliant at destroying everything else.

This is not a failure of morality. It’s a failure of world design.

If we’re building Universe = AGI, then the deepest question is not “What should the agent want?”

The deepest question is:

What kind of world makes good futures stable?

Jan. 9, 2026

Resilience and Repair: A Universe That Can Heal

The first time you lose data, you learn a new kind of fear.

It’s not the fear of failure. Engineers fail all the time.

It’s the fear of irretrievable failure: the moment when you realize the system didn’t just break—it forgot. The state is corrupted, the timeline is confused, and there is no clean path back to truth.

That fear is not paranoia. It’s respect for entropy.

If we want Universe = AGI, then we are not building a demo. We’re building a world that must survive:

Jan. 8, 2026

Memory, Identity, and Institutions: A Past You Can Live With

Everyone has met someone who remembers everything.

At first it feels like a superpower. They can quote conversations from years ago. They never lose a detail. They can win any argument by replaying the past like surveillance footage.

And then, slowly, you realize what it costs.

Nothing fades. Nothing heals. No mistake becomes a lesson; it becomes a life sentence. Every moment is dragged forward into the present with the same weight it had on the day it happened.

Jan. 7, 2026

Evidence, Communication, and Alignment: Trust as a Physical Phenomenon

There’s a moment in every distributed system where philosophy becomes engineering.

Two services disagree.

One says the payment succeeded. The other says it failed. Users are angry. Everyone is certain. Nobody is right.

And then someone asks the only question that matters:

“What do the logs say?”

Not because logs are perfect. But because logs are the closest thing we have to a shared past.
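
To make that idea concrete, here is a minimal TypeScript sketch (every name here is illustrative, not our actual code): two services hold contradictory beliefs about a payment, and the only arbiter is a replay of an append-only event log.

```ts
type PaymentEvent =
  | { kind: "authorized"; paymentId: string; tick: number }
  | { kind: "captured"; paymentId: string; tick: number }
  | { kind: "failed"; paymentId: string; tick: number; reason: string };

// The log is the one record both services agree to trust.
const eventLog: PaymentEvent[] = [];

function append(event: PaymentEvent): void {
  // In a real system this write would be durable and totally ordered.
  eventLog.push(event);
}

// Neither service's in-memory belief is authoritative; the replay is.
function statusOf(paymentId: string): "succeeded" | "failed" | "unknown" {
  let status: "succeeded" | "failed" | "unknown" = "unknown";
  for (const e of eventLog) {
    if (e.paymentId !== paymentId) continue;
    if (e.kind === "captured") status = "succeeded";
    if (e.kind === "failed") status = "failed";
  }
  return status;
}
```

Not a payment system, just the shape of the argument: when two presents disagree, you settle it by replaying a shared past.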

This is the next post in our Universe = AGI series, and it’s about something that doesn’t sound like “AI research” until you’ve lived through enough real systems:

Jan. 6, 2026

Cost, Compression, and Selection Pressure: The Currency of Intelligence

There’s a simple reason games are addictive and lectures aren’t:

In a game, every move costs something.

Time. Attention. Opportunity. Risk.

And because it costs, it matters.

This is the fifth post in our Universe = AGI series, and it’s about a question that hides under everything we’ve said so far:

If intelligence is going to grow inside a world, what is intelligence paid in?

Our answer is not “points.” Not “reward.” Not “a magic objective function.”

Jan. 5, 2026

Actions, Instruments, and Culture: How a World Teaches Agents to Build Tools

There’s a reason the story of intelligence is also the story of tools.

Not because tools are convenient, but because tools change what the world is to you.

A rock is a rock until it becomes a hammer.

A flame is a flame until it becomes a stove.

A sound is a sound until it becomes language.

If Universe = AGI is our goal, then the question is not “How do we make a mind smarter?”

Jan. 4, 2026

Locality and Scale: How a World Learns to Compress Itself

There’s a little trick you can play on yourself in a city.

Stand at a street corner and stare at the world like it’s the only world that exists. The traffic light becomes a moral system. The sidewalk becomes a politics. The next ten meters of pavement become destiny.

Then pull out a map.

Suddenly the street corner is not a universe. It’s a pixel.

And if you zoom out far enough, the city itself becomes a dot on a continent, and the continent a smudge on a sphere, and the sphere a faint blue argument against despair.

Jan. 3, 2026

The Tick: A Universe You Can Rewind

There’s a kind of power programmers almost never get in real life:

You make a change, press a button, and time advances.

In the physical world you don’t get a stepper. You don’t get a debugger. You don’t get to roll back the last second and ask, “what exactly caused that?”
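
Inside a simulated world, you can have all three. As a rough sketch of the idea (names are illustrative, not our engine): state only ever changes inside a pure tick function, and every tick is kept, so stepping forward and rewinding are both trivial.

```ts
interface WorldState {
  tick: number;
  counters: Record<string, number>;
}

// A pure transition: the same input state always yields the same output state.
function tick(state: WorldState): WorldState {
  return {
    tick: state.tick + 1,
    counters: { ...state.counters, heartbeat: (state.counters.heartbeat ?? 0) + 1 },
  };
}

class Universe {
  private history: WorldState[] = [{ tick: 0, counters: {} }];

  get now(): WorldState {
    return this.history[this.history.length - 1];
  }

  step(): WorldState {
    const next = tick(this.now);
    this.history.push(next);
    return next;
  }

  // "What exactly caused that?" Roll back to any earlier tick and inspect it.
  rewindTo(t: number): WorldState | undefined {
    return this.history.find((s) => s.tick === t);
  }
}
```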

But if we’re serious about Universe = AGI, we’re not merely training a model. We’re building a world. And the first decision in building a world is deceptively simple:

Jan. 2, 2026

No Magic Randomness: Why Our Universe Doesn’t Call rand()

There’s a moment every engineer knows.

You’ve shipped a system that “works.” Then one day it doesn’t. Not because the logic changed, but because the world did: a different machine, a different timing, a different seed. The bug report is three words long and spiritually devastating:

“Can’t reproduce reliably.”

At Huly Labs we’ve learned to treat that sentence like a smoke alarm.

Because we’re not just building software that runs. We’re building a Universe‑Machine—a world with rules—so that intelligence can grow inside it.
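
To illustrate what “no magic randomness” means in practice, here is a minimal TypeScript sketch (illustrative names only): the generator’s state lives inside the world, the seed is explicit, and the same seed replays the same sequence on any machine.

```ts
interface Rng {
  state: number; // the entire source of randomness, visible and serializable
}

// A 32-bit linear congruential generator: tiny, not cryptographic,
// but bit-for-bit reproducible on every machine.
function nextRandom(rng: Rng): { value: number; rng: Rng } {
  const next = (Math.imul(rng.state, 1664525) + 1013904223) >>> 0;
  return { value: next / 2 ** 32, rng: { state: next } };
}

// Same seed in, same universe out.
function run(seed: number, draws: number): number[] {
  const out: number[] = [];
  let rng: Rng = { state: seed >>> 0 };
  for (let i = 0; i < draws; i++) {
    const step = nextRandom(rng);
    out.push(step.value);
    rng = step.rng;
  }
  return out;
}

// run(42, 3) yields the same three numbers today, tomorrow, and in CI.
```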

Jan. 1, 2026

Huly Labs: Why We’re Building a Universe for AGI

It started with a discomfort that’s hard to explain unless you’ve built systems for a living.

We didn’t want to “add intelligence.” We wanted to earn it.

And the more we looked at the way intelligence actually happens—babies learning physics by knocking cups off tables, scientists building instruments, teams building tools and culture—the more a simple idea kept returning:

Intelligence is not a thing floating in space. It is something that grows inside a world.