
Software Is Memoization

Here is a claim that sounds reductive until you sit with it: solving problems is search, and software is memoization. Every library, framework, and tool you have ever shipped is a cached solution to a sub-problem that someone decided was not worth re-deriving from scratch. We build software so the next person — or the next run — does not have to search the solution space again.

This framing is not new. Dynamic programming works exactly this way: decompose a problem, solve sub-problems once, store results. What we call "the software industry" is dynamic programming applied to human coordination at civilizational scale. Someone figured out TCP, memoized it, and now nobody rederives reliable packet delivery from first principles.
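The mechanics in miniature: a standard memoized Fibonacci in Python, where each sub-problem is searched once and every later request is a cache hit.

```python
from functools import lru_cache

# Without the cache, fib(40) re-derives the same sub-problems
# millions of times; with it, each sub-problem is solved exactly once
# and stored for every future caller.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155, computed instantly thanks to memoized sub-results
```

Swap "Fibonacci sub-problem" for "reliable packet delivery" and the structure is the same: TCP is the `lru_cache` entry of the networking world.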

What changes when AI agents enter the picture is not whether memoization matters, but what is worth memoizing.

The Cache Shifts

The conventional fear is that agents eliminate software. If an agent can generate a solution on the fly, why freeze anything into a package? But this misunderstands where the cost lies. Agents are powerful searchers, but search is never free. It costs tokens, time, and reliability. An agent that re-derives a correct OAuth flow from scratch on every request is burning resources and introducing variance — sometimes it will get the edge cases wrong.
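The trade-off can be sketched in a few lines. Here `run_agent` is a hypothetical stand-in for any agent invocation (not a real API): the expensive, variable search runs once per distinct task, and every repeat is cheap and deterministic.

```python
import hashlib

# Hypothetical sketch: memoize agent-derived solutions keyed by task,
# so repeated requests skip the costly search entirely.
_cache: dict[str, str] = {}

def solve(task: str, run_agent) -> str:
    key = hashlib.sha256(task.encode()).hexdigest()
    if key not in _cache:                 # cache miss: pay for search
        _cache[key] = run_agent(task)     # expensive: tokens, time, variance
    return _cache[key]                    # cache hit: cheap and deterministic

# Simulated agent that counts how often it is actually invoked.
calls = 0
def fake_agent(task: str) -> str:
    global calls
    calls += 1
    return f"solution for {task}"

print(solve("implement OAuth flow", fake_agent))
print(solve("implement OAuth flow", fake_agent))
print(calls)  # 1 -- the second request never touched the agent
```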

So the question is not "do we still need memoized solutions?" but "which solutions are still worth memoizing versus which ones agents can reliably find just-in-time?"

The answer reshapes the cache. Things agents handle fluently — generating unit tests, writing CRUD endpoints, scaffolding boilerplate — stop being worth freezing into elaborate frameworks. You do not need a sophisticated test harness generator when one sentence ("use test-driven development") activates the right behavior in an agent that already knows how. Memoizing that would actually hurt: rigid tooling constrains the agent's ability to adapt to context. You are paying for generality and then throwing it away.

But the expensive stuff? The things where getting it wrong is costly, where the search space is treacherous, where subtle mistakes compound? Those are worth crystallizing. Pre-installed isolated environments. Browser automation with screenshot capabilities. Pitfall-avoidance guides for common failure modes. These are memoized artifacts that save an enormous amount of exploration time, not just for one agent invocation but for everyone doing similar work.


The cache shifts from solutions to search infrastructure: skills, constraints, verification tools. Thinner, but still essential — scaffolding for the agent, not a replacement for it.

What remains is not the thick application layer we are used to. It is something leaner. The memoized layer becomes search infrastructure: skills the agent can invoke, constraints that keep it in bounds, verification mechanisms that catch errors before they propagate. The software is still there, but its nature has changed. It serves the searcher rather than replacing the search.

The Strandbeest Analogy

Theo Jansen builds kinetic sculptures — skeletal structures that walk on beaches, powered only by wind. I first used this analogy when writing about the Blueprint pattern, where the focus was on how rigid linkages produce emergent flexibility. Here I want to look at it from a different angle: the relationship between the skeleton and the wind.

Software in the age of agents is the skeleton. The agent is the wind. Without structure, the agent's search energy dissipates — it hallucinates, loops, goes sideways. Without the agent, the structure just sits there, inert and brittle, handling only the cases its designers anticipated. Together, they produce complex adaptive behavior that neither could achieve alone.

And the skeleton should be minimal. Over-engineer it and you get rigidity. The whole point is that the wind is variable — the agent adapts to novel situations. Your job is to provide just enough structure to channel that adaptability productively: guard rails, not train tracks.

Evolving the Skeleton

Here is where it gets interesting. The skeleton itself can be evolved.

Jansen does this with his physical sculptures — he runs evolutionary algorithms to discover optimal leg linkage ratios. You can do the same with agent scaffolding. Ahead of time, you search the configuration space: what skills should be available? What verification loops catch the most errors? What domain-specific language best encodes the constraints of the problem? This is like training a model, except instead of backpropagation over weights you are iterating on the structure that shapes agent behavior.

I have been doing this with domain languages. You define a small formal language that captures the rules of a specific problem space — not a general-purpose programming language but a constrained vocabulary for expressing plans and checking them against domain rules. Then you let the agent use that language to verify its own reasoning. The feedback loop is tight: propose a plan, validate it against the DSL, catch violations, revise.
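A toy version of that loop, with the "DSL" reduced to ordering rules over plan steps. The rule and the hardcoded revision are illustrative only, not the author's actual language; a real agent would revise based on the violation message.

```python
# Domain rules expressed as (later_step, earlier_step, message):
# a minimal stand-in for a richer constraint language.
RULES = [
    ("deploy", "test", "'deploy' must come after 'test'"),
]

def validate(plan: list[str]) -> list[str]:
    """Check a proposed plan against the domain rules; return violations."""
    violations = []
    for later, earlier, msg in RULES:
        if later in plan and earlier in plan:
            if plan.index(later) < plan.index(earlier):
                violations.append(msg)
        elif later in plan:           # required prerequisite missing entirely
            violations.append(msg)
    return violations

plan = ["build", "deploy", "test"]    # the agent's first proposal
while validate(plan):
    # propose -> validate -> catch violation -> revise; here the revision
    # is simulated by moving 'test' ahead of 'deploy'
    plan.remove("test")
    plan.insert(plan.index("deploy"), "test")

print(plan)  # ['build', 'test', 'deploy'] -- all violations resolved
```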

But the real move is letting the agent modify and evolve the language itself. The DSL is not handed down from on high; it is a living artifact that improves as the agent discovers new failure modes and new regularities. The memoized search infrastructure gets better over time, not through human engineering alone but through the agent's own exploration.

This gives you two layers of learning operating in two different spaces.

The first layer is meta-search: ahead-of-time exploration of the setup itself. Which feedback mechanisms have the best reliability? Which constraints prevent the most expensive failure modes? Which environmental provisions save the most exploration time? You can run this search offline, burning compute to find scaffolding configurations that maximize agent performance. Update the memoized search infrastructure, and just-in-time performance improves across the board.
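One way to sketch that offline layer: enumerate scaffolding configurations and score each with an evaluation harness. The search space and the scoring function below are invented for illustration; in practice `score` would be real agent benchmark runs.

```python
import itertools

# A toy scaffolding configuration space (an assumption, for illustration).
SPACE = {
    "sandbox":  [False, True],   # pre-installed isolated environment?
    "verifier": [False, True],   # verification loop enabled?
    "skills":   [0, 3, 10],      # how many skills to expose
}

def score(config: dict) -> float:
    # Invented scoring: isolation and verification add reliability,
    # while too many exposed skills add overhead.
    return (0.3 * config["sandbox"]
            + 0.4 * config["verifier"]
            + 0.1 * config["skills"]
            - 0.02 * config["skills"] ** 2)

keys = list(SPACE)
best = max(
    (dict(zip(keys, combo)) for combo in itertools.product(*SPACE.values())),
    key=score,
)
print(best)  # the scaffolding configuration worth memoizing for runtime use
```

This burns compute offline so that every subsequent just-in-time run starts from a better skeleton.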

The second layer is runtime search: the agent operating within whatever scaffolding currently exists, finding solutions to the actual problem at hand. This is where the tokens get spent, where the agent's generality pays off, where novel situations get handled.

These two layers are recursively coupled. Better scaffolding makes runtime search more efficient. Runtime failures reveal scaffolding gaps. Meta-search fills those gaps, producing better scaffolding. The system improves along both axes simultaneously, each layer creating leverage for the other.
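The coupling fits in a few lines. `runtime_search` and `meta_search` below are toy stand-ins: runtime failures accumulate in a log, and the meta layer promotes any pitfall seen repeatedly into crystallized scaffolding that makes later runs succeed.

```python
from collections import Counter

scaffolding: set[str] = set()     # crystallized guards (the memoized layer)
failure_log: list[str] = []       # runtime failures reveal scaffolding gaps

def runtime_search(task: str) -> bool:
    # Layer two: a run succeeds only if this task's pitfall is
    # already guarded by the current scaffolding.
    pitfall = f"guard:{task}"
    if pitfall in scaffolding:
        return True
    failure_log.append(pitfall)
    return False

def meta_search() -> None:
    # Layer one: promote any pitfall seen at least twice into a
    # crystallized guard, improving all future runtime searches.
    for pitfall, count in Counter(failure_log).items():
        if count >= 2:
            scaffolding.add(pitfall)

results = []
for _ in range(3):
    results.append(runtime_search("oauth"))
    meta_search()

print(results)  # [False, False, True] -- the third run rides on evolved scaffolding
```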

What to Crystallize, What to Leave Fluid

I have spent the past year watching my own decade-old projects get fed into agent harnesses. Things I spent weeks building — put them into the right problem space with the right constraints, and an agent makes significant progress in hours. This is simultaneously humbling and clarifying. The agent's online iteration capability is so powerful that the question "what is still worth solidifying as reusable software?" becomes genuinely urgent.

My working heuristic: crystallize what is costly to discover and cheap to reuse. Leave fluid what is cheap to discover and costly to generalize.

Costly to discover: reliable isolation environments, correct-by-construction security boundaries, feedback mechanisms that do not produce false positives, verified-correct domain constraint checkers. These represent expensive search results. Memoize them aggressively.

Cheap to discover: specific implementations of well-understood patterns. The agent already knows how to write a REST endpoint, a React component, a database migration. Freezing these into generators or templates is legacy thinking — you are spending engineering effort to restrict a system that would do fine with less guidance.

There is a middle zone that is evolving fast. Agents still stumble in predictable places today; write guides that steer around those pitfalls and crystallize them, and you have software with immediate value. But be clear-eyed: these human-made structures are transient. Future agents will absorb them. What is a carefully curated prompt template today becomes implicit capability tomorrow. The agent that currently needs to be told "set up an isolated environment first" will eventually start by giving itself a fully reliable sandbox without being asked.

This means the shelf life of agent scaffolding is shorter than traditional software. But that is fine. The scaffolding does not need to last forever. It needs to provide leverage now, while the gap between what agents can handle and what they need help with still exists. As that gap closes, the scaffolding evolves or dissolves.

The Shape of What Remains

So what does the steady state look like? Not the elimination of software, but its distillation. Software becomes thinner, more structural, more concerned with verification and constraint than with implementation. The thick middle layer of application logic — the part that most of us spend our days writing — gets progressively handled by agents operating within well-evolved scaffolding.

What remains is the skeleton: minimal, evolved, essential. The infrastructure of search rather than its results. The memoization layer does not disappear. It changes shape. It caches different things. And it keeps changing shape, because the agents keep getting better, and the meta-search over scaffolding keeps finding improvements, and the boundary between "worth crystallizing" and "leave it to the agent" keeps shifting.

If that sounds unstable, it is. But instability is not the same as irrelevance. The skeleton has to keep evolving because the wind keeps changing. That is not a reason to abandon the skeleton. It is a reason to get very good at evolving it.