The Speed of the Loop

The biggest difference between school and industry is not that real systems are more complex. It is that in school, someone has already built your feedback loop for you.

You write code, you run tests, you get a grade. The signal is clear, the cycle is fast, and the metric is given. In industry, none of this is true. The system is noisy, the metrics are ambiguous, and nobody hands you a test suite for "did we build the right thing." The first skill you need is not how to optimize — it is how to build a feedback loop worth optimizing against.

I have spent the better part of two decades building and breaking systems, and the pattern that keeps recurring is this: the quality of your outcomes is determined less by how smart your decisions are, and more by how fast and accurately you can observe their consequences.

Survival by Iteration

In 2009, I joined Facebook. The company had fewer than a thousand people. I joined a team working on a messaging project — what they described as a Gmail competitor.

On my first day, I opened the backend codebase. It had fewer than two thousand lines of Java. System.out.println where logging should have been. Every user's data serialized into a single binary blob. If I had shown this code in a review at any large company, it would not have survived.

But that team had something that mattered more than clean code. We were small — about ten engineers, full-stack — and we were fast. Everyone sat in product discussions, understood the trade-offs, and could ship changes within hours. There were no specification documents, no two-week planning cycles. The process was: build it, ship it, see what breaks, fix it, repeat.

The product vision mutated constantly. It started as email, then absorbed chat, then SMS, then group messaging. I remember trying to reverse-engineer a unified model from the use cases. Three use cases — I assumed a triangle. A fourth appeared — maybe a rectangle. More kept arriving, and the geometry kept changing, until I realized there was no model behind the use cases at all. The product people were designing around scenarios, not schemas. The consistent abstraction was our job to discover, on the fly, under fire.

That codebase — the one that looked like a student project — survived the real world. It evolved. It scaled to serve over a billion users, all without a ground-up rewrite. You know it today as Facebook Messenger.

The original email feature was eventually removed entirely. If we had spent those early months building an architecturally pristine email system, we would have shipped something beautiful and irrelevant. The thing that won was not engineering excellence. It was evolutionary speed — the ability to iterate faster than the environment changed.


The system that survives is not the one with the best design. It is the one with the fastest feedback loop.

There is a card game, Dominion, whose most powerful card is Chapel. It does not attack your opponent or give you resources. It lets you trash your own cards, removing them from your deck permanently. This sounds counterproductive until you realize that a leaner deck with higher signal density triggers more combos, more reliably, than a bloated deck full of dead weight. The engineering equivalent: you do not need more code, more features, more infrastructure. You need less — specifically, less of whatever is slowing down your loop.
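The deck-thinning intuition is just the hypergeometric distribution. A sketch, with invented deck sizes: the same four combo cards, drawn into a five-card hand from a trimmed deck versus a bloated one.

```python
from math import comb

def p_combo(deck_size: int, combo_cards: int, hand_size: int = 5, need: int = 2) -> float:
    """Exact probability of drawing at least `need` combo cards in one hand
    (hypergeometric: choose k hits from the combo cards, the rest from junk)."""
    total = comb(deck_size, hand_size)
    hits = sum(
        comb(combo_cards, k) * comb(deck_size - combo_cards, hand_size - k)
        for k in range(need, min(hand_size, combo_cards) + 1)
    )
    return hits / total

# Four combo cards either way; only the amount of dead weight differs.
p_thin = p_combo(deck_size=12, combo_cards=4)
p_bloat = p_combo(deck_size=30, combo_cards=4)
print(f"thin deck: {p_thin:.2f}, bloated deck: {p_bloat:.2f}")
```

Nothing was added to make the thin deck stronger; junk was removed, and the probability of the combo roughly quadruples.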

The Right Signal

A fast loop is necessary but not sufficient. If the signal is wrong, you iterate confidently in the wrong direction.

Later, that same messaging system hit a scaling crisis. The backend storage ran on mechanical hard drives, and the system was saturating their physical I/O capacity. There was a metric for this — disk IOPS — and it was clearly the bottleneck. But nobody could act on it effectively, because the metric lived at the wrong layer.

The storage team watched their cache hit rates. The service tier watched its request throughput. The frontend teams watched their API latencies. Each team optimized its local metric. But local optimization of distributed systems is a well-known trap: you can improve every component while making the whole system worse, because nobody is watching the global constraint.

The fix was not algorithmic. It was observational. We generated a unique trace ID for every incoming request, propagated it through every layer of the stack, and had the storage layer report an estimated disk IOPS cost per traced request. Then we built a real-time leaderboard: every API endpoint, every access pattern, every team's code, ranked by contribution to the bottleneck.
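The mechanism can be sketched in a few lines. All names and the flat cost model here are illustrative (the real system reported an estimated disk-IOPS cost per traced request), but the shape is the same: mint a trace ID at the edge, carry it to the storage layer, attribute each operation's cost back to its originating endpoint, and rank.

```python
import uuid
from collections import defaultdict

class CostLedger:
    """Aggregates storage cost by the API endpoint that caused it."""

    def __init__(self):
        self.cost_by_endpoint = defaultdict(float)

    def record(self, endpoint: str, trace_id: str, iops_cost: float) -> None:
        # The trace ID ties a storage-layer operation back to the request
        # that caused it, however many layers sit in between.
        self.cost_by_endpoint[endpoint] += iops_cost

    def leaderboard(self) -> list[tuple[str, float]]:
        """Endpoints ranked by total contribution to the bottleneck."""
        return sorted(self.cost_by_endpoint.items(), key=lambda kv: kv[1], reverse=True)

def handle_request(endpoint: str, ledger: CostLedger, storage_costs: list[float]) -> None:
    trace_id = uuid.uuid4().hex    # minted once per request, propagated downward
    for cost in storage_costs:     # each storage op reports its estimated cost
        ledger.record(endpoint, trace_id, cost)

ledger = CostLedger()
handle_request("/inbox", ledger, [4.0, 2.0])
handle_request("/send", ledger, [1.0])
handle_request("/inbox", ledger, [3.0])
print(ledger.leaderboard())  # heaviest contributor first
```

The point of the design is that the ranking is global: every team's traffic lands on one list, measured in the one unit that is actually scarce.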

Within a month, disk IOPS dropped over fifty percent. Not because anyone got smarter. Because everyone could finally see the same number.


A feedback loop is only as good as the metric it feeds you. The hardest part is not optimizing — it is finding the one number that, if everyone could see it, would make optimization obvious.

This experience taught me something I keep rediscovering: the breakthrough in a stuck system is rarely a better algorithm. It is a better measurement. When you cannot make progress, the first question should not be "what should we try next?" It should be "are we even looking at the right thing?"

The Limits of the Loop

So: build a fast loop, feed it the right signal, iterate. Is this the universal recipe?

No. Feedback loops have a fundamental limitation: they can tell you where you are relative to where you were. They cannot tell you where to go.

There is a well-known story about a tech executive who A/B tested forty-one shades of blue to find the one that maximized click-through rate. This sounds rigorous. It is also the kind of optimization that, taken to its logical extreme, produces local maxima — surfaces polished to a mirror finish in a direction that may not matter.
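Shade-picking is a pure argmax over variants. A minimal sketch, with three invented shades instead of forty-one and made-up click probabilities: the procedure can only rank the options it is handed. It will happily find the best blue; it can never ask whether the button should exist.

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def ab_test(variants: dict[str, float], samples: int = 20_000) -> str:
    """Pick the variant with the best *observed* click-through rate.
    `variants` maps name -> true click probability (hidden in practice)."""
    observed = {}
    for name, true_p in variants.items():
        clicks = sum(random.random() < true_p for _ in range(samples))
        observed[name] = clicks / samples
    return max(observed, key=observed.get)

shades = {"#2244aa": 0.02, "#2255bb": 0.03, "#2266cc": 0.05}
print(ab_test(shades))
```

With enough samples the observed rates converge on the true ones and the argmax is reliable, which is exactly what makes the method so seductive: it gives crisp answers to whatever narrow question you feed it.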

I once asked a data scientist at Facebook whether the company was truly "data-driven." He corrected me: Facebook was "data-aware." When Mark Zuckerberg wanted to build something and the data supported it, the data got cited. When the data did not support it, the thing got built anyway — and he kept pushing until the data caught up. Steve Jobs operated similarly. So does every founder who builds something that did not previously exist. This is not irrationality. It is a recognition that for genuinely novel products, the feedback loop you want does not exist yet. You have to build the thing first, then build the loop to evaluate it.

The distinction matters. Data-driven decision-making has an implicit assumption: the answer already hides in the data, and your job is to extract it. This works for optimization — choosing between forty-one blues. It does not work for creation — deciding whether blue is the right color at all, or whether the button should exist.

For creation, you need judgment. Judgment is not the opposite of data. It is what you use when the data has not yet been generated — when the feedback loop has not yet been built — and someone has to decide which direction to point it.


Data tells you where you are. Judgment tells you where to aim. You need both — and you need to know which one is operating at any given moment.

I once watched a colleague present experimental results with impressive confidence. Partway through, someone noticed the test and control groups had been swapped in the analysis. Without missing a beat, the colleague found new data points in the reversed evidence that supported the original conclusion. It was simultaneously impressive and alarming. We all do this to some degree — selectively gather evidence for what we already believe. The only reliable defense I have found is writing down predictions before seeing results. Memory flatters you. A notebook does not.
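The notebook discipline can even be enforced mechanically. An illustrative sketch (the actual notebook was presumably paper): a log that refuses predictions made after the result is already in, so hindsight cannot rewrite what you expected.

```python
class PredictionLog:
    """Write predictions down before the results arrive."""

    def __init__(self):
        self._predicted: dict[str, str] = {}
        self._resulted: set[str] = set()

    def predict(self, experiment: str, expected: str) -> None:
        # Refuse late predictions: once the result is known, it is too late.
        if experiment in self._resulted:
            raise ValueError(f"result for {experiment!r} already seen")
        self._predicted[experiment] = expected

    def record(self, experiment: str, actual: str) -> bool:
        """Record the outcome; return True if the prior prediction matched."""
        self._resulted.add(experiment)
        return self._predicted.get(experiment) == actual

log = PredictionLog()
log.predict("swap-check", "treatment wins")
print(log.record("swap-check", "control wins"))  # prints False: prediction did not hold
```

The value is not the code; it is that the commitment is timestamped by construction, so the only honest move left is to update.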

Closing the Loop

The pattern runs through everything I have learned about building systems and building skills:

  • Speed: tighten the loop. Ship, observe, adjust. The messier prototype that ships this week teaches you more than the clean architecture that ships next quarter.
  • Signal: find the right metric. If every team watches their own dashboard and the system is still stuck, the problem is not effort — it is visibility.
  • Density: prune the noise. The most powerful move is often removing what does not contribute, not adding what might.
  • Limits: know when the loop cannot help you. Optimization is not creation. Sometimes you have to trust judgment, build the thing, and let the data catch up.

These are not principles I arrived at through theory. They are scar tissue from years of building things that refused to behave as designed — systems where the feedback was late, noisy, misleading, or absent, and where the only way forward was to build a better loop and try again.