Optimize for Replacement, Not Extensibility
Why reversibility might be your best strategy against complexity
We talk a lot about extensibility in software design.
We build abstractions “just in case.” We generalize early. We aim for reusability. But too often, that mindset leads us into the exact trap we were trying to avoid: accidental complexity.
One of the most powerful principles I’ve come across — one I try to apply consistently as a CTO — is this: don’t optimize for extensibility. Optimize for replacement.
It’s a mental shift that has changed the way I think about architecture.
Taming Complexity with Reversibility
I first heard this idea in an article by Kent Beck called Taming Complexity with Reversibility, which was based on a talk by Enrico Zaninotto.
In it, Zaninotto breaks complexity down into four sources:
State: the number of configurations your system can reach
Dependencies: on external systems or internal modules
Uncertainty: from the environment or business context
Irreversibility: decisions that can’t be undone
Each of these makes a system harder to understand, evolve, and maintain.
Some factors, such as state or external uncertainty, are hard to control. But irreversibility? That’s something we can design for — or rather, design against.
Software’s Superpower: Reversibility
Zaninotto gives a striking example: the Ford Model T.
Ford tamed complexity by minimizing the number of parts, configurations, and options. You could get the car in “any color as long as it’s black.”
This reduced state, minimized dependencies, and created repeatable, low-risk processes.
But in software, we don’t have the same physical constraints.
We can undo.
We can rebuild.
We can refactor, rewrite, and replace — if we design for it.
This is the key insight: reversibility is a unique superpower of software. And we squander it when we over-invest in speculative flexibility or premature reuse.
Optimizing for Replacement
So what does “optimizing for replacement” actually mean?
It means designing every part of your system, from a single function or class up to a microservice or data pipeline, with the assumption that it might need to be swapped out later.
It doesn’t mean big-bang rewrites. It’s the opposite.
It means building in a way that makes change safe, incremental, and predictable.
You do that by:
Designing simple, clear contracts (APIs, interfaces)
Keeping low coupling between components
Using tests as living documentation and safety nets
Avoiding over-generalization — you can generalize later if needed
It also means being suspicious of early abstractions. Just because something might need to support N use cases in the future doesn’t mean you should build for them now. Instead, build the version that works for your current need — but make sure you can replace it easily later.
Startups and the Cost of Irreversibility
This mindset is especially critical in startups.
In early-stage products, you’re still discovering your market, your business model, and even your problem space. You’re going to get things wrong — and that’s OK.
What matters is whether you can recover quickly and cheaply.
If you’ve optimized for replacement, you don’t need to keep evolving a design that no longer fits. You can build a better version and swap it in.
If not, you’re stuck trying to extend something that was never meant to handle your current scale, your new requirements, or your latest insights. That’s how tech debt accumulates. That’s how teams slow down.
This Is Not About Rewrites
Let me be clear: this is not an argument for big-bang rewrites.
Rewrites are risky, expensive, and rarely deliver on their promises.
Optimizing for replacement means building fractally replaceable systems. You can replace small parts without touching the whole. You can evolve the system piece by piece, in production, without drama.
When you do this well, your software becomes a living system — one that can grow, adapt, and renew itself over time. Not through heroics. Just through good design.
Why Deployment Reversibility Matters
Reversibility isn’t just a design principle for code and architecture — it’s just as critical in how we ship software.
In fact, one of the most practical ways reversibility shows up is during deployment.
When things go wrong — and they will go wrong — your ability to quickly roll back to a known-good state can make the difference between a minor hiccup and a major incident. That’s what gives teams the confidence to move fast without being reckless.
Reversible deployments aren’t a luxury — they’re a form of operational safety.
Whether it’s blue-green deployments, canary releases, or simple rollback scripts, the goal is the same: make sure you can undo a change in minutes, not hours.
Why does this matter?
Because the ability to revert fast is what allows you to iterate fast.
It’s what allows you to deploy at will. It’s what keeps incidents from snowballing. And it’s what gives engineers the confidence to ship without the fear of breaking production for hours.
So when we talk about optimizing for replacement, it’s not just theoretical. It’s a mindset that shapes everything — from how you write a function to how you push to prod.
Final Thought
Extensibility is seductive. It sounds like good engineering.
But extensibility assumes you can predict the future. You can’t.
Optimizing for replacement is a hedge against being wrong.
It acknowledges that the future is uncertain — and that the most powerful thing you can build into your system is the ability to change your mind.
And in a world where complexity is the enemy, reversibility is your best weapon.
