The Paradox of Control

When things go wrong, we tighten our grip.

The last release failed.

Not catastrophically, but enough to spook everyone. Three microservices: two rolled back gracefully, one stubbornly dragged its feet.

The response was immediate and predictable: “We need more control.”

A single end-of-sprint release was declared the only solution. Everything bundled together and neatly tagged. Nothing shipping without explicit approvals. The old rituals return.

It’s the paradox of control - every time something breaks, we tighten our grip. But software rewards trust, feedback, iteration, speed. Process adds friction and, once added, can never be removed.

The alternative is unremarkable.

Move fast.
Prototype early.
Forgiveness comes after the client loves the demo.

Progress through motion, not memos and minutes.

This feels like the inevitable end-state for large organizations.

An empire of permissions.
Strategic alignment sessions.
Pre-approval committees.
Layers of carefully stacked policy.

Nothing moves until every stakeholder has blessed it - and by the time that happens, the context has shifted. The decision must be revisited. The whole saga restarts.

Feedback loops are replaced with approval loops. Progress becomes performance art.

Eventually, the questions stop.

Not because everything is resolved,
but because asking them no longer changes the outcome.

What is visible is fear.
Fear of blame.
Fear of mistakes.
Fear of being the one who signed off.

So responsibility is diffused. Procedures multiply. Accountability blurs just enough to feel protected.

Control promises stability. In practice, it delivers paralysis. The grip tightens until it chokes.

The trap is mistaking the status quo for safety.
Never exploring. Never venturing forward.
Forgetting that the world outside never stops moving.

Every team must eventually face the question: how much certainty is enough?

If you’ve been shipping safely for six months, a year, longer - that momentum is a form of proof. Not a guarantee, but evidence. Trust that the team knows what is expected and can handle it. A track record that says the system works.

Does one bad release outweigh all the quiet successes that came before it?

A single failure shouldn’t trigger a halt. It should trigger a fallback.
A rollback plan.
A circuit breaker.
A way to fail safely and keep moving.
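A circuit breaker is the simplest version of that idea: after a few consecutive failures, stop hammering the broken dependency and serve a fallback until it has had time to recover. A minimal sketch (hypothetical names, not any specific library):

```python
import time

class CircuitBreaker:
    """Trips after `max_failures` consecutive errors; while open,
    calls are short-circuited to a fallback until `reset_after`
    seconds have passed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, fn, *args, fallback=None, **kwargs):
        # While open, skip the real call and use the fallback.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback
            # Cool-down elapsed: allow one real attempt again.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback
        self.failures = 0  # any success resets the count
        return result
```

The point isn't this particular implementation; it's that a failure degrades one call path to a cached or default answer instead of freezing every release behind a committee.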

Stopping everything isn’t discipline. It’s panic.

If a team cannot fail safely, it will never push, never explore - and isn’t that the true failure of leadership?

The real work isn’t adding more gates. It’s building systems that can absorb mistakes without losing their nerve.

The healthiest teams don’t avoid failure.
They expect it, contain it, and recover while learning.

Human systems need room to breathe.


What distinguishes you from other developers?

I've spent over 15 years building petabyte-scale data pipelines across three continents. But the data doesn't matter if we don't solve the human problems first - an AI solution that nobody uses is worthless.

Are the robots going to kill us all?

Not any time soon. At least not in the way you've imagined thanks to the Terminator movies. Sure, somebody with a DARPA grant is always going to strap a knife/gun/flamethrower onto the side of a robot - but just like in Doctor Who, right now that robot will struggle even to get out of the room, let alone up some stairs.

But AI is going to steal my job, right?

A year ago, the whole world was convinced that AI was going to steal their job. Now, the reality is that most people are thinking "I wish this PDF-scanning POC at work would move a bit faster."

When am I going to get my self-driving car?

Humans are complicated. If we invented driving today, there's NO WAY IN HELL we'd let humans do it. They get distracted. They text their friends. They drink. They make mistakes. But all of our streets, cities, and even legal systems have been built around those limitations. It would be surprisingly easy to build self-driving cars if there were no humans on the road. For now, though, no one wants to take on the liability. If a self-driving car kills someone, who's responsible? The manufacturer? The insurance company? The software developer?