Addition

How rules accumulate, and why no one ever subtracts

No startup begins life with a thousand-page HR policy.

In the beginning, there is no HR. No compliance team. No handbook. Policy can be summarized as:

don’t be a dick

And for a while, the “don’t be a dick” rule works.

Until one day, someone is an asshole.

Strictly speaking, being an asshole isn’t explicitly covered by the “don’t be a dick” rule. So now we need a “don’t be an asshole” rule as well.

Then there’s that incident.
We realize suddenly that being creepy isn’t technically covered by the aforementioned dick or asshole rules.

So we add another one.
Did we need a creep rule? No… not until the exact moment we wished we had one.

And so it goes on.

Every rule, in isolation, makes total sense.
None of them added maliciously.
Each one a rational response to a real mistake, made by a real person, at a real moment in time.

It’s only in the aggregate that we start to suffer.

Take a local university, for instance.
Faculty are required to buy their own office furniture. Fine.
But there’s also an explicit carve-out: professors cannot have a sofa in their office.

There’s only one reason a rule like that exists, and it wasn’t the sofa’s fault.

Whatever happened should have been covered by our general “don’t be creepy” principle, not encoded into furniture policy.

The rule was added with student safety in mind.
The downstream consequences, never considered.

This is exactly how it works in tech and Big Corporate.

Every time I go before a change release committee - seeking sign-off from solution architects, the network team, the firewall team, and a dozen others…
I know that none of them are acting in bad faith.
They’re all just trying to do their jobs.
They’re all white-knuckling it, whispering to themselves, “please not this week.”
Trying to protect their teams from shrapnel when something explodes.

But in the aggregate…

Now I have seventeen committees.
Each with bespoke concerns.
Each with comments to address.
Each a carried scar.

It’s easy to add.
More rules, more guardrails… feels like safety.
Nobody wants more risk.

No one ever subtracts.
Subtracting requires understanding the system holistically.
Subtracting means accepting responsibility for an uncertain future.

Multiply these micro-decisions across an organization of thousands, or decades of history, and it’s a miracle that anything ever gets done.

Maybe this is why consultants exist.
A parallel lane outside the normal channels.
Change at arm’s length.
Not because it’s better - but because someone, temporary and expendable, is willing to hold the risk.

But consultants still require a champion.
Someone who understands the system.
Someone to navigate the storm.
Someone willing to push for better.

Change will come.
Not cleanly. Not safely.
But because, eventually, someone chooses yes.

Sometimes, that someone is an asshole like me.


What distinguishes you from other developers?

I've built data pipelines across 3 continents at petabyte scale, for over 15 years. But the data doesn't matter if we don't solve the human problems first - an AI solution that nobody uses is worthless.

Are the robots going to kill us all?

Not any time soon. At least not in the way you've imagined thanks to the Terminator movies. Sure, somebody with a DARPA grant is always going to strap a knife/gun/flamethrower to the side of a robot - but just like in Doctor Who, right now that robot will struggle to even get out of the room, let alone up some stairs.

But AI is going to steal my job, right?

A year ago, the whole world was convinced that AI was going to steal their jobs. Now, the reality is that most people are thinking, 'I wish this POC at work would scan these PDFs a bit faster.'

When am I going to get my self-driving car?

Humans are complicated. If we invented driving today, there's NO WAY IN HELL we'd let humans do it. They get distracted. They text their friends. They drink. They make mistakes. But the reality is, all of our streets, cities (and even legal systems) have been built around these limitations. It would be surprisingly easy to build self-driving cars if there were no humans on the road. But today, no one wants to take on the liability. If a self-driving car kills someone, who's responsible? The manufacturer? The insurance company? The software developer?