How to hire well: what actually matters (beyond skills & stacks)

Because skills fade, stacks change, and frameworks rot - how to hire for the curiosity, ownership, and adaptability that build successful teams.

Last time, we talked about my “baby CTO” and the three simple rubrics I use to decide whether we should hire someone at all:

  • The Three Questions – Can they do the job? Do they want the job? Can I work with them?
  • Hell Yeah or No – If it’s not a full body yes, it’s a no.
  • No Sevens – Rate them 1–10, but you can’t use 7. Force an opinion.

Those tools help you avoid obvious mistakes.

This post is about the next layer down: What actually makes someone a good hire in the first place? Not the buzzwords. Not the certificates. Not “10+ years of X.”


1. Hiring is the most dangerous thing you can do

Very few mistakes in a company are permanent. Hiring comes close.

A bad hire is like cancer. Some cancers kill fast, some kill slow, some just quietly consume resources. Even the “mostly harmless” ones drain you, and you’re never quite sure when they’ll turn.

In Big Corporate, a bad hire can hide in the gaps for years - looking busy, producing nothing, quietly dragging on the organization.

In a startup, underperformance is harder to hide. But the damage is also faster and more visible. One wrong addition can wreck a whole team.

So let’s skip the metaphor and be concrete. A bad hire hurts you across four dimensions.

Emotional

Sometimes it’s open conflict: combative behaviour in meetings, undermining decisions, constantly reopening questions already settled.

Sometimes it’s quieter: the person who does nothing while everyone else is drowning. Every time someone walks past their desk and sees Facebook instead of work, they start wondering why they’re killing themselves if that person isn’t held to the same standard.

And PIPs don’t just impact the person on them. Everyone nearby starts acting like they might be next - right when you want them focused and executing.

Operational

Maybe they’ve fallen in love with a shiny framework or some pet pattern. Maybe they obsess over performance bugs that only matter if you 10,000x your customer base next month.

Worse, they can soak up everyone else’s time: pulling productive team members for “quick questions” and “just a 30-minute design review,” then calling the same meeting again two weeks later because nothing moved forward.

Meanwhile, your actual roadmap is sitting in the corner, quietly catching fire.

Strategic

If you tolerate low standards, they become the new normal.

Crappy code gets pushed “just to show progress.” You wave it through because at least they’re doing something - and now the rest of the team sees that as acceptable.

If you push back, you buy yourself:

  • More revisions
  • More questions
  • More meetings
  • More emotional noise

Either way, you lose. Quality or velocity. Usually both.

Financial

Let’s keep the math simple.

Say your rockstar senior costs $200k/year. Loaded cost after tax and benefits? Roughly double that: $400k, which works out to about $200/hour, or $8,000/week.
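The back-of-envelope, spelled out (assuming roughly 50 working weeks of 40 hours; the 2x loading factor is a rule of thumb, not an accounting figure):

```python
salary = 200_000          # base salary, $/year
loaded = salary * 2       # rough loaded cost: tax, benefits, overhead
weeks, hours_per_week = 50, 40

per_week = loaded / weeks             # 400,000 / 50 -> 8000.0
per_hour = per_week / hours_per_week  # 8000 / 40   -> 200.0
```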

How many deals do you have to close to pay for that seat? What else could you do with that money? How much runway does it burn?

And that’s before we even account for the reduced velocity of the rest of the team or the cost of a botched client interaction. The wrong hire in the wrong meeting can cost you millions.

Think I’m exaggerating? Here’s what I walked into when I returned from parental leave:

“Can you just sit in on a couple of meetings with the new team lead - check it’s all good, maybe give a few tips?”

Eight weeks into a twelve-week engagement, the project was on fire.

  • The codebase had been entirely rewritten, over a weekend, twice.
  • The senior dev had been sidelined.
  • The data scientists were being dismissed as incompetent.
  • Our SME said a 1% lift was the best he’d ever seen in that domain - the client had been promised 4%.

Over a few coffees with people I’d worked with before, I learned the whole team was actively interviewing or negotiating offers.

One hire. Total collapse.

That is the cost of getting it wrong.

If team building feels terrifying, good - that means you’re taking it seriously.

Startups aren’t killed by slow hiring. They’re killed by the wrong hiring.


2. Smart people > right skills

Most job descriptions are wishlists: “10+ years in React, Kubernetes, Spark, Terraform, Airflow, Snowflake, Kafka, Go, Python, Rust… and the ability to foam a cappuccino.”

We pretend we’re hiring for efficiency. Really, we’re hiring for plausible deniability - casting a wide net because we don’t actually know what we want, and only realizing it once the wrong candidate walks through the door.

The problem is: that’s not how startups work.

Curiosity is the real differentiator

Skills are teachable. Curiosity is not.

I had a boss who said his dream hire was “someone who built their own computer as a kid.” As the kid who destroyed a GPU with a leaky water-cooling loop, I kind of agree.

Startups (and tech in general) are just interconnected puzzle boxes. You want people who:

  • Notice when something doesn’t make sense
  • Aren’t satisfied with “that’s just how it is”
  • Take things apart (mentally or literally)
  • Experiment until they understand

Curiosity is the precursor to:

  • Owning problems
  • Fixing systems instead of patching symptoms
  • Leadership (in the stewardship sense, not the title sense)

You don’t want ticket closers. You want the ones who ask: “Why are we doing this every month? Why don’t we fix it properly once?”

A quick aside on job descriptions and self-selection:

  • If I match ~60% of a JD, I’ll apply. Worst case: they say no.
  • I’ve watched my wife match 100% of the “requirements” and most of the “nice-to-haves” and still say, “I’m not qualified.”

Gender and cultural differences in how people read JDs are real. If your job ad is a laundry list, the best candidates will self-select out.

“10 years of X” is not the same as senior

I’ve interviewed some very impressive résumés.

In Vancouver, we were down the road from Amazon and Microsoft and often met people willing to take a pay cut to escape on-call life.

I once interviewed a core dev on the S3 team - yes, that S3.

On paper: unreal. In practice: completely wrong for what we needed.

They were:

  • Incredibly risk-averse
  • Steeped in rare, internal languages and tooling
  • Optimized for nanoseconds at hyperscale - vital at Amazon, irrelevant in a B2B app with 12 users

I’ve had similar experiences in Montreal’s banking scene.

Some divisions are modern and cloud-native. Others are still running COBOL, Fortran, and internal Vagrant clones. I’ve met devs with 20 years of experience who had never used Git for anything meaningful.

Could they ramp up? Maybe. Would I bet my startup on it? No.

Doing something for a long time is not the same as:

  • Doing it well
  • Doing it in varied environments
  • Doing it with strong mentors
  • Being adaptable

Years of experience are a crude, often misleading proxy.

Certificates are paper, not medals

Bootcamps? I respect those. They show intent and sacrifice.

Cloud certs? Vendor badges? LinkedIn Learning marathons? Mostly marketing - free advertising for the platform and a way to feel productive without producing anything.

Do I ignore certs? No. But:

  • One or two to support a narrative: fine.
  • Three or more cloud certs and nothing shipped? Red flag.

Put it this way: when your bathroom is flooding, do you want:

  • The plumber who apprenticed for 500 hours?
  • Or the one who aced a multiple-choice exam but has never touched a pipe?

It’s the same in tech. The solution architect with perfect test scores but no systems experience is a liability.

Show me something real. I can’t debug a certificate.

Quality vs quantity (of years)

Just because someone has done something for a long time doesn’t make them good at it.

Your average office worker might use Word 25 hours a week for 10 years and never touch styles, templates, or mail merge. They “know” Word - but are they an expert?

Likewise:

  • “10 years of Java” might just be “one year of Java repeated ten times.”
  • People plateau.
  • People coast.
  • Environments can insulate mediocrity for decades.

What you actually care about is:

  • Depth
  • Breadth
  • Mentorship
  • Scars from carrying something to production and back


3. How to evaluate learning velocity (AKA: how I actually hire)

If years, stacks, and certificates are noisy signals, how do you evaluate the thing that matters?

You test how they think and how they learn, not whether they’ve memorized your toolchain.

You’re trying to answer:

  • Can they get things done in the real world?
  • Can they reason about new problems?
  • Can they grow with you?

FizzBuzz for adults

At the cancer hospital, I needed platform engineers (we called them DevOps/SRE at the time). A lot of candidates talked a good game about “infrastructure as code” and “reliability,” but couldn’t actually write a basic script.

So I’d ask them to implement FizzBuzz in any language.

“Count from 1 to 100. If divisible by 3, print ‘Fizz’. If divisible by 5, print ‘Buzz’. If both, print ‘FizzBuzz’.”
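For reference, the whole exercise fits in a dozen lines. Any language was accepted; here it is in Python:

```python
def fizzbuzz(n: int) -> str:
    # Check divisibility by both 3 and 5 first, then each alone.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 101):
    print(fizzbuzz(i))
```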

Is it a senior-level challenge? No. Did it map to my reality? Absolutely.

We had a barely managed 200-node cluster where even a trivial update required 200 manual SSH commands. A simple loop with a couple of conditionals would save hours of toil.
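That loop-plus-conditional shape is exactly what the cluster needed. A minimal sketch of the idea (the hostnames and update command here are placeholders, not our real tooling):

```python
import subprocess

# Hypothetical node names; the real cluster had ~200 of these.
nodes = [f"node{i:03d}.cluster.local" for i in range(1, 201)]
update_cmd = "sudo apt-get update -y"  # stand-in for the actual update

def build_commands(hosts, cmd):
    # One ssh invocation per host, instead of 200 typed by hand.
    return [["ssh", host, cmd] for host in hosts]

for argv in build_commands(nodes, update_cmd):
    # subprocess.run(argv, check=True)  # uncomment to actually execute
    print(" ".join(argv))
```

That is the entire bar FizzBuzz was setting: can you turn 200 manual commands into one loop?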

I wasn’t testing elegance. I was testing:

  • Can you automate something mundane?
  • Can you write anything that maps to a real-world pain?
  • Will you roll up your sleeves, even if it’s “beneath” your title?

The problem with take-homes

I’ve used take-homes a lot. I prefer them to whiteboard coding - but they have real issues:

  • They penalize people with full lives.
  • They’re brutal for single parents.
  • The “expected time” is always wrong.
  • Anything long should be paid… and almost never is.

If your process assumes someone can vanish for 8 unpaid hours, you’re not filtering for commitment. You’re filtering for people without constraints. That’s not what you think it is.

My favourite pattern: a simple, real problem with many directions

For years, we’ve used the same light take-home:

We give candidates two Wikipedia pages, for example:

  • List of cities by international visitors
  • List of countries by GDP

We ask them:

“Do a basic analysis: is there a correlation between tourism and GDP?”

Why it works: real-world data is messy.

  • City and country names appear in different spellings or native forms
  • GDP is reported in different currencies
  • The pages are updated every year, so the underlying data changes over time

It reveals how they approach:

  • Cleaning messy data
  • Handling outliers (think: Singapore vs the Vatican)
  • Presenting results (tables vs charts vs nothing)
  • Labelling and explaining their findings clearly

And in the interview, you can pivot anywhere:

  • “How would you automate this?”
  • “What if the GDP data were replaced with Visa/Mastercard transaction streams?”
  • “How would you turn this into an API?”
  • “What if this API hit the front page of Hacker News tomorrow?”

There is no perfect answer. You’re watching:

  • How they react to new constraints
  • Whether they lean in with curiosity
  • Whether they think in trade-offs and MVPs
  • Whether feedback makes them curious or defensive

If the first tangent makes them say, “That’s interesting, could we…?” you’re onto something.

If the first bit of feedback triggers an ego meltdown, you just saved yourself six painful months.

Pair programming for seniors

If you really want to go deep with a senior hire, the best test I’ve found is to sit them with one of your devs, open a fresh bug from your backlog, and see how they work. Your dev drives; the senior guides.

If you’re worried about code exposure, have them sign an NDA. A real senior won’t blink.

Will they fix a random bug in an hour on a brand new codebase? Probably not.

But you’ll see:

  • How they ask questions
  • How they handle being out of their depth
  • How they debug
  • How they collaborate
  • How they narrate their thinking
  • Whether they start mentally mapping the system and slipping into ownership

In a world of LLMs and generated code, you can’t rely on the output alone. A perfect solution with no reasoning or understanding behind it is a black box - and a liability.


Hiring will never be risk-free. There’s no perfect process.

But if you:

  • Treat hiring as dangerous (because it is)
  • Hire for curiosity over checkboxes
  • Test learning velocity, not keyword recall

…you dramatically increase your chances of hiring people who can grow with your company instead of holding it back.

The stacks will change. The tools will change. The frameworks will rot.

Smart, curious people who can learn? They’re the only durable advantage you have.
