GETTING TO
KNOW ME
A collection of things I actually think — about work, craft, and how the world tends to operate.
"Complexity is usually a symptom of unclear thinking, not a sign of sophistication."
The most impressive systems I've seen are the ones where everything is exactly as complicated as it needs to be and no more. Simple is hard. It requires a willingness to cut things that took effort to build, and a kind of intellectual confidence that doesn't need the complexity to signal seriousness.
I'm suspicious of systems that require extensive documentation to understand. Good design should be mostly self-evident — not because it's trivial, but because the thinking has been done for you.
"Strongly held, loosely held is a cliché — but it's the right cliché."
People who are unwilling to take positions aren't being careful; they're being cowardly. The best collaborators I've worked with have opinions — clear ones, stated plainly — and are genuinely willing to update them when faced with better evidence. The combination is rarer than either half alone.
The alternative is a kind of performative open-mindedness that actually resists change because it never commits to anything worth revising. You can't update a belief you never held.
"I'm not an AI skeptic. I think it's genuinely changed the way we work — mostly for the better."
AI has made me faster at the parts of the job that used to slow me down the most. Planning, thinking through architecture, stress-testing an idea before committing to it — these are areas where having something intelligent to think alongside has real value. It's like having a tireless thought partner who's read everything and has no ego about being wrong. That's not nothing. That's actually quite a lot.
The tool is powerful. I think it's going to help us build things we couldn't have built before, and do more with smaller teams. I'm genuinely excited about that.
But powerful tools have always required judgment about when and how to use them. The question I keep coming back to isn't whether AI is useful — it clearly is — it's whether we're being thoughtful enough about what we hand off to it, and who ultimately bears the cost when something goes wrong. The person paying for a service doesn't care how the code was written. They care whether it works, whether it's up, and whether their data is safe. Those expectations don't change because the development process got faster.
We're still figuring out the right relationship between AI as a tool and engineers as the people responsible for what ships. I think that's the real conversation — not whether to use it, but how to use it in a way that doesn't quietly erode the things our users are counting on.
"Lines of code is not a measure of productivity. It's a measure of how much code you wrote."
Every experienced engineer knows this intuitively: the more code you have, the more surface area for bugs, the more maintenance burden, the more things that can go wrong. The best pull request I've ever seen deleted more than it added. The best engineers I've worked with are ruthless about this.
In the AI era, this has become a genuinely important conversation. There's a lot of bragging happening — lines generated, PRs closed, velocity numbers going up. What I'd love to see alongside those stats: defect rates, incident counts, time spent in review, how much of that code survived six months of production. The people publicly citing their line counts tend not to show that part. And when others have gone and read the actual code, the quality tells a different story.
I think we're starting to see the downstream effects of this in production. Uptime has quietly degraded for some of the most critical services in the industry — GitHub, Claude, Cloudflare, AWS, and others. Services that paying customers depend on have been going down with a frequency that would have been considered embarrassing a few years ago. When infrastructure this foundational starts failing at a record pace, it's worth asking what changed.
I'm not saying AI is the cause. But it's a reasonable question: what happens when you can generate code faster than your review process can keep up? When PR volume doubles but reviewer bandwidth doesn't? When engineers are fatigued from reading AI-generated output that looks plausible but hasn't been deeply reasoned through? Review fatigue is real. And the consequences of a reviewer waving through something they didn't fully understand tend to show up at 3am.
Output is easy to measure. Reliability is harder. The two are not the same thing, and conflating them is one of the more expensive mistakes an industry can make.
"More people doesn't mean faster. It often means the opposite."
Every person you add to a team doesn't just add their output — they add communication paths. Two people have one connection. Five people have ten. Ten people have forty-five. At some point the coordination cost of keeping everyone aligned starts eating into the work itself.
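The arithmetic behind those numbers is just "n choose 2": every pair of people is a potential connection to maintain. A minimal sketch:

```python
from math import comb

def communication_paths(team_size: int) -> int:
    """Pairwise connections in a team of n people: C(n, 2) = n * (n - 1) / 2."""
    return comb(team_size, 2)

for size in (2, 5, 10):
    # 2 people -> 1 connection, 5 -> 10, 10 -> 45
    print(f"{size} people: {communication_paths(size)} connections")
```

The quadratic growth is the whole point: doubling headcount roughly quadruples the coordination surface, which is why the cost sneaks up on teams that only track output per person.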
I believe in keeping teams small and keeping them independent. Small teams move quickly because they can make decisions in a single conversation. Independent teams move quickly because they don't have to wait on anyone else. The goal isn't to build a big team — it's to ship things. If you can do that with fewer people, that's not a resource constraint, that's an advantage.
"Most things are worth doing well, even the things nobody notices."
There's a version of pragmatism that collapses into sloppiness. I don't buy it. The attention you give to the invisible parts of a thing — the error messages nobody reads, the loading state that lasts 300ms, the copy in a modal — accumulates into something. Users feel it without being able to name it.
Craft doesn't mean perfectionism. It means caring enough to notice when something is a little worse than it could be, and fixing it even when you're the only one who would have noticed.