Choose Boring Technology, Revisited
Ten years ago, I wrote about Dan McKinley’s classic blog post Choose Boring Technology and its resonance with my own development philosophy. My conclusion then was simple: when spinning up a new project, I ask whether I’m using it as an excuse to learn something new or genuinely trying to solve a problem. Learning something new? Fine, but limit it to one unknown. Solving a problem? Stick with what you know.
A decade later, my opinion hasn’t changed. If anything, the advent of LLMs and agentic AI coding tools has made this principle even more critical.
McKinley’s core argument was that companies have limited “innovation tokens” and should spend them strategically on established, well-understood technologies rather than exciting but unproven ones. The math is straightforward: boring technologies have known failure modes, well-understood capabilities, and proven operational reliability. When something breaks at 3 AM, you want to be debugging a technology with Stack Overflow answers, not pioneering uncharted territory.
This was true in 2015, and it’s true today. But there’s a new wrinkle: AI coding assistants.
Here’s where things get interesting—and dangerous. Modern AI coding tools are remarkably good at generating plausible-looking code for almost any technology stack you can imagine. Give Claude or Copilot a prompt about implementing microservices with Kubernetes, GraphQL federation, or the latest JavaScript framework, and you’ll get back code that looks professional, follows conventions, and might even run.
The problem is that when you’re using two or more technologies that are unknown to you, you have no way to verify whether the AI is bullshitting you. And LLMs, despite their impressive capabilities, absolutely do hallucinate when it comes to technical details.
I’ve watched engineers accept AI-generated code that used deprecated APIs, implemented security antipatterns, or created subtle performance problems that wouldn’t surface until production load. The code looked right. It followed naming conventions. It had proper error handling. But it was wrong in ways that only someone familiar with the technology would catch.
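To make that concrete, here’s the flavor of thing I mean. This is an invented TypeScript sketch, not code from any real session—the `pg` pool, the `users` table, and the `findUserByEmail` helper are all made up for illustration—but it captures the pattern: clean types, conventional naming, a try/catch, and a security hole hiding in plain sight.

```typescript
// Hypothetical AI-suggested helper: tidy naming, typed return value, error
// handling... and a WHERE clause built by string interpolation, so any quote
// character in `email` becomes SQL injection.
import { Pool } from "pg";

const pool = new Pool(); // connection details assumed to come from PG* env vars

interface User {
  id: number;
  email: string;
}

export async function findUserByEmail(email: string): Promise<User | null> {
  try {
    // Looks conventional, but concatenating user input into SQL is the bug.
    const result = await pool.query(
      `SELECT id, email FROM users WHERE email = '${email}' LIMIT 1`
    );
    return result.rows[0] ?? null;
  } catch (error) {
    console.error("Failed to look up user", error);
    return null;
  }
}

// The fix a reviewer who knows the stack would insist on: a parameterized query.
// pool.query("SELECT id, email FROM users WHERE email = $1 LIMIT 1", [email]);
```

If you’ve never written SQL against this kind of stack, nothing in that function looks off.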
When you combine unfamiliar technologies with AI-generated code, you’re not just adding unknowns—you’re multiplying them. You don’t know if the framework choice is appropriate. You don’t know if the AI’s implementation follows best practices. You don’t know which parts of the generated code are boilerplate versus critical business logic. You don’t know what failure modes to watch for.
This isn’t cargo-culting anymore—it’s cargo-culting times 2,356.
But here’s where boring technology shines with AI tools: when you understand the underlying stack, AI coding assistants become incredibly powerful. I can ask Claude to generate Rails code (with help from context7!) because I know Rails well enough to spot when it suggests something questionable. I can use Copilot for JavaScript because I understand the language’s quirks and can factcheck its suggestions.
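Here’s a small, invented example of the kind of quirk I mean (the variable names and numbers are mine, not from any real Copilot session): JavaScript’s default sort is lexicographic, and an AI suggestion will happily reach for it.

```typescript
// Array.prototype.sort with no comparator converts elements to strings,
// so numbers sort lexicographically and the "median" below is silently wrong.
const latenciesMs = [98, 5, 23, 7, 120];

// Plausible-looking suggestion:
const sortedWrong = [...latenciesMs].sort();            // [120, 23, 5, 7, 98]
const medianWrong = sortedWrong[Math.floor(sortedWrong.length / 2)]; // 5

// What someone who knows the quirk writes instead:
const sortedRight = [...latenciesMs].sort((a, b) => a - b); // [5, 7, 23, 98, 120]
const median = sortedRight[Math.floor(sortedRight.length / 2)]; // 23
```

If you know the language, that’s a ten-second catch in review. If you don’t, the function returns a number and everyone moves on.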
The AI becomes a force multiplier for technologies you already understand, rather than a crutch for technologies you don’t.
Practical Guidelines for the AI Era
So how do you apply “choose boring technology” in a world of AI coding assistants?
First, when evaluating new technologies, ask yourself: “If an AI tool generated implementation code for this, would I be able to adequately review it?” If the answer is no, you probably shouldn’t be using that technology for anything mission-critical.
Second, when you do choose to learn something new (remember, you get one innovation token), spend real time understanding it deeply enough to factcheck AI suggestions. Don’t just copy-paste and hope for the best.
Third, resist the temptation to use AI tools as an excuse to take on multiple new technologies simultaneously. The AI might make it feel like you can handle a new language, new framework, and new infrastructure all at once, but you can’t properly verify any of it.
The original “choose boring technology” argument was about reducing operational complexity and cognitive overhead. Those concerns remain valid. But in the AI era, there’s an additional risk: the false confidence that comes from having an AI tool that can generate seemingly professional code for any technology stack you throw at it.
The stakes are actually higher now because the quality of AI-generated code makes it harder to spot problems. Bad code used to look bad. Now, problematic code can look quite good until you understand the domain well enough to notice the subtle issues.
So my advice remains unchanged: use what you already know when you’re trying to solve a problem. Learn new things when you’re trying to learn new things. Don’t mistake AI-generated code for understanding.
The most boring technology in your stack might just be the one you understand well enough to know when the AI is wrong.
And in a world where an AI can confidently generate thousands of lines of code in technologies you’ve never used, that understanding is more valuable than ever.