AI Engineering: Doing the Right Thing vs Doing the Thing Right — And Why It Matters

Peter Drucker coined the phrase “Management is doing things right; leadership is doing the right thing.” The quote is about management versus leadership, but it translates easily to engineering, and these days it does a decent job of separating the 10x’ers from the rest.

Remember when you spent three weeks refactoring that authentication system to use the “proper” design patterns, only to have your PM ask why the dashboard still wasn’t shipped? Yeah, me too. Awkward.

Welcome to the eternal struggle of engineering: doing the thing right versus doing the right thing. And in 2024, with AI coding assistants that can scaffold an entire API in 30 seconds, this tension has never been more critical–or more dangerous.

The Classic Engineering Dilemma (Now on Steroids)

Let’s get our definitions straight before someone starts a flame war in the comments.

Doing the thing right means implementing the technically superior solution. We’re talking clean architecture, SOLID principles, comprehensive test coverage, perfectly abstracted interfaces, and code so beautiful it could make a senior architect weep. It’s the “if we’re going to build it, let’s build it properly” approach.

Doing the right thing means taking a pragmatic path that achieves the necessary results within your actual constraints–time, budget, resources, sanity. It’s shipping the feature with a reasonable implementation that solves the customer’s problem, even if it makes your inner perfectionist cringe a little.

Before AI tools exploded onto the scene, this trade-off was already tough. Now? It’s a whole new ballgame.

How AI Rewrote the Rules

Here’s what changed: AI coding assistants have made “doing the right thing” almost too easy.

Need a quick data pipeline? Claude or Copilot will scaffold it in minutes. Need to integrate a payment system? ChatGPT will give you working code faster than you can say “technical debt.” Want to add authentication? There’s a prompt for that.

The barrier to shipping features has dropped through the floor. And that’s both incredible and terrifying.

On one hand, we can now deliver value to customers at unprecedented speeds. MVPs that used to take months can launch in weeks. Proof-of-concepts that justified dedicated sprint planning now happen over lunch. The velocity is intoxicating.

But here’s the catch: this speed comes with a massive asterisk. AI makes it easy to do the right thing, but only if you know enough to guide it consciously.

The House of Cards Problem

Picture this: A junior developer uses AI to generate code for a critical service. The code works. Tests pass (well, the ones AI wrote). It deploys cleanly. Everyone high-fives.

Six months later, the service needs to scale. Or handle a new edge case. Or integrate with a different system. And suddenly, nobody understands how any of it actually works. The abstractions are leaky. The dependencies are tangled. The assumptions are buried in AI-generated comments that nobody read.

You’ve built a house of cards.

This is the dark side of AI-assisted development without solid engineering fundamentals. The tools will happily generate code that works, but working code and maintainable, scalable, understandable code are very different beasts.

The key insight: AI tools are accelerators, not substitutes for engineering judgment. They’re like power tools–in the hands of someone who understands carpentry, they’re incredible. In the hands of someone who doesn’t know why joints matter, you get furniture that collapses the first time someone sits on it.

Speed Is the New Constraint

Let’s talk about the elephant in the room: AI hasn’t just changed what’s possible–it’s changed what’s expected.

Industry delivery timelines have compressed dramatically. What used to be a reasonable six-week timeline now feels sluggish. Competitors are shipping features in days, not months. Your stakeholders have noticed. Your customers definitely have.

This creates a fascinating paradox: as an engineer in 2024, you have to think even harder about whether “doing the thing right” is actually the right thing to do.

And here’s the kicker: more often than not, “doing the right thing” is the right thing to do! (Yes, I know how that sounds, but stay with me.)

Why Pragmatism Usually Wins

The market doesn’t wait for perfection. While you’re architecting the ideal solution with proper domain boundaries and event-sourced persistence, someone else is shipping a working product with a few database tables and some well-placed API calls.

They’re learning from real users. Iterating on actual feedback. Building traction and revenue. And you’re still in design review.

This doesn’t mean “move fast and break things” is suddenly good advice again. (It never really was.) It means understanding that perfect is the enemy of shipped, and in competitive markets, shipped is what matters.

So When Do You Choose Which?

Here’s the framework I use to navigate this tension:

Choose “Doing the Thing Right” When:

  • The foundation is critical: Core infrastructure, authentication systems, data pipelines that everything depends on. Get these right or regret it forever.
  • You’re solving a known problem: If you’re building something that resembles an established pattern, there’s usually a “right way” that’s worth the investment.
  • The cost of failure is high: Financial systems, healthcare applications, anything where bugs cause real harm. No shortcuts here.
  • You have actual time: If the timeline is genuinely reasonable and stakeholders understand the value of quality, invest in it.

Choose “Doing the Right Thing” When:

  • You’re validating assumptions: Building an MVP or proof-of-concept? Ship it scrappy. Learn if anyone cares before polishing.
  • Time-to-market is existential: If a competitor is breathing down your neck or a market window is closing, pragmatism wins.
  • Requirements are unclear: Why perfect something that might pivot next week? Good enough + flexible beats perfect + rigid.
  • The problem is novel: Sometimes you don’t know the right way because there isn’t one yet. Experimentation requires speed and iteration.

The AI-Assisted Middle Path

Here’s where it gets interesting: AI tools have created a new sweet spot between these extremes.

You can now ship pragmatic solutions faster while maintaining enough quality to avoid the house-of-cards scenario. The key is using AI as a force multiplier for good engineering, not a replacement for it.

Practical Tactics:

Use AI to scaffold, then refactor consciously: Let AI generate the boilerplate and basic structure. Then apply your engineering judgment to refine the architecture, add proper error handling, and ensure maintainability.
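To make that concrete, here is a minimal sketch of what the “refactor consciously” pass can look like. The function and endpoint names are made up for illustration; the point is the difference between the two versions, not the specifics.

    import logging
    import time

    import requests

    logger = logging.getLogger(__name__)

    # What the assistant typically scaffolds: happy path only.
    def fetch_orders_scaffolded(api_url: str) -> list:
        response = requests.get(f"{api_url}/orders")
        return response.json()

    # The conscious refactor: same behaviour on success, plus the failure
    # modes you already know will show up in production.
    def fetch_orders(api_url: str, retries: int = 3, timeout: float = 10.0) -> list:
        for attempt in range(1, retries + 1):
            try:
                response = requests.get(f"{api_url}/orders", timeout=timeout)
                response.raise_for_status()
                return response.json()
            except (requests.ConnectionError, requests.Timeout) as exc:
                logger.warning("attempt %d/%d failed: %s", attempt, retries, exc)
                if attempt == retries:
                    raise
                time.sleep(2 ** attempt)  # crude exponential backoff

The retry loop itself isn’t the point. The point is that the second version reflects deliberate decisions about timeouts, failure modes, and backoff, which is exactly the judgment the scaffold can’t supply.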

Generate tests aggressively: One area where AI truly shines is test generation. Use it to achieve coverage faster, then review and enhance the test quality.
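As a rough illustration, assume a small parse_amount helper (defined inline here so the snippet stands alone; it isn’t from any real codebase). An AI-generated test typically stops at the happy path; the review pass adds the inputs that actually bite in production.

    from decimal import Decimal, InvalidOperation

    import pytest

    def parse_amount(raw: str) -> Decimal:
        """Hypothetical helper: turn a user-supplied string into a Decimal amount."""
        cleaned = raw.strip().replace(",", "")
        if cleaned.startswith("$"):
            cleaned = cleaned[1:]
        try:
            return Decimal(cleaned)
        except InvalidOperation as exc:
            raise ValueError(f"not a valid amount: {raw!r}") from exc

    # The kind of test an assistant tends to generate: happy path only.
    def test_parse_amount_basic():
        assert parse_amount("19.99") == Decimal("19.99")

    # The cases a human review pass adds, because these are what break later.
    @pytest.mark.parametrize("raw, expected", [
        ("$1,234.50", Decimal("1234.50")),
        ("  42  ", Decimal("42")),
        ("0", Decimal("0")),
    ])
    def test_parse_amount_messy_input(raw, expected):
        assert parse_amount(raw) == expected

    def test_parse_amount_rejects_garbage():
        with pytest.raises(ValueError):
            parse_amount("nineteen dollars")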

Prototype fast, then evaluate: Use AI to build a quick proof-of-concept. See if it works. Then make a conscious decision about whether to productionize it or rebuild it properly.

Document your technical debt: If you’re choosing pragmatism over perfection (and you should, often), at least document what you’re compromising and why. Future you will thank present you.
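This can be as lightweight as a structured comment at the point of compromise. The ticket reference and numbers below are invented; the pattern is what matters.

    # TECH-DEBT (ticket reference goes here): we poll the partner API on a timer
    # instead of consuming their webhooks, because the webhook contract wasn't
    # stable when we shipped. Revisit once the integration is validated; polling
    # won't scale past a few dozen partner accounts.
    POLL_INTERVAL_SECONDS = 300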

Maintain your fundamentals: Keep learning core computer science and software engineering principles. They’re what enable you to recognize when AI is leading you astray.

A Real Example (From My Recent Life)

Last quarter, we needed to add a complex data transformation pipeline for a new integration. The “right” solution involved setting up a proper ETL framework with Apache Airflow, building reusable operators, and creating a comprehensive data quality monitoring system.

The timeline? Two weeks.

Instead, we used AI to generate a series of well-tested Python scripts that ran on a cron job. Not elegant. Definitely not what I’d architect in a perfect world. But it worked, it was maintainable enough, and it shipped on time.
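For flavour, the shape of those scripts was roughly the following. This is a simplified, hypothetical sketch; the real field names, paths, and schedule were different.

    #!/usr/bin/env python3
    """Nightly transform: pull the raw export, reshape it, write it where the integration expects it."""
    import csv
    import json
    import logging
    import sys
    from pathlib import Path

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("nightly_transform")

    def transform(row: dict) -> dict:
        # Reshape one raw record into the format the downstream integration expects.
        return {
            "customer_id": row["cust_id"].strip(),
            "amount_cents": int(round(float(row["amount"]) * 100)),
            "currency": row.get("currency", "USD").upper(),
        }

    def main(in_path: str, out_path: str) -> int:
        records = []
        with open(in_path, newline="") as f:
            for i, row in enumerate(csv.DictReader(f), start=1):
                try:
                    records.append(transform(row))
                except (KeyError, ValueError) as exc:
                    logger.warning("skipping row %d: %s", i, exc)
        Path(out_path).write_text(json.dumps(records, indent=2))
        logger.info("wrote %d records to %s", len(records), out_path)
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1], sys.argv[2]))

    # Crontab entry (not Airflow, and that was the point):
    # 15 2 * * * /usr/bin/python3 /opt/pipelines/nightly_transform.py /data/raw/export.csv /data/out/orders.json

Nothing in it is clever, and that was deliberate: it was cheap to write, cheap to read, and cheap to throw away once the proper pipeline replaced it.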

Three months later, we migrated it to a proper pipeline once we had validation that the integration was valuable. The quick-and-dirty version bought us the time to learn what we actually needed before over-engineering.

That’s doing the right thing in action.

The Competitive Reality

Here’s the harsh truth: while you’re debating architectural purity, someone else is eating your lunch.

AI has democratized development speed. Startups with tiny teams are shipping features at enterprise velocity. Competitors are iterating daily. The cost of perfectionism has never been higher.

This doesn’t mean abandoning quality. It means being strategic about where you invest in technical excellence versus where you optimize for speed and learning.

The engineers who thrive in the AI era aren’t the ones who reject AI tools as “cheating,” nor the ones who embrace them without critical thinking. They’re the ones who use AI to do the right thing faster, with enough engineering discipline to avoid building garbage.

The Bottom Line

AI coding assistants are incredible tools that have fundamentally changed the speed-versus-quality equation in software engineering. They make “doing the right thing”–shipping pragmatic solutions quickly–easier than ever.

But they’re also powerful enough to help you build elaborate houses of cards if you don’t apply them thoughtfully.

It used to be said of C versus C++ that C makes it easy to shoot yourself in the foot, while C++ makes it harder, but when you do, it blows your whole leg off. There’s a parallel with AI: it arguably makes it harder to make small mistakes, but when you do make them, prepare for hours or days of digging.

The winning strategy isn’t to always do the thing right or always do the right thing. It’s to make conscious, informed decisions about which approach fits your context, backed by solid engineering fundamentals that prevent you from building unmaintainable messes.

In most cases? Doing the right thing (shipping pragmatically) is the right thing to do. The market moves too fast for anything else.

Just make sure you know why you’re making that choice, and what you’re trading away. That’s the engineering judgment that AI can’t replace.

Your Turn

How are you navigating this tension in your work? Are you seeing AI tools enable faster, more pragmatic development on your team? Or are you dealing with the aftermath of AI-generated technical debt?

Drop your thoughts in the comments. I’d especially love to hear war stories about times when doing the right thing saved you or doing the thing right came back to bite you.

And remember: the best code is the code that ships and solves real problems. Everything else is just architecture astronautics.