January 8, 2026

    From Skepticism to Advocacy: Turning Senior Engineers into AI-First Leaders

    The biggest barrier to AI adoption isn’t tools; it’s senior engineers who are already experts in their domain without them. If they don’t trust the benefits of adopting AI, adoption stalls across the organization. At Liatrio, we turn skeptics into advocates, unlocking AI-native velocity.

    VP of Innovation

    With every client and within Liatrio, the same pattern shows up: the biggest barrier to AI adoption isn’t the tools; it’s the people who are already very, very good without them.

    Senior engineers, leads, and architects have built their careers on being the person who “just gets it done.” They’re promoted for deep expertise, strong opinions on quality, and the ability to keep teams shipping. When they hear “AI-native” or “AI-first,” the quiet subtext can feel like: “Change how you’ve always worked, on a tight timeline, with tools you don’t fully trust.”

    So they say what a lot of experienced engineers are thinking:

    “I don’t have time for this. I just need to ship.”

    At Liatrio, we’ve learned that if you don’t win this group over, you never really become an AI-first organization. But when you do, everything accelerates.

    This post breaks down how we’re turning senior skeptics into AI advocates using structured workshops, and why that unlocks adoption across the whole company.


    Why senior engineers are the hardest (and most important) to shift first

    When it comes to engineers hesitant to adopt AI practices, we see two broad groups:

    • Skeptical seniors
    • The curious-but-fragmented crowd

    The paradox: these senior engineers are the ones everyone else looks to. If they don’t trust AI, they block it either explicitly (“we’re not using that for production code”) or quietly (“I’ll just fix this myself”).

    So our strategy is simple:

    • Raise the organizational baseline with a repeatable AI-native engineering workflow.
    • Win over the seniors in public, in front of their peers.
    • Turn them into the people saying, “Show me why AI can’t help here,” rather than “Prove to me why this will work.”

    The foundation: Building a repeatable and trusted AI-native workflow

    Our approach starts with a structured, hands-on workshop built around Claude and a Spec-Driven Development (SDD) workflow. This isn’t a “here’s what AI can do someday” session. It’s:

    • Take a real codebase or problem.
    • Use a specific, opinionated workflow.
    • Ship something by the end.

    A few design principles:

    1. Show, don’t tell

    We don’t start by pitching AI. We start by working.

    • We load up an actual product code repository.
    • We walk through our SDD flow: a simple markdown spec that Claude uses to generate and evolve code (see the sketch after this list).
    • We let the model make mistakes in front of everyone and debug them live.
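    To make that concrete, here’s a rough sketch of what one of these specs might look like. The filename, feature, and repo details below are illustrative assumptions, not Liatrio’s actual template; the point is that the entire input is plain markdown a senior engineer can read, critique, and red-line in a couple of minutes.

    ```markdown
    <!-- spec/order-export.md (hypothetical example, not a real Liatrio spec) -->
    # Feature: Export orders to CSV

    ## Context
    - Service: orders-api (REST; existing GET /orders returns paginated JSON)

    ## Requirements
    - Add GET /orders/export that streams a CSV of orders matching the same filters as /orders
    - Columns: order_id, customer_id, status, total, created_at
    - Respond with Content-Type: text/csv and a dated filename

    ## Constraints
    - Reuse the existing order repository; no new data-access layer
    - No new runtime dependencies
    - Follow the project's existing error-handling middleware pattern

    ## Acceptance
    - Unit tests for the CSV serializer
    - Integration test: a large export completes without loading every row into memory
    ```

    Claude generates and evolves the implementation from a spec like this, and when the output drifts, the fix typically starts in the spec rather than in the generated code.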

    This “show, don’t tell” pattern does a few things:

    • It demystifies the system: “It’s just a markdown spec. I can read this.”
    • It shows limits and failure modes openly instead of hand-waving them away.
    • It lets skeptical seniors do what they’re best at: poke holes, ask hard questions, and see whether the process holds up.

    In one session, for example, a senior architect hit issues mid-workshop: the wrong model was selected, the spec wasn’t being used, and the output behaved strangely. Instead of hiding it, we walked through:

    • How to see that the spec was skipped.
    • How to correct the setup.
    • How to regenerate everything correctly.

    By the end, he wasn’t saying “this is broken”; he was saying, “I’m going to try this on our massive monorepo and see where it breaks.” That’s a different mindset: from judgment at arm’s length to experimentation in context.

    2. Make disposability and regeneration the default

    Traditional engineering culture says:

    • If it’s broken, patch it.
    • If it’s ugly, refactor it.
    • Don’t throw away working code. (It's too valuable!)

    AI-native engineering flips that traditional mindset on its head:

    • If the spec is wrong, fix the spec and regenerate.
    • If the output is deeply off, treat it as disposable.
    • The amount of time and effort (and cost) it takes to “start over” is lower than ever before.

    We model this behavior live:

    • Delete bad or inconsistent AI generations. (This happens less and less as models and context improve.)
    • Adjust the spec-driven workflow.
    • Regenerate entire slices of functionality.

    For many senior engineers, this is the real mindset shift. They’re used to nursing imperfect code forward. Watching whole flows get replaced in minutes - while quality actually improves - reshapes their intuition about what “expensive” or “wasteful” means.

    A story we reference often: one engineer rebuilt the same app eight times with the SDD flow, each time experimenting with different approaches and technologies. Historically, nobody had the time or budget to do that. With AI-native workflows, it’s suddenly possible, and that opens up real comparative thinking:

    • Which stack is actually better for this problem?
    • Which layout or architecture is most maintainable?
    • Which version do we want to standardize on?

    3. Normalize “prove it with a product” - not a slide deck

    We lean heavily on a simple rule:

    Don’t tell me it won’t work. Show me where it’s failing.

    If someone says, “AI can’t help with my use case” or “this model isn’t good enough for our standards,” the next step is not a long debate. It’s:

    • Pull up their repo or a close proxy.
    • Apply the AI workflow.
    • See what actually happens.

    In practice, this does two things:

    1. It turns skepticism into testable hypotheses.
    2. It gives seniors an arena where their expertise is still central: they’re the ones who define “good enough,” not the model.

    Over time, this becomes a cultural norm:

    • “We tried it on X, Y, Z; here’s where it shines, here’s where it falls down.”
    • “These are the guardrails and patterns we use when it’s not good enough out of the box.”

    That’s how you get from abstract fear to concrete practice.


    Why winning over tech leads and architects changes everything

    The moment a senior engineer becomes convinced, their behavior changes in ways that ripple across their team and the organization:

    • They stop blocking and start demanding AI use.
    • They become teachers and advocates, rather than code review gatekeepers.
    • They generate real, credible success stories.

    These are the stories that convince the next wave of skeptics, both inside Liatrio and at our clients.

    And because these stories come from the people who used to say “I don’t have time for this,” they carry more weight than any vendor case study ever will.


    What this unlocks for our enterprise clients

    Internally, this approach has helped Liatrio adopt AI-native engineering. For our clients, it becomes a repeatable playbook for driving AI adoption and transformation:

    • Run a structured, hands-on workshop for a mixed audience (product owners, architects, senior engineers).
    • Let your senior skeptics stress-test the process in front of everyone.
    • Normalize fast feedback, small batches, regeneration, and disposability.
    • Anchor everything in “show, don’t tell” with their own repositories and constraints.

    From there, we can:

    • Stand up AI enablement workstreams alongside traditional engineering work.
    • Assign AI ambassadors to client teams to keep them on the “AI-first rails.”
    • Help their own architects become the internal advocates who drive the cultural shift.

    Closing thought

    The future of AI-native engineering isn’t about replacing engineers. It’s about retooling them with a new mindset and a systems-thinking approach to building solutions.

    When the people doing code reviews, setting patterns, and mentoring juniors become fluent in AI-first workflows, the whole organization changes. Velocity goes up, but so does experimentation. The set of things that feel “too big” or “too weird” to try shrinks.

    And the turning point, almost every time, looks the same:

    A senior engineer who used to say “I don’t have time for this” starts saying, “Show me why AI can’t help here.”

    That’s when you know they’ve crossed from skepticism to advocacy - and that’s when AI adoption stops being something new or foreign and starts being how the organization actually works.
