AI Native Development: From Random Prompting to Systematic Engineering

Most dev teams I’ve observed start strong with AI coding agents—amazing hackathon demos—but hit roadblocks when moving to production and larger codebases.

After talking with dev leads managing thousands of engineers, I discovered why AI coding assistants fail at scale—and more importantly, how to fix it.

This isn't another "prompt engineering tips" talk. This is about the systematic framework that transforms AI from an unpredictable toy into a reliable engineering tool. You'll learn why treating AI like Google search guarantees failure, and how treating it like a junior developer with specific constraints unlocks its potential.

Through real-world failures and solutions from enterprise teams, I'll show you:

  • Why your perfectly crafted prompts work today but fail tomorrow (and the Markdown structure that fixes this)

  • How to turn one-off AI wins into reusable "Agent Primitives" your whole team can leverage

  • The context engineering strategies that prevent AI from drowning in large codebases

  • Live demo: Building an Agentic Workflow that handles complex tasks reliably

You'll leave with a concrete framework (open-sourced at github.com/awesome-ai-native) and the AWD CLI tool that implements it, ready to transform how your team works with AI—whether you're a solo developer or part of a 2,000-person engineering org.

This is frontier work. We're not just using AI; we're engineering the future of how developers will work for the next decade.

Language: English
Level: 0

Speaker

Daniel Meppiel

Hi, I’m Daniel. I’m a Senior Global Black Belt in Developer Productivity at Microsoft, where I help some of the world’s largest engineering teams adopt AI coding assistants, scale modern dev practices, and build secure, reliable software. I’m pass...
