AI Giants Recap: English is the New S-Tier Language and Agent Experience the New DX
Dana Lawson has one of the least typical paths in tech leadership. Art school, the US Army, stints at New Relic and InVision, VP of Engineering at GitHub during the Microsoft acquisition, and now CTO at Netlify.
In this episode of AI Giants, host Jaime Jorge, CEO of Codacy, sat down with Lawson to talk about what it actually takes to lead engineering through the AI shift, why agents need their own UX, and why she thinks your flaky tests are a bigger problem than your AI strategy.
P.S. Catch Jaime’s recap of the episode on X here
What is Netlify?
Netlify is a platform that provides an all-in-one workflow for building, deploying, and scaling modern web applications. It's often associated with the Jamstack architecture (JavaScript, APIs, and Markup). It offers services like global hosting, serverless functions, form handling, and continuous deployment (CI/CD) from a Git repository, allowing developers to focus on coding while Netlify handles the infrastructure.
TLDR For Engineering Leaders
- Agents are first-class platform users now. Design your APIs, error handling, and workflows for non-human consumers, not just developers.
- Keep friction where it counts. Security, code review, deployment gates, and compliance still need human checkpoints.
- Communication skills beat raw coding ability. If you can't write a clear prompt, you're burning tokens and time.
- Start practicing AI on the boring stuff. Flaky tests, version upgrades, and legacy refactoring are safe, repeatable entry points.
- Don't trust AI to review AI. Use different models to check each other, not the same one reviewing its own output.
GitHub Was the Hardest Job She Ever Had
Lawson joined GitHub right after the Microsoft acquisition closed. She describes it as trying to run a startup with 2,000 people under a company that's been around longer than most of your employees have been alive.
"Do we assimilate, or do we be us?" That was the central question. You can't go back to what you were before an acquisition, and you can't fully become the acquirer either. You have to build a new identity at scale, with new relationships, new business structures, and constant reorgs.
The biggest takeaway here is to be open-minded and expect failure. You're operating in uncharted space even when you know the cast of characters.
Agent Experience Is The New Developer Experience
Lawson has been pushing the concept of Agent Experience (AX), the idea that AI agents are becoming first-class users of platforms, not just tools that developers use. She walked through what this actually looks like.
In the old workflow, an error hits an API, triggers a dashboard alert, a human gets paged, looks at the code, writes a fix, and submits a PR.
But now an agent catches the error, identifies the root cause, writes the fix, pushes a PR, and the error rate drops. You might never even know it happened.
That's agent experience. The same workflow, but now the "user" on the other end is a nondeterministic agent with its own constraints. You have to design your error handling, your APIs, and your tooling for that consumer.
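One concrete way to design error handling for an agent consumer is to make API errors machine-actionable rather than prose-only. A minimal sketch, assuming a JSON error envelope; the field names and the docs URL here are illustrative, not a Netlify spec:

```python
import json

def agent_friendly_error(code: str, message: str, *, retryable: bool,
                         docs_url: str, suggested_fix: str) -> str:
    """Build a structured error payload an agent can parse and act on,
    instead of a free-text message meant only for human eyes."""
    payload = {
        "error": {
            "code": code,                 # stable, machine-matchable identifier
            "message": message,           # still human-readable
            "retryable": retryable,       # lets an agent decide to back off or escalate
            "docs": docs_url,             # where the agent can read remediation steps
            "suggested_fix": suggested_fix,
        }
    }
    return json.dumps(payload)

body = agent_friendly_error(
    "BUILD_CMD_MISSING", "No build command configured.",
    retryable=False,
    docs_url="https://example.com/docs/build-settings",  # hypothetical URL
    suggested_fix="Set a build command in your site configuration.",
)
```

The point is the shape, not the fields: a stable error code and an explicit `retryable` flag give a nondeterministic consumer something to branch on.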
But Lawson is clear about one thing: we're not there yet. The self-healing dream is real, but agents are still error-prone.
"Maybe one day the AI won't write errors, but man, it really sucks right now in a lot of ways."
Keep the Friction, Lose the Busywork
Asked where friction should stay in the software development life cycle, Lawson didn't hesitate: security, compliance, data governance, and code review.
"It would be almost foolish to just unleash everything without any checkpoints."
She's especially pointed about not trusting a single AI to police itself. "Maybe you shouldn't trust Claude to check Claude. Maybe you need Claude and Codex to hang out, have them check each other." Layer different models for review, write clear criteria for what agents should check, and keep humans in the approval loop.
The flip side is she's all for using agents to scan thousands of lines of code for vulnerabilities. That's a job where agents genuinely outperform humans. The key is being intentional about where you automate and where you add oversight.
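Her "Claude and Codex check each other" idea can be sketched as a simple cross-review loop. The reviewer functions below are stand-ins for real model calls, not any actual API:

```python
from typing import Callable

# A reviewer takes a diff and returns a list of findings (empty = looks clean).
Reviewer = Callable[[str], list[str]]

def cross_review(diff: str, reviewers: dict[str, Reviewer]) -> dict[str, list[str]]:
    """Run the same diff past several independent reviewers and collect
    every finding, keyed by reviewer, so no single model polices itself."""
    return {name: review(diff) for name, review in reviewers.items()}

def needs_human(findings: dict[str, list[str]]) -> bool:
    """Keep a human in the approval loop whenever any reviewer flags anything."""
    return any(findings.values())

# Stub reviewers standing in for calls to two different models.
reviews = cross_review(
    "diff --git a/app.py b/app.py ...",
    {
        "model_a": lambda d: ["possible hardcoded secret on line 12"],
        "model_b": lambda d: [],
    },
)
```

The design choice worth copying is the escalation rule: disagreement or any finding routes to a human, which matches her "keep humans in the approval loop" point.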
Your Flaky Tests Are the Best Place to Start with AI
Jaime raised a question that came up in earlier AI Giants episodes: AI fluency takes deliberate practice, but how does a senior developer responsible for maintaining a massive legacy monolith with tons of customers actually find time for that? Especially in environments where AI tools might not even be allowed yet.
Lawson's first point was that the burden of staying current has always been on you. "There was a time in my life where I had to learn Rust. I still am learning Rust for the record." Learning new tools isn't new. You've always had to adapt.
Then she got blunt.
"Don't tell me you ain't got twenty minutes. You probably spent twenty minutes right now listening to this nonsense when you should've been coding."
Here’s a quick checklist of her specific, practical starting points:
- Fix flaky tests.
- Upgrade that monolithic backend that's two versions behind.
- Handle the repeatable, tedious work.
These are safe entry points because they're controlled, repeatable, and don't touch production IP.
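The first item on that list is a good twenty-minute exercise even before any AI is involved: measure how flaky a test actually is by rerunning it. A minimal sketch, with a seeded stand-in for a real flaky test:

```python
import random

def flake_rate(test_fn, runs: int = 50) -> float:
    """Rerun a test many times and return the fraction of failures.
    0.0 means stable, 1.0 means always failing, anything between means flaky."""
    failures = sum(1 for _ in range(runs) if not test_fn())
    return failures / runs

# Stand-in for a real flaky test: passes roughly 70% of the time.
rng = random.Random(42)  # seeded so the example is reproducible

def sometimes_passes() -> bool:
    return rng.random() < 0.7

rate = flake_rate(sometimes_passes, runs=200)
```

A measured flake rate turns "this test is annoying" into a concrete, repeatable task you can hand to an agent and verify afterward.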
Specialists Aren't Going Anywhere
Some guests on AI Giants have argued that agents make everyone a generalist. Lawson disagrees.
"I'm just seeing deep nerds in their respective areas actually just get better and faster."
Agents unlock specialists to cross boundaries more easily. A backend engineer can now handle some JavaScript with an agent's help. A frontend dev can connect the dots to backend systems. But at scale, when problems get complex, you still want the expertise of someone who's seen it before.
The concept of frontend and backend teams isn't obsolete. It's just that the walls between them are getting shorter.
The S-Tier Language Is English
Asked for her programming language tier list, Lawson didn't hesitate: S-tier is English. Everything else is F-tier.
The reasoning tracks with everything she said about soft skills. Communication, critical thinking, and concise natural language are what make you effective with AI tools. Every time you can't make your point clearly, you're burning tokens and money.
Her advice to the mid-career engineer with a mortgage and kids is not to carve out a 60-hour learning block. It's to pick up the "new computer" in thirty-minute sessions, keep experimenting, and notice when you're too excited to sleep because of what you can now build. That's the signal you actually get it.
"If you can't be an effective communicator, it costs money, and nobody wants you wasting any darn money."
Netlify, Vercel, Cloudflare: What About Platform Convergence?
With Vercel pushing frameworks, Cloudflare going deep on infrastructure, and Netlify focused on composability and agent experience, Jaime asked whether these platforms converge in five years or fragment permanently.
Lawson believes that in five years, we probably won't care about the framework or the backend. The bigger concerns will be sustainability, performance at scale, privacy, governance, and how to power all of this without destroying the planet. The platforms that make it easiest for agents and people to get ideas onto the internet will win.
Codacy Helps You Ship Secure AI-Generated Code
As AI agents produce more code and the review challenge grows, the need for automated security becomes critical. Codacy Guardrails scans AI-generated code in real time, catching SAST vulnerabilities, hardcoded secrets, and insecure dependencies before they reach your repo.
Learn how Codacy Guardrails can protect your AI-accelerated development workflow at codacy.com
AI Giants is Codacy's podcast series featuring conversations with leaders building the future of AI coding. Watch the full episode with Dana Lawson on YouTube.