The Creator of OpenCode Thinks You're Fooling Yourself About AI Productivity
Six people. That's the entire team behind OpenCode, one of the fastest-growing coding agents on the market.
In this episode of AI Giants, Codacy CEO Jaime Jorge sat down with Dax Raad, creator of OpenCode and Zen, co-founder of Terminal, and one of the sharpest voices in AI coding.
The conversation went somewhere unexpected. Instead of hyping up what AI agents can do, Dax spent most of the time warning developers about what they can't: tell the difference between feeling productive and actually being productive.
If you're not familiar with OpenCode, here's a quick overview:
OpenCode is a terminal-first coding agent that launched in July 2025. Built from day one on a client-server architecture, it's designed to connect to any frontend: terminal, desktop, web, or mobile (see the sketch below).
Zen is their inference provider, which runs at cost to maximize adoption and drive down prices. They now offer GPT-5 15% cheaper than OpenAI's own pricing.
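To make the client-server idea concrete, here's a minimal TypeScript sketch. Everything in it is hypothetical: the /session endpoint, port, and payload are made up for illustration and are not OpenCode's actual API. The point is the shape: the agent loop lives behind a server, and a terminal, desktop, web, or mobile frontend is just a thin client talking to it over HTTP.

```typescript
// server.ts -- the agent runs here, no matter which UI connects (illustrative only)
import { createServer } from "node:http";

const server = createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/session") {
    let body = "";
    for await (const chunk of req) body += chunk;
    const { prompt } = JSON.parse(body);
    // ...hand the prompt to the agent loop and stream results back...
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ reply: `agent received: ${prompt}` }));
    return;
  }
  res.writeHead(404).end();
});

server.listen(4096);

// client.ts -- any frontend only needs to speak HTTP
const response = await fetch("http://localhost:4096/session", {
  method: "POST",
  body: JSON.stringify({ prompt: "add a retry to the upload handler" }),
});
console.log(await response.json());
```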
TL;DR: What Developers Can Learn
- The feeling of productivity is not the same as actual productivity. Be honest with yourself.
- Running tasks sequentially with a faster model often beats parallelizing with slower ones.
- Well-organized codebases perform dramatically better with LLMs than messy ones.
- Optimizing for benchmarks changes how you think, and not in a good way.
- If you build developer tools, you lose the right to have strong opinions about workflows.
- Long-running background agents work for some tasks, but most work benefits from staying in the loop with fast feedback.
Why Parallel Agents Feel So Good But Don't Work
Opening eight agents and watching them all run at once feels incredible. It feels like you've unlocked something nobody else can do. Dax called this "the sinister thing about multitasking."
But when he looks at who's actually shipping useful stuff, it's not these people. The productivity feeling is real. The productivity isn't.
Dax works in chaotic open source environments with constant Discord pings and Twitter mentions. If anyone should be good at juggling multiple agents, it's him. His conclusion: you'd probably ship more doing things one at a time with a faster model.
Benchmarks Will Poison Your Thinking
Dax doesn't trust benchmarks. He doesn't even trust compliments about his own product.
He's seen users claim OpenCode solved problems that Claude Code couldn't, when he knows they work almost identically under the hood. People hallucinate capabilities into products they like.
Academic benchmarks are worse. They test weird coding puzzles in three-file repos that look nothing like real work. Companies optimize for these benchmarks, then market their rankings. Dax immediately distrusts anyone who does this.
The deeper problem: once you start optimizing for benchmarks, you convince yourself that's what matters.
"You start to think if I do better on these benchmarks, I'll get more users. But people can't tell. They literally cannot tell."
His team is building benchmarks using real PRs from commercial open source projects like Sentry. Give the agent an actual task, see if it produces what was actually shipped. Real work, not academic puzzles.
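Here's a rough sketch of that idea in TypeScript. The types, helpers, and paths are hypothetical, not their actual harness; it just shows the shape: rewind a real repo to the commit just before a merged PR, hand the agent the original task, and compare its output against what actually shipped.

```typescript
import { execSync } from "node:child_process";

// Illustrative only: hypothetical types and helpers, not the real benchmark harness.
interface BenchmarkCase {
  repo: string;        // e.g. a commercial open source project like Sentry
  mergeCommit: string; // the commit where the real PR landed
  task: string;        // the issue or task description that PR addressed
}

async function runCase(
  c: BenchmarkCase,
  runAgent: (task: string, workdir: string) => Promise<void>,
) {
  const workdir = `/tmp/bench-${c.mergeCommit.slice(0, 8)}`;
  execSync(`git clone ${c.repo} ${workdir}`);
  // Rewind to the state of the codebase just before the PR was merged.
  execSync(`git checkout ${c.mergeCommit}~1`, { cwd: workdir });

  await runAgent(c.task, workdir); // the agent works against the real codebase

  // Compare what the agent produced with what was actually shipped.
  const agentDiff = execSync("git diff", { cwd: workdir }).toString();
  const shippedDiff = execSync(`git diff ${c.mergeCommit}~1 ${c.mergeCommit}`, {
    cwd: workdir,
  }).toString();
  return { agentDiff, shippedDiff };
}
```

Scoring the two diffs against each other (or running the project's own test suite on the agent's version) is where the judgment calls live, but the input is real work rather than a puzzle.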
Your Codebase Quality Matters More Than Your Model
LLMs perform way better on well-organized code. If your file structure is clean and components are properly separated, you can guess where functionality lives just by looking at the tree. That's helpful for humans. It's very helpful for LLMs.
OpenCode has built-in LSP support, which most coding agents lack. When the agent changes a function name and breaks something elsewhere, it gets immediate feedback and can fix it. Typed languages help for the same reason: deterministic error messages tell the LLM exactly what it broke.
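A toy TypeScript example of both points (hypothetical file names, not from the episode): when modules are separated the way the tree suggests, and the language is typed, a rename that misses a call site fails immediately and deterministically, which is exactly the kind of signal an LSP-aware agent can act on. The exact message may vary by compiler version.

```typescript
// src/billing/invoice.ts -- pricing logic lives where the tree says it does
export function computeTotal(items: { price: number; qty: number }[]): number {
  return items.reduce((sum, item) => sum + item.price * item.qty, 0);
}

// src/checkout/summary.ts -- a call site the agent forgot to update after renaming
// calculateTotal to computeTotal; the compiler/LSP flags it immediately:
//   error TS2305: Module '"../billing/invoice"' has no exported member 'calculateTotal'.
import { calculateTotal } from "../billing/invoice";

export function renderSummary(items: { price: number; qty: number }[]): string {
  return `Total: ${calculateTotal(items)}`;
}
```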
Stop Showing the Process Before the Outcome
Dax is frustrated by how developers talk about AI tools. Everyone shows the "how" first: which frameworks, which agents, how many parallel instances. Nobody leads with what they actually built.
His ask: show the amazing thing first. Make people ask "how did you do that?" Then explain the process. Too many developers flex their multi-agent setups without shipping anything meaningful. The entire discussion gets stuck on tooling instead of outcomes.
There's No "Correct" Way to Use OpenCode
When Jaime asked how developers should use OpenCode, Dax's answer was honest: "I'm waiting for our users to tell me."
Don't expect these tools to replace software engineering. Layer them into your existing workflow. Be honest about whether you're actually more productive or just feeling more productive.
And remember that sometimes doing things manually is faster. For certain work, the process of doing it yourself is how you figure out what needs to be done in the first place.
Codacy Helps You Ship Secure AI-Generated Code
Whether you're using OpenCode, Cursor, Copilot, or any other AI coding tool, the code still needs to be clean and secure before it hits production. Codacy Guardrails scans AI-generated code in real time as it's being written, catching SAST vulnerabilities, hardcoded secrets, and insecure dependencies before they ever reach your repo.
Learn how Codacy Guardrails can protect your AI-accelerated development workflow at codacy.com
AI Giants is Codacy's podcast series featuring conversations with leaders building the future of AI coding. Watch the full episode with Dax Raad on YouTube.