Defusing the AI Coding Time Bomb: Key Takeaways from Codacy's Latest Showcase


Codacy's latest product showcase opened with a reality check from VP of Technology Kendrick Curtis: after a decade of professionalizing software engineering with versioning, PRs, code reviews, agile, scrum, and test-driven development, AI coding tools have thrown us right back into chaos. The Wild West is back.

So how do we tame it? 

The showcase launched two new features to help: the AI Risk Hub and the AI Reviewer.

But it also brought together two experts from opposite ends of the development lifecycle to dig into the bigger question. Aruneesh Salhotra, co-author of the OWASP LLM Top 10 and lead of the OWASP AI BOM project, brought the governance and security perspective. Luca Rossi, creator of the Refactoring newsletter (read by 170,000+ engineers) and former CTO of Wanderer, represented the developer experience side.

The Governance Perspective: Compliance Is No Longer Optional

Salhotra drew from his work across multiple OWASP initiatives to explain where AI governance stands today.

The short version: frameworks are still catching up. With over 100 AI regulations emerging globally and the EU AI Act already in effect, organizations cannot treat AI compliance as an afterthought.


"It needs to be done in a continuous compliance way instead of point-in-time. Whether you're in security or development, you have to ensure all the existing policies you had in traditional software development are getting carried over along with new AI governance requirements."

Three key points from the governance discussion:

  • Prompt injection remains the #1 threat. It topped the OWASP LLM Top 10 in both published versions and will likely stay there for years to come. With multimodal applications, the attack surface keeps growing.
  • AI BOM is about operationalization, not documentation. The goal is not to generate a bill of materials and shove it into your GitHub or GitLab. The point is connecting threat intelligence feeds and enabling real-time reaction. As Salhotra put it: "Gone are the days where the reaction time could be one or two hours. Now we're talking minutes if not seconds for the impact to go widespread."
  • Agentic AI will escalate risks. As AI agents gain more autonomy, security implications multiply. Expect the threat landscape to shift significantly.


The Developer Experience Perspective: AI Amplifies What You Already Have

Rossi offered the engineering leadership view, drawing from his community of developers and managers.

His point: AI amplifies your team's existing behavior, good and bad. If you had disciplined engineering practices before AI, you'll use AI to raise the floor on quality. If you were reckless before, AI will make you more reckless.


"People who were concerned about quality and security before are going to be concerned that AI reduces control over the code. But if you weren't concerned before, I don't think AI is going to change your mind."

Research from the Refactoring community turned up something counterintuitive about how different roles interact with AI:

  • Individual contributors want tighter control. Engineers tend to maintain hands-on workflows with tight feedback loops, reviewing AI-generated code line by line.
  • Managers are often more reckless. They're more likely to run agents in the background during meetings without maintaining the same level of control over outputs. Curtis admitted to being guilty of this himself.


The Technical Debt Time Bomb

Both experts landed on the same concern: the coming wave of AI-generated technical debt.


"The apparent technical debt is going to increase multifold," Salhotra warned. "The code being generated may not be the most performant or secure. Unless you have something catching at least 90% of issues at the IDE level before commit, technical debt is definitely going to increase."

Curtis put it in concrete terms: if everyone's writing 3x more code and your baseline is one security vulnerability per thousand lines, you've tripled your security vulnerabilities. The only way out is guardrails that work at the speed of AI-assisted development.
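Curtis's arithmetic is easy to make concrete. The sketch below uses illustrative numbers (the defect rate and the 3x multiplier come from his example, not from measured data) to show how a fixed vulnerability rate scales linearly with output:

```python
# Back-of-the-envelope model of the scaling effect Curtis describes.
# The rate and line counts are illustrative, not measured figures.

def expected_vulns(lines_of_code: int, vulns_per_kloc: float) -> float:
    """Expected vulnerability count at a fixed defect rate per 1,000 lines."""
    return lines_of_code / 1000 * vulns_per_kloc

baseline = expected_vulns(100_000, 1.0)  # 100 vulns before AI assistance
with_ai = expected_vulns(300_000, 1.0)   # 3x the code at the same rate
print(baseline, with_ai)                 # the vulnerability count triples too
```

The only variable guardrails can move is `vulns_per_kloc`; if the rate stays flat while output triples, the absolute count triples with it.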


Should We Still Read the Code?

The panel tackled one of the more uncomfortable questions in modern software development: If AI writes it and AI reviews it, do humans still need to read the code?

Rossi advocated for nuance over dogma. Instead of blanket rules, invert the question: what could go wrong if you don't look at the code? The answer depends on context.

  • Critical path code: Maintain tight human control for the foreseeable future.
  • Non-critical features: Experiment with higher autonomy. Maybe push code optimistically and review asynchronously afterward.
  • The middle ground: Build mini autonomous workflows for areas where it's better to ask for forgiveness than permission.

"It's not one-size-fits-all," Rossi noted, referencing earlier work with Codacy on the topic. The idea of code reviews at different depths depending on how critical something is to the business? Still relevant.

But regardless of automation level, one principle holds: someone needs to put skin in the game. As Curtis put it, accountability cannot be delegated to AI. Someone human must still say, "Whether I've read it or not, I am accountable for this code."
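One way to operationalize Rossi's tiers is a path-based review policy. Everything below is hypothetical (the patterns, tier names, and `review_depth` helper are illustrative, not a Codacy feature); it simply shows how criticality-to-depth mapping could be encoded:

```python
# Hypothetical sketch: review depth tiered by how critical a code path is.
# Patterns and tier names are illustrative only.
from fnmatch import fnmatch

REVIEW_POLICY = [
    ("src/payments/**", "line-by-line"),   # critical path: tight human control
    ("src/internal-tools/**", "async"),    # non-critical: push, review afterward
    ("**", "standard"),                    # default tier for everything else
]

def review_depth(path: str) -> str:
    """Return the review tier for the first matching pattern."""
    for pattern, depth in REVIEW_POLICY:
        if fnmatch(path, pattern):
            return depth
    return "standard"

print(review_depth("src/payments/charge.py"))  # critical path gets line-by-line
```

First-match-wins ordering keeps the policy readable: the most critical paths sit at the top, and the catch-all pattern guarantees every file lands in some tier.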


The Path Forward: Invest in Automation Before Expecting AI Benefits

Salhotra pointed to mature engineering organizations as the model to follow. He referenced a top-tier US bank that invested heavily in automation and infrastructure years ago, to the point where they could experiment with flipping entire data centers. Organizations like that are positioned to reap AI's benefits immediately. Those that skipped the automation investment face a steeper climb.


"Organizations where engineering has been ingrained into their DNA, they're the ones that will reap the benefits of AI on day one. If you haven't invested in automation, this is the time to ensure you invest heavily in automation and integration testing across the board. If you miss out on one particular thing, it can bite you in the back."


Looking Ahead: Will Software Engineers Still Exist in 10 Years?

The panel closed with the question everyone's asking.

Rossi offered a measured take: "A lot of anxiety comes from this idea that there's a singularity moment where engineers aren't needed anymore. That's misguided. Even if these systems improve indefinitely, it's going to be a gradual process. We'll adapt and shift our role continuously."

Salhotra took a more pragmatic view. He pointed to billion-dollar companies running with fewer than 20 employees as evidence that the landscape is already shifting: "Does it eliminate the need for engineers or architects? Absolutely not. But instead of 100 people in technology, you might be able to do it with far fewer. That number could be 50%, it could be 80%."

The consensus: engineering roles will evolve and headcount may shrink, but the need for human judgment, accountability, and expertise is not disappearing.


What Codacy Launched

Product Engineer Luís Ventura demoed two new features designed to address the challenges discussed by the panel:

  • AI Risk Hub is a governance suite for engineering leaders to set organization-wide AI policies, track risk scores, and enforce automated safeguards across all repositories. It covers unapproved model usage, AI safety patterns (like invisible unicode detection), hardcoded secrets, and vulnerability scanning.
  • AI Reviewer is a hybrid code review engine that combines deterministic static analysis with context-aware reasoning. By feeding in coverage data, complexity metrics, and security findings, it delivers targeted feedback rather than noise. (Currently available for GitHub, with other providers coming soon.)
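The "invisible unicode detection" mentioned above refers to zero-width characters that render as nothing but can smuggle hidden instructions or payloads into AI-generated code. A minimal sketch of the idea (the character set and `find_invisible` function are illustrative, not Codacy's implementation):

```python
# Minimal sketch of an invisible-unicode check. The character set below
# is a small illustrative sample, not an exhaustive list.
import unicodedata

INVISIBLE = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space (BOM)
}

def find_invisible(source: str) -> list[tuple[int, str]]:
    """Return (index, character name) pairs for invisible characters."""
    return [
        (i, unicodedata.name(ch, "UNKNOWN"))
        for i, ch in enumerate(source)
        if ch in INVISIBLE
    ]

# This line looks like "total = a + b" in most editors,
# but hides a zero-width space after the "+".
snippet = "total = a +\u200b b"
print(find_invisible(snippet))  # [(11, 'ZERO WIDTH SPACE')]
```

Because these characters survive copy-paste and render invisibly in most diffs, catching them mechanically rather than by eye is the point of putting the check in an automated guardrail.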


The Bottom Line


Tools matter, but they are not magic. Success with AI coding requires organizations to:

  • Invest in automation and testing infrastructure before expecting AI benefits
  • Treat compliance as continuous, not point-in-time
  • Maintain human accountability even as AI handles more of the work
  • Right-size review processes to risk level rather than applying blanket rules

The AI coding Wild West will not tame itself. But with the right combination of governance, tooling, and culture, teams can move fast without losing control.

Want to try the new features?

Start a free trial or book a demo with our team.
