Why Your Company Needs AI Governance Now
Developers have fully embraced AI coding tools, and they are not planning to let go. They ship more, debug faster, and automate the tedious parts of the job, and that's a good thing.
There's a catch, though. In a study of radiologists using AI assistance, accuracy dropped from 80% to 20% when the AI provided false positives. Even experienced radiologists saw their accuracy plummet to 40%.
This shocking finding from Ray Eitel-Porter's new book "Governing the Machine" reveals a critical blind spot as companies race to adopt AI tools, especially in software development.
In the latest AI Giants episode, Codacy CEO Jaime Jorge sat down with Ray Eitel-Porter, who built Accenture's internal AI compliance framework and wrote the book on practical AI governance. Their conversation couldn't be more timely as AI coding tools become ubiquitous.
What is AI Governance?
AI governance establishes processes and guardrails to ensure AI systems deliver promised value without causing harm. It includes risk assessment frameworks, accountability structures, and systematic approaches to identifying where AI is being used and what could go wrong.
TL;DR: What Engineering Leaders Need to Know
- Automation bias is your biggest hidden risk: The better AI gets, the less likely developers are to catch its mistakes
- Start with identifying where AI is already being used: Most companies don't even know the full extent of AI adoption in their codebase
- Context determines everything: A returns bot for €5 products needs different governance than safety-critical systems
- The EU AI Act is already in effect: Training requirements kicked in February 2025; high-risk provisions hit August 2026
- Small companies need "cognitive speed bumps": Deliberate friction points that force humans to think, not just click accept
- Human-in-the-loop beats human-on-the-loop: Working with the AI on each decision outperforms letting it run autonomously and merely monitoring the output
The Trust Paradox: Why Companies Struggle to Deploy AI
"One of the reasons that companies are struggling to actually implement AI is precisely because they don't necessarily trust the outcomes," Eitel-Porter explains.
This creates a vicious cycle: without governance, there's no trust; without trust, there's no adoption; without adoption, there's no ROI.
The solution is to build confidence through systematic risk assessment.
As Eitel-Porter puts it: "AI governance is what helps you make sure that you can actually deliver on the results of AI."
The Automation Bias Trap
Here's the counterintuitive problem: The more reliable AI becomes, the more dangerous it gets.
"If your AI is making mistakes 25% of the time, you're going to be fairly alert to it," Eitel-Porter notes. "But if your AI is only making a mistake once or twice in a hundred, you're much more likely to overlook it."
This is particularly critical in software development, where AI coding assistants are approaching that dangerous sweet spot of being right often enough that developers stop scrutinizing their output. Those one or two mistakes per hundred could be security vulnerabilities, logic errors, or performance bottlenecks that slip through because developers have been lulled into complacency.
Eitel-Porter highlights one company's approach to combat this: "Salesforce talks about something called cognitive speed bumps - designing processes or systems in a way that tries to make people think, so you don't get lulled into a routine." He compares it to dummy phishing attacks that security teams run — deliberate tests that keep people alert rather than operating on autopilot.
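What might a cognitive speed bump look like in an AI-assisted review flow? Here is a minimal sketch, assuming a hypothetical convention where AI-generated commits carry an `AI-Assisted: true` trailer and the check runs as a local pre-merge step. The trailer name, prompts, and probability are illustrative assumptions, not features of Codacy, Salesforce, or any specific tool.

```python
# Sketch of a "cognitive speed bump" for AI-assisted code review.
# Assumption (illustrative): AI-generated commits carry an "AI-Assisted: true"
# trailer, and this script runs as a local pre-merge step. The goal is
# deliberate friction, not blocking automation.

import random
import subprocess
import sys

SPEED_BUMP_PROBABILITY = 0.2  # ask roughly one review in five to slow down


def commit_is_ai_assisted(ref: str = "HEAD") -> bool:
    """Check the latest commit message for the (hypothetical) AI-Assisted trailer."""
    message = subprocess.run(
        ["git", "log", "-1", "--format=%B", ref],
        capture_output=True, text=True, check=True,
    ).stdout
    return "AI-Assisted: true" in message


def speed_bump() -> bool:
    """Force the reviewer to articulate what the change does before approving."""
    summary = input("In one sentence, what does this AI-generated change do? ").strip()
    risk = input("What is the riskiest line in the diff, and why? ").strip()
    # Any non-empty answers pass; the point is to break the "click accept" reflex.
    return bool(summary) and bool(risk)


if __name__ == "__main__":
    if commit_is_ai_assisted() and random.random() < SPEED_BUMP_PROBABILITY:
        if not speed_bump():
            print("Review paused: please read the diff before approving.")
            sys.exit(1)
    sys.exit(0)
```

Like the dummy phishing tests Eitel-Porter mentions, the randomness matters: if every review triggered the prompt, answering it would itself become routine.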
The MVP of AI Governance
For smaller companies wondering where to start, Eitel-Porter offers practical advice:
- Follow the money - "If you're going to introduce AI, it either means you're going to spend money on some third party product or dedicate internal resources"
- Simple risk triage - Ask: Is this high-risk or low-risk? Low risk: proceed. High risk: dig deeper
- Document the basics - Who approved it? What questions were asked? What tests were run? (See the sketch after this list.)
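To make the triage and documentation steps concrete, here is a minimal sketch of the kind of record a small team might keep per AI project. The fields, risk tiers, and example values are illustrative assumptions, not a standard schema or anything Eitel-Porter prescribes.

```python
# Minimal sketch of an AI risk-triage record ("document the basics").
# Field names and risk tiers are illustrative assumptions, not a standard schema.

from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class AIRiskRecord:
    project: str                 # follow the money: what are we buying or building?
    owner: str                   # who runs the system day to day
    approved_by: str             # who approved it
    risk_tier: str               # "low" -> proceed; "high" -> dig deeper
    questions_asked: List[str] = field(default_factory=list)
    tests_run: List[str] = field(default_factory=list)
    reviewed_on: date = field(default_factory=date.today)

    def needs_deeper_review(self) -> bool:
        """Simple triage: only high-risk projects trigger a fuller assessment."""
        return self.risk_tier == "high"


# Example: a returns bot for low-value products sits at the low-risk end,
# unlike an HR screening or safety-critical system.
returns_bot = AIRiskRecord(
    project="Returns chatbot for low-value products",
    owner="support-team",
    approved_by="head-of-support",
    risk_tier="low",
    questions_asked=["Can it issue refunds directly?"],
    tests_run=["Manual spot-check of 50 conversations"],
)
assert not returns_bot.needs_deeper_review()
```

Kept somewhere anyone can read, a record like this is also what makes the transparent, crowdsourced review described next possible.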
One innovative approach came from a company that couldn't afford a central governance team. They made their risk assessment process completely transparent, allowing anyone in the organization to review AI projects. This crowdsourced oversight had an unexpected benefit: teams discovered similar projects elsewhere in the company, reducing duplication and fostering collaboration.
The Regulatory Reality Check
Under the EU AI Act, companies operating in Europe must already provide AI literacy training to anyone using AI tools. From August 2026, high-risk applications (healthcare, financial services, HR, safety systems) will face strict compliance requirements.
Will it become the global standard like GDPR? Eitel-Porter, who would have said yes a year ago, now has doubts: "With the new Trump administration in the US being against federal level regulation, I think there is less chance that the EU AI Act will be adopted globally."
What This Means for Software Development
As engineering teams increasingly rely on AI for coding, review, deployment, and monitoring, the traditional checks and balances are being replaced by AI systems checking other AI systems. The key is maintaining meaningful human oversight without destroying productivity.
The path forward is to build systematic approaches to identify, assess, and manage AI-generated code risks. Because in a world where misleading AI output can drag expert accuracy from 80% down to 20%, the companies that survive will be those that build governance into their DNA, not bolt it on as an afterthought.
This is precisely why Codacy developed Guardrails to automatically detect and prevent security vulnerabilities in AI-generated code before they reach production.
Take the Free Assessment
AI-assisted coding is here for good. The teams that succeed will be the ones building the right guardrails and culture around it.
Take the AI Coding Risk Assessment below, see your benchmark, and follow the custom recommendations to move toward safer, more compliant AI-assisted development.
How does your organization stack up?