Evolving DevSecOps to Protect Against New Threats Associated with AI and ML


It's genuinely hard to overstate AI's effect on software security. Even setting aside AGI and the paperclip problem, AI opens up so many new attack vectors within codebases and software that DevSecOps professionals must rethink how they protect their applications.

It's equally difficult to overstate AI's effect on software development in general. According to our 2024 State of Software Quality survey, 64% of developers have already integrated AI into their code production workflows, and 62% use AI to review their code.

DevSecOps has to evolve to match these new threats. But how? Let's look at four categories of AI threats and how DevSecOps operators can adapt their processes to defend against each.

The Threat From Poor Code

This is the immediate threat from AI to development, and it's a doozy. As AI-powered code generation tools like GitHub Copilot and OpenAI Codex become more sophisticated and widely adopted, we'll see a lot more code written with the help of AI assistants.

While these tools can significantly boost developer productivity, they also introduce new risks. AI models can generate code that looks plausible but contains subtle bugs or security vulnerabilities. For example, a model might suggest a code snippet for user authentication that seems to work fine but contains a hard-to-spot logic flaw that attackers could exploit.
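
To make this concrete, here is a small hypothetical sketch (the function names and scenario are illustrative, not taken from any real assistant output) of the kind of snippet that looks correct and passes ordinary tests, yet hides a subtle flaw: the first comparison leaks timing information an attacker can use to recover a valid token one character at a time.

```python
import hmac

def verify_token_insecure(provided: str, expected: str) -> bool:
    # Plausible-looking suggestion: a plain string comparison short-circuits on
    # the first mismatched character, leaking timing information to attackers.
    return provided == expected

def verify_token_safer(provided: str, expected: str) -> bool:
    # Constant-time comparison closes the timing side channel.
    return hmac.compare_digest(provided.encode(), expected.encode())
```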

The problem is compounded by developers' tendency to trust AI-generated code too much. They might assume it must be secure and bug-free because it comes from a tool like Copilot. As a result, they may not scrutinize it as carefully as they would human-written code.

There's also the risk that AI could be used to deliberately generate malicious code. Imagine an attacker feeding a code generation model examples of subtle backdoors and exploits. The attacker could then use the model to create innocent-looking code snippets containing hidden vulnerabilities and try to incorporate these into open-source projects or share them online for unsuspecting developers to use.

To defend against these threats, DevSecOps teams need to adapt their processes in several ways:

  1. Treat AI-generated code with extra scrutiny. Establish clear guidelines that any code suggested by AI tools must be carefully reviewed and tested before being incorporated into projects. Don't assume AI code is secure by default.

  2. Enhance code review practices. Train developers to spot common types of vulnerabilities that AI might introduce, such as race conditions, unsanitized inputs, or weak cryptography. Automated static analysis tools can flag many of these issues as well.

  3. Harden testing and validation. Run AI-generated code through rigorous security testing and validation before deploying it. Perform fuzz testing, penetration testing, and other security checks to identify potential weaknesses (a minimal fuzz-style test is sketched after this list).

  4. Monitor for suspicious code patterns. Use tools to monitor codebases and repositories for signs of AI-generated malicious code, such as code that matches known vulnerability templates or exhibits unusual behavior.
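
As a sketch of point 3, the snippet below uses the hypothesis library to property-test a hypothetical sanitize_username() helper of the sort an AI assistant might generate. The helper and its rules are assumptions for illustration; the pattern itself, asserting security properties over arbitrary inputs rather than hand-picked cases, applies to any AI-suggested input handling.

```python
# Property-based "fuzzing" with hypothesis; run with pytest.
from hypothesis import given, strategies as st

ALLOWED = set("abcdefghijklmnopqrstuvwxyz0123456789_-")

def sanitize_username(raw: str) -> str:
    # Hypothetical AI-suggested helper: keep only allow-listed characters.
    return "".join(ch for ch in raw.lower() if ch in ALLOWED)[:32]

@given(st.text())
def test_sanitized_username_is_always_safe(raw):
    cleaned = sanitize_username(raw)
    # These properties must hold for every possible input, not just happy paths.
    assert set(cleaned) <= ALLOWED
    assert len(cleaned) <= 32
```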

As AI becomes more integral to the development process, DevSecOps must evolve to address the new risks it introduces. By treating AI-generated code with appropriate caution and enhancing secure development practices, teams can harness the power of AI while keeping their applications safe. It won't be easy, but it's a challenge we must rise to meet.

The Threat From AI Agents

DevSecOps also has to consider attacks carried out by AI itself. AI agents are autonomous or semi-autonomous systems that can perform tasks, make decisions, and interact with their environment without direct human control. As these agents become more sophisticated and widely deployed, they present new security challenges that DevSecOps teams must address.

One alarming scenario is using AI agents for intelligent vulnerability scanning and exploitation. Imagine an AI system that continuously learns and adapts its methods to find and exploit software weaknesses. Such an agent could rapidly discover zero-day vulnerabilities and craft targeted exploits with a level of speed and precision that human hackers can't match.

Another concern is using AI for adaptive distributed denial-of-service (DDoS) attacks. An AI-powered botnet could automatically optimize its attack patterns based on the defenses it encounters, making it much harder to block with traditional DDoS mitigation strategies. It could also coordinate swarms of IoT devices to amplify its impact.

To counter these threats, DevSecOps teams need to fight AI with AI:

  1. Deploy AI-powered intrusion detection and prevention systems that can match the speed and adaptability of AI attackers. These systems should use machine learning to continuously model normal behavior and detect anomalies (see the sketch after this list).

  2. Integrate AI into red teaming and threat-hunting processes to proactively identify vulnerabilities and attack paths that AI agents might exploit. Automated penetration testing tools can help here.

  3. Use AI to automate and accelerate incident response. AI systems can help triage alerts, investigate incidents, and suggest optimal containment and remediation actions.
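
As a minimal illustration of the first point, the sketch below trains scikit-learn's IsolationForest on baseline traffic features and flags a burst that deviates from them. The feature layout and thresholds are assumptions; a production system would model far richer telemetry and retrain continuously.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed features per time window: request rate, mean payload size,
# distinct endpoints hit, and error ratio.
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[50, 2000, 5, 0.01], scale=[10, 500, 2, 0.01], size=(1000, 4))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A burst that looks like an adaptive bot: high rate, many endpoints, many errors.
suspicious = np.array([[900, 1500, 40, 0.35]])
print(detector.predict(suspicious))  # -1 flags an anomaly, 1 marks an inlier
```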

DevSecOps teams must adapt by leveraging AI in their defense strategies. This will require a significant investment in AI security skills, tools, and processes, but it's an investment we must make to stay ahead of the AI attack curve.

The Threat from AI Models

What if you are incorporating an AI model into your application? AI and ML models are notorious for being complex, opaque, and challenging to debug. This complexity makes it too easy for critical vulnerabilities to slip through the cracks unnoticed. 

For example, consider a deep learning model used for fraud detection in a financial application. If there's a subtle flaw in the model architecture or training data, attackers could learn to generate adversarial examples that bypass the model's fraud checks. This could lead to massive financial losses before the issue is even detected.
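
To see why this is plausible, the toy sketch below (synthetic data and a deliberately simple linear model, not a real fraud system) nudges a flagged transaction along the model's weight vector until it is scored as legitimate. Evasion attacks on deep models work on the same principle, just with gradients obtained by backpropagation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
legit = rng.normal(0.0, 1.0, size=(500, 4))
fraud = rng.normal(2.0, 1.0, size=(500, 4))
X, y = np.vstack([legit, fraud]), np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X, y)

sample = fraud[0].copy()
print("before:", model.predict([sample])[0])  # 1 = flagged as fraud

# Step against the direction that increases the fraud score.
direction = model.coef_[0] / np.linalg.norm(model.coef_[0])
evasive = sample - 3.0 * direction
print("after: ", model.predict([evasive])[0])  # typically 0 = slips past the check
```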

The breakneck pace of AI development also works against secure coding practices. The pressure to ship new models means less time for thorough security audits and penetration testing. I've seen teams deploy models to production that have only been tested for accuracy without security validation. It's a recipe for disaster.

So, how can DevSecOps adapt? First, development teams must prioritize security as a first-class concern in the AI development lifecycle. This means baking automated security tests into CI/CD pipelines, conducting rigorous code reviews focused on security, and performing regular security audits on AI codebases and open-source dependencies.
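
What "baking security tests into CI/CD" can look like in practice is sketched below: a pytest file that blocks a release unless the candidate model still flags a curated set of known fraud patterns, including slightly perturbed variants. The artifact paths and thresholds are assumptions, not a prescribed layout.

```python
import numpy as np
import pytest

@pytest.fixture
def model():
    import joblib
    # Placeholder path: load whatever artifact the pipeline just produced.
    return joblib.load("artifacts/fraud_model.joblib")

def test_known_fraud_is_still_flagged(model):
    known_fraud = np.load("tests/data/known_fraud_vectors.npy")
    assert model.predict(known_fraud).mean() >= 0.99  # near-zero misses allowed

def test_small_perturbations_do_not_flip_decisions(model):
    known_fraud = np.load("tests/data/known_fraud_vectors.npy")
    noise = np.random.default_rng(0).normal(0, 0.05, size=known_fraud.shape)
    assert model.predict(known_fraud + noise).mean() >= 0.95
```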

Second, invest in tools for explainable AI and model interpretability. Understanding and auditing model decisions is critical for identifying potential security flaws. This likely means choosing open-source models such as Mixtral or Llama over closed-source, proprietary LLMs.
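
As one concrete example of such tooling (an assumption for illustration, not a prescribed stack), the open-source shap library can attribute an individual prediction to its input features, which helps reviewers notice when a model leans on attacker-controllable signals.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for a fraud model; column names are illustrative.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)), columns=["amount", "hour", "geo_risk", "velocity"])
y = (X["amount"] + X["velocity"] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values)  # per-feature contributions for the first five decisions
```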

Finally, ensure your incident response playbooks are adapted for AI-related threats. Be prepared to rapidly roll back problematic models, issue security patches, and communicate with stakeholders in the event of an AI security breach.

The Threat From AI Users

Finally, if you deploy AI models, DevSecOps must consider malicious end users. Malicious prompt engineering can manipulate your AI into generating harmful, biased, or misleading outputs.

For example, an attacker could try to inject malicious instructions into a chatbot's prompts, tricking it into revealing sensitive information or generating spam and propaganda. By experimenting with different prompts and observing the outputs, attackers can "jailbreak" the AI system and bypass its safety constraints.

Moreover, as instruction-tuned LLMs like Claude or GPT become more widely used in business settings, malicious employees could use prompt engineering to extract confidential data, generate insider trading recommendations, or even impersonate executives. The risks multiply quickly.

To defend against malicious prompt engineering, DevSecOps teams need to take several proactive measures:

  1. Implement strict input validation and filtering to stop potentially dangerous prompts from reaching the AI system. This may involve blocklists, rate limiting, and language analysis techniques (see the sketch after this list).

  2. Carefully audit prompts and outputs in testing and production to identify attempts at prompt engineering and jailbreaking. Anomaly detection tools can help surface suspicious patterns.  

  3. Harden the AI system's training to make it more resistant to manipulation. This can involve conditioning the model to refuse specific requests, incorporating ethical constraints into the reward function, and training on adversarial prompts.

  4. Implement secure authentication and access controls to limit who can interact with the AI system. The principle of least privilege should be applied to restrict access to the minimum required.
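
A minimal sketch of the filter described in step 1 follows. The blocklist patterns, rate limit, and function name are assumptions for illustration; real deployments typically layer classifiers and semantic checks on top of rules like these.

```python
import re
import time
from collections import defaultdict, deque

BLOCKED_PATTERNS = [
    re.compile(r"ignore (all|any|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal .*(system prompt|api key|password)", re.IGNORECASE),
]
MAX_REQUESTS_PER_MINUTE = 20
_recent: dict[str, deque] = defaultdict(deque)

def allow_prompt(user_id: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    now = time.time()
    window = _recent[user_id]
    while window and now - window[0] > 60:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_MINUTE:
        return False  # rate limit exceeded
    if any(p.search(prompt) for p in BLOCKED_PATTERNS):
        return False  # matches a known injection pattern
    window.append(now)
    return True
```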

By baking prompt engineering defenses into the AI development and deployment process and continuously adapting to new threats, DevSecOps teams can help ensure that AI systems are used for their intended purposes and societal benefit. This will require collaboration across disciplines, from AI and security researchers to policymakers and ethics experts. But it's a critical challenge we must tackle head-on to realize AI's promise while mitigating its risks.

Evolving DevSecOps for the Age of AI

The threats AI poses are varied, complex, and rapidly evolving, ranging from AI-generated vulnerabilities to autonomous attack agents to malicious prompt engineering.

To stay ahead of these threats, DevSecOps must undergo a radical evolution. This evolution will require a fundamental shift in mindset, prioritizing AI security as an existential imperative rather than an afterthought. It will demand new tools, processes, and skills, from adversarial testing and explainable AI to continuous monitoring and adaptive incident response.

Most importantly, it will require a culture of proactive vigilance and relentless adaptation. As AI capabilities advance, so will the creativity and determination of malicious actors seeking to exploit them. DevSecOps teams must be ready to meet each new challenge with agility, ingenuity, and unwavering resolve.
