Into the Breach: Business Survival in the Age of Accelerating Cyber Threats


We spend around $215 billion per year on cybersecurity vendors, practices, and tools, and that figure keeps rising.


This is a vast sum, but it pales in comparison to the cost of cybercrime itself.


Not only is the cost of cybercrime already far greater than what we spend fighting it, but it is also growing roughly 40% faster than our annual spending to prevent it.

At Codacy, we’re dedicated to helping organizations fight security issues and keep these costs down. But it is an uphill struggle, and one that will only get harder with the advent of AI.

Offensive AI: Why Costs Are Going to Keep On Increasing

Cybersecurity and cybercrime are a standoff between attackers and protectors. Someone is trying to get privileged access to your system, and someone is trying to protect that same system.

Until now, cybercrime, successful as it has been, has been rate-limited by two factors:

  1. It’s time-consuming: Building the infrastructure, automation, and data required for attacks is difficult and requires extensive time and resources.

  2. It has a low conversion rate: Most companies and individuals won’t fall for a phishing email or trojan, so attackers need high volume for the numbers to work.

Let’s look at how AI erodes both of these constraints across two common attack vectors: phishing and malware.

Phishing

In February 2024, the FCC ruled that robocalls using AI-generated voices are illegal. That is a significant step forward, but with AI advancing rapidly, it is already becoming harder to tell what is real from what is not.

Phishing will take advantage of these AI advances because they reduce the friction for would-be attackers. As we’ve said, these attacks are time-consuming and resource-intensive, and their low conversion rate usually makes them uneconomical for cybercriminals to run.

But this equation changes with AI. AI enables spear phishing at scale, creating tailor-made campaigns explicitly targeted at companies or individuals. 


This works in two ways:

  1. AI can automate the process. A simple example is scraping social sites such as LinkedIn and feeding the results into a large language model, which then produces a custom-made phishing attack for each individual or company. This can run every hour of every day, nonstop.

  2. AI will increase the conversion rate. Because AI can collect huge amounts of data and lets cybercriminals experiment, phishing emails will become more sophisticated. Just as you already can’t reliably tell deepfakes from genuine people, you won’t be able to tell real emails from fake ones, increasing their success rate.

So AI really can transform phishing.

Another consequence is that everyday online interactions will become more fragile. AI can generate complete sentences in your voice from a short audio snippet, so voice recognition services, such as those banks use to protect your account, can be attacked and overcome. Soon, signing a document, signing up for a service, or even signing into one will become burdensome as companies try to limit their liability and stop sophisticated AI agents from gaining access.

Malware

Neural networks can already obfuscate code and hide its intent.


This means that it will take time to understand how these attacks work and how to remove them from production code. We can imagine a future where AI-generated malware evolves faster than our ability to detect and contain it, lurking in systems and causing damage long before it's discovered.

Moreover, AI is also being leveraged to automate the process of finding vulnerabilities in software. LLMs can identify weaknesses in code that would previously have required human expertise, so any given vulnerability is more likely to be found. In defenders’ hands, this builds better security, but the same automation means vulnerabilities are also more likely to be found and exploited at scale by attackers.
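That same capability is available to defenders. As a minimal, hedged sketch (not Codacy’s implementation), the snippet below assumes access to the openai Python client and asks a general-purpose LLM to flag likely weaknesses in a piece of code; the model name, prompt, and example snippet are illustrative assumptions.

```python
# Hedged sketch: first-pass triage of a code snippet with a general-purpose LLM.
# Assumes the `openai` Python package (>= 1.0) and an OPENAI_API_KEY in the
# environment; model name and prompt are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = '''
def get_user(conn, username):
    # String-formatted SQL: a classic injection risk
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'")
'''

def triage_snippet(code: str) -> str:
    """Ask the model to list probable weaknesses in `code`, with a short rationale."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whichever model you use
        messages=[
            {
                "role": "system",
                "content": "You are a code security reviewer. List likely "
                           "vulnerabilities and give a one-line rationale for each.",
            },
            {"role": "user", "content": code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_snippet(SNIPPET))
```

Output from this kind of first pass still needs human review, but it illustrates the point: the marginal cost of scanning code for weaknesses is collapsing for defenders and attackers alike.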

As AI enables the creation of more sophisticated and harder-to-detect malware while simultaneously accelerating the discovery of software vulnerabilities to exploit, the speed and scale of attacks will increase dramatically. Cybersecurity teams will find themselves in an arms race, struggling to keep pace with AI-powered threats that evolve faster than traditional defenses can adapt.

Why Some Companies Suffer More Than Others

In this new world, it will be SMBs that suffer the most. 

In the past, an attacker would look at a large company and a small one and reason, “Attacking either takes about the same effort, so I may as well go for the higher-value target.” They would invest more resources in the larger company in pursuit of a larger reward.

But now, larger companies will be the only ones capable of mounting any real defense against AI-driven cybercrime. And with automation, the marginal cost of a single AI attack drops drastically, so the economics shift toward volume. A recent study found that 43% of attacks are already aimed at SMBs, while only 14% of those SMBs are prepared for them. So it is no surprise that attacks against SMBs are effective.

Why have SMBs fallen victim? We work with hundreds of SMBs, so we asked them. It comes down to four core reasons:


  1. Lack of resources. SMBs typically lack the financial resources to invest in robust cybersecurity tools, hire dedicated security personnel, or provide comprehensive security training for employees. This exposes them more to threats than larger enterprises with bigger budgets and specialized security teams.

  2. Not the target of security vendors. Many cybersecurity vendors prioritize selling to large enterprises, as they tend to have more complex needs and bigger contracts. As a result, SMBs often struggle to find affordable, SMB-focused security solutions that fit their specific requirements and budget constraints.

  3. Not enough time. For many SMBs, especially in the early stages, the focus is building the core business and acquiring customers. According to our 2024 State of Software Quality report, 58% of developers say not having enough time is the single most common challenge faced during code reviews. Investing time and effort into cybersecurity can feel like a lower priority than other pressing business needs, leaving potential vulnerabilities unaddressed.

  4. Protected by the herd. In the past, SMBs have relied on "security through obscurity"—the idea that they are less likely to be targeted by attackers than larger, more high-profile companies. However, with AI's increasing automation and scale of attacks, this "herd protection" is rapidly diminishing, putting SMBs at greater risk.

With AI enabling attacks at scale, the attacker now thinks, “I can reach far more targets, and sophistication raises my conversion rate. Any company is now a potential target, and SMBs don’t have the resources to fight back, so I’ll start there.”

Instead of the herd protecting SMBs, AI lets attackers treat the herd itself as profit.

Defensive AI: Using AI To Protect Against These New Threats

I believe that in the future, it will be hard to protect ourselves online. But one of the tools we now have at our disposal is the same one the criminals have: AI.

The CompTIA State of Cybersecurity survey asked US technical and business professionals how AI could be utilized for various cybersecurity tasks, defining six possible use cases:


  1. Monitoring network traffic and detecting malware (53%). AI can be trained on vast amounts of network data to learn patterns of regular traffic and identify anomalies that may indicate malware or other threats. By continuously monitoring network activity in real-time, AI-powered systems can quickly detect and alert security teams to potential breaches.

  2. Analyzing user behavior patterns (50%). AI algorithms can analyze user activity logs to establish baseline patterns of normal behavior for each user or user group. By identifying deviations from these patterns, such as unusual login times, locations, or resource access attempts, AI can help detect insider threats or compromised accounts (a minimal sketch of this idea appears after this list).

  3. Automating response to cybersecurity incidents (48%). When a potential threat is detected, AI can automatically initiate predefined security protocols, such as isolating affected systems, blocking suspicious IP addresses, or escalating alerts to relevant personnel. This rapid, automated response can help contain the impact of an attack and reduce the workload on security teams.

  4. Automating configuration of cybersecurity infrastructure (45%). AI can optimize the configuration of firewalls, intrusion detection systems, and other security tools based on an organization's specific needs and risk profile. By continuously learning and adapting to changes in the threat landscape, AI-driven configuration management can help maintain a robust security posture.

  5. Predicting areas where future breaches may occur (45%). By analyzing historical data on past attacks and current threat intelligence, AI models can identify patterns and vulnerabilities that attackers are likely to target in the future. This predictive capability allows organizations to proactively strengthen their defenses in high-risk areas.

  6. Generating tests of cybersecurity defenses (45%). AI can create and run simulated attacks against an organization's security controls, helping to identify weaknesses and gaps in defenses. By continuously testing and refining security measures, AI-driven penetration testing can help organizations stay one step ahead of evolving threats.
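To make the second use case above a little more concrete, here is a minimal, hedged sketch of behavioral anomaly detection: an unsupervised model (scikit-learn’s IsolationForest) is trained on a few simple session features and then flags sessions that deviate from the learned baseline. The features, synthetic data, and thresholds are illustrative assumptions, not a production design.

```python
# Hedged sketch of use case 2: flag anomalous user sessions with an
# unsupervised model. Features and data are illustrative; a real system
# would use much richer telemetry. Requires numpy and scikit-learn.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behavior: daytime logins, modest transfer sizes, few failed attempts.
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),    # login hour (roughly 8-14)
    rng.normal(50, 15, 500),   # MB transferred
    rng.poisson(0.2, 500),     # failed login attempts
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# New sessions to score: one typical, one suspicious (3 a.m. login,
# exfiltration-sized transfer, repeated failures).
new_sessions = np.array([
    [11.0, 55.0, 0.0],
    [3.0, 900.0, 6.0],
])

for session, label in zip(new_sessions, model.predict(new_sessions)):
    verdict = "ANOMALY - escalate" if label == -1 else "ok"
    print(session, verdict)
```

In practice, this is where use cases 2 and 3 meet: an anomaly flag like the one above could trigger an automated response, such as stepping up authentication or isolating a host, rather than just writing a log line.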

Most security experts see significant potential for AI to enhance various aspects of cybersecurity. 

Monitoring network traffic, detecting malware, automating incident response, predicting breaches, and automating configuration are some of the ways AI can already be used to protect companies, online behavior, and consumers. We expect that list to grow as creative people keep pushing the frontier of defensive AI.

AI Battling AI

We’re going to need more tools and certifications, and there is certainly a focus on that in the US and Europe with legislation such as the EU Cyber Resilience Act.

This concern is global, and it connects with what we’re doing at Codacy. We have changed our mission and our posture. We’ve always been a quality-driven company, helping customers produce excellent, clean code. But with everything happening around security, we now believe software security is akin to a fundamental right for any business, large, medium, or small, because you’re going to need it if you want to stay in business.

So, we changed our mission. For more than 1,000 customers, we are now using static code analysis, composition analysis, secret scanning, pen testing, and AI-battling AI to ensure that every line of code is trustworthy for our customers and their customers.

If you’d like to join us on this mission, you can find out more about Codacy Security and sign up here. We are always looking for mission-driven individuals to join us on this journey to making the software world more secure.

 
