
Code Quality and CI/CD Success




On June 15th, we hosted a webinar called Code Quality and CI/CD Success. In this webinar, guest speaker Zan Markan, Senior Developer Advocate at CircleCI, joined Kendrick Curtis, Director of Technology at Codacy, to discuss how to improve code quality while ensuring CI/CD success.

During this talk, they covered the following:

  • What is code quality, and is it relevant to DevOps/platform engineers?
  • How do code quality tools fit into your CI/CD pipeline?
  • What does code quality look like in an IaC world?
  • How do you automate code quality across large organizations?

In case you missed the webinar live, don’t worry: you can (re)watch the recording here or below 👇

A live talk on code quality and CI/CD

You can watch the full talk in the video recording – we even give you the specific timestamps! But we’ve also summarized the topics for you to read 🤓

What does code quality mean? (00:03:25)

For Kendrick, who comes from a developer background, code quality is about the code being readable, maintainable, functionally correct, and architecturally well-organized. Most of these operational aspects of code quality can be addressed with the help of static analysis tools like Codacy Quality.

For Zan, who also has significant development experience but comes from the world of CI/CD, code quality needs to be seen from a team angle. The code needs to be maintainable and correct, not only from an individual developer level but also from a team and operator standpoint. In short, the code needs to work for the team.

Another way of looking at code quality is to define the lack of it. For example, you don’t have good-quality code if you have security vulnerabilities. Plus, if your code is difficult to build, maintain, or deploy, it’s not high-quality code.

Is code quality only relevant to application developers? (00:06:42)

Code quality is relevant to different teams, not only application developers. For example, every piece of code needs to be tested, and tests are code as well. For QA or test engineering, we need to guarantee the tests are maintainable, which also helps us maintain other pieces of code, allowing us to increase the quality of our application’s code.

Then we also have the infrastructure, where we’re increasingly writing our code in an IaC (Infrastructure as Code) paradigm. We’re no longer clicking through the AWS dashboard to define our production environments; we’re describing them using Terraform or another tool, and that’s code too. That code needs to be robust, maintainable, secure, and high-quality.

As such, developers and operators should care, and they are both impacted by code quality. Plus, every organization nowadays tends to rely more on technology. Even if they are not directly a technical organization, if you rely on technology, you rely on code and on that code being of high quality.

Where does code quality tooling fit within CI/CD pipelines? (00:08:55)

When discussing CI/CD pipelines, we’re talking about the automation that takes your source code into production. Every step in this process can and should be as automated as possible: running automated tests every time you push a change, security scanning, static analysis, linting, all the way to production.

In a CI/CD sense, the pipeline helps us maintain code quality by continuously ensuring we are not regressing.

What happens when the pipeline fails? (00:10:39)

Usually, by default, a pipeline fails if any of the components fail. 

In a CircleCI pipeline, you’ll find workflows, which are graphs of jobs. A job is a single unit of execution, whether that’s tests, linting, or some other type of validation. Ultimately, a job is a command line instruction set in a given environment.

Let’s say you have a linter job. If any of the steps that do the checking terminates with anything other than exit code zero, the command has failed. As such, the entire job fails, and so does the entire workflow, because that single job has failed. You can change this default behavior if you want. For example, every tool has severity thresholds, and you could sometimes turn off warnings.
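This exit-code convention is easy to sketch. Below is a minimal, hypothetical illustration (not CircleCI’s actual implementation) of how a CI job can be modeled as a list of command-line steps, where any non-zero exit code fails the step, and one failed step fails the whole job:

```python
import subprocess

def run_step(command: list[str]) -> bool:
    """Run one job step; a non-zero exit code means the step failed."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"Step {command!r} failed with exit code {result.returncode}")
        return False
    return True

def run_job(steps: list[list[str]]) -> bool:
    """A job fails as soon as any of its steps fails (the typical CI default)."""
    return all(run_step(step) for step in steps)
```

Because `all()` short-circuits, later steps are skipped once one fails, mirroring how a CI job usually stops at the first failing command.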

Some schools of thought treat every compiler warning as a failure. However, that can be frustrating for developers, so it’s a matter of understanding the trade-offs: zero compiler warnings is potentially good for the quality of the code itself, but it comes at the cost of developer happiness.

Are compiler warnings more or less serious than SCA warnings? (00:14:08)

They are both important. For example, if you disable a code-style warning once, you’re opening the door to eroding your code consistency. As for compiler warnings, if you disable one for some little thing, at some point you’ll end up with a big list of warnings, one of which might be relevant, but you’ll never know which one.

That’s also one of the big problems within code quality analysis, particularly when you start a project with a new tool. The tool will find a thousand errors, warnings, and other information in the first scan. Finding which ones are important can be a challenge.

What do you do when you get an error for a dependency deep within your dependency graph? (00:15:42)

Ideally, you would resolve everything, but, in reality, not every dependency tree is resolvable. Sometimes you just have to make difficult trade-offs. You might need to accept and ignore the warning for a particular issue because no upgrade path would fix it, or because a dependency has been deprecated or not updated for many years.

When using CI/CD, what happens when a nightly build outside our main build chain fails? Should we fail anything on our main pipeline? (00:17:40)

This is something that every team needs to establish for themselves. 

If the nightly build fails, you can automate the process of making the main branch fail. You could have the production pipeline run the entire test suite again, but that would make your production deployment take several hours, which is probably not what you want.

Realistically, it’s better to establish good practices as a team. For example, you can say that you don’t deploy to production unless all of these tests have passed against the same iteration of the code as the latest run. That way, we know everything’s stable, and ultimately that’s just good housekeeping.
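That team convention can be captured in a tiny gate check. The sketch below is a hypothetical illustration (the function and its inputs are ours, not from any particular CI tool): only deploy if the full nightly suite passed, and it ran against the same commit we are about to ship.

```python
def can_deploy(candidate_sha: str, nightly_sha: str, nightly_passed: bool) -> bool:
    """Gate production deploys on the latest nightly run.

    Deploy only when the nightly suite passed AND it ran against the
    same iteration of the code (same commit) as the deploy candidate.
    """
    return nightly_passed and candidate_sha == nightly_sha
```

A deploy pipeline could call this as its first step and stop early when it returns `False`, instead of re-running the hours-long suite itself.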

What is an acceptable time for your CD tooling to run? (00:21:58)

It depends on the project, the team’s size, and its capabilities, among other things.

Every year, CircleCI creates a report called The State of Software Delivery, which takes a snapshot of everyone running pipelines on the platform. The latest report looked at 15 million jobs and concluded that the sweet spot for a pipeline duration is about 10 minutes. It’s a time frame that doesn’t block you too much when you’re waiting on feedback as a developer, and doesn’t block pull requests from being reviewed.

However, there might be situations where you need to accept a longer pipeline time. If you’re not dealing with a small service, but instead you’re testing hardware, you’ll need a longer running pipeline just because the nature of what you’re building is more complex. In that case, it’s more expensive to test, and failures are more expensive to fix, so it’s always a balancing act.

Regarding good practices, what should we do when linters or tests fail, so that people can come and attend to them? (00:25:02)

First, we need to ensure that all of the relevant information about the failure is surfaced. Ideally, if we can export the reports into formats like JUnit XML, we can parse them on the CI/CD side and show exactly which part of the project has an issue.
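As a rough illustration of that parsing step, here is a minimal sketch (the report content and field choices are our own assumptions) that pulls the failed cases out of a JUnit-style XML report using only the standard library:

```python
import xml.etree.ElementTree as ET

# A tiny, made-up JUnit-style report for illustration.
JUNIT_XML = """\
<testsuite name="linting" tests="3" failures="1">
  <testcase classname="app.models" name="test_user_validation"/>
  <testcase classname="app.views" name="test_login">
    <failure message="AssertionError: expected 200, got 500"/>
  </testcase>
  <testcase classname="app.utils" name="test_slugify"/>
</testsuite>
"""

def failed_cases(junit_xml: str) -> list[tuple[str, str, str]]:
    """Return (classname, test name, failure message) for each failed case."""
    root = ET.fromstring(junit_xml)
    failures = []
    for case in root.iter("testcase"):
        failure = case.find("failure")
        if failure is not None:
            failures.append(
                (case.get("classname"), case.get("name"), failure.get("message"))
            )
    return failures
```

A CI step can run something like this over the exported report and surface each `(classname, name, message)` triple directly in the build summary.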

But we also need to inform the relevant people so that they can fix the issues, and we can take advantage of tools like Slack or email. Ultimately, a pipeline is a vehicle for automation: you have the execution mechanism and decide what needs to be executed. Slack is a great example; many teams use it, and creating integrations and sending messages to relevant teams is very easy. For instance, if my team built something, we need to be informed when a build has failed, especially on one of the critical branches we’re watching.
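For a flavor of how simple such an integration can be, here is a sketch that builds the JSON payload Slack’s incoming webhooks accept and posts it with the standard library. The webhook URL is a placeholder, and the message format is our own choice:

```python
import json
import urllib.request

# Placeholder: replace with your team's real incoming-webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_failure_message(branch: str, job: str, build_url: str) -> dict:
    """Build a payload in the shape Slack incoming webhooks expect."""
    return {
        "text": (
            f":red_circle: Build failed on *{branch}*\n"
            f"Job: `{job}`\nDetails: {build_url}"
        )
    }

def notify(payload: dict) -> None:
    """Post the payload to the webhook (requires network access)."""
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

A pipeline would typically call `notify(build_failure_message(...))` only in a failure handler, and only for the critical branches the team cares about.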

How do you manage builds and communication as you scale and start having multiple teams, products, and execution tools? (00:27:27)

Organizations are increasingly adopting a centralized approach. The platform movement is emerging and becoming very popular: best practices are nurtured in a central team that helps with the tooling. You’re looking at someone who knows the tooling inside out, whether that’s a CI/CD or a language-specific tool.

Let’s say you have a couple of C# projects, Python projects, and web and mobile projects, and you want to centralize best practices across them. Whoever is spinning up new projects needs to be involved with those best practices to unify as much as possible.

One thing that’s very important to unify, especially as organizations grow, is naming conventions. If everyone has their own naming conventions, there will be clashes and inefficiencies. Plus, enforcing best practices at the platform level takes a lot of work.

There’s a very good video by Mike McGarr about running Netflix’s developer experience team, which we would now identify as a platform team, and creating what they call the “paved road” of tools for all the teams within Netflix to use. One of the reasons for building this platform team was to reduce some of the heavy mental load on engineers.

If you’re building a platform team, the point is to be an enabler. You’re giving people the tools and the abilities without overwhelming them. At Codacy, our QA team are enablers; they’re not here to write the tests or execute the manual test plan but to enable the whole team to take on those activities. In the CI/CD world, for example, fewer pipelines are maintained by dedicated DevOps teams; instead, developer teams own them.

What else is hot in CI/CD right now? (00:36:50)

CI/CD is here for everyone building, deploying, and automating everything in every possible way.

One trend we’re seeing is generative AI, and it will not pass our industry by. Instead, we’ll see more and more uses of AI, with generative tooling supporting us rather than us doing everything manually. Tools like that already exist for some use cases, for better or worse, so that’s definitely one of the hottest trends.

The other trend is the growing number of events that could trigger CI/CD pipelines beyond the build-test-deploy or code-commit-push flows. We’re going to start seeing more advanced integrations.

As teams build best practices around tooling, we’ll start thinking about what else should trigger builds. Maybe your observability tool should trigger a pipeline if it detects that a deployment is causing significantly more errors, and we could even automatically revert it. These are some CI/CD-specific things we’ll see in the next few years.

Q&A time

After the talk, we opened the floor to all the questions the audience might have. We’ve listed them here:

  • Besides unit tests, how else can I use CircleCI to ensure code quality? And how do I use Codacy to ensure this as well? (00:41:25)
  • Where do you see the split in how much the CI/CD should be checking and how much should be shifted left to the developer before the push to the pipeline? Is it best practice to do as much as possible on both sides or minor checks to be speedy pre-push and then intensive in the pipelines? (00:44:15)
  • What aspects of the CI/CD toolset help to improve quality at the integration layer? For example, how microservices work together, rather than the quality of an individual service (00:49:03)

Thank you to everyone who joined live and those who watched the recording later on! 

