On February 2nd, we hosted a webinar called Level Up Your Team’s Code Reviews. Our Engineering Manager, Kendrick Curtis, and our Lead Account Executive, Mark Raihlin, talked about how you can help your team perform better code reviews. They discussed:
- Where and how code reviews go wrong;
- Building a new code review checklist;
- How automation can save time and improve code quality;
- Improving the developer experience in general;
- Open Q&A.
In case you missed the webinar live, don't worry: you can (re)watch the recording here or below.
A live talk on code reviews
You can watch the full talk in our video recording; we even give you the specific timestamps! But we've also summarized the topics for you to read.
Identifying the problem: Kendrick's personal experience (00:03:34)
This story might be familiar to you: code reviews drag on, taking several days to complete, and some PRs might even stay open for months! But, as an Engineering Leader, you need buy-in if you want change to happen. In Kendrick's case, people complained about code reviews in retrospectives and on Slack, so they found out who was complaining and whom those people trusted to lead the change.
They made two changes: they reduced the size of the teams and organized a code review checklist workshop. As a result, code review time went down from multiple days to 8 hours. Let's see why and how this worked.
At the time, there were two teams of eight developers each, so the approach was to split them into four teams of four developers. With that change, people started taking more responsibility for what was happening in their team. An interesting book about software team organization is Team Topologies by Matthew Skelton and Manuel Pais. Give it a read!
The next step was to organize a code review checklist workshop with the developers. Checklists are crucial in the code review process: they are not substitutes for expert knowledge, but they act as a memory jog in complex situations. If you want to learn more about the importance of checklists, check out the book The Checklist Manifesto: How to Get Things Right by Atul Gawande.
Creating a code review checklist (00:14:08)
To run a workshop in which your team creates a code review checklist that better serves its needs, follow these steps:
- Divide your team into small groups, and have each group brainstorm ideas for things they look for in code reviews.
- Each group then shares its ideas with everyone to build a more extensive list.
- Organize all the ideas into themes, and vote on the ones that will make the most difference.
- Finally, review how you might put those ideas into practice.
In our webinar, we did this exercise on a smaller scale. Attendees wrote in the chat some of the ideas they came up with for a code review checklist:
- Is the code being merged to the correct branch?
- Does this code satisfy the requirements?
- Does the code have tests? Did you run the tests?
- Is the code optimized? Can the problem be solved in a more appropriate or optimal way?
- Does the code introduce security issues (e.g., OWASP top 10)?
- Does the code follow standard coding rules and naming conventions?
- Does the code have comments?
- Is the code readable and understandable?
- Were the docs updated?

When and how to automate code reviews (00:25:40)
Luckily, there are things you can automate in your code review process! And that is where Codacy Quality comes in, automating some of your checklist items directly in the Pull Request. As a result, your developers can focus on the more important and challenging things.
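To make the idea concrete, here is a minimal sketch of automating two of the checklist items above ("Does the code follow standard coding rules?" and "Did you run the tests?") as a pre-merge check. This is only an illustration, not how Codacy Quality implements its checks, and it assumes flake8 and pytest are the tools your project uses:

```python
# pre_merge_checks.py - a minimal sketch of automating two checklist items.
# Assumes flake8 and pytest are installed; swap in your own tools as needed.
import subprocess
import sys

CHECKS = [
    ("coding rules (flake8)", ["flake8", "."]),
    ("test suite (pytest)", ["pytest", "--quiet"]),
]

def main() -> int:
    failed = []
    for name, command in CHECKS:
        print(f"Running {name}...")
        if subprocess.run(command).returncode != 0:
            failed.append(name)
    if failed:
        print("Failed checks:", ", ".join(failed))
        return 1  # a non-zero exit marks the check as failed in most CI systems
    print("All automated checklist items passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Running a script like this in CI means reviewers no longer spend time on checks a machine can do for them.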
Let's see an example of a particular PR in the Artemis public repository, where you'll find a section with the PR checks. Quality will inform us about the code quality, the defined standards, and how we can enforce those standards. In this example, the Quality checks failed, meaning the code didn't meet some of the standards this team defined.

If we want to learn more, we can go to the details page, where we have an overview of everything that Codacy Quality detected in this PR before the merge.
For example, we can see how we are cleaning up our code and tackling technical debt, as well as how code complexity is evolving. Other metrics that can appear here are duplication and code coverage. We recently released a feature that adds a coverage summary directly on your Git provider.

We can check out all this information in the Codacy Quality UI, where we can open up the issues to better understand them. We can also look at the bigger picture and follow the evolution of our code quality for that particular repository, or for any branch we are interested in.

How can you measure success? (00:33:19)
At the time, Kendrick's team didn't track the effect of their changes and the code review checklist. They used a time-in-state Jira widget to track the time spent in code review and could watch it decrease, but they didn't know about the DORA metrics. If they had, they could have tracked cycle time as well.
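If you want to start measuring this yourself, even a back-of-the-envelope calculation helps. The sketch below computes the average time a PR spends in review; the PR numbers and timestamps are invented for illustration, and in practice you would pull them from your Git provider or from a time-in-state widget in Jira:

```python
# review_time.py - a rough sketch of tracking time-in-review.
# All PR numbers and timestamps below are hypothetical.
from datetime import datetime
from statistics import mean

# (pr_number, opened_at, merged_at)
PULL_REQUESTS = [
    (101, datetime(2023, 2, 1, 9, 0), datetime(2023, 2, 3, 17, 0)),
    (102, datetime(2023, 2, 2, 10, 30), datetime(2023, 2, 2, 18, 30)),
    (103, datetime(2023, 2, 6, 14, 0), datetime(2023, 2, 7, 11, 0)),
]

hours_in_review = [
    (merged - opened).total_seconds() / 3600
    for _, opened, merged in PULL_REQUESTS
]

print(f"Average time in review: {mean(hours_in_review):.1f} hours")
```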
We asked our attendees how they track Engineering performance, and the vast majority were split between Sprint velocity (47%) and individual performance reviews (35%). Only 5% said they were using Engineering metrics, like the DORA metrics.
The DORA team surveyed thousands of teams across multiple industries to identify the most effective and efficient ways to develop and deliver software. Their research identified 4 key metrics that indicate software development and delivery performance: deployment frequency, lead time for changes, change failure rate, and time to restore service.
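As a toy illustration of how some of these metrics can be computed, the sketch below derives deployment frequency, lead time for changes, and change failure rate from a handful of deployment records; every value in it is hypothetical:

```python
# dora_metrics.py - a toy sketch of three DORA metrics.
# All deployment records below are invented for illustration.
from datetime import datetime

# (commit_time, deploy_time, caused_incident)
DEPLOYMENTS = [
    (datetime(2023, 2, 1, 9, 0), datetime(2023, 2, 1, 15, 0), False),
    (datetime(2023, 2, 2, 11, 0), datetime(2023, 2, 3, 10, 0), True),
    (datetime(2023, 2, 6, 8, 0), datetime(2023, 2, 6, 12, 0), False),
    (datetime(2023, 2, 7, 16, 0), datetime(2023, 2, 8, 9, 0), False),
]

window_days = 7  # the observation window covered by these records
lead_times_h = [
    (deploy - commit).total_seconds() / 3600 for commit, deploy, _ in DEPLOYMENTS
]
failures = sum(1 for *_, caused_incident in DEPLOYMENTS if caused_incident)

print(f"Deployment frequency: {len(DEPLOYMENTS) / window_days:.2f} deploys/day")
print(f"Average lead time for changes: {sum(lead_times_h) / len(lead_times_h):.1f} hours")
print(f"Change failure rate: {failures / len(DEPLOYMENTS):.0%}")
```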
Years of research culminated in annual reports like the Accelerate State of DevOps 2022 report. The research was also presented in the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations by Nicole Forsgren, Jez Humble, and Gene Kim.

Codacy Pulse is currently one of the easiest ways to obtain the DORA metrics by simply integrating the tools you already use in your development lifecycle, such as GitHub (or Bitbucket), Jira, and PagerDuty.

Q&A time
After the talk, we opened the floor to all the questions the audience might have. We were delighted to receive a ton of questions! We've listed them for you:
- Should I start code review meetings? Will this be effective?
- How do you find the balance between working on your own tasks vs. reviewing others' code?
- My team is too lazy to review others’ code. They keep on approving without actually reviewing. How can I deal with this?
- How can I “learn” from the code reviews?
- My team works on multiple projects at the same time; how can I measure my team members' performance across multiple repos and projects?
- About the code review workshop: what if you run a workshop and no one engages because people hate speaking in public?
Thank you to everyone who joined live and to those who watched the recording later on! Let's keep the conversation going in our community.