
Functional and non-functional testing methods you should know about




An important metric of code quality is how much of your codebase is covered by tests, as we saw in a previous article about code coverage. Different tests allow you to evaluate distinct aspects of your code, from a localized view to its global behavior, including end-user interaction and performance.

Today, we’ll cover some of the most common testing methods you can use. Each type has its characteristics, as well as pros and cons. Although this is not an exhaustive list, it can help you get started in the testing world, so keep on reading!

What’s the difference between manual and automated tests?

Before diving deeper, it’s important to distinguish between manual and automated tests.

In manual testing, a person specifies input or interacts with the software by clicking through the application or using APIs. Manual tests are expensive because they require someone to set up an environment and execute the tests. Plus, the process is slow and error-prone. However, it might be the only solution for edge cases and niche scenarios.

On the other hand, a machine executes automated tests, running a test script. These tests vary in complexity and are a great way to scale your QA process. They’re more robust and reliable than manual tests. However, the quality of your automated tests depends on how well you’ve written them.

Functional testing

Functional tests focus on checking the business requirements of your software. As such, when performing functional testing, you must test every feature against a set of requirements or specifications and see whether you’re getting the desired results.

In functional testing, you test each feature by providing an input, defining the expected output, and validating the actual output against it. Comparing actual outputs against desired behaviors provides a clearer picture than testing individual functionalities in isolation.

Unit testing

Unit testing checks individual components or pieces of code at a time. The main goal is to determine if an indivisible logic unit (like a single method, function, or object) does what it’s supposed to do. Unit tests can also support non-functional testing.

Unit tests are very low level and close to the source of an application. As a result, they’re generally cheap to automate and reduce the cost of bug fixes since they can be done early in the development lifecycle. Plus, a CI server can run them very quickly.
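As a minimal sketch, here is what a unit test can look like in pytest style (plain asserts). The function under test, slugify, is a hypothetical example chosen for illustration, not something from this article:

```python
# Minimal unit test sketch: one small function, tested in isolation.
# slugify is a hypothetical function used only for illustration.
def slugify(title):
    """Convert a title into a URL-friendly slug."""
    return "-".join(title.lower().split())

def test_lowercases_and_joins_words():
    assert slugify("Hello World") == "hello-world"

def test_handles_extra_whitespace():
    assert slugify("  Testing   Methods ") == "testing-methods"

# Run directly; a test runner like pytest would discover the
# test_* functions automatically.
test_lowercases_and_joins_words()
test_handles_extra_whitespace()
```

Because the test exercises a single function with no external dependencies, it runs in milliseconds, which is what makes unit tests so cheap to automate on a CI server.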

Integration testing

Integration testing checks multiple components together. The main goal is to ensure relationship integrity and data flow among different parts or units operating together.

Usually, you’ll first run unit tests to test the logical integrity of individual units. Then, you’ll run integration tests to ensure the interaction between these units is behaving as expected. However, integration tests are more expensive to run as they require multiple parts of your software to work.
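A hedged sketch of the idea: below, a hypothetical registration service and an in-memory store are tested together, so the assertion verifies the data flow across the unit boundary rather than either unit alone:

```python
# Hypothetical integration sketch: a service and a store tested
# *together*. Both classes are illustrative, not from a real library.
class InMemoryUserStore:
    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)

class RegistrationService:
    def __init__(self, store):
        self.store = store

    def register(self, user_id, name):
        if self.store.get(user_id) is not None:
            raise ValueError("user already exists")
        self.store.save(user_id, name)

def test_registered_user_is_persisted():
    store = InMemoryUserStore()
    service = RegistrationService(store)
    service.register(1, "Ada")
    # The assertion crosses the boundary: data flowed service -> store.
    assert store.get(1) == "Ada"

test_registered_user_is_persisted()
```

In a real system the store would be a database or an external API, which is exactly why integration tests are slower and more expensive than unit tests.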

System testing

System testing checks the whole system against the specified requirements. The main goal is to ensure that the entire application, as a unit, behaves how we expect it to.

After unit and integration testing, you can proceed with the system testing once you have a fully integrated application. System tests are expensive and can be hard to maintain. However, you can parallelize them to test several platforms simultaneously, schedule them or make them part of a CI/CD pipeline, and include tests of different user behaviors or edge use cases with simple parameters.

Regression testing

Regression testing checks that previously functional features are still working appropriately. The main goal is to ensure that old code continues to function after the most recent modifications have been made.

Regression tests are usually time-consuming and can be very expensive since they often include running the entirety of all unit, integration, and system tests to ensure no functionality has changed. However, they are essential to ensure that new code modifications do not have unintended consequences for current functionality.
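One common pattern is to pin down a previously fixed bug with a dedicated test so it cannot silently return. The example below is hypothetical: assume parse_price once raised an error on thousands separators like "$1,299.50" and was fixed:

```python
# A regression test guards a past bug. parse_price is a hypothetical
# function; assume it once crashed on inputs containing commas.
def parse_price(text):
    """Parse a price string such as '$1,299.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_plain_price():
    assert parse_price("$42.00") == 42.0

def test_thousands_separator_regression():
    # Guards the old bug: commas used to make float() raise ValueError.
    assert parse_price("$1,299.50") == 1299.5

test_plain_price()
test_thousands_separator_regression()
```

Accumulating such tests is why regression suites grow large and slow over time, but each one documents a failure mode the codebase has already hit once.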

User acceptance testing

User acceptance testing checks if the system satisfies users’ requirements and preferences under specific conditions. The main goal is to ensure that the application behaves how the users want.

Within acceptance testing, there can also be multiple phases, such as alpha or beta testing:

  • Alpha testing: Aims to catch errors and issues in the software before launch. It’s done in the last development phase, not in the real environment but in a simulated one.
  • Beta testing: It’s done after alpha testing and before the broad launch of the product. A limited number of actual customers or users run the software in a real environment to confirm it works as expected. After collecting feedback from those users, you can make changes.

User acceptance has some limitations, like the limited number of edge cases and scenarios a person can come up with. In fact, many organizations avoid relying on user acceptance testing due to its unreliability, cost, and time consumption. Still, some user acceptance testing is a critical part of testing procedures for most software.

Non-functional testing

Non-functional testing focuses on the non-functional aspects of an application, such as performance, reliability, usability, and security. You’ll typically perform non-functional tests after functional testing. However, as they’re more difficult to perform manually, you should use tools to automate the testing process.

With non-functional testing, you can improve your software’s quality and end-user experience. For example, performance and reliability under load aren’t functional components but can make or break the user experience.

Performance testing

Performance testing puts the software under expected and unexpected usage loads. The main goal is to measure the software’s scalability and the way it uses the resources available.

These tests help to measure, among other things, the reliability, speed, scalability, and responsiveness of a system. For example, with performance tests, you can observe response times when executing many requests or determine how the system behaves with a significant amount of data. You can also determine if an application meets performance requirements, locate bottlenecks, and measure stability during peak traffic.
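A very small-scale sketch of the idea, assuming an illustrative process_batch function and a deliberately generous time budget (real performance tests use dedicated load tools and realistic workloads):

```python
# Tiny performance check: time a function on a fixed workload and
# assert it stays within a budget. process_batch and the 2-second
# threshold are illustrative assumptions.
import time

def process_batch(items):
    # Stand-in for real work: normalize and sort a batch of records.
    return sorted(item.strip().lower() for item in items)

def test_batch_performance():
    batch = [f"  Record-{i} " for i in range(50_000)]
    start = time.perf_counter()
    result = process_batch(batch)
    elapsed = time.perf_counter() - start
    assert len(result) == 50_000
    # Generous budget so the check only flags genuine slowdowns.
    assert elapsed < 2.0, f"processing took {elapsed:.2f}s"

test_batch_performance()
```

Note that timing-based assertions are flaky on shared CI hardware, which is one reason dedicated load-testing tools report percentiles over many runs instead of a single pass/fail threshold.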

Usability testing

Usability testing checks the quality of a user’s experience. The main goal is to ensure the application is user-friendly, meaning it must be easy to understand and pleasant to use.

Usability tests are mainly manual, and the process does not scale well. Regardless, a lack of usability testing often leads to unintuitive interfaces and can stop certain users from using the software altogether. 

One example of usability testing is accessibility testing. These tests determine whether the software is accessible to people with disabilities, including deafness, color blindness, blindness, cognitive impairments, age-related limitations, and others.

Compatibility testing

Compatibility testing checks if your software can run on different configurations, databases, web browsers, operating systems, mobile devices, network environments, hardware, etc.

Backward compatibility is also important; it tests if a new or updated version of your software is compatible with the previous versions of the environments (such as operating systems and web browsers) on which the software runs. These tests ensure that users with older versions of a particular environment can still use your software.

Security testing

Security testing evaluates the security of your application so that you can prevent breaches. Security experts run these tests to assess your software’s resilience to internal and external threats and attacks.

Security testing can range from automatic scanning to periodic penetration testing, depending on the software’s level of exposure to potential threats. In penetration testing, every way an application can be compromised (cross-site scripting, unsanitized inputs, buffer overflow attacks, among others) is exploited to check how the system handles it.
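Alongside manual penetration testing, some security checks can live in your automated suite. As a hedged sketch, the hypothetical test below verifies that user input is escaped before being embedded in HTML, guarding against cross-site scripting:

```python
# Hypothetical security-oriented test: user input must be escaped
# before being embedded in HTML, to prevent cross-site scripting.
import html

def render_comment(comment):
    """Embed a user comment in an HTML snippet, escaping it first."""
    return f"<p>{html.escape(comment)}</p>"

def test_script_tags_are_neutralized():
    rendered = render_comment('<script>alert("xss")</script>')
    assert "<script>" not in rendered
    assert "&lt;script&gt;" in rendered

test_script_tags_are_neutralized()
```

A test like this encodes one known attack vector; it complements, rather than replaces, broader scanning and penetration testing.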


There is no one-size-fits-all solution to testing software applications. You don’t need to perform all the tests we mentioned, and there are many more we didn’t cover. The exact tests you should run will depend, among other factors, on the type of software you’re building, your requirements, and your kind of users.

Automated tools like Codacy can help you visualize files and lines of code that need more tests, allowing you to be more confident about your code quality. So if you’re looking for a static analysis tool that allows you to check your code quality, code coverage, and keep track of your technical debt, try out Codacy today.

