A microservice architecture is made up of small, independently deployable services, each focused on a single responsibility. A complete business application can be obtained by aggregating and orchestrating such services. These services are independent of each other, giving you the ability to easily test and deploy each one individually. A single instance of a microservice must serve a single business responsibility in your business application.
Testing strategies for microservices
The software industry now makes use of the microservice-based architecture for most new development. Furthermore, many organisations are moving from monolithic to microservice-based architectures. Consequently, every single microservice must be tested before it communicates with other microservices.
As we know, a microservice is an architectural style that develops a single application as a suite of services. These services are independently deployable, have different data storage methods, and can be written in different languages. They have either a bare minimum of dependencies or zero dependencies, with only a bare minimum of centralized management. These services are built around business capabilities and can also be deployed in containers. Sometimes, these characteristics of the microservice style create complexity for testers trying to test a microservice end-to-end.
The testing pyramid strategy
The traditional approach to the testing pyramid is shown in the following diagram:

As you can see in the diagram, the testing pyramid approach is extremely efficient for a monolithic application. For a long time, this strategy was used by most enterprises in the software world. In a microservice world, however, this approach is not well suited to testing a microservice-based application, because the biggest complexity is not within the microservice itself, but in how it interacts with other microservices. The pyramid approach focuses mostly on unit testing rather than integration testing, which is why it can be harmful to a microservice application.
The testing honeycomb strategy
In the microservice architecture, having too many unit tests with a small definition for each microservice is not the best choice. Each microservice is bound to a single business capability, not the whole business. This might simply be a small part of the business scope, so it is not worth writing too many unit test cases for a microservice. A better way of structuring our tests for microservices would be through the testing honeycomb, as seen in the following diagram:

As you can see in the preceding diagram, the integration part is larger than the other parts, which means our main focus will be on integration testing, rather than on unit (implementation detail) tests and integrated tests. It is worth noting that there are very few implementation detail tests and integrated tests.
Unit testing
When it comes to unit testing microservices, unit test cases only cover a single microservice. We could have a number of unit test cases for a microservice; how they are written depends on the language and the framework we are using to develop that microservice.
A unit can consist of a line of code, a method, or a class. Unit testing refers to testing a particular unit for any bugs or issues. Optimally, the smaller the unit is, the better it is, because this allows testing on a more granular level, and gives a more accurate view of how well the overall code is performing. The most important factor of unit testing is that, by running many small tests, instead of one big test, you can complete the testing process in a matter of seconds or minutes, instead of hours, depending on the size of the code. See the following diagram showing a microservice that has unit tests in accordance with the number of units:

The preceding diagram is concerned with unit tests in all units of a microservice.
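As a minimal sketch, assuming JUnit 5 is on the classpath, the following test exercises a single unit inside one microservice. The OrderTotalCalculator class and its behaviour are hypothetical, used only to illustrate how small, fast unit tests look:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical unit under test: a small pricing helper inside one microservice.
class OrderTotalCalculator {
    // Returns the order total after applying a percentage discount.
    double total(double price, int quantity, double discountPercent) {
        double gross = price * quantity;
        return gross - (gross * discountPercent / 100.0);
    }
}

class OrderTotalCalculatorTest {

    private final OrderTotalCalculator calculator = new OrderTotalCalculator();

    @Test
    void appliesDiscountToGrossAmount() {
        // 2 items at 50.0 each with a 10% discount -> 90.0
        assertEquals(90.0, calculator.total(50.0, 2, 10.0), 0.0001);
    }

    @Test
    void zeroDiscountLeavesGrossUnchanged() {
        assertEquals(100.0, calculator.total(50.0, 2, 0.0), 0.0001);
    }
}
```

Because each test exercises one tiny unit, a whole suite of such tests still runs in seconds.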
According to Robert V. Head, in his book Real-Time Business Systems, the programmer who writes the code should perform the unit testing, because of their knowledge of every niche of the program. They can easily access all the different parts of the program, and hence, make the testing process very easy to execute, while also saving a lot of time.
Unit testing can be used for many different applications. Here are some details of these other uses:
- Test-driven development: In this process, tests are written before any of the code that is to be tested. Code is then written so that it passes those test cases.
- Work check: While test-driven development is a nice concept, it is not everyone's cup of tea. In that case, unit testing is simply about testing the code we have already written and correcting it where needed.
- Code documentation: Documentation is a strenuous task, and takes a lot of work to prepare because of the continuous changes that are made to the code. Unit testing helps documentation by providing pieces of code that explain how the product is meant to behave.
Unit testing alone doesn’t determine the behavior of the whole system; it only gives good coverage of each of the core modules of the system in isolation. To verify that each module correctly interacts with its collaborators, more coarse-grained testing is required.
Integration testing
An integration test is the opposite of a unit test. In the microservice architecture, integration testing is typically used to verify interactions between different layers of integration code and external components, such as databases and external REST APIs. Integration tests can also cover communication with other microservices, including data stores and caches.
Each microservice must be verified and tested individually, with well-written unit test cases. However, each microservice communicates with other microservices, so the proper functioning of inter-service communication is a very critical part of microservice architecture testing. Calls to and from external services must complete successfully for the integration to work.
Microservice integration testing validates that the distributed system is working together with external dependencies smoothly, and also checks that all external or internal dependencies between the services are present as expected. See the following diagram on integration testing:

In the preceding diagram, which looks at integration-testing a microservice, we test external dependencies, such as external services and external databases. In this diagram, we implement integration tests for gateway and persistence integration. The gateway integration test ensures that any protocol-related functionality is working properly, and also tests for protocol errors, such as missing HTTP headers, incorrect SSL handling, and so on.
The persistence integration test allows you to test database level errors, such as schema mismatches and mapping issues, as well as ORM tool compatibility.
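As a sketch of a gateway integration test, assuming JUnit 5 and WireMock are on the classpath, the external book service can be replaced by a stub so that protocol-level details such as status codes and headers can be checked. The /books/1 endpoint, the port, and the payload are hypothetical:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import static com.github.tomakehurst.wiremock.client.WireMock.*;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.*;
import static org.junit.jupiter.api.Assertions.*;

// Gateway integration test sketch: the external book service is replaced by a
// WireMock stub so that protocol behaviour (status codes, headers) can be verified.
class BookGatewayIntegrationTest {

    private static final WireMockServer bookService = new WireMockServer(8089); // hypothetical port

    @BeforeAll
    static void startStub() {
        bookService.start();
        bookService.stubFor(get(urlEqualTo("/books/1"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"isbn\":\"1\",\"name\":\"Sample\",\"author\":\"Anon\"}")));
    }

    @AfterAll
    static void stopStub() {
        bookService.stop();
    }

    @Test
    void returnsJsonWithExpectedHeaders() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8089/books/1"))
                .GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Protocol-related assertions: correct status code and content type.
        assertEquals(200, response.statusCode());
        assertEquals("application/json", response.headers().firstValue("Content-Type").orElse(""));
    }
}
```

A persistence integration test would follow the same shape, running the repository code against a real or embedded database to catch schema and mapping errors.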
In integration testing, different units of code are combined into a group and then tested together. Through integration testing, we are able to recognize any errors in the interactions of different units with each other and/or with the interface. In the case of small-scale software, integration testing can be done in a single step. However, for big software applications, integration testing is done in phases: modules are integrated into low-level subsystems, these are prepared for integration into larger subsystems, and eventually the complete software is assembled. With integration testing, you can check all aspects of the software, such as its performance, functionality, and reliability.
While unit testing consists of testing different units of the code to isolate any errors in the system, integration testing comprises testing the system in small groups made up of units, to check their interactions with each other, as well as the functionality of the whole code.
With integration testing, the following three kinds of strategy are commonly used:
- Big bang: With this strategy, all modules are integrated at once and the whole software system is tested in one go. This is a high-risk approach because, to deploy it successfully, complete and accurate documentation is required. Otherwise, even the slightest mistake could cause the whole system to fail.
- Bottom-up: With the bottom-up technique, testing starts at the bottom of the hierarchy with the low-level components. Building up from there, the testing continues to the top-level components, until all components are fully tested. Through this strategy, errors can be detected efficiently.
- Top-down: With the top-down strategy, top-level integrated modules are tested first, and subsystems are then tested individually. This way, any missing links can be detected easily.
We have previously discussed unit and integration testing for microservice-based applications. Unit testing alone doesn’t provide sufficient confidence in the behavior of the distributed system, so we need to use integration testing as well. We also need other approaches to testing microservices. Let’s discuss component-testing a microservice-based application in the next section.
Component testing
Once we have done unit and integration testing for all functions of the modules within a microservice, we need to test each microservice in isolation. A distributed system might be composed of a number of microservices. So, when it comes to testing a microservice in isolation, we have to create a mock of other microservices. Consider the following diagram on component-testing a microservice:

Component testing involves testing the interaction of a microservice with its dependencies, such as a database, all as one unit.
Component testing tests a component in separation from the larger system. A component is a well-defined and encapsulated part of a larger system that can be replaced independently. Testing such components in isolation provides many benefits, such as a separation of concerns among the components of the application, and it keeps the complexity of a microservice's interactions with external services testable. To achieve this, external services and external data stores must be replaceable with stub services and in-memory data stores, respectively.
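A minimal component-test sketch, assuming JUnit 5; the BookSummaryService, BookRepository, and ReviewClient types are hypothetical. The real database is replaced by an in-memory map and the external review microservice by a stub, so the whole component can be exercised in isolation:

```java
import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical ports of the microservice under test.
interface BookRepository { String findNameByIsbn(String isbn); }
interface ReviewClient { int reviewCount(String isbn); }

// The component (the microservice's logic) exercised as one unit.
class BookSummaryService {
    private final BookRepository repository;
    private final ReviewClient reviews;

    BookSummaryService(BookRepository repository, ReviewClient reviews) {
        this.repository = repository;
        this.reviews = reviews;
    }

    String summary(String isbn) {
        return repository.findNameByIsbn(isbn) + " (" + reviews.reviewCount(isbn) + " reviews)";
    }
}

class BookSummaryComponentTest {

    @Test
    void buildsSummaryFromStubbedCollaborators() {
        // In-memory data store standing in for the real database.
        Map<String, String> books = new HashMap<>();
        books.put("1234", "Microservices in Practice");
        BookRepository inMemoryRepository = books::get;

        // Stub service standing in for the external review microservice.
        ReviewClient stubReviews = isbn -> 42;

        BookSummaryService service = new BookSummaryService(inMemoryRepository, stubReviews);

        assertEquals("Microservices in Practice (42 reviews)", service.summary("1234"));
    }
}
```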
Contract testing
Contract testing is all about testing the contract between the consumer and producer services. Consider the following diagram showing contract testing:

As you can see in this diagram, each consumer has a particular contract with the producer. This contract describes the expected structure of the input and output data exchanged between the producer and consumer services. Each consumer service has a different contract with the producer service, as per its requirements. If services change over time, the contracts between them must still be satisfied.
Contract testing verifies that the inputs and outputs of service calls contain the required attributes, and can also check latency and throughput. Contract testing is not a component test; it doesn’t test the component deeply, but rather only tests the data structure, with the required attributes, for the input and output of service calls.
In the preceding diagram, suppose that a producer service exposes a resource with three attributes: ISBN, name, and author. This resource represents book information that is then consumed by three different consumer services.
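The following is a simplified consumer-side contract check, assuming JUnit 5 and Jackson are on the classpath; the book payload is hypothetical. It only asserts that the producer's resource carries the attributes this consumer depends on:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Consumer-side contract check: the consumer only cares that the producer's
// book resource exposes the isbn, name, and author attributes it relies on.
class BookContractTest {

    private final ObjectMapper mapper = new ObjectMapper();

    @Test
    void producerResponseContainsFieldsRequiredByThisConsumer() throws Exception {
        // In a real test this payload would come from the producer service
        // (or a recorded interaction); here it is inlined for illustration.
        String payload = "{\"isbn\":\"978-3-16-148410-0\",\"name\":\"Sample Book\",\"author\":\"A. Writer\"}";

        JsonNode book = mapper.readTree(payload);

        assertTrue(book.has("isbn"), "contract requires an isbn attribute");
        assertTrue(book.has("name"), "contract requires a name attribute");
        assertTrue(book.has("author"), "contract requires an author attribute");
    }
}
```

In practice, dedicated consumer-driven contract tools such as Pact can record these expectations on the consumer side and replay them against the producer.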
End-to-end testing
With end-to-end testing, an application is tested to check whether or not it has a complete flow from the beginning to the end. Through end-to-end testing, any system dependencies are weeded out and integrity between different components is maintained. Here, the intention is to verify that the system as a whole meets business goals, irrespective of the component architecture in use.
In a microservice-based application, end-to-end testing provides value by covering the gaps between the services.
End-to-end testing checks all the critical functionalities for any bugs or anomalies, such as communication within or outside of the system, the application’s interface, the database, network, and other components. There are two different ways of performing end-to-end testing:
- Horizontal end-to-end testing: This is the more common method for implementing end-to-end testing. In horizontal testing, the test adopts a user’s perspective and then navigates through the whole system. If any anomalies or bugs are found, then they are reported; otherwise, the system works exactly as it should.
- Vertical end-to-end testing: With this method, the testing is done in a hierarchical order. Here, all the components of the system are checked individually and thoroughly to ensure the quality of the complete code. This testing process is not as popular as horizontal end-to-end testing, as it is mostly used for complex computing program testing.
To understand the different types of end-to-end testing on an application, take the example of an e-commerce web application. With a horizontal end-to-end test, the process will be to sign in, check the profile, use the search bar, add an item to a cart, save any item to be bought later, check out, add payment information, confirm the purchase, and sign out. However, a vertical end-to-end testing method is more likely to be used by a program without any user interface, or perhaps a more complex application than that of a simple e-commerce website. Let’s consider the following diagram showing the end-to-end testing of a microservice-based application:

As you can see in the preceding diagram, each service communicates with other microservices with some contract of input and output data structure.
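A horizontal end-to-end test can be sketched as a single user journey driven over HTTP against a deployed test environment, assuming JUnit 5 is on the classpath; the base URL, endpoint paths, and payloads are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Horizontal end-to-end sketch: walks one user journey across the deployed system.
class CheckoutJourneyEndToEndTest {

    private static final String BASE_URL = "http://shop.example.com"; // hypothetical test environment
    private final HttpClient client = HttpClient.newHttpClient();

    private HttpResponse<String> post(String path, String body) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + path))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString());
    }

    @Test
    void userCanSignInAddToCartAndCheckOut() throws Exception {
        // Each step crosses several microservices (auth, catalog, cart, payment).
        assertEquals(200, post("/login", "{\"user\":\"alice\",\"password\":\"secret\"}").statusCode());
        assertEquals(200, post("/cart/items", "{\"isbn\":\"1234\",\"quantity\":1}").statusCode());
        assertEquals(200, post("/checkout", "{\"payment\":\"card\"}").statusCode());
    }
}
```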
UI/functional testing
In functional testing, each and every aspect of code is tested to make sure that it is working correctly. In simple terms, functional testing considers the system’s requirements and checks whether the system is fulfilling them. Anything that is done differently, or not done at all, will be listed as an anomaly. Consequently, functional testing is essential for looking at code execution and making sure that it is done right.
When performing functional testing, the process is as follows:
- First, data is input
- Next, it is determined what the output is supposed to be
- The test is then run with the relevant input
- Finally, the output results are compared with the expected results
In the end, if the results match, then it is clear that the system is working perfectly, but if they are different, then this means that bugs have been found.
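A minimal sketch of that four-step process, assuming JUnit 5; the ShippingPricer function and its requirement are hypothetical:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical function under test: maps a shipping weight in grams to a price band.
class ShippingPricer {
    String band(int grams) {
        if (grams <= 500) return "SMALL";
        if (grams <= 2000) return "MEDIUM";
        return "LARGE";
    }
}

class ShippingPricerFunctionalTest {

    @Test
    void mediumParcelFallsIntoMediumBand() {
        // 1. Input data for the scenario.
        int weightInGrams = 1200;

        // 2. Expected output, taken from the requirement.
        String expectedBand = "MEDIUM";

        // 3. Run the test with the relevant input.
        String actualBand = new ShippingPricer().band(weightInGrams);

        // 4. Compare the output with the expected result.
        assertEquals(expectedBand, actualBand);
    }
}
```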
In UI testing, the system is checked for bugs and anomalies with the help of a graphical user interface (GUI). UI testing is done in a hierarchical order, moving from frontend issues to backend issues, and checking everything along the way.
UI testing can be done accurately with the help of the following approaches:
- Manual-based testing: This approach is based on the tester's knowledge of the domain and the application. Unless the tester knows what to test, it cannot be executed properly.
- Capture and replay: This method depends on having a user go through the system while all activity is captured. All of these activities are then replayed to make sure that the user did not face any anomalies in the system.
- Model-based testing: In this method, all the events of the GUI are executed at least once.
References:
Hands-On Microservices – Monitoring and Testing, by Dinesh Rajput