In my previous article, I delved into the concept of contract testing, discussing its purpose, application, and implementation. In this piece, I will shift my focus to constructing a custom solution, examining the limitations of available tools and exploring potential solutions.
Why might you opt for a custom solution?
In this article, I will briefly outline the limitations and shortcomings of existing tools such as Pact and Spring Cloud Contract.
In general, both tools:
- do not utilize bi-directional flows in a producer/consumer-driven approach. While this is possible with Spring Cloud Contract, it is much more difficult to achieve with Pact.
- have their own language for writing tests. Spring Cloud Contract can read Pact contracts, but not vice versa. This can be achieved through plugins or containerisation, but it adds an additional layer of complexity.
- do not automatically trigger builds when changes are made to the contract.
Specifically, for Pact:
- a self-hosted broker is required, which must be accessible to both parties. This can pose a security issue when working with external parties.
- only a consumer-driven approach is implemented.
- there is no official support for asynchronous messages in Pact, only a community-based solution.
For Spring Cloud Contract:
- an artifact repository (such as Artifactory) is required to store contracts in JAR format. This also applies to sharing with external parties.
- it is producer-driven by default.
- it utilizes a wrapped WireMock instance under the hood.
In addition to addressing these limitations, a custom solution can also be designed to tackle future bottlenecks by:
- storing contracts in Wiremock representation for easy migration and transition to new tools.
- using a single, common format to generalize contracts for all systems and make them easy to work with for all teams.
- using custom stages and triggers in pipelines based on changes.
Objective
The aim of this project is to develop a highly scalable, two-way method that allows for a smooth transition to a different tool in the future.
Overview of the Structure
A Detailed Explanation of the New Method
As depicted in the image above, there is a shared repository that manages the contracts between users. Ideally, each producer should have their own shared repository. This prevents the creation of a monolith and ensures a clear division of responsibilities.
Let’s examine the new method from the perspective of the consumer.
The consumer must make code modifications to receive messages from the producer. The next step is to establish a new contract in a shared repository. The shared repository will then prompt the relevant producer to verify the contract, and if the pipeline is successful, we are ready to proceed.
The same process applies to the producer, with the only difference being that after making changes to a contract, it will activate pipelines for all consumers.
Exploring Further with Code Examples
As with any custom solution, it is helpful to have some guidelines to follow when setting up your first functional project and seeing it in action.
The folder structure should resemble the following:
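As a rough illustration, a per-producer contracts repository could be laid out like this (the producer, consumer, and file names are made up for the example):

```text
shared-contracts-order-service/      # one shared repository per producer
├── billing-consumer/                # one folder per consumer team
│   ├── get_order_200.json
│   └── received_order_event.json
├── shipping-consumer/
│   └── get_order_200.json
├── templates/
│   ├── contract-trigger.yml         # shared pipeline templates
│   └── fetch-contracts.yml
└── .gitlab-ci.yml
```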
This allows for the customization of triggers and stages within the pipeline for each team and consumer.
The contents of a 200-response contract file may appear as straightforward as this
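As a rough sketch, such a file is just a plain WireMock stub mapping; the endpoint and fields below are made-up examples rather than the original project's:

```json
{
  "request": {
    "method": "GET",
    "url": "/api/orders/1"
  },
  "response": {
    "status": 200,
    "headers": {
      "Content-Type": "application/json"
    },
    "jsonBody": {
      "id": 1,
      "status": "CREATED"
    }
  }
}
```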
A contract for a received message can be stored in the same WireMock representation and looks just as straightforward.
To differentiate contracts, a metadata object can be added.
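For example, a hypothetical received-message contract might carry the message payload in the response body and use metadata to mark it as a message rather than an HTTP interaction (the field names here are assumptions):

```json
{
  "request": {
    "method": "POST",
    "url": "/order-events"
  },
  "response": {
    "status": 200,
    "jsonBody": {
      "orderId": 1,
      "eventType": "ORDER_CREATED"
    }
  },
  "metadata": {
    "type": "message",
    "destination": "order-events"
  }
}
```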
With asynchronous messages, all that is required is to retrieve the appropriate contract and test it with your own tooling, without the need for any additional setup.
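For instance, a consumer-side check could be as small as the following JUnit 5 sketch, which assumes the contract was fetched to contracts/billing-consumer/received_order_event.json and that OrderCreatedEvent is a placeholder for the consumer's own message model:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.jupiter.api.Test;

import java.nio.file.Files;
import java.nio.file.Path;

import static org.junit.jupiter.api.Assertions.assertEquals;

class OrderCreatedMessageContractTest {

    private final ObjectMapper mapper = new ObjectMapper();

    @Test
    void payloadFromContractMapsOntoOurModel() throws Exception {
        // Load the shared contract and pull out the message payload it promises.
        JsonNode contract = mapper.readTree(
                Files.readString(Path.of("contracts/billing-consumer/received_order_event.json")));
        JsonNode payload = contract.at("/response/jsonBody");

        // The payload defined in the contract must still map onto our own
        // (hypothetical) event class.
        OrderCreatedEvent event = mapper.treeToValue(payload, OrderCreatedEvent.class);
        assertEquals("ORDER_CREATED", event.eventType());
    }
}
```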
On the producer side, automation is essential: the producer will have multiple consumers, and creating a test for each one by hand would be time-consuming and repetitive.
Let’s examine a simple test that can take all consumer contracts and run them all at once without requiring any code changes on the producer side.
As shown in the image above, it cycles through all consumer contracts and performs basic validation, which can be further refined later. The same approach can be replicated in any language of your choosing because it simply retrieves the response from the local stub server.
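A simplified sketch of such a runner might look like the following, assuming JUnit 5, WireMock, and Jackson, with the consumer contracts checked out under a hypothetical contracts/ folder and OrderResponse standing in for the producer's real response model:

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import com.github.tomakehurst.wiremock.stubbing.StubMapping;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.DynamicTest;
import org.junit.jupiter.api.TestFactory;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ConsumerContractsTest {

    // Hypothetical location where the pipeline checks out all consumer contracts.
    private static final Path CONTRACTS = Path.of("contracts");
    private static final ObjectMapper MAPPER = new ObjectMapper()
            // Fail if a consumer expects a field the producer's model no longer has.
            .configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, true);

    private static WireMockServer server;
    private static HttpClient client;

    @BeforeAll
    static void startStubServer() {
        server = new WireMockServer(WireMockConfiguration.options().dynamicPort());
        server.start();
        client = HttpClient.newHttpClient();
    }

    @AfterAll
    static void stopStubServer() {
        server.stop();
    }

    @TestFactory
    Stream<DynamicTest> verifyEveryConsumerContract() throws Exception {
        // One dynamic test per contract file, so a failing consumer is easy to spot.
        return Files.walk(CONTRACTS)
                .filter(path -> path.toString().endsWith(".json"))
                .map(path -> DynamicTest.dynamicTest(path.toString(), () -> verify(path)));
    }

    private void verify(Path contractFile) throws Exception {
        String json = Files.readString(contractFile);
        JsonNode contract = MAPPER.readTree(json);

        // Message contracts (marked via the metadata object) are verified elsewhere.
        if ("message".equals(contract.at("/metadata/type").asText())) {
            return;
        }

        // Register the consumer's contract on the local stub server and replay
        // the request it describes.
        server.addStubMapping(StubMapping.buildFrom(json));
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(server.baseUrl() + contract.at("/request/url").asText()))
                .method(contract.at("/request/method").asText(), HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Basic validation: the expected status code, plus a body that still maps
        // onto the producer's current model (OrderResponse is a placeholder DTO).
        assertEquals(contract.at("/response/status").asInt(), response.statusCode());
        assertDoesNotThrow(() -> MAPPER.readValue(response.body(), OrderResponse.class));
    }
}
```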
Integration with Pipeline
We’ll begin with a shared repository that houses all of the producer’s contracts.
Establish a common template that everyone can utilize.
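A minimal sketch of such a template, kept inside the shared contracts repository (the file name, stage, and strategy are assumptions):

```yaml
# templates/contract-trigger.yml
.contract-trigger:
  stage: trigger
  trigger:
    # Fail the contracts pipeline if the downstream producer/consumer build fails.
    strategy: depend
```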
This is how the .gitlab-ci.yml file will appear when using the above template for each producer/consumer in this Git repository.
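Assuming the folder layout sketched earlier and hypothetical project paths, the shared repository's pipeline could look roughly like this:

```yaml
include:
  - local: templates/contract-trigger.yml

stages:
  - trigger

verify-order-service:                   # producer: verify whenever any contract changes
  extends: .contract-trigger
  rules:
    - changes:
        - "**/*.json"
  trigger:
    project: my-group/order-service     # hypothetical producer project

verify-billing-consumer:                # consumer: rebuild when its own contracts change
  extends: .contract-trigger
  rules:
    - changes:
        - billing-consumer/**/*
  trigger:
    project: my-group/billing-consumer  # hypothetical consumer project
```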
Now, let’s develop a template for producer/consumer projects to streamline the integration process.
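One way to write that template, as a sketch (the repository URL, image, and variable names are placeholders):

```yaml
# templates/fetch-contracts.yml
.fetch-contracts:
  stage: contracts
  image: alpine/git:latest
  script:
    # Partial, sparse clone so we only download the folders we actually need.
    - git clone --depth 1 --filter=blob:none --sparse "$CONTRACTS_REPO_URL" contracts
    - cd contracts
    # Consumers set this to their own folder; producers list all consumer folders.
    - git sparse-checkout set $CONTRACTS_FOLDERS
    - cd ..
  artifacts:
    paths:
      - contracts/
```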
As you may have noticed, I am utilizing sparse-checkout, which enables me to retrieve only the necessary files. From a consumer standpoint, you only need files that have been added or modified, and from a producer standpoint, you need all consumer files. This reduces the amount of data that needs to be fetched and allows for contract testing against specific contracts.
To complete the build pipeline on the producer/consumer side, you must incorporate the template for retrieving the contracts discussed earlier and add it as a dependency before the contract tests begin. To further separate the processes, you can also split the regular tests and the contract tests into separate stages and run them concurrently.
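Put together, a producer's pipeline might include the template and wire up the dependency like this (the project path, folder names, and Gradle task are assumptions):

```yaml
include:
  - project: my-group/shared-contracts-order-service   # hypothetical shared repository
    file: templates/fetch-contracts.yml

stages:
  - contracts
  - test

fetch-contracts:
  extends: .fetch-contracts
  variables:
    CONTRACTS_REPO_URL: https://gitlab.example.com/my-group/shared-contracts-order-service.git
    CONTRACTS_FOLDERS: "billing-consumer shipping-consumer"   # a consumer would list only its own folder

contract-tests:
  stage: test
  needs: ["fetch-contracts"]        # contracts must be fetched before testing starts
  script:
    - ./gradlew contractTest        # hypothetical task running the runner shown earlier
```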
Outcome using Pipelines
In the event of a contract modification on the consumer side
And this is when the contract was altered on the producer side
Using pipelines, you can establish an alert mechanism to notify a designated team in the event of a pipeline failure.
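For example, a small job in GitLab's built-in .post stage can post to a chat webhook whenever something failed earlier in the pipeline (the webhook variable is a placeholder, and the runner image is assumed to have curl available):

```yaml
notify-team:
  stage: .post
  rules:
    - when: on_failure          # only runs if an earlier job in the pipeline failed
  script:
    - >
      curl -X POST -H 'Content-Type: application/json'
      --data "{\"text\": \"Contract pipeline failed in $CI_PROJECT_NAME\"}"
      "$TEAM_WEBHOOK_URL"
```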
Conclusion
This article outlines a highly scalable, custom solution that employs a bi-directional approach to contract testing. The solution can be easily expanded by simply incorporating additional stages into the Git pipeline, without increasing complexity.