
Renan

Dependency guards

Since Log4j, the tech world has talked about little else than how to protect your code against your dependencies. There are multiple ways to do so, and multiple approaches for each scenario... This article is another proposal. Briefly:

Test your dependencies' signatures/behavior using test packages.

For package developers

We are used to building the source code repositories of our packages with multiple quality assertions, unit tests, semver... But even with all these good practices, we are failing to give our consumers a feeling of safety. They simply CANNOT trust our packages blindly; they need to perform additional validations before using or updating them, to make sure that the specific signatures they rely on are not broken, or even that the behavior under certain conditions remains the same.

That sounds like rework, since we have already done most of it in our repository, and that's exactly the point: our quality assertions should be shipped as separate packages (and maybe in separate repositories?) to allow reuse. Let's bring some points to the discussion:

  1. A test package simulates the consumer code that uses our dependency package (as a peerDependency?) and performs a quality assertion (unit test, e2e test, security assertion, quality scan...), as sketched after this list;
  2. Once packaged and published, the test package is immutable (the registry does not allow overwriting a package without bumping the version number, and package managers can verify the test package's signature/checksum);
  3. We can and should use the test package to evaluate our own dependency;
  4. Our consumers can also use the same test package to evaluate our package, write their own test packages to perform additional assertions that ours doesn't, and even publish them for other consumers to use;
  5. We can have one test package for each public method/behavior, allowing our consumers to compose their own pipeline of tests that evaluates new versions of the dependency based on the signatures they actually consume;
  6. Consumers can't evaluate our entire codebase, but they can trust the tests, since they exercise the exact same functions they use in their own code;
  7. These test packages act like 'dependency guards' for our consumers; they must be consumed only with exact versions (without ^ or ~ ranges), since consumers must know exactly what is being evaluated;
  8. Consumers must set up a new project that installs the test packages plus a script to run them all, creating a pipeline of 'dependency guards' to run before they use the dependency;
  9. The test package will cover the dependency until its signature changes; then it is time to upgrade the test packages for the new signature. Consumers will notice that the old version of the test package in their pipeline fails when it evaluates the new version of the dependency with the changed signature, and they then have to choose between upgrading their pipeline with the new test package and adapting their own codebase, or keeping the previous version. It's not only semver that will flag a breaking change: our test package will operate like a guard to alert consumers and prevent broken code.
  10. Each consumer will have their own test projects acting as pipelines that execute test packages and evaluate the versions of a dependency. I think these projects should run in CI (maybe triggered each time a new version of the dependency appears), which on success updates the dependency version in the consumer's local registry (Verdaccio, Artifactory...) from which the consumer's codebase retrieves the dependency.
  11. The test package must not use untested additional dependencies that could affect the test results; the idea is one public method per test package, or as few as possible, to keep the test dry, clean, and fast (but keep it sane). If a test package requires additional dependencies to perform the test, I guess they should be peerDependencies, giving the final consumer the power to choose the exact version of each (which they may have already evaluated with their respective test packages).
  12. The coverage metrics for the dependency will vary with the number of test packages included and the dependency version in use, but now these metrics offer valuable information. If you use a single method of a dependency in your entire codebase and only include the test package for that signature, your coverage will tend to be low, since a lot of the dependency's code remains untouched... So you have to ask yourself: should I keep this dependency only for this one method, or use another one that is more specialized and fits the need exactly?
  13. For package developers, the way the test packages are combined and used will reflect the community's wishes and offer opportunities to better understand how the code is used; sometimes it is better to split a functionality into a separate package than to inflate the current package with a rarely used feature.
  14. I recommend evaluating dependencies and new versions in an isolated environment (Docker...) to prevent malicious code from running in sensitive environments.
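
To make the idea concrete, here is a minimal sketch of what such a test package could look like, assuming an npm-style setup. The guard package name (@guards/lodash-chunk) and its layout are hypothetical; lodash is only used as a familiar example, and the guard would declare it as a peerDependency so the consumer controls exactly which version gets evaluated.

```js
// @guards/lodash-chunk (hypothetical) -- index.js
// A "dependency guard": it asserts only the signature/behavior the consumer
// actually relies on, against whatever lodash version the consumer installed
// (lodash is a peerDependency of the guard, not pinned by it).
const assert = require('assert');
const _ = require('lodash');

// Signature check: the method we depend on must still exist.
assert.strictEqual(typeof _.chunk, 'function', 'lodash.chunk must be a function');

// Behavior check: the exact behavior our code relies on must not change.
assert.deepStrictEqual(_.chunk([1, 2, 3, 4], 2), [[1, 2], [3, 4]]);
assert.deepStrictEqual(_.chunk([1, 2, 3], 2), [[1, 2], [3]]);

console.log('lodash.chunk guard: OK');
```

A guard like this is tiny on purpose: one public signature and no extra dependencies, so a failure points directly at the behavior that changed.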

Note: It's time for QA engineers to shine by creating multiple test packages to evaluate the behavior of every existing package.

Hypothetical scenario timeline

  1. We need to bring this 'new lib' into our codebase;
  2. First, we build a test project to set up the test packages we will use to evaluate our new dependency's behavior (see the sketch after this timeline);
  3. We analyze each test package (exact versions) to understand the evaluated behaviors and identify those that make sense to use (the lib owner can even have a recipe of recommendations);
  4. Once the dependency passes the selected tests, it's added to our local registry and can be used in our codebase (a good opportunity to wrap this external dependency in an internal package if needed);
  5. The codebase is updated to use the new dependency's approved/tested signatures;
  6. A new patch/minor/major version of the lib is launched;
  7. Our CI engine is triggered by a feed announcing the new version and performs the same evaluation, updating our local registry only if the dependency passes the tests;
  8. If it fails, the team should check which tests are broken, look for new test packages matching the new version of the dependency, and run from step 3 again.
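
Below is a minimal sketch of the consumer-side guard project from step 2, assuming npm and the hypothetical @guards/lodash-chunk package above. Note the exact versions (no ^ or ~), so the pipeline always knows precisely which dependency version and which guard version are being evaluated:

```json
{
  "name": "my-app-dependency-guards",
  "private": true,
  "scripts": {
    "guards": "node node_modules/@guards/lodash-chunk/index.js"
  },
  "dependencies": {
    "lodash": "4.17.21",
    "@guards/lodash-chunk": "1.0.0"
  }
}
```

In CI (step 7), running `npm run guards` (chaining one entry per guard) is what gates the promotion of a new lodash version to the local registry.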

Beyond unit tests, we can build test packages that evaluate specific behaviors, scan packages for vulnerabilities and/or malicious code, check code quality, run mutation tests... Consumers can share their test packages and build a test pipeline using exactly the test packages that fit their needs.
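
As one example of a guard beyond unit tests, a test package could simply wrap a vulnerability scan. This is a sketch only: it assumes npm's real `npm audit` command is available in the test project, while the guard script itself is hypothetical.

```js
// Hypothetical security guard: fail the pipeline if the installed
// dependency tree contains advisories of high severity or worse.
const { execSync } = require('child_process');

try {
  // npm audit exits with a non-zero code when vulnerabilities at or above
  // the given level are found in the installed tree.
  execSync('npm audit --audit-level=high', { stdio: 'inherit' });
  console.log('security guard: OK');
} catch (err) {
  console.error('security guard failed: high severity vulnerabilities found');
  process.exit(1);
}
```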

Well, I'll keep thinking about this approach, but I think it's time to get some feedback. Please let me know what you think about it. Thank you for your time.

Top comments (1)

Luke Harold Miles

I think having a separate repo @checks/lodash that you install with lodash makes a lot of sense. This proposal seems like a pretty good idea to me.