In the old times of monolithic Ansible, ensuring that the automation content we use is of high quality was next to impossible. We could either use what the Ansible package delivered or resort to workarounds such as distributing Ansible modules in roles (which we consider an ugly hack around Ansible's limitations).
With the introduction of Ansible Collections, we gained a lot more control over the content we use in our Ansible playbooks. We can install the core Ansible engine and then equip it with just the modules, plugins, and roles we need. But, as always, with great power comes great responsibility.
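As a quick sketch of this workflow (the collection name, community.general, is just a common example from Ansible Galaxy, not a recommendation):

```shell
# Install the bare Ansible engine without any bundled collections.
python3 -m pip install ansible-core

# Pull in only the content we actually need from Ansible Galaxy.
ansible-galaxy collection install community.general
```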
With the ability to install Ansible Collections, we are solely responsible for the quality of the content we use to build our Ansible playbooks. But how can we separate high-quality content from the rest? Here are a few things we do when evaluating Ansible Collections.
Check documentation. Once we find a potential candidate on Ansible Galaxy, we first check the documentation. It should contain at least a quick start tutorial with installation instructions and reference documentation for modules and roles.
Assess playbook readability. Because we want our playbooks to serve as a human-readable description of the desired state, modules from the Ansible Collection under evaluation should have a consistent user interface and descriptive parameter names.
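For example, a well-designed module reads almost like prose. The task below uses the built-in user module; its descriptive parameter names make the desired state obvious at a glance, which is the bar we hold third-party collections to as well:

```yaml
- name: Ensure the deployment user exists
  ansible.builtin.user:
    name: deployer
    state: present
    groups: docker
    append: true
```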
Test basic functionality. Before we start using third-party content in production, we always check the basic functionality each Ansible module should have. Enforcing state instead of executing actions (running the same task twice in a row should be safe, and the second run should ideally report no changes) and supporting check mode are the bare minimum we expect.
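One quick way to probe both properties (site.yml is a placeholder for any playbook that exercises the collection's modules):

```shell
# The first run may report changes; a second identical run
# against an unchanged system should report changed=0.
ansible-playbook site.yml
ansible-playbook site.yml

# Check mode must predict changes without touching the managed hosts.
ansible-playbook --check site.yml
```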
Peek at the tests. This last check is a bit harder to perform if we have never developed an Ansible module or role before. But even then, checking the CI/CD configuration files should give us a general idea of the test suite's robustness. We are looking for integration and sanity tests.
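Assuming the collection follows the standard ansible-test layout, the commands below (run from the collection's root directory) are a good sign to find in its CI configuration:

```shell
# Static checks: documentation, imports, and code-style validation.
ansible-test sanity --docker

# End-to-end tests that exercise the modules against real targets.
ansible-test integration --docker
```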