We've all been there, from mass emails to spamming "@here" in Slack. When a change in our tools requires another team to act, the process can become a frustrating task.
This is just one of many day-to-day issues that big companies face.
If you have created tools or integrations that other teams use, you have probably had to make an update or a change, and sometimes these changes cannot wait. The usual play is to send a widely distributed email asking everyone to update your dependency.
At best, the request goes to their backlog; at worst, it is ignored.
At Autodesk, we understood this process should be automated.
Our story started with a simple package named "ProjectPageLayout".
All Autodesk ACS teams (20+) used this package, and whenever we wanted to make a change, we had to ask every team to update it to keep the experience consistent across our application.
When we ran a retrospective on this process, an idea came up: why can't we just test their project and make sure they use the latest version? And if they don't, we can enforce it.
This is how the idea for "ProjectLinter" came to life.
Our inspiration is ESLint. Where ESLint statically analyzes your code, we statically analyze your whole project to find problems quickly. It fits all project types, and you can run it as part of your continuous integration pipeline and send notifications asynchronously.
Sometimes it can also automatically fix your problems.
Every team can create a rule (like in ESLint) and enforce it in other projects. To keep things simple for project owners, they don't need to know each rule's specifics: once a rule is created, it is deployed and automatically executed alongside the other rules.
In other words, the linter is seamless for its users and simple for the enforcing team to write rules for.
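To make the idea concrete, here is a minimal sketch of what such a rule could look like. The interfaces and names below are illustrative assumptions, not Autodesk's actual ProjectLinter API: a rule receives the project's manifest and emits diagnostics when a dependency is out of date.

```typescript
// Hypothetical ProjectLinter-style rule (names are illustrative, not the
// real API). A rule inspects the project manifest and reports a diagnostic
// when a watched dependency is not on the expected version.

interface Diagnostic {
  rule: string;
  severity: "warn" | "error";
  message: string;
}

interface ProjectManifest {
  dependencies: Record<string, string>;
}

// A rule enforcing that "ProjectPageLayout" matches the latest version.
function checkPageLayoutVersion(
  manifest: ProjectManifest,
  latest: string
): Diagnostic[] {
  const current = manifest.dependencies["ProjectPageLayout"];
  if (current === undefined) {
    return []; // project doesn't use the package; nothing to enforce
  }
  if (current !== latest) {
    return [
      {
        rule: "page-layout-latest",
        severity: "warn",
        message: `ProjectPageLayout is ${current}, expected ${latest}`,
      },
    ];
  }
  return [];
}

// Example: running the rule against a stale manifest yields one diagnostic.
const diagnostics = checkPageLayoutVersion(
  { dependencies: { ProjectPageLayout: "1.2.0" } },
  "2.0.0"
);
console.log(diagnostics);
```

A real implementation would resolve "latest" from the registry and compare semver ranges rather than exact strings, but the shape is the same: each team ships a small function, and the linter runs them all against every project.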
But wait, why limit ourselves to updating teams' dependencies and business-logic rules when we can also ensure their projects meet the industry's best standards and quality?
We also created rules that ensure performance, minimal bundle sizes, high test coverage, and minimal package duplication, and that scan for security vulnerabilities.
Another massive win is that this process gives us a way to measure our infrastructure tools' success. We can define KPIs based on rules, for example "Performance" (Web Vitals).
We can track a rule over time and therefore understand whether there is a widespread problem across teams that requires infrastructural assistance, and whether that assistance is working.
Then another problem was raised: some teams maintain codebases that do not change frequently, and if we wait for a code change to trigger the linter, it could be too late.
The solution is to run the linter not only when the codebase changes but also on a schedule, for example daily and asynchronously.
Another issue is that, with all due respect to the importance of the rules, you cannot break a team's process out of nowhere.
Therefore we created a gradual diagnostics system.
You can configure a rule to merely warn teams and send notifications at first, and only after, say, two months with no response, to take more drastic measures.
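One way to model this escalation, sketched below with our own illustrative naming (not the actual ProjectLinter implementation): each rule carries a policy with an initial severity, an escalated severity, and a grace period, and the effective severity depends on how long the finding has gone unaddressed.

```typescript
// Illustrative sketch of a gradual diagnostics policy: a rule starts as a
// warning and escalates to an error once it has gone unaddressed for a
// grace period. Names and types are assumptions for this example.

type Severity = "off" | "warn" | "error";

interface EscalationPolicy {
  initial: Severity;
  escalated: Severity;
  graceDays: number; // days with no response before escalating
}

// Compute the effective severity from when the rule first flagged the project.
function effectiveSeverity(
  policy: EscalationPolicy,
  firstFlagged: Date,
  now: Date
): Severity {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysOpen = (now.getTime() - firstFlagged.getTime()) / msPerDay;
  return daysOpen >= policy.graceDays ? policy.escalated : policy.initial;
}

// A rule that warns for two months, then starts failing the build.
const policy: EscalationPolicy = {
  initial: "warn",
  escalated: "error",
  graceDays: 60,
};

console.log(effectiveSeverity(policy, new Date("2022-01-01"), new Date("2022-01-15"))); // "warn"
console.log(effectiveSeverity(policy, new Date("2022-01-01"), new Date("2022-04-01"))); // "error"
```

The key design point is that escalation is declarative: the enforcing team states the policy once, and the linter applies it uniformly instead of anyone manually chasing teams.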
Let me close by saying I genuinely believe this is another step toward a simpler future with more automation. The beautiful thing is that my team and I had the opportunity to work on an idea we are passionate about and love during hack week at Autodesk TLV.