When we want to improve the code quality of a project, we evaluate the code to determine whether it is well designed, well implemented, and thoroughly tested. After analysing it with the existing tools, or even through a review by an expert, we get a report of what needs to be improved in the code to ensure the delivery of a release with a minimum of issues.
But wait, where does the code come from? It comes from a development team in which each member has their own responsibility: an architect, designers, developers, and testers. What matters even more than the code is the skills of the team involved in the project; don't forget that a bad architect can ruin your project even if you have excellent developers.
During the development process, try to elevate the development team's skills by regularly detecting their weaknesses and correcting them as soon as possible, so that the same mistakes are not repeated in their next development tasks.
There are many tools that can help developers improve their code quality, but many of them focus on the code rather than on developer skills. Scanyp is a highly customisable solution that you can adapt to your needs and that focuses more on team skills. And it's free for projects with fewer than 500K lines of code.
Scanyp offers a comprehensive evaluation of the team's development proficiency across various categories, encompassing code implementation, design, unit testing, and code documentation, which effectively contributes to code quality improvement.
After analysing the project with Scanyp, the dashboard gives us the whole picture of the skill scores and, more interestingly, their evolution, so we can check whether the developers are improving their skills or not:
Then, from the team skills page, we get the details per category:
Code implementation score:
This score is calculated from the following scores:
- The maintainability score, which gives an approximation of how maintainable the project is.
- The naming score, which is calculated from rules that detect names violating the most common naming conventions of the programming language concerned. Of course, you can change these rules from the admin area to match your chosen naming conventions.
- The clones score, which is calculated from the number of clones (duplicated code blocks) detected in the source code.
- The code smells score, which is calculated from the code smells detected in the code. As with many parameters in Scanyp, we can also change the code smell parameters from the admin area to match our expectations. For example, by default Scanyp considers that a type is big if its LOC > 500, but you can change this threshold from the script of the corresponding query. These rules are grouped in the code smells group (see the sketch after this list).
- Compliance with the implementation rules, which is calculated from the violated code implementation rules. You can also modify the default implementation rules as you wish.
Note that all the numbers are clickable, so you can see exactly where the problems are in the code.
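To make the big-type threshold concrete, here is a minimal sketch, assuming a plain Java project with one top-level type per file under src/. It is not Scanyp's query language; the class name, the layout, and the naive LOC counting are assumptions made only to illustrate the kind of check behind this code smell:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

// Hypothetical sketch, not Scanyp's query language: it only illustrates the
// "big type" threshold (LOC > 500) mentioned above.
public class BigTypeDetector {

    static final int LOC_THRESHOLD = 500; // default threshold cited above

    public static void main(String[] args) throws IOException {
        // Assumes one top-level type per .java file under src/.
        try (Stream<Path> paths = Files.walk(Path.of("src"))) {
            paths.filter(p -> p.toString().endsWith(".java"))
                 .forEach(BigTypeDetector::report);
        }
    }

    private static void report(Path file) {
        try {
            List<String> lines = Files.readAllLines(file);
            // Naive LOC: non-blank lines; a real analyser would also ignore comments.
            long loc = lines.stream().filter(l -> !l.isBlank()).count();
            if (loc > LOC_THRESHOLD) {
                System.out.printf("Big type smell: %s (%d LOC)%n", file, loc);
            }
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}
```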
The design score:
This score gives us a whole picture of the design of your project:
This score is calculated from the following scores:
- The coupling score, which is calculated from the most coupled types and methods in the code base. If many types and methods are highly coupled, this score becomes low.
- The cohesion score, which is calculated from the non-cohesive types found in the project. If many types have low cohesion, the score becomes low.
- The design smells score, which is calculated from the design smells detected. As with the code implementation smells, we can easily change the design smell rules.
- Rules compliance, which is calculated from the violated design rules. You can also modify the default design rules as you wish (a small coupling/cohesion illustration follows this list).
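To illustrate what drags the cohesion and coupling scores down, here is a small hypothetical Java example (every name in it is invented for this sketch): a class that mixes unrelated responsibilities has low cohesion, and every caller that uses it ends up coupled to both concerns at once.

```java
// Hypothetical illustration only; all names here are invented for this sketch.

// Low cohesion: one class mixes two unrelated responsibilities
// (price calculation and HTML formatting), so its methods share no state,
// and every caller becomes coupled to both concerns at once.
class OrderUtility {
    double totalWithTax(double net, double taxRate) {
        return net * (1 + taxRate);
    }

    String asHtmlRow(String product, double price) {
        return "<tr><td>" + product + "</td><td>" + price + "</td></tr>";
    }
}

// More cohesive split: each class keeps a single responsibility and its own state,
// which is what cohesion-oriented design rules typically reward, and callers now
// depend only on the concern they actually need.
class PriceCalculator {
    private final double taxRate;

    PriceCalculator(double taxRate) {
        this.taxRate = taxRate;
    }

    double totalWithTax(double net) {
        return net * (1 + taxRate);
    }
}

class HtmlRowFormatter {
    String asHtmlRow(String product, double price) {
        return "<tr><td>" + product + "</td><td>" + price + "</td></tr>";
    }
}
```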
The bug-free score:
This score gives us a whole picture of the issues detected in your project:
This score is calculated from the number of issues detected, but the weight of an issue is proportional to its severity. This means that you can get a low score if only 20 critical issues are detected, yet still a good score even with 100 low-severity issues.
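Since the exact formula is not shown here, the following is only a minimal sketch of how such severity weighting could work; the class name, the weights, and the scoring function are assumptions chosen to reproduce the example above, not Scanyp's actual computation:

```java
import java.util.Map;

// Hypothetical sketch of severity weighting; the weights below are invented
// only to match the "20 critical vs 100 low issues" example in the article.
public class BugFreeScoreSketch {

    // Assumed weights: a critical issue costs far more than a low-severity one.
    private static final Map<String, Double> WEIGHTS =
            Map.of("critical", 4.0, "high", 2.0, "medium", 0.5, "low", 0.2);

    // Score from 0 to 100: the weighted issue count is subtracted from 100.
    static double score(Map<String, Integer> issueCounts) {
        double penalty = issueCounts.entrySet().stream()
                .mapToDouble(e -> WEIGHTS.getOrDefault(e.getKey(), 0.2) * e.getValue())
                .sum();
        return Math.max(0, 100 - penalty);
    }

    public static void main(String[] args) {
        System.out.println(score(Map.of("critical", 20))); // 20.0 -> low score
        System.out.println(score(Map.of("low", 100)));     // 80.0 -> still a good score
    }
}
```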
The unit testing score:
This score represents the test coverage percentage of the project.
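As a quick reminder of what that percentage means, here is a tiny sketch with invented numbers (Scanyp reports this figure for you; the line counts below are not from the article):

```java
// Hypothetical illustration: how a coverage percentage is typically computed.
public class CoverageSketch {
    public static void main(String[] args) {
        int coverableLines = 10_000; // lines the tests could reach (invented figure)
        int coveredLines = 7_500;    // lines actually executed by the tests (invented figure)
        double coveragePercent = 100.0 * coveredLines / coverableLines;
        System.out.println(coveragePercent + "%"); // prints 75.0%
    }
}
```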
The documentation score:
This score gives us a whole picture of the documentation of the public types and methods of your project:
This score is calculated from the following scores:
- Comments on big public methods and types: documentation is needed for big types and methods; for small methods there is no need to affect the score when no documentation is found.
- Comments on complex public methods: for complex methods it is recommended to have documentation, and the score is not affected when a non-complex method without documentation is detected.
- Comments on interfaces/abstract types: these kinds of types are meant to be used as an API, so it's mandatory to have documentation for each exposed method, so the user knows exactly what it does.
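As an example of what these documentation rules expect, here is a small hypothetical Java interface (the names are invented for this sketch): every exposed method carries a Javadoc comment, which is what the documentation score rewards for API-like types.

```java
/** Minimal data carrier used only by this example. */
record Account(String id, String owner) {}

/** Stores and retrieves user accounts; meant to be consumed as an API. */
public interface AccountRepository {

    /**
     * Looks up an account by its unique identifier.
     *
     * @param id the account identifier, never null
     * @return the matching account, or null if none exists
     */
    Account findById(String id);

    /**
     * Persists a new account and returns its generated identifier.
     *
     * @param account the account to store, never null
     * @return the identifier assigned to the stored account
     */
    String save(Account account);
}
```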
To sum up, as a team member, try to focus on elevating your skills to avoid repeating the same mistakes, which effectively contributes to high-quality code.