Posted on 22/09/2017
Would you be proud of your project’s source code if you had to open source it today?
Alternatively, as Bob Martin puts it: “have you been impeded by bad code?”
The truth of the matter is that without proper measurements and dashboards it is very hard to see the trends and notice software rot as it creeps in incrementally - let alone prevent it in a build pipeline such as Jenkins, GitLab CI or similar tools. “You can’t manage what you don’t measure”, as Lloyd S. Nelson once said. “Put it on the wall” is a very strong Lean principle.
Ever since McCabe introduced the Cyclomatic Complexity metric in 1976, showing how it correlates with bugs and high maintenance costs, our field has been evolving new metrics and tools to control or prevent software rot. Take a look at any SonarQube dashboard and you will see the options are many.
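To make the idea concrete, here is a minimal sketch of how cyclomatic complexity can be computed for Python code: start at 1 and add one for every decision point. This is a simplified count (it ignores comprehensions and `match` arms, for instance), not a full McCabe implementation; the function name and sample source are illustrative.

```python
import ast

# Constructs that add an independent path through the code.
_DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Simplified McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp):
            # 'a and b and c' contributes len(values) - 1 extra branches.
            complexity += len(node.values) - 1
        elif isinstance(node, _DECISION_NODES):
            complexity += 1
    return complexity

src = (
    "def sign(x):\n"
    "    if x > 0:\n"
    "        return 1\n"
    "    elif x < 0:\n"
    "        return -1\n"
    "    return 0\n"
)
print(cyclomatic_complexity(src))
```

An `elif` chain is parsed as nested `if` nodes, so the sample function above has two decision points and a complexity of 3.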
But which software metrics should you use? Which ones to prioritize?
There are various metrics such as process metrics (e.g. Velocity), performance metrics (e.g. Average Response Time) and so on, but in this article we will focus on source code metrics which relate to software quality.
OK, so which ones relate to the maintenance and evolution of source code, producing “Clean Code”? In his seminal work, The Economics of Software Quality, Capers Jones describes in detail which practices and metrics correlate with “good code” (we have summarized some of its insights in Portuguese).
You can look at the source code with an architecture mindset and measure how well the layering and coupling of the modules/components have been designed, and how well that design has been maintained. We will leave this macro approach for another article.
Alternatively, you can look at the source code with a malleability mindset and evaluate the parts in terms of Abstractness and Instability. We will also leave this semi-macro approach for another article.
But you can also look at the more micro-level structure via static analysis of the source code. We have been successfully using, and limiting, the following source code metrics at a micro level:
These metrics can be shown “live”, in a chart plotting each metric’s current maximum against its maximum allowed value, such as in the following kiviat (radar) chart:
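The check behind such a chart is simple: compare each metric’s current maximum against its agreed ceiling and flag violations. A minimal sketch, with hypothetical metric names and values:

```python
# Hypothetical current maxima measured in the codebase vs. agreed ceilings.
current_max = {"cyclomatic_complexity": 28, "function_length": 180, "parameters": 4}
allowed_max = {"cyclomatic_complexity": 15, "function_length": 100, "parameters": 5}

# A metric is in violation when its worst offender exceeds the ceiling.
violations = {m: (current_max[m], allowed_max[m])
              for m in current_max if current_max[m] > allowed_max[m]}

for metric, (cur, cap) in sorted(violations.items()):
    print(f"{metric}: current max {cur} exceeds allowed max {cap}")
```

In a build pipeline, a non-empty `violations` dictionary would be the natural trigger for failing the quality gate.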
But maximum values don’t tell you how the remaining values are spread. How bad is the rest? A great way to visualize the whole population is via histograms, where we can see how things look overall:
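Building such a histogram is just a matter of bucketing the per-routine values into fixed-width bins. A small sketch, using made-up complexity values for a hypothetical codebase:

```python
from collections import Counter

def histogram(values, bin_width=5):
    """Bucket metric values into fixed-width bins and print a text histogram."""
    bins = Counter((v // bin_width) * bin_width for v in values)
    for start in sorted(bins):
        print(f"{start:3d}-{start + bin_width - 1:<3d} {'#' * bins[start]}")
    return bins

# Hypothetical cyclomatic complexities of a codebase's routines:
complexities = [2, 3, 3, 4, 6, 7, 12, 18, 25]
hist = histogram(complexities)
```

The long, thin tail to the right is exactly where the outliers worth investigating live.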
Great, we can see some outliers and points we need to tackle, from right to left! But which ones should we tackle first? Which routines are the ones that need fixing? We need a mechanism for drilling down.
For that we use scatterplots (tree maps also work here: rectangles whose size and color vary according to the metric), where we can plot two metrics (X and Y axes), possibly a third (circle size), and interactively see where our candidates for a Pareto-style fixing approach really are:
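The “top-right corner” of such a scatterplot can also be extracted programmatically: routines that score high on both axes rank first. A sketch with hypothetical routine names and metric values, using a simple product of the two metrics as the ranking score:

```python
# Hypothetical per-routine metrics: (name, cyclomatic complexity, lines of code)
routines = [
    ("parse_order",    25, 310),
    ("format_date",     3,  20),
    ("sync_inventory", 18, 240),
    ("log_request",     2,  15),
    ("apply_discount", 12,  90),
]

# Top-right of the scatterplot = high on both axes; a product score walks
# the candidates from worst to least bad, Pareto style.
ranked = sorted(routines, key=lambda r: r[1] * r[2], reverse=True)

for name, cc, loc in ranked:
    print(f"{name:15s} complexity={cc:3d} loc={loc:4d}")
```

Any monotonic combination of the two metrics would do here; the product is just the simplest choice that pushes double offenders to the top of the roadmap.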
By interactively identifying the culprits from top-right towards bottom-left, we can build a roadmap and schedule of which ones to tackle and when. Great, we managed to set our priorities straight!
Now, a final point of wisdom: should you blindly tackle all elements with high value of metrics? Should you really prioritize solely based on the high/bad values?
Experience and wisdom tell us: NO. What you really want to do is to intersect these elements with the following sets:
By filtering out of your histograms, scatterplots or tree maps the files/components with very few bug reports and very few code changes, you will be able to focus on where the bleeding really occurs in your software and spend your money wisely. Then repeat! After all, Lean is about Continuous Improvement: a new Pareto structure will present itself, so you repeat the cycle.
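This intersection step can be sketched in a few lines. The file names, churn counts and bug-report thresholds below are all hypothetical; in practice churn would come from `git log` and bug counts from your issue tracker:

```python
# Hypothetical inputs: metric outliers, change counts (churn), bug-tracker hits.
outliers = {"parse_order", "sync_inventory", "legacy_report"}
churn = {"parse_order": 42, "sync_inventory": 17, "legacy_report": 1}
bug_reports = {"parse_order": 9, "sync_inventory": 4, "legacy_report": 0}

MIN_CHURN, MIN_BUGS = 5, 2  # illustrative thresholds

# Keep only outliers that also change often AND accumulate bug reports:
hotspots = {f for f in outliers
            if churn.get(f, 0) >= MIN_CHURN and bug_reports.get(f, 0) >= MIN_BUGS}

print(sorted(hotspots))
```

Note how `legacy_report` drops out: it may be ugly, but it is stable and quiet, so fixing it first would be money poorly spent.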
In a future article we will talk about semi-macro metrics. Happy Restructuring and Refactoring!