I have been working for a while on a solution for collecting and reporting various code quality metrics, and this is the final piece of it.

Why this was needed

We inherited about 200k lines of legacy code, and in the process of maintaining it and adding new features, the question of measuring quality came up. As we make these changes, are we making the overall code base better? Is the code becoming easier to manage and maintain? What are the problematic areas in the code (hot spots), and how are we doing in those areas?

Our first attempt was to use the built-in features of Visual Studio. When you right-click a project in Visual Studio and pick Analyze | Calculate Code Metrics, you'll get this:

VS Metrics

This is nice, but how could we see historical values, or aggregations across multiple projects?

We also have a Jenkins CI server which produces a code coverage report after it runs unit tests:


This is also helpful, but it only provides a single snapshot from the day the CI job ran, and just like the Visual Studio metrics, it covers a single project only.

Another code quality variable we wanted to understand was code churn, which is the number of source code lines added or deleted. It has been shown that code churn can be used to predict system defect density. I remember seeing a code churn report in TFS, but we don't use TFS, and all our code is in GitHub and Bitbucket.
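To make the definition concrete, here is a minimal sketch of computing churn by summing lines added and deleted across commits. The commit objects are hypothetical stand-ins for what a loader would pull from a repository's API; the actual utilities are not shown here.

```javascript
// Churn for a set of commits = total lines added + total lines deleted.
function codeChurn(commits) {
  return commits.reduce(
    (total, c) => total + c.additions + c.deletions,
    0
  );
}

// Hypothetical per-commit stats for illustration:
const commits = [
  { sha: 'a1b2c3', additions: 120, deletions: 45 },
  { sha: 'd4e5f6', additions: 10,  deletions: 300 },
];

console.log(codeChurn(commits)); // 475
```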


The solution has two parts: data collection and visualization. To collect data I wrote two utilities, Code Metrics Loader and Code Churn Loader, which gather metrics from Microsoft Power Tool output, OpenCover results, and the GitHub and Bitbucket public APIs.
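For GitHub, per-commit line counts come back from its "get a commit" endpoint (GET /repos/{owner}/{repo}/commits/{sha}), which includes a `stats` object and a per-file `files` array. Below is a rough sketch of extracting churn numbers from such a response; the sample response is trimmed to the fields used, and the real Code Churn Loader works differently (and in C#).

```javascript
// Pull repository-, commit-, and file-level churn fields out of a
// GitHub commit API response.
function commitChurn(apiResponse) {
  return {
    sha: apiResponse.sha,
    added: apiResponse.stats.additions,
    deleted: apiResponse.stats.deletions,
    files: apiResponse.files.map(f => ({
      path: f.filename,
      added: f.additions,
      deleted: f.deletions,
    })),
  };
}

// Trimmed-down sample of a GitHub commit response:
const response = {
  sha: '7638417',
  stats: { additions: 104, deletions: 4, total: 108 },
  files: [
    { filename: 'file1.txt', additions: 103, deletions: 21 },
  ],
};

console.log(commitChurn(response).added); // 104
```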

For the visualization part of the solution I developed a simple portal.


Here on the home page we see the system-wide "big three" - maintainability index, code coverage and lines of code - as well as hot spots: the worst maintainability index across the whole system and the highest code churn.

The Metrics area is the place to see metric trends and details:


We can navigate two hierarchies or, in warehouse speak, cube dimensions: Module => Namespace => Type => Member and Date. This is where the code coverage originating in Jenkins meets the Visual Studio-generated maintainability index at any hierarchy level, meaning we can see both code coverage and maintainability index at the module, namespace, type or member level.
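One way to picture this is as member-level rows rolled up along the hierarchy. The sketch below uses made-up records and one plausible aggregation (line-weighted coverage, simple-average maintainability index); the portal's actual rollup rules may differ.

```javascript
// Hypothetical member-level records, as a loader might store them.
const members = [
  { module: 'Core', type: 'OrderService', member: 'Submit', loc: 40, coveredLoc: 30, maintainability: 65 },
  { module: 'Core', type: 'OrderService', member: 'Cancel', loc: 10, coveredLoc: 10, maintainability: 80 },
  { module: 'Core', type: 'PriceEngine',  member: 'Quote',  loc: 50, coveredLoc: 20, maintainability: 40 },
];

// Roll member rows up to any level of the Module => Namespace =>
// Type => Member hierarchy by grouping on a key function.
function rollUp(rows, keyFn) {
  const groups = new Map();
  for (const r of rows) {
    const key = keyFn(r);
    const g = groups.get(key) || { loc: 0, coveredLoc: 0, miSum: 0, count: 0 };
    g.loc += r.loc;
    g.coveredLoc += r.coveredLoc;
    g.miSum += r.maintainability;
    g.count += 1;
    groups.set(key, g);
  }
  const result = {};
  for (const [key, g] of groups) {
    result[key] = {
      loc: g.loc,
      coverage: g.coveredLoc / g.loc,      // line-weighted coverage
      maintainability: g.miSum / g.count,  // simple average MI
    };
  }
  return result;
}

const byType = rollUp(members, r => r.type);
console.log(byType.OrderService.coverage); // 0.8
```

The same `rollUp` call with `r => r.module` would produce module-level numbers, which is essentially what navigating up a dimension does.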

On the code churn page we can see churn bars for each data collection date, with the ability to drill down to the commit and file levels:

Code Churn

Essentially, the two cube dimensions here are the Repository => Commit => File hierarchy and Date.

How this is built

I use an AngularJS + ASP.NET MVC combination. I could have gone the traditional way of AngularJS + Node, but didn't see the benefits of that approach. I could not have written all of this entirely in JavaScript; I needed C# for a number of things, primarily the service layer.

Having ASP.NET MVC as the backend gives me the ability to manage everything in one Visual Studio solution, not to mention that Web API, which is what Angular talks to, comes right out of the box.
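On the Angular side, that conversation with Web API boils down to services built around `$http`. Here is a minimal sketch of such a service; the `/api/metrics` route and the service name are made up for illustration, standing in for whatever controllers the portal actually exposes.

```javascript
// AngularJS-style factory: $http is injected by Angular at runtime.
// The /api/metrics route is hypothetical.
function metricsServiceFactory($http) {
  return {
    // Metrics for one node of the Module => Namespace => Type => Member
    // hierarchy on a given collection date.
    getMetrics: function (nodeId, date) {
      return $http.get('/api/metrics/' + nodeId, { params: { date: date } })
                  .then(function (response) { return response.data; });
    },
  };
}

// In a real app this would be registered with something like:
//   angular.module('portal').factory('metricsService', metricsServiceFactory);
```

Keeping `$http` as a plain constructor argument also makes the service trivial to exercise with a fake HTTP object in tests.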

For the data tier I use my all-time favorite, EF Code First, which I also use in the data collection utilities.