The Radisson Blu Portman Hotel hosted the low-key event with about 50 attendees. We heard several suitably technical presentations from Olivier Gaudin, Freddy Mallet, and Nicolas Peru of SonarSource, and Duncan Pocklington from Microsoft.
The day opened with a question. Who is responsible for code quality? Developers or QA?
The answer was unequivocal: developers.
Introducing technical debt can be OK in certain circumstances, but the team needs to understand the tradeoff and how much of a problem they’re creating. The best way to do this is through objective and consistent measurement.
“Fix the leak”
Knowing you have a problem is one thing. Fixing it is quite another.
The overriding theme of the day was “fix the leak”: when you have a leaking pipe, should you fix it first or mop up first? Cleaning up is not very helpful if you don’t fix the source of the problem in the first place.
(This was particularly poignant for me, as I was late to the conference due to a leaking pipe at home.)
In practice, this translates to setting a quality bar (or “gate” in SonarSource lingo) for new changes, but mostly ignoring existing problems until you get things under control.
This seems like a nice approach for two reasons:
- it reduces the friction to get started on a legacy codebase since you can pretend you’re starting from a clean slate;
- it’s a line in the sand that sets expectations for the team moving forward.
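This is not how SonarQube implements its quality gates internally, but the "new code only" policy can be sketched in a few lines of Python. Everything here (the baseline date, the issue shape, the severity names) is a made-up illustration of the idea: legacy issues are ignored, and only issues introduced after the line in the sand can fail the gate.

```python
from datetime import date

# Hypothetical "fix the leak" gate: only issues created after the
# baseline date count, so existing debt doesn't block new work.
BASELINE = date(2016, 6, 1)  # the day the team drew its line in the sand

def gate_passes(issues):
    """issues: list of dicts with 'created' (date) and 'severity' keys."""
    new_issues = [i for i in issues if i["created"] > BASELINE]
    blockers = [i for i in new_issues if i["severity"] in ("BLOCKER", "CRITICAL")]
    return len(blockers) == 0

issues = [
    {"created": date(2014, 3, 2), "severity": "BLOCKER"},  # legacy: ignored
    {"created": date(2016, 6, 10), "severity": "MINOR"},   # new, but not severe
]
print(gate_passes(issues))  # True: the old blocker doesn't count against the gate
```

The key design point is that the gate's verdict depends only on the delta since the baseline, which is what makes it painless to adopt on a legacy codebase.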
Olivier took pains to emphasise that having an automated tool enforcing this sort of behaviour doesn’t relieve you from educating the team on best practices. Every metric can be gamed, so you need to get people on board with the concept to really make the most of it.
SonarQube measures the maintainability, reliability and security of your code base, and tracks improvements over time. It also pinpoints specific code smells in your code that should be fixed.
SonarQube is used by more than 75k companies, some with thousands of developers and millions of lines of code. It’s become the de facto code quality tool since its introduction 8 years ago, outgrowing its Java roots to now support more than 20 languages.
Freddy gave us a rundown of the features from recent versions, including v5.6 (to be released in a couple of weeks).
Of note is the modernised architecture, which no longer requires direct connections between analysers and the database. Everything now goes through a web service, which is much more sensible.
The quality ratings are also being refined. The existing SQALE metric is good for measuring the maintainability of a project, but it does not take into account the severity of issues. It also doesn’t really mesh with the leak concept.
In SonarQube 5.6, SQALE will be renamed Maintainability, and there will be new ratings for Releasability, Security, and Reliability. Pulling all this information together across all of your projects will be a new Governance dashboard (a commercial plugin).
At work, we use gitflow. We don’t want to merge a feature branch if it will reduce the quality of the project, so we’re particularly keen to understand how branch support will improve in SonarQube.
There is already pull request integration with GitHub and Stash, which lets you know when merging would introduce debt.
Currently within SonarQube itself, however, separate branches are treated as separate projects. Configuration is duplicated, and worse, every feature branch includes all the issues and debt in the main branch.
Fortunately, fixing this is a high priority for SonarSource, although no ship date has been announced. The aim will be to rate each branch of a project as a diff against the master branch.
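The "branch as a diff against master" idea boils down to set subtraction over issue identities. The sketch below is purely illustrative (the issue-key format is invented, not SonarQube's real key scheme): a feature branch is rated only on the issues master doesn't already have.

```python
# Hypothetical sketch: rate a feature branch only on the issues that
# don't already exist on master. Issue keys here are invented
# "rule:file:line" strings, not SonarQube's actual key format.

def branch_only_issues(master_issue_keys, branch_issue_keys):
    """Return the issues introduced on the branch but absent from master."""
    return set(branch_issue_keys) - set(master_issue_keys)

master = {"squid:S1068:src/A.java:12", "squid:S106:src/B.java:40"}
branch = master | {"squid:S2259:src/C.java:7"}  # one new issue on the branch

print(branch_only_issues(master, branch))  # {'squid:S2259:src/C.java:7'}
```

With this model, a feature branch with no new issues rates as clean even if master carries years of debt, which is exactly the "fix the leak" behaviour the team wants at merge time.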
Clustering was a surprising addition to the roadmap, as this doesn’t seem like the kind of product that would need to support massive loads. However, some truly enormous installations do exist in the wild which could make use of multiple web servers talking to the same database.
The hidden agenda for clustering became clear when Freddy announced SonarQube as a Service. This will be a free service for open source projects that can analyse a project hosted anywhere (though it will require a GitHub account for authentication). It will support all of the built-in SonarSource plugins, but no 3rd party ones. This is great news for the open source community!
Finally, I had a chance to ask about wallboards and integration with systems like JIRA. The general advice is that these should be handled outside SonarQube itself, and integrated using the full-featured RESTful API exposed by SonarQube.
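As a rough sketch of that integration path, the snippet below flattens a measures payload into a dict a wallboard could display. The JSON shape is an assumption loosely modelled on SonarQube's measures endpoint; field names vary between versions, and a real integration would fetch the payload over HTTP rather than use a canned string.

```python
import json

# Assumed response shape, loosely modelled on SonarQube's measures web
# service. Treat the field names as illustrative, not authoritative.
sample_response = json.dumps({
    "component": {
        "key": "my-project",
        "measures": [
            {"metric": "bugs", "value": "3"},
            {"metric": "code_smells", "value": "127"},
        ],
    }
})

def extract_measures(payload):
    """Flatten the measures array into a {metric: value} dict for a wallboard."""
    component = json.loads(payload)["component"]
    return {m["metric"]: m["value"] for m in component["measures"]}

print(extract_measures(sample_response))  # {'bugs': '3', 'code_smells': '127'}
```

Keeping the display logic outside SonarQube, as the SonarSource team suggested, means the wallboard only depends on the stable web API rather than on SonarQube internals.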
SonarLint is a plugin for your IDE (Eclipse, IntelliJ or Visual Studio) that flags code quality issues as you type. The idea is to prevent leaks before they’re shared with other developers through your SCM.
It can run as a standalone tool since all the analysis is local, but it can also import quality rules from your SonarQube instance so it stays in sync with your shared standards.
Issues marked as “won’t fix” in SonarQube will still show up in your IDE, although the team is working on tighter integration. They’re also thinking about custom rules, and “quick fix” tools which would automatically fix smells in the code.
Interestingly, there’s a CLI version of SonarLint which could be used to hook it into other development environments.
Much was made of the partnership between SonarSource and Microsoft, who have been collaborating since January 2015. There is now good support for .NET projects, with all of the tools being developed in the open on GitHub.
The MSBuild scanner and SonarLint support are built on Roslyn, the new component underpinning the recently rewritten .NET compilers and IntelliSense.
A more technical presentation from Nicolas outlined the direction SonarSource was taking for static code analysis.
SonarQube has had lexical and syntactic analysis for some time now. These are good for finding maintainability issues in code, i.e., technical debt.
Semantic analysis and a new technique known as symbolic execution are now also becoming available for some languages; these aim to find actual bugs in code.
Symbolic execution looks for run-time issues (e.g., null pointer exceptions) by statically analysing various permutations of the state of code. It’s currently limited to individual functions, although support for cross-procedural analysis is something they’re shooting for.
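A toy way to get a feel for the technique (bearing no resemblance to SonarSource's actual engine): enumerate the branch states of a small function symbolically and check whether any path dereferences a possibly-null value. The pseudo-function being "analysed" here is invented for illustration.

```python
from itertools import product

def find_null_deref():
    """Toy symbolic execution over this pseudo-function:

           x = None
           if cond_a: x = make_object()
           if cond_b: x.use()   # potential null dereference

       We explore every (cond_a, cond_b) combination and record the
       paths on which the dereference happens while x is still null.
    """
    bad_paths = []
    for cond_a, cond_b in product([True, False], repeat=2):
        x_is_null = not cond_a       # x stays None unless cond_a assigned it
        if cond_b and x_is_null:     # this path dereferences a null x
            bad_paths.append((cond_a, cond_b))
    return bad_paths

print(find_null_deref())  # [(False, True)]: cond_a false + cond_b true => NPE
```

Even this toy version shows why the state space matters: two booleans give four paths, so real engines need aggressive pruning, which is one reason the current analysis stays within individual functions.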
The conference was a good chance to get pretty technical with the people intimately involved in the development of SonarQube and SonarLint.
There is a lot of potential to really raise the bar on software quality using these tools, which is something we as an industry sorely need.
Thanks to Olivier, Freddy, Nicolas, Duncan, and the rest of the SonarSource team for some fascinating talks! #SSCT2016