Raising coding standards

A recent study carried out by a leading static analysis tool provider, PRQA, showed that, perhaps not surprisingly, engineers aren’t as efficient at identifying code violations as a dedicated tool. However, while the headline results may seem contrived, the underlying premise is that an engineer’s time and skills are better used in resolving the subjective issues that arise from automated code inspection.
By eeNews Europe

The study is based on the results of PRQA’s "Developers’ Challenge", held at the Embedded Systems Conference in Silicon Valley earlier this year, which targets engineers with a ‘genuine interest in writing high quality code’.

While its main aim was unashamedly to demonstrate the value of automated code inspection and review, the challenge highlighted the significant gap between what engineers can (rapidly) identify and what a tool can find: the sample of C/C++ source code provided compiled with only a few warnings, yet contravened a number of recognised coding standards.
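
To give a flavour of that gap, the fragment below is a minimal hypothetical sketch, not code from the challenge itself: it compiles without warnings under typical default compiler settings, yet breaks rules of the kind found in MISRA C and similar standards, such as requiring braces on every if body, a break at the end of every case and a default clause in every switch.

#include <stdio.h>

/* Hypothetical example: builds cleanly with default compiler settings,
   yet violates several common coding-standard rules. */
static int classify(int level)
{
    int status = 0;

    if (level > 0)
        status = 1;        /* if body without braces: flagged by many standards */

    switch (level) {
    case 1:
        status = 2;        /* falls through without a break or a comment */
    case 2:
        status = 3;
        break;
    }                      /* no default clause */

    return status;
}

int main(void)
{
    printf("%d\n", classify(1));
    return 0;
}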

The inability of compilers to do a good job of reviewing code isn’t news, and neither is the fact that software can check software more quickly than wetware, but the follow-on conclusion is that engineers are still needed to apply discretion to the results. In the challenge, 50 engineers of varying ability each spent around 30 minutes (although one particularly diligent participant returned the sample the following day) and identified between none and 33 of the issues contained within the code.

The automated tool took considerably less time to identify 120. However, once the less subjective issues were identified and addressed (by engineers), there remained a number of violations that could only be assessed in context; something that even PRQA admits isn’t possible using an automated approach.
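
As a hypothetical illustration of such a context-dependent finding (the register name and address below are assumed purely for illustration), an integer-to-pointer cast is something a tool can flag but cannot judge: it may be a genuine defect, or it may be a deliberate access to a memory-mapped hardware register that only the team familiar with the target can confirm.

#include <stdint.h>

/* Hypothetical register access: the address 0x40021000 is assumed for
   illustration only. A static analysis tool will flag the integer-to-pointer
   cast; deciding whether to fix or suppress the message requires knowledge
   of the target hardware. */
#define STATUS_REG (*(volatile uint32_t *)0x40021000u)

void enable_peripheral(void)
{
    STATUS_REG |= 0x1u;    /* deliberate hardware access, not a coding error */
}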

In PRQA’s defence, the intention isn’t to replace code reviews but to expedite them. It’s only fair to point out, too, that the static analysis tool was configured to mimic the compiler’s own settings, and that the coding standards against which the code was analysed were adjusted. This was intended to exclude violations that were not relevant to the code’s level of completion, and to avoid compiler-centric issues that would otherwise have been omitted.

This resulted in the 120 ‘real world’ issues; four times as many as the best effort returned by an engineer. PRQA argues, therefore, that the results were relevant, identifiable and posed a real risk to the quality of the code. Once the ‘major’ issues were addressed, the code was re-assessed, this time returning 44 violations. At this point, PRQA proposes that the code is still not ready for a full (and costly) code review, because many of the remaining issues can be addressed more cost-effectively using automated code inspection.

Focusing on the remaining issues, a further 40 could be resolved based on the results of the tool. That left just four messages: two were suppressed (based on the personal preference of the engineering team) and two were deemed too subjective to be handled by anyone but the original software engineering team.

The results, as interpreted by PRQA, show that before commencing any code review it makes sense to use automated code inspection, as it saves time and money while enforcing coding standards that will ultimately improve the engineering team’s capabilities. The empirical evidence from the 50 ‘contestants’ shows there remains a significant gap between engineers’ interpretation of software quality (which some may say is itself a quality issue) and the impartiality introduced by an automated method that leaves no error, however small, unreported.

What isn’t in dispute is the need for software engineers who can bring context to the results of automated code inspection. But as there will likely always be a performance gap between manual and automated code inspection, what remains unresolved is any foundation for defining an acceptable balance between the two extremes. That could be an area that warrants further investment by both OEMs and static analysis tool providers. A white paper with full details of the Developers’ Challenge is available from PRQA at www.programmingresearch.com
