Improving software security with static automated code analysis in an industry setting
Article first published online: 20 FEB 2012
Copyright © 2012 John Wiley & Sons, Ltd.
Software: Practice and Experience
Volume 43, Issue 3, pages 259–279, March 2013
How to Cite
Baca, D., Carlsson, B., Petersen, K. and Lundberg, L. (2013), Improving software security with static automated code analysis in an industry setting. Softw: Pract. Exper., 43: 259–279. doi: 10.1002/spe.2109
- Issue published online: 12 FEB 2013
- Manuscript Accepted: 28 DEC 2011
- Manuscript Revised: 21 DEC 2011
- Manuscript Received: 1 JUL 2011
Keywords: static analysis; software security; static code analysis
Software security can be improved by identifying and correcting vulnerabilities. To reduce the cost of rework, vulnerabilities should be detected as early and efficiently as possible. Static automated code analysis is an approach for early detection. So far, only a few empirical studies have evaluated static automated code analysis in an industrial context. We conducted a case study to evaluate static code analysis in industry, focusing on defect detection capability, deployment, and usage of static automated code analysis, with an emphasis on software security. We found that the tool was capable of detecting memory-related vulnerabilities, but few vulnerabilities of other types. The deployment of the tool played an important role in its success as an early vulnerability detector, as did the developers' perception of the tool's merit. Classifying the warnings from the tool was harder for the developers than correcting them. The correction of false positives in some cases created new vulnerabilities in previously safe code. With regard to defect detection ability, we conclude that static code analysis is able to identify vulnerabilities in different categories. In terms of deployment, we conclude that the tool should be integrated with bug reporting systems, and developers need to share the responsibility for classifying and reporting warnings. With regard to tool usage by developers, we propose that multiple persons (at least two) classify each warning, and that the same applies when deciding how to act on it. Copyright © 2012 John Wiley & Sons, Ltd.