Getting down to business: Leveraging the right static analysis
Static analysis is a development testing activity with the potential to go far beyond simply checking code. When used as part of a policy-driven defect prevention strategy, static analysis can drive a software engineering team's productivity and minimize fiscal, legal, and ethical risks associated with potentially faulty code. The reason more organizations do not realize the benefits of static analysis, however, is that it's often homogeneously deployed as a tool for "finding bugs." But the truth is that there are different implementations of static analysis that serve different purposes in the development process. And while it's a foregone conclusion that software engineers should run static code analysis, the proper implementation of the right technologies is the difference between wasting time and money and reaching new software development heights.
Generally speaking, best practices are platform neutral – that’s why they’re called “best practices.” The subtleties endemic to embedded development notwithstanding, there are known standards for ensuring quality, regardless of platform. Avoiding memory leaks, for example, should be universal. Further, the relationship between static analysis and software isn’t necessarily defined by the application: It is defined by the purpose of the device. That said, running static analysis is a particularly important best practice for embedded software development.
Traditionally, embedded software is very costly and painful to access post-release. For this reason, most quality and validation activities focus on eliminating the need to patch or refactor embedded code. Fixing errors post-release poses the greatest risk not only to the brand but also to the bottom line. In some industries, particularly in the safety-critical realm, the consequences associated with software defects are so substantial that quality and verification tasks must be executed flawlessly. Software embedded in critical devices such as insulin pumps, weapon control systems, and automotive braking systems requires a preventive strategy that uses a full range of static analysis capabilities; otherwise, consequences could include costly litigation, C-level resignations, and even loss of life. Contrast this with agile, continuously delivered, Web-driven applications running on smartphones, televisions, and similar consumer devices, for which a preventive strategy is less critical. To this end, the following discussion sits on the preventive side of the software development spectrum and examines various static analysis implementations:
- Integration-time static analysis
- Continuous Integration (CI) static analysis
- Metrics analysis
- Edit-time static analysis
- Runtime static analysis
Integration-time static analysis
Running static analysis during integration to detect low-hanging fruit and egregious errors is a good starting point for implementing a preventive strategy. Integration-time static analysis simulates feasible application paths without actually executing the code, which is very helpful for systems in which runtime analysis isn’t possible. Static analysis can test across multiple functions and files and catch common memory problems, such as uninitialized memory, overflows, null pointers, and so on.
Static analysis serves a few purposes in terms of the development strategy when organizations begin with testing during integration. First, engineers can review the test results and determine how important they are for the particular application. Static analysis might uncover potential defects that have a serious impact on software security, reliability, or performance. On the other hand, it could return findings the business doesn't care about. For example, the business probably doesn't care about a defect in a gaming console that causes the software to crash when an unlikely sequence of operations occurs: the user can simply reboot and continue enjoying the system. Resolving the same sort of issue in other contexts, however, might be crucial to preventing catastrophic consequences.
Static analysis can also help software engineers find potential defects that would have been very difficult to conceive of during the risk assessment phase. Engineers can catalog potential defects to improve future risk assessment iterations.
Continuous Integration (CI) static analysis
After running integration-time static analysis, software engineers should have a stronger sense of potential systemic problems in the code. The next step is to run CI static analysis to enforce the coding policy outlined in the planning phase. This prevents the types of defects discovered during integration-time analysis.
For every issue static analysis discovers, there are typically at least 10 more instances of the same defect elsewhere in the code. Static analysis is the ideal tool for addressing all violations of the same kind at the same time, as opposed to chasing every possible path through the code. It's far better to find the systemic problems and create an environment in which bugs cannot survive.
When we talk about static analysis, in many cases we mean anti-pattern analysis: flagging constructs that should not appear in the code. A positive pattern, by contrast, is something that should be in the code. For example, a policy that requires engineers to use a typedef when declaring function pointers is a positive-pattern static analysis rule. This is in contrast to an anti-pattern policy that, for example, prohibits calling the data() member function of a string class when interfacing with the standard C library.
Executing both types (positive- and anti-pattern) of static analysis is important, but it’s worth mentioning this distinction because if the organization spends the time to build a coding policy based on positive patterns, this ensures that software engineers are building code exactly how it should be per business objectives or compliance requirements.
Metrics analysis

Metrics analysis is a static analysis implementation that evaluates code characteristics and provides insight that can help software engineers identify weaknesses (Figure 1). It is a critical sensor that can highlight areas of the application prone to logical errors. Metrics analysis is an essential baseline measurement that should trigger further analysis, such as code review or some other remediation activity.
Metrics analysis is best used as early as possible because it might affect how software engineers write their code. Avoid trying to implement metrics analysis reactively or during the QA phase. The goal with metrics analysis isn’t just to detect potential defects; it’s to detect them in such a way that allows engineers to follow a sustainable coding trajectory. Run metrics analysis on potential defect hotspots, remediate any violations, and implement a pattern-based analysis rule to prevent future occurrences.
Any metric that correlates with potential problems is fair game. For example, a medical device company might use metrics analysis to measure cyclomatic complexity because a high score indicates too many decision points for the device to handle during normal operation. Learning that a function's complexity exceeds the threshold set in the coding policy while there are still only 10 branches to refactor, as opposed to finding out in the QA phase, helps keep the project on time and on budget. The organization might also want to measure public variables because a high count can correlate with too many dependencies in the code. Each organization will need to decide which metrics correlate with possible defects in its code.
Edit-time static analysis
The static analysis sweet spot is while the developer is working in the editor. Running static analysis at edit time serves a few purposes. First, it points software engineers to potential problems. Second, it implements the risk assessment strategy by ensuring that any issues are remediated systemically.
But when should static analysis be implemented? We've discussed why implementing static analysis too late is a problem; however, it can also be implemented too early, because there must be enough context for static analysis to provide meaningful information. Running static analysis on a character, line, or even a single statement creates too much noise to be useful. Enforcing positive design patterns ensures that new code is built as intended, while it's being written. Running static analysis at edit time is a powerful way to promote the correct behaviors within the development team because feedback is rapid and in the context of the code being written. Leveraging this type of analysis also makes code reviews more productive because engineers should be able to correct policy-based errors immediately.
Runtime static analysis
Some static analysis patterns can detect defects at runtime. If the embedded target can accommodate the overhead, the organization should execute runtime static analysis to round out its preventive strategy. Runtime static analysis detects errors while the code is actually running, which enables software engineers to test real paths with real data.
Final note about static analysis and QA
In an ideal preventive strategy, errors found when QA runs static analysis should already be known and determined acceptable. This is because software engineers should have already tested against and adjusted design patterns to enforce coding policies. Violations at this stage mean that there is a problem with the process, such as improper static analysis rules. In these cases, QA needs to send the code back to development so they can find the systemic cause of the defect and implement a rule to prevent future occurrences. From this perspective, static analysis is a much better quality gate than a bug finder.
References
1. Joint Strike Fighter Air Vehicle C++ Coding Standards, Chapter 4.22, "Pointers & References," AV Rule 176
2. PCI Data Security Standard, Version 1.2, Requirement 6: "Develop and maintain secure systems and applications"