Dig deep into safety-critical code testing with coverage analysis

July 12, 2017

For safety-critical code, functional testing that ensures the application does what it is supposed to do, and does those things correctly, just scratches the surface. Applications contain hidden complexities that can surface under unpredictable conditions and, if not coded correctly, lead to disaster. Developers must dig deep to test all the underlying code for subtle errors. But what exactly does that mean?

While rudimentary functional tests can be manually generated from system requirements documents, testing at the deeper levels is much more efficient using automated tools—tools to generate test harnesses and test cases, tools to run those tests, and tools to evaluate the effectiveness of the testing. That last, critical activity is accomplished by coverage analysis.

At a basic level, function (or procedure) coverage analysis shows whether each function has been called. Statement coverage takes this a step further, providing a means to ensure that every line of code has been exercised at least once. But while these are both useful, there is more to coverage analysis than just function and statement coverage.
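
To see the difference, consider a minimal C sketch (the function and values here are invented for illustration): a single call with value = 15 executes every statement in check_limit, giving 100 percent statement coverage, yet the false outcome of the decision is never taken.

```c
#include <stdio.h>

/* Illustrative only: one call with value = 15, limit = 10 executes
 * every statement in check_limit (100% statement coverage), but the
 * false outcome of the decision is never taken. Branch/decision
 * coverage exposes that gap and demands the second call. */
static int check_limit(int value, int limit)
{
    int alarm = 0;
    if (value > limit)
    {
        alarm = 1;
    }
    return alarm;
}

int main(void)
{
    printf("%d\n", check_limit(15, 10)); /* enough for statement coverage */
    printf("%d\n", check_limit(5, 10));  /* needed for branch coverage    */
    return 0;
}
```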

Safety-critical code requires deeper analysis

There are several levels at which code can be tested, and safety-critical code requires a deep, thorough dive. Branch/decision coverage provides a more thorough examination, designed to demonstrate that each branch has been taken at least once, while branch condition combination coverage requires all possible combinations of conditions to be tested.

That sounds simple enough, but with n conditions there are up to 2^n combinations to test, so a decision that depends on four or more conditions quickly becomes unreasonably demanding (four conditions already imply up to 16 tests). Modified condition/decision coverage, or MC/DC, is designed to provide a pragmatic alternative; a worked sketch follows the list below. MC/DC ensures that:

  • Each entry and exit point is invoked
  • Each decision takes every possible outcome
  • Each condition in a decision takes every possible outcome
  • Each condition in a decision is shown to independently affect the outcome of the decision
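
As an illustration, consider a hypothetical decision with three conditions, a && (b || c). Exhaustive branch condition combination coverage would require 2^3 = 8 tests; the four vectors below satisfy MC/DC, because each condition is flipped in exactly one pair of tests while the others are held constant, and the decision's outcome changes.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical decision: a && (b || c). The four vectors below
 * satisfy MC/DC, versus 2^3 = 8 for full condition combination
 * coverage:
 *
 *   #   a  b  c   outcome   independence shown
 *   1   T  T  F      T      pairs with #2 (a) and #3 (b)
 *   2   F  T  F      F
 *   3   T  F  F      F      pairs with #4 (c)
 *   4   T  F  T      T
 */
static bool decide(bool a, bool b, bool c)
{
    return a && (b || c);
}

int main(void)
{
    printf("%d\n", decide(true,  true,  false)); /* vector 1 */
    printf("%d\n", decide(false, true,  false)); /* vector 2 */
    printf("%d\n", decide(true,  false, false)); /* vector 3 */
    printf("%d\n", decide(true,  false, true));  /* vector 4 */
    return 0;
}
```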

Function call coverage extends that line of inquiry, building on the concept of function coverage by reporting which function calls have been exercised. This matters because bugs commonly occur at the interfaces between modules.
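
A small sketch of the distinction (the function names are hypothetical): a test that drives only process_normal() achieves function coverage of log_event(), since it is called at least once, but function call coverage also requires the call site inside process_failure() to be exercised.

```c
#include <stdio.h>

/* Function coverage is satisfied once log_event() runs at all;
 * function call coverage also requires each call site to run. */
static void log_event(const char *msg)
{
    printf("event: %s\n", msg);
}

static void process_normal(void)
{
    log_event("normal");   /* call site 1 */
}

static void process_failure(void)
{
    log_event("failure");  /* call site 2: must also be exercised */
}

int main(void)
{
    process_normal();
    process_failure();
    return 0;
}
```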

In some cases, such as critical avionics applications that are subject to standards such as DO-178C, more demanding tests are also required. For the most critical “DAL A” applications, DO-178C requires object code verification, which involves analyzing coverage information for the assembler code along with that for source code.

Dynamic testing usually takes place using software tools, which instrument a copy of the source code to collect coverage data at runtime. That data is subsequently analyzed to reveal exactly which parts of the code have been exercised, and to what level, and the results are presented to the developer as data and control flow diagrams and annotated source code (Figure 1).
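
As a rough sketch of what such instrumentation might look like (real tools insert their own probes and supporting runtime; coverage_probe here is a stand-in), each probe records that a particular entry, branch, or exit point executed:

```c
#include <stdio.h>

/* Stand-in probe: a real tool would buffer hits and write them out
 * for offline analysis rather than printing. */
static void coverage_probe(int point_id)
{
    printf("hit point %d\n", point_id);
}

static int check_limit(int value, int limit)
{
    int alarm = 0;
    coverage_probe(1);           /* function entry     */
    if (value > limit)
    {
        coverage_probe(2);       /* true branch taken  */
        alarm = 1;
    }
    else
    {
        coverage_probe(3);       /* false branch taken */
    }
    coverage_probe(4);           /* function exit      */
    return alarm;
}

int main(void)
{
    check_limit(15, 10);
    return 0;
}
```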

[Figure 1 | LDRA’s TBvision code coverage provides statement, branch, and MC/DC coverage for safety-critical standards, such as DO-178C. In the background is a branch/decision diagram, cross-referenced to the annotated source code. In the foreground are summaries of coverage achieved for each of the functions and the pass/fail results.]

Alleviate menial testing tasks with automated tools

Dynamic analysis can be applied to a complete application (system test) or to subsets of it (unit test, including integrated component testing), and usually the two approaches are combined as the complete system becomes available. An integrated tool suite collates information from both sources to provide overall coverage metrics.

Unit test tools alleviate the menial work of setting up the testing environment: they statically analyze the structure of the code and then create a “harness,” or framework, around the application that injects inputs and receives outputs during testing. For safety-critical applications, “test vectors” must be based on requirements to provide evidence that the code responds correctly to both expected and unanticipated inputs while fulfilling the requirements and nothing more.
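
The following sketch suggests the general shape of such a harness, assuming a simple table of requirement-based test vectors; the unit under test, the vector values, and the expected results are invented for illustration:

```c
#include <stdio.h>

/* Unit under test (normally in its own source file). */
static int check_limit(int value, int limit)
{
    return (value > limit) ? 1 : 0;
}

/* Hypothetical generated harness: the tool creates the driver,
 * injects requirement-based vectors, captures outputs, and records
 * pass/fail against the expected results. */
struct test_vector {
    int value;    /* injected input          */
    int limit;    /* injected input          */
    int expected; /* from the requirements   */
};

int main(void)
{
    static const struct test_vector vectors[] = {
        { 15, 10, 1 },  /* requirement: alarm above the limit     */
        {  5, 10, 0 },  /* requirement: no alarm below the limit  */
        { 10, 10, 0 },  /* boundary: limit itself raises no alarm */
    };
    int failures = 0;

    for (unsigned i = 0; i < sizeof vectors / sizeof vectors[0]; i++) {
        int actual = check_limit(vectors[i].value, vectors[i].limit);
        if (actual != vectors[i].expected) {
            printf("vector %u FAILED: got %d, expected %d\n",
                   i, actual, vectors[i].expected);
            failures++;
        }
    }
    printf("%s\n", failures ? "FAIL" : "PASS");
    return failures;
}
```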

It is also possible to generate test vectors automatically from an in-depth static analysis of the source code, which typically results in covering 50 to 75 percent of the code at runtime. Clearly that doesn’t provide evidence of correct functionality, but it does have a place in non-critical applications where coverage analysis might not otherwise take place. Even in critical applications, this approach takes dynamic analysis beyond requirements-based testing by verifying robust behavior in the face of inputs such as boundary values, null pointers, and default switch statement conditions.
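
A sketch of the kind of robustness vectors such analysis might derive, assuming a hypothetical unit with a pointer parameter and a switch statement:

```c
#include <stdio.h>
#include <limits.h>

/* Hypothetical unit probed with robustness vectors: boundary values,
 * a null pointer, and an out-of-range mode that falls through to the
 * default case. */
static int scale_reading(const int *reading, int mode)
{
    if (reading == NULL) {
        return -1;                /* defensive path: must be exercised  */
    }
    switch (mode) {
    case 0:  return *reading;
    case 1:  return *reading * 2;
    default: return -1;           /* unexpected mode: must be exercised */
    }
}

int main(void)
{
    int min = INT_MIN, max = INT_MAX, zero = 0;

    printf("%d\n", scale_reading(NULL,  0));  /* null pointer        */
    printf("%d\n", scale_reading(&min,  0));  /* lower boundary      */
    printf("%d\n", scale_reading(&max,  0));  /* upper boundary      */
    printf("%d\n", scale_reading(&zero, 99)); /* default switch case */
    return 0;
}
```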

It is most cost-effective to begin unit testing as early in the development cycle as possible, perhaps even before target hardware is available to developers. This means it is important to use tools that apply the same test vectors on the host development system and the target hardware so that test cases are generated once, saving time and money.

A complete tool suite can also provide data and control flow analysis, which is required by standards such as DO-178C (avionics) and ISO 26262 (automotive) to ensure that every invocation of a function and every access to its data has been exercised. Such analysis follows variables through the source code and reports on anomalous use (Figure 2).
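
For illustration, here is a hypothetical snippet containing the kinds of data-flow anomalies such analysis reports: a variable that may be read before it is ever assigned, and a variable assigned twice with no intervening use.

```c
#include <stdio.h>

static int get_status(int sensor_ok)
{
    int status;          /* declared but not initialized          */
    int retries;

    retries = 3;         /* defined ...                           */
    retries = 5;         /* ... and redefined without being used  */

    if (sensor_ok) {
        status = 0;
    }
    return status;       /* read, but undefined when !sensor_ok   */
}

int main(void)
{
    printf("%d\n", get_status(1));
    return 0;
}
```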

[Figure 2 | Report of variable and parameter usage based on the current test run highlights the file and location within the file where the variable was used, with custom filters that allow more refined testing.]

This deep level of testing—and thorough and rigorous evaluation of testing—can only be reliably done using an integrated suite of software analysis tools.

Jay Thomas is a Technical Development Manager for LDRA Technology and has been working on embedded software applications in aerospace systems since 2000. He specializes in embedded verification implementation and has helped clients on projects including the Lockheed Martin JSF and Boeing 787, as well as medical and industrial applications.

LDRA

www.ldra.com

@ldra_technology

Jay Thomas, LDRA Technology