
Evaluate Status of Back-to-Back Testing for Software Units

When you design a software unit, you can create a model and run different types of simulations, such as normal mode, software-in-the-loop (SIL), and processor-in-the-loop (PIL) simulations. Industry-recognized software development standards, such as ISO 26262, recommend that you compare the results from normal mode simulations on your model with the results from simulations on your generated code. The process of running and comparing these simulation results is called back-to-back testing. Back-to-back testing can help you verify that your model and generated code are functionally equivalent.

You can automatically compare model and code testing results back-to-back in the SIL Code Testing and PIL Code Testing dashboards. The back-to-back testing metrics return the status of back-to-back testing for each test by comparing, at each time step, the outputs of the normal mode model simulation and the outputs of the code executed in SIL or PIL mode. By using these metrics, you can make sure that you run the necessary tests to validate your requirements, identify issues, and assess the functional equivalence of your model and generated code.

In this example, you use a cruise control model to:

  • Monitor the completeness of model and code testing.

  • Identify untested software unit tests.

  • Run tests in normal mode.

  • Run tests in software-in-the-loop (SIL) mode.

  • Identify and fix model and code testing issues.

  • Confirm that you have properly run software unit tests back-to-back.

You can follow the same steps in this example to analyze PIL testing in the PIL Code Testing dashboard. The PIL Code Testing dashboard uses the same layout as the SIL Code Testing dashboard, but the metric results come from PIL tests. For information on test authoring and the different simulation types, see Test Planning and Strategies (Simulink Test) and SIL and PIL Simulations (Embedded Coder).

View Status of Model Testing

The example project cc_CruiseControl contains a software component and several software units. To open the example project, in the MATLAB® Command Window, enter:

openProject("cc_CruiseControl");
scenario = "incomplete";
loadScenario();

To view the current status of model testing of the software units in the project, open the Model Testing Dashboard. On the Project tab, click Model Testing Dashboard. Or, in the Command Window, enter:

modelTestingDashboard

View the model testing metric results for the software unit cc_DriverSwRequest. In the Project panel, click cc_DriverSwRequest.

If you previously collected metric data for a unit, the dashboard populates with the existing data. The dashboard shows model testing metric results such as the number of requirements linked to tests, the number of tests linked to requirements, the number of passing model tests, and the status of model testing coverage. Collecting data for a metric requires a license for the product that supports the underlying artifacts, such as Requirements Toolbox™, Simulink® Test™, or Simulink Coverage™. After you collect metric results, you need only a Simulink Check™ license to view the results. For more information, see Model Testing Metrics. For information on how to navigate and use the Model Testing Dashboard, see Explore Status and Quality of Testing Activities Using Model Testing Dashboard.
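
If you prefer to work from the command line, you can query the same metric data programmatically with the metric API in Simulink Check. The following is a minimal sketch, assuming the project is open; the metric identifier "TestCaseStatus" is an illustrative assumption, so confirm the available identifiers with getAvailableMetricIds before relying on it.

engine = metric.Engine();                     % metric engine for the current project
availableIds = getAvailableMetricIds(engine)  % list the metric identifiers in this installation

% Collect and inspect one metric (replace the ID with one returned above)
execute(engine, "TestCaseStatus");
results = getMetrics(engine, "TestCaseStatus");
disp(results)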

Identify Untested Software Unit Tests

Identify which tests have not been run on the unit. In the Model Test Status section, click the Untested widget.

The dashboard opens the Metric Details with a table of information about the untested tests. The Artifact column contains hyperlinks to the individual untested tests, and the Source column shows the test file that contains each test. You can point to the hyperlinks to view more information about the tests.

Because the untested tests are stored in the same test file, use the hyperlink to open the test file in Test Manager. In the Source column, click cc_DriverSwRequest_Tests.mldatx.
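
If you prefer to script this step, you can load and open the same test file programmatically. A short sketch, assuming the project is open so that the test file is on the path:

testFile = sltest.testmanager.load("cc_DriverSwRequest_Tests.mldatx"); % load the unit test file
sltest.testmanager.view();                                             % open the Test Manager UI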

Run Tests in Normal Mode

You can run the untested tests to see how the model performs. The tests in this example project are set to run normal mode simulations by default. In the Test Manager toolstrip, click the Run button.
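
Alternatively, you can run the loaded tests from the command line. A minimal sketch, assuming the test file is already loaded in the Test Manager as in the previous sketch:

resultSet = sltest.testmanager.run(); % run every enabled test case loaded in the Test Manager
disp(resultSet)                       % display the result set, including pass/fail counts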

The test results show that five tests pass, one test fails, and one test is disabled.

Typically, you want to review and fix non-compliant model testing results before you review code testing results. But for this example, do not fix the failing test yet.

Run Tests in Software-in-the-Loop (SIL) Mode

Run the same tests again, but in software-in-the-loop (SIL) mode, to see how the generated code performs. In Test Manager, click the Test Browser tab. Then, in the toolstrip, click Run > Run Selected in > Software-in-the-Loop (SIL).
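
To script the SIL run instead, you can switch the simulation mode of the test cases before running them. This sketch is hedged: it assumes the testFile object from the earlier sketch and that your test cases support the SimulationMode property.

suites = getTestSuites(testFile);  % test suites in the loaded test file
cases = getTestCases(suites(1));   % test cases in the first suite
for k = 1:numel(cases)
    % assumed property name and value; check the TestCase documentation for your release
    setProperty(cases(k), "SimulationMode", "Software-in-the-Loop (SIL)");
end
silResults = sltest.testmanager.run(); % run the tests again, now executing the generated code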

The test results show that five tests pass, one test fails, and one test is disabled.

View Status of Model and Code Testing

You can view a summary of the model and code testing results in the dashboards. Return to the Model Testing Dashboard window and, in the warning banner, click Collect.

The dashboard collects the metric results by using the recent test results and updates the information in the dashboard. There are no untested tests shown in the Metric Details for the software unit cc_DriverSwRequest.
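
The Collect step also has a programmatic equivalent. A sketch, assuming the project is open; the report location is an illustrative choice:

engine = metric.Engine(); % metric engine for the current project
execute(engine);          % collect results for all available metrics, including test results

% Optionally, generate an HTML report of the collected results
reportLocation = fullfile(pwd, "MetricResultsReport.html"); % illustrative output path
generateReport(engine, "Type", "html-file", "Location", reportLocation);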

To view the overall status of the model testing results, navigate back to the main model testing results. In the breadcrumb, click cc_DriverSwRequest.

The Model Test Status section shows that 71.4% of model tests passed.

To view the status of the code testing results, open the SIL Code Testing dashboard. In the dashboard toolstrip, in the Add Dashboard section, click SIL Code Testing.

The SIL Test Results section shows that 71.4% of SIL code tests passed, but one test failed both during model testing and during SIL code testing. The Back-to-Back Test Status widget shows that you successfully ran each test back-to-back, once in normal mode and once in SIL mode. However, you need to address the failing test results.

Identify and Fix Test Failures

You can investigate the test failures by examining the results. In the SIL Test Results, in the Test Failures section, click the Model failure widget.

The Metric Details show that the test called Detect set failed. To see where those failing results came from, open the test results for Detect set. In the Artifact column, expand Detect set. Then, click Detect set [Normal Mode Result].

The failing test result opens in Test Manager. The Logs section in the results shows that the actual value returned during the simulation did not match the expected value.

To visualize the difference between the actual and expected values from the simulation, open the simulation output. In the Results and Artifacts pane, under Detect set, expand the Sim Output. Then, select actual_request and expected_request. Alternatively, you can use the function metric.loadB2BResults.

The Visualize tab shows the difference between the actual values and expected values. During the test, the software unit cc_DriverSwRequest requested the value RESET instead of SET.

To fix the failure, open the software unit cc_DriverSwRequest. In the dashboard, in the Artifacts panel, double-click cc_DriverSwRequest.

The test fails because the input to the subsystem DetectRaiseAndOverride2 is specified as RESET instead of SET. Double-click the block labeled db_Request_Enum.RESET and change the value to db_Request_Enum.SET. Save the model.
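
If you prefer to make the fix from the command line, you can locate and update the block programmatically. This is a hedged sketch that assumes the faulty input comes from a Constant block; confirm the block in your model before running it.

load_system("cc_DriverSwRequest"); % no-op if the model is already open

% find the Constant block whose value is the faulty RESET request (assumption: it is a Constant block)
blk = find_system('cc_DriverSwRequest', 'BlockType', 'Constant', 'Value', 'db_Request_Enum.RESET');

set_param(blk{1}, 'Value', 'db_Request_Enum.SET'); % change the value from RESET to SET
save_system("cc_DriverSwRequest");                 % save the model so the tests use the fix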

Re-Run Tests and Confirm Testing Back-to-Back

In Test Manager, re-run the test Detect set. In the Test Browser tab, select Detect set. Then, in the toolstrip, click Run. The test passes.

Check the status of model and code testing in the SIL Code Testing dashboard. In the dashboard, click Collect to re-collect the metric results and include the latest test results.

The SIL Test Results show that there are now zero model test failures, but there is still a SIL test failure, and one test is failing back-to-back testing. Because you only re-ran the test Detect set in normal mode, there are no updated SIL test results for the test, and you did not run the test back-to-back.

Get the updated test results and complete the back-to-back testing by re-running the test Detect set in SIL mode. In Test Manager, in the Test Browser tab, select Detect set. Then, in the toolstrip, click Run > Run Selected in > Software-in-the-Loop (SIL). The test passes.
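
You can also script this targeted re-run by selecting the single test case by name. A hedged sketch, assuming the testFile object and the SimulationMode property from the earlier sketches:

suites = getTestSuites(testFile);
allCases = getTestCases(suites(1));
tc = allCases(string({allCases.Name}) == "Detect set"); % pick the test case by name

setProperty(tc, "SimulationMode", "Software-in-the-Loop (SIL)"); % assumed property name and value
result = run(tc);                                                % run only this test case in SIL mode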

View the latest code testing results to confirm that each of the tests for the software unit passed and ran back-to-back. Return to the SIL Code Testing dashboard.

The dashboard shows zero model failures, 85.7% of SIL tests passed, and 85.7% of the SIL tests ran back-to-back against the normal mode tests. The remaining test is a disabled test that does not run and therefore has not been tested back-to-back. If you have a disabled test, review whether you need to enable it to properly test your software unit.

Tolerance Considerations

This example project uses simulation tests to test the software unit cc_DriverSwRequest.

For baseline tests and simulation tests, the back-to-back metrics use the Simulation Data Inspector to compare the logged outputs from the test and determine the back-to-back testing status. For these tests, the metric specifies the absolute tolerance and relative tolerance values for each signal depending on the data type of the signal. If you need the metric to consider individual signal tolerances in the comparison, use an equivalence test instead.
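
To see how a tolerance-based comparison behaves, you can compare two logged runs directly by using the Simulation Data Inspector programmatic interface. A minimal sketch, assuming you have at least two runs logged (for example, a normal mode run and a SIL run); the tolerance values are illustrative only:

runIDs = Simulink.sdi.getAllRunIDs(); % IDs of all runs logged in the Simulation Data Inspector
normalID = runIDs(end-1);             % for example, the normal mode run
silID = runIDs(end);                  % for example, the SIL run

% compare the runs with illustrative global tolerances
diffResult = Simulink.sdi.compareRuns(normalID, silID, "AbsTol", 1e-6, "RelTol", 1e-4);

% report the comparison status of each aligned signal
for k = 1:diffResult.Count
    sigResult = getResultByIndex(diffResult, k);
    disp(sigResult.Status)
end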

For more information, see the computation details for the back-to-back testing metrics in the SIL Code Testing and PIL Code Testing dashboards.
