Test Case Planning

You can use Simulink® Test™ to functionally test models and code. Before you create a test, consider:

  • What model or model component are you testing?

  • Does the component integrate code, such as using a C Caller block?

  • Do you need to run your test in multiple environments, such as using generated code on an external target?

  • What is the test objective? For example, do you need requirements verification, data comparison, or a quick test of your design?

  • Does your test use multiple parametric values?

  • Do you require coverage results?

System to Test

You can test a whole model or focus on a model component. You can create a test case for the entire model, or you can use a test harness:

  • A test harness lets you isolate a whole model for testing. You can add verification blocks and logic to the test harness to keep them out of your main model. You can also add other models that simulate the environment.

  • A test harness lets you isolate a model component from the main model for unit testing. For information, see Create a Test Harness.

A test harness is associated with a block or an entire model. A test harness contains a copy of the block or a reference to the model, and inputs and outputs placed for testing purposes. You can add other blocks to the test harness. You can save the harness with your model file, or you can save it in a file separate from the model. The test harness works the same whether it is internal or external to the model. You can also specify whether to synchronize changes between the model and the test harness.

The figure shows an example of a test harness. The component under test is the shift_logic block, which is copied from the main model during harness creation. The copy of the shift_logic block is linked to the main model. The inputs are Inport blocks and the output is an Outport block. The vertical subsystems contain signal specification blocks and routing that connects the component interface to the inputs and outputs.

Before you create a test harness, decide whether to save the harness with the model and how to synchronize the harness with the model. For more information, see Test Harness and Model Relationship and Synchronize Changes Between Test Harness and Model.
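
For example, you can create and open a test harness programmatically with the sltest.harness functions. This is a minimal sketch; the model name, block path, and harness name are placeholders for your own design.

    % Load the model that owns the component (model name is a placeholder)
    load_system('myModel');

    % Create a test harness for a component, saved in its own file
    sltest.harness.create('myModel/Controller', ...
        'Name','Controller_Harness', ...
        'SaveExternally',true);

    % Open the harness for editing
    sltest.harness.open('myModel/Controller','Controller_Harness');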

Testing Goals

Before you author your test, understand your goals.

Requirements Verification

You can assess whether a model behaves according to requirements. For example, suppose your requirements state that a transmission must shift gears at certain speeds. Create a test case for the transmission controller model, or create a test harness for the controller component. Verify whether the model meets requirements by:

  • Authoring verify statements in the model or test harness.

  • Including Model Verification blocks in the model or test harness.

  • Capturing simulation output in the test case, and comparing simulation output to baseline data.

Run the test case and capture results in the Test Manager. You can link the test case to requirements authored in Simulink Requirements™, collect coverage with Simulink Coverage™, and add test cases for more scenarios. For an example, see Test Downshift Points of a Transmission Controller.
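
For example, a verify statement in a Test Assessment or Test Sequence block evaluates a logical expression during simulation and reports a pass or fail result in the Test Manager. In this sketch, the signal names, identifier, and threshold are assumptions for illustration:

    % Verify that the controller holds first gear at low speed
    % (signal names and the 10 mph threshold are assumed for illustration)
    verify(speed >= 10 | gear == 1, ...
        'SimulinkTest:gearAtLowSpeed', ...
        'Gear must be 1 when speed is below 10 mph');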

Data Comparison

Using the Test Manager, you can compare simulation results to baseline data, or to another simulation. In either case, you must specify the signals to capture.

In a baseline test, establish the baseline data, which are the expected outputs. You can define baseline data manually, import baseline data from an Excel® or MAT file, or capture baseline data from a simulation.
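
For example, you can create a baseline test case and capture its baseline from a simulation using the Test Manager API. This is a minimal sketch; the file, model, and test names are placeholders, and the model must log the signals you want to compare:

    % Create a test file with a baseline test case
    tf = sltest.testmanager.TestFile('baselineTests.mldatx');
    ts = getTestSuites(tf);
    tc = createTestCase(ts, 'baseline', 'Controller Baseline');
    setProperty(tc, 'Model', 'myModel');   % model name is a placeholder

    % Simulate the model and save the logged outputs as baseline data
    captureBaselineCriteria(tc, 'baseline_data.mat', true);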

In equivalence testing, you compare two simulations to determine whether they are equivalent. For example, you can compare results from two solvers, or compare results from simulations in normal and software-in-the-loop (SIL) mode. Explore the impact of different parameter values or calibration data sets by running back-to-back tests. For an example, see Test Two Simulations for Equivalence.
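
For example, an equivalence test case defines two simulations by index and compares their logged signals. In this sketch, both simulations run the same placeholder model, one in normal mode and one in accelerator mode:

    tf = sltest.testmanager.TestFile('equivalenceTests.mldatx');
    ts = getTestSuites(tf);
    tc = createTestCase(ts, 'equivalence', 'Normal vs Accelerator');

    % Configure each simulation by its index (model name is a placeholder)
    setProperty(tc, 'SimulationIndex',1, 'Model','myModel', ...
        'SimulationMode','Normal');
    setProperty(tc, 'SimulationIndex',2, 'Model','myModel', ...
        'SimulationMode','Accelerator');

    % Run simulation 1 once to select the signals to compare
    captureEquivalenceCriteria(tc);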

For comparison tests, you can accept results that fall within a technically acceptable difference by specifying value or time tolerances. You can specify this tolerance before you run the test, or view the results and adjust the tolerance afterward. For more information, see Set Signal Tolerances.
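
For example, after capturing baseline criteria, you can adjust tolerances programmatically. This sketch assumes a baseline test case tc, created as in the earlier example, and loosens the comparison on its first captured signal:

    % Allow small numeric differences on the first baseline signal
    crit = getBaselineCriteria(tc);   % array of captured signal criteria
    crit(1).AbsTol = 1e-4;            % absolute tolerance on values
    crit(1).RelTol = 0.01;            % relative tolerance (1 percent)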

Simulation Testing

In cases where your test requires only that a model simulate without errors, you can run a simulation test. A simulation test is useful if your model is still in development, or if you have an existing test model that contains inputs and assessments and logs relevant data. Using the Test Manager to run simulation tests lets you manage numerous tests and capture and manage results systematically.
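
For example, a minimal simulation test case checks only that the model simulates without errors (and that any assessments in the model pass). The file and model names in this sketch are placeholders:

    tf = sltest.testmanager.TestFile('smokeTests.mldatx');
    ts = getTestSuites(tf);
    tc = createTestCase(ts, 'simulation', 'Simulates Without Error');
    setProperty(tc, 'Model', 'myModel');

    results = run(tc);   % fails if simulation errors or an assessment fails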

Note that Simulink and Stateflow® breakpoints are not supported when you run a test from the Test Manager. To debug with breakpoints, select the Run with Stepper button in the Test Manager, run the test, and step through the simulation with the simulation stepper.

Multiple Release Testing

You can set up your test to run in other releases of MATLAB® that you have installed, R2011b or later. This capability lets you run tests in releases that do not have Simulink Test. You can run the same test in multiple releases to verify that it passes in each of them. For more information, see Run Tests in Multiple Releases.

SIL and PIL Testing

You can verify the output of generated code by running back-to-back simulations in model and SIL (software-in-the-loop) or PIL (processor-in-the-loop) mode. The same back-to-back test can run multiple test scenarios by iterating over different test vectors defined in a MAT or Excel file. You can apply tolerances to your results to allow for technically acceptable differences between the model and the generated code. Tolerances can cover acceptable differences in both values and timing, which is useful when the code runs on hardware in real time.
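
For example, a back-to-back test is an equivalence test whose second simulation runs in SIL mode. This sketch assumes the placeholder model is configured for code generation; the simulation mode strings match those shown in the Test Manager:

    tf = sltest.testmanager.TestFile('silTests.mldatx');
    ts = getTestSuites(tf);
    tc = createTestCase(ts, 'equivalence', 'Model vs SIL');

    % Simulation 1 runs the model; simulation 2 runs the generated code
    setProperty(tc, 'SimulationIndex',1, 'Model','myModel', ...
        'SimulationMode','Normal');
    setProperty(tc, 'SimulationIndex',2, 'Model','myModel', ...
        'SimulationMode','Software-in-the-Loop (SIL)');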

Real-Time Testing

With Simulink Real-Time™, you can include the effects of physical plants, signals, and embedded hardware by executing tests in HIL (hardware-in-the-loop) mode on a real-time target computer. By running a baseline test in real time, you can compare results against known good data. You can also run a back-to-back test between a model, SIL, or PIL simulation and a real-time simulation. In either case, you select the target computer configured with Simulink Real-Time in the Test Manager.

Controlling Parameters and Configuration Settings

You can control parameters and configuration settings from the Test Manager, for example, to explore design options or iterate over calibration sets, solvers, and data type settings. You can create sets of parameters of interest and override values when each iteration runs. You can also perform parameter sweeps by writing more complex scripts directly in the Test Manager.
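
For example, you can add a parameter set to a test case and override a value programmatically. In this sketch, the test case tc is created as in the earlier examples, and the parameter name Kp is an assumption:

    % Add a named parameter set and override a model parameter
    ps = addParameterSet(tc, 'Name', 'HighGain');
    po = addParameterOverride(ps, 'Kp', 2.5);   % Kp is a placeholder name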

Coverage

With Simulink Coverage, you can collect coverage data to help quantify the extent to which your model or code is tested. When you set up coverage collection for your test file, the test results include coverage for the system under test and, optionally, referenced models. You can specify the coverage metrics to return.
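
For example, you can enable coverage collection on a test file programmatically. This sketch turns on decision, condition, and MC/DC metrics; the test file name is a placeholder:

    tf = sltest.testmanager.TestFile('covTests.mldatx');
    cs = getCoverageSettings(tf);
    cs.RecordCoverage = true;    % coverage for the system under test
    cs.MdlRefCoverage = true;    % also cover referenced models
    cs.MetricSettings = 'dcm';   % decision, condition, and MC/DC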

If your results show incomplete coverage, you can increase coverage by:

  • Adding test cases manually to the test file.

  • Generating test cases to increase coverage, with Simulink Design Verifier™. You can generate test cases from the Test Manager results.

In either case, you can link the new test cases to requirements, which certain certifications require.

Iterations

An iteration is a variation of a test case that uses a particular set of data. For example, suppose that you have multiple sets of input data. You can set up one iteration of your test case for each external input file, where the tests are otherwise the same. You can also specify parameter sets and override values in iterations. Another way to override parameters is to use scripted parameter sweeps, which let you iterate through many values.
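
For example, you can script iterations that sweep a workspace variable. In this sketch, the test case tc is created as in the earlier examples, and the variable name Kp is an assumption:

    % Create one iteration per value of the swept variable
    for gain = [0.5 1.0 2.0]
        iter = sltestiteration;   % create a test iteration object
        setVariable(iter, 'Name','Kp', ...
            'Source','base workspace', 'Value',gain);
        addIteration(tc, iter);
    end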

When you design your test, you decide whether you want multiple tests or iterations of the same test. One advantage of using iterations is that you can run your tests using fast restart. Iterations make sense if you are changing only parameters, inputs, or configuration settings but otherwise the tests are the same. Use separate test cases if you need independent configuration control or each test relates to a different requirement.

For more information on iterations, see Test Iterations.
