Create and Run Test Cases with Scripts
Create and Run a Baseline Test Case
This example shows how to use sltest.testmanager functions, classes, and methods to automate tests and generate reports. You can create a test case, edit the test case criteria, run the test case, export simulation output, and generate results reports programmatically. The example compares the simulation output of the model to a baseline.
% Open the model for this example
openExample('sldemo_absbrake');

% Create the test file, test suite, and test case structure
tf = sltest.testmanager.TestFile('API Test File');
ts = createTestSuite(tf,'API Test Suite');
tc = createTestCase(ts,'baseline','Baseline API Test Case');

% Remove the default test suite
tsDel = getTestSuiteByName(tf,'New Test Suite 1');
remove(tsDel);

% Assign the system under test to the test case
setProperty(tc,'Model','sldemo_absbrake');

% Capture the baseline criteria
baseline = captureBaselineCriteria(tc,'baseline_API.mat',true);

% Test a new model parameter by overriding it in the test case
% parameter set
ps = addParameterSet(tc,'Name','API Parameter Set');
po = addParameterOverride(ps,'m',55);

% Set the baseline criteria tolerance for one signal
sc = getSignalCriteria(baseline);
sc(1).AbsTol = 9;

% Run the test case and return an object with results data
ResultsObj = run(tc);

% Get the test case result and the Sim Output run dataset
tcr = getTestCaseResults(ResultsObj);
runDataset = getOutputRuns(tcr);

% Open the Test Manager so you can view the simulation
% output and comparison data
sltest.testmanager.view;

% Generate a report from the results data
filePath = 'test_report.pdf';
sltest.testmanager.report(ResultsObj,filePath,...
    'Author','Test Engineer',...
    'IncludeSimulationSignalPlots',true,...
    'IncludeComparisonSignalPlots',true);

% Export the Sim Output run dataset
dataset = export(runDataset);
The test case fails because only one of the signal comparisons between the simulation output and the baseline criteria is within tolerance. The results report is a PDF file that opens when report generation is complete. For more report generation settings, see the sltest.testmanager.report function reference page.
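You can also check the test outcome programmatically and clean up the Test Manager session when you are done. The following is a minimal sketch that reuses the tcr variable from the example above; it assumes the Outcome property of the test case result is available in your release.

% Check the overall outcome of the test case result
disp(tcr.Outcome);

% Remove results and test files from the Test Manager when finished
sltest.testmanager.clearResults;
sltest.testmanager.clear;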
Create and Run an Equivalence Test Case
This example compares signal data between two simulations to test for equivalence.
% Open the model for this example
openExample('sldemo_absbrake');

% Create the test file, test suite, and test case structure
tf = sltest.testmanager.TestFile('API Test File 1');
ts = createTestSuite(tf,'API Test Suite');
tc = createTestCase(ts,'equivalence','Equivalence Test Case');

% Remove the default test suite
tsDel = getTestSuiteByName(tf,'New Test Suite 1');
remove(tsDel);

% Assign the system under test to the test case
% for Simulation 1 and Simulation 2
setProperty(tc,'Model','sldemo_absbrake','SimulationIndex',1);
setProperty(tc,'Model','sldemo_absbrake','SimulationIndex',2);

% Add a parameter override to Simulation 1 and 2
ps1 = addParameterSet(tc,'Name','Parameter Set 1','SimulationIndex',1);
po1 = addParameterOverride(ps1,'Rr',1.20);
ps2 = addParameterSet(tc,'Name','Parameter Set 2','SimulationIndex',2);
po2 = addParameterOverride(ps2,'Rr',1.24);

% Capture equivalence criteria
eq = captureEquivalenceCriteria(tc);

% Set the equivalence criteria tolerance for one signal
sc = getSignalCriteria(eq);
sc(1).AbsTol = 2.2;

% Run the test case and return an object with results data
ResultsObj = run(tc);

% Open the Test Manager so you can view the simulation
% output and comparison data
sltest.testmanager.view;
In the Equivalence Criteria Result section of the Test Manager results, the yout.Ww signal passes because of the tolerance value. The other signal comparisons do not pass, and the overall test case fails.
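If you want more of the comparisons to pass, you can loosen the tolerance on every compared signal instead of only the first one. The following minimal sketch reuses the eq and tc variables from the example above; the tolerance value is arbitrary and only for illustration.

% Apply one absolute tolerance to every signal in the equivalence criteria
sc = getSignalCriteria(eq);
for k = 1:numel(sc)
    sc(k).AbsTol = 2.2;
end

% Rerun the test case with the updated criteria
ResultsObj = run(tc);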
Run a Test Case and Collect Coverage
This example shows how to use a simulation test case to collect coverage results. To collect coverage, you need a Simulink® Coverage™ license.
% Open the model for this example
openExample('sldemo_autotrans');

% Create the test file, test suite, and test case structure
tf = sltest.testmanager.TestFile('API Test File');
ts = createTestSuite(tf,'API Test Suite');
tc = createTestCase(ts,'simulation','Coverage Test Case');

% Remove the default test suite
tsDel = getTestSuiteByName(tf,'New Test Suite 1');
remove(tsDel);

% Assign the system under test to the test case
setProperty(tc,'Model','sldemo_autotrans');

% Turn on coverage settings at the test-file level
cov = getCoverageSettings(tf);
cov.RecordCoverage = true;

% Enable MCDC and signal range coverage metrics
cov.MetricSettings = 'mr';

% Run the test file and return an object with results data
rs = run(tf);

% Get the coverage results
cr = getCoverageResults(rs);

% Open the Test Manager to view results
sltest.testmanager.view;
In the Results and Artifacts pane of the Test Manager, click Results to view the aggregated coverage results.
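If you also want a standalone coverage report outside the Test Manager, you can pass the collected coverage data to the Simulink Coverage cvhtml function. This is a minimal sketch that assumes cr from the example above contains the cvdata returned by getCoverageResults; the report file name is arbitrary.

% Generate a standalone HTML coverage report from the collected coverage data
cvhtml('coverage_report',cr);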
Create and Run Test Case Iterations
This example shows how to create test iterations. You can programmatically create table iterations that appear in the Iterations section of a test case. The example creates a simulation test case and assigns a Signal Editor scenario to each iteration.
% Open the model for this example
openExample('sldemo_autotrans');

% Create the test file, test suite, and test case structure
tf = sltest.testmanager.TestFile('Iterations Test File');
ts = getTestSuites(tf);
tc = createTestCase(ts,'simulation','Simulation Iterations');

% Specify the model as the system under test
setProperty(tc,'Model','sldemo_autotrans');

% Set up the first table iteration
% Create the iteration object
testItr1 = sltestiteration;
% Set iteration settings
setTestParam(testItr1,'SignalEditorScenario','Passing Maneuver');
% Add the iteration to the test case
addIteration(tc,testItr1);

% Set up another table iteration
% Create the iteration object
testItr2 = sltestiteration;
% Set iteration settings
setTestParam(testItr2,'SignalEditorScenario','Coasting');
% Add the iteration to the test case
addIteration(tc,testItr2);

% Run the test case that contains the iterations
results = run(tc);

% Get the iteration results
tcResults = getTestCaseResults(results);
iterResults = getIterationResults(tcResults);
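After the run, you can inspect each iteration result programmatically instead of opening the Test Manager. The following minimal sketch assumes each element of iterResults exposes Name and Outcome properties in your release.

% Display the name and outcome of every iteration result
for k = 1:numel(iterResults)
    disp(iterResults(k).Name);
    disp(iterResults(k).Outcome);
end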
See Also
sltest.testmanager.TestFile | sltest.testmanager.TestSuite | sltest.testmanager.TestCase | sltest.testmanager.TestCaseResult | sltestiteration | sltest.testmanager.TestIteration | sltest.testmanager.TestIterationResult | sltest.testmanager.ParameterSet | sltest.testmanager.BaselineCriteria | sltest.testmanager.EquivalenceCriteria | sltest.testmanager.run | sltest.testmanager.report