
Collect Metrics on Model Testing Artifacts Programmatically

This example shows how to programmatically assess the status and quality of requirements-based testing activities in a project. When you develop software components by using Model-Based Design, you use requirements-based testing to verify your models. You can assess the testing status of one component by using the metric API to collect metric data on the traceability between requirements and test cases and on the status of test results. The metrics measure characteristics of completeness and quality of requirements-based testing that reflect industry standards such as ISO 26262 and DO-178. After collecting metric results, you can access the results and export them to a file.

Open the Project

Open the project that includes the models and testing files. At the command prompt, enter dashboardCCProjectStart. The project contains models, along with requirements and test cases for those models. Some of the requirements have traceability links to the models and test cases; these links help verify that the model functionality meets the requirements.

dashboardCCProjectStart

Collect Metric Results

Create a metric.Engine object for the current project.

metric_engine = metric.Engine();

To specify the metrics that you want to collect, create an array that lists the metric identifiers. For this example, collect results for the metrics TestCasesPerRequirementDistribution and TestCaseStatus. For a list of metrics and their identifiers, see Model Testing Metrics.

metric_IDs = {'TestCasesPerRequirementDistribution','TestCaseStatus'};

When you collect metric results, you can collect results for each component in the project or for one component at a time. The Model Testing Dashboard considers each model in the project to represent the algorithm for a component. When you collect and view results for a component, the metrics return data for the artifacts that trace to the model. For this example, collect metrics for db_DriverSwRequest. Create an array that identifies the path to the model file in the project and the name of the model.

component = {fullfile(pwd, 'models', 'db_DriverSwRequest.slx'), 'db_DriverSwRequest'};

Collect results for the metrics by executing the engine. Use the argument ArtifactScope to specify the component for which you want to collect results. The engine runs the metrics on only the artifacts that trace to the model that you specify. Collecting results for these metrics requires a Simulink® Test™ license and a Simulink Requirements™ license.

execute(metric_engine, metric_IDs, 'ArtifactScope', component)

Access Results

Use the getMetrics function of the engine to access the metric results. The function returns the metric.Result objects that contain the result data for the specified component and metrics. Accessing results that you already collected for the metrics requires only a Simulink Check™ license. For this example, store the results for each metric in an array.

results_TestCasesPerReqDist = getMetrics(metric_engine, 'TestCasesPerRequirementDistribution');
results_TestStatus = getMetrics(metric_engine, 'TestCaseStatus');

The metric TestCasesPerRequirementDistribution returns a distribution of the number of test cases linked to each functional requirement for the model. Display the bin edges and bin counts of the distribution, which are fields in the Value field of the metric.Result object. The left edge of each bin shows the number of test case links and the bin count shows the number of requirements that are linked to that number of test cases. The sixth bin edge is the upper limit of the count of test cases per requirement, which shows that the fifth bin contains requirements that have four or more test cases.

disp(['Component:  ', results_TestCasesPerReqDist(1).Artifacts(1).Name])
disp(['  Tests per Requirement:  ', num2str(results_TestCasesPerReqDist(1).Value.BinEdges)])
disp(['  Requirements:  ', num2str(results_TestCasesPerReqDist(1).Value.BinCounts)])
Component:  db_DriverSwRequest
  Tests per Requirement:  0   1   2   3  4  18446744073709551615
  Requirements:  12   9   0   0   0

This result shows that, for the component db_DriverSwRequest, 12 requirements are not linked to any test case and 9 requirements are each linked to one test case. Each requirement should be linked to at least one test case that verifies that the model meets the requirement. The distribution also lets you check whether a requirement has many more test cases than the other requirements, which might indicate that the requirement is too general and that you should break it into more granular requirements.
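
Because the first bin counts requirements that have no linked test cases, you can also use that bin count as a simple programmatic check. The following sketch is illustrative; the decision to warn, and the warning text, are assumptions rather than part of the metric API.

% Flag requirements that are not linked to any test case.
% BinCounts(1) is the bin for zero linked test cases.
untestedReqCount = results_TestCasesPerReqDist(1).Value.BinCounts(1);
if untestedReqCount > 0
    warning('%d requirements are not linked to any test case.', untestedReqCount)
end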

To export the data to a spreadsheet, create a cell array traceabilityData that stores the bin edges and counts. Then, export the cell array by using writecell.

traceabilityData = {'Test Cases per Requirement', 'Number of Requirements'};

for n = 1:length(results_TestCasesPerReqDist(1).Value.BinCounts)
   
   traceabilityData{n+1, 1} = results_TestCasesPerReqDist(1).Value.BinEdges(n);
   traceabilityData{n+1, 2} = results_TestCasesPerReqDist(1).Value.BinCounts(n);

end

filename = 'TestCasesPerRequirementDist.xlsx';
writecell(traceabilityData, filename);

The metric TestCaseStatus assesses the testing status of each test case for the component and returns one of these numeric results:

  • 0 — Failed

  • 1 — Passed

  • 2 — Disabled

  • 3 — Untested
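
If you prefer readable status names to numeric codes, you can map the metric values to labels. This minimal sketch assumes the 0 to 3 codes listed above; because MATLAB indexing is 1-based, add 1 to the metric value.

% Map the numeric status codes to readable labels.
statusLabels = ["Failed", "Passed", "Disabled", "Untested"];
disp(statusLabels(results_TestStatus(1).Value + 1))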

Display the name and status of each test case. Then, store the name and status in a cell array and export the data to a spreadsheet.

testStatusData = {'Test Case', 'Status'};

for n=1:length(results_TestStatus)

   disp(['Test Case: ', results_TestStatus(n).Artifacts(1).Name])
   disp([' Status: ', num2str(results_TestStatus(n).Value)])

   testStatusData{n+1,1} = results_TestStatus(n).Artifacts(1).Name;
   testStatusData{n+1,2} = results_TestStatus(n).Value;

end

filename = 'TestCaseStatus.xlsx';
writecell(testStatusData, filename);
Test Case: Set button
 Status: 3
Test Case: Decrement button hold
 Status: 3
Test Case: Cancel button
 Status: 3
Test Case: Resume button
 Status: 3
Test Case: Decrement button short
 Status: 3
Test Case: Increment button short
 Status: 3
Test Case: Enable button
 Status: 3
Test Case: Increment button hold
 Status: 3

For this example, the tests have not been run, so each test case returns a status of 3.

By running a script that collects these metrics, you can automatically monitor the testing status of your project. Use the results to monitor progress toward testing completeness or to detect downstream testing impacts when you make changes to artifacts in the project. To further explore the results in the Model Testing Dashboard, at the command line, type modelTestingDashboard.
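
For example, you could wrap the metric collection and a simple report in a function and run it from a scheduled job or a continuous integration pipeline. The following sketch is illustrative: the function name checkModelTestingStatus, the chosen checks, and the report text are assumptions, not part of the metric API.

function checkModelTestingStatus(modelPath, modelName)
% Illustrative helper: collect the two metrics used in this example for one
% component and report basic requirements-based testing gaps.
metric_engine = metric.Engine();
metric_IDs = {'TestCasesPerRequirementDistribution','TestCaseStatus'};
execute(metric_engine, metric_IDs, 'ArtifactScope', {modelPath, modelName})

% Requirements with no linked test cases (first bin of the distribution).
distResult = getMetrics(metric_engine, 'TestCasesPerRequirementDistribution');
fprintf('%d requirements have no linked test cases.\n', distResult(1).Value.BinCounts(1))

% Test cases that are failed (0) or untested (3).
statusResults = getMetrics(metric_engine, 'TestCaseStatus');
statusValues = arrayfun(@(r) double(r.Value), statusResults);
fprintf('%d of %d test cases are failed or untested.\n', ...
    nnz(statusValues == 0 | statusValues == 3), numel(statusValues))
end

For the component in this example, you could call the function as checkModelTestingStatus(fullfile(pwd, 'models', 'db_DriverSwRequest.slx'), 'db_DriverSwRequest').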
