
Collect Metrics on Model Testing Artifacts Programmatically

This example shows how to programmatically assess the status and quality of requirements-based testing activities in a project. When you develop software units by using Model-Based Design, you use requirements-based testing to verify your models. You can assess the testing status of one unit model by using the metric API to collect metric data on the traceability between requirements and test cases and on the status of test results. The metrics measure characteristics of the completeness and quality of requirements-based testing that reflect industry standards such as ISO 26262 and DO-178. After collecting metric results, you can access the results or export them to a file. By running a script that collects these metrics, you can automatically analyze the testing status of your project, for example, as part of a continuous integration system. Use the results to monitor testing completeness or to detect downstream testing impacts when you change artifacts in the project.

Open the Project

Open the project that includes the models and testing files. At the command prompt, enter dashboardCCProjectStart. The project contains models, along with requirements and test cases for those models. Some of the requirements have traceability links to the models and test cases, which help to verify that a model's functionality meets the requirements.

dashboardCCProjectStart

Collect Metric Results

Create a metric.Engine object for the current project.

metric_engine = metric.Engine();

Update the trace information for metric_engine to reflect any pending artifact changes and to ensure that all test results are tracked.

updateArtifacts(metric_engine)

Create an array of metric identifiers for the metrics you want to collect. For this example, create a list of all available metric identifiers.

metric_Ids = getAvailableMetricIds(metric_engine);
This figure shows the metric identifier for each widget in the dashboard.

[Figure: Model Testing Dashboard listing the metric identifiers for each widget]

For a list of metrics and their identifiers, see Model Testing Metrics.
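
If you are interested in only a few metrics, you can build the identifier array yourself instead of collecting every available metric. This minimal sketch defines a hypothetical variable, subset_Ids, with the two identifiers that this example examines later; depending on your release, you can pass the identifiers as a string array or a cell array of character vectors.

subset_Ids = ["TestCaseStatus","TestCasesPerRequirementDistribution"];   % identifiers from the Model Testing Metrics list

You can then pass subset_Ids instead of metric_Ids to execute in the steps that follow.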

When you collect metric results, you can collect results for one unit at a time or for each unit in the project.

Collect Results for One Unit

When you collect and view results for a unit, the metrics return data for the artifacts that trace to the model.

Collect the metric results for the unit db_DriverSwRequest.

Create an array that identifies the path to the model file in the project and the name of the model.

unit = {fullfile(pwd,'models','db_DriverSwRequest.slx'),'db_DriverSwRequest'};
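
The call to pwd assumes that your current folder is the project root. If it is not, you can build the same path from the open project instead, as in this minimal sketch that uses the currentProject function:

proj = currentProject();   % the project opened by dashboardCCProjectStart
unit = {fullfile(char(proj.RootFolder),'models','db_DriverSwRequest.slx'),'db_DriverSwRequest'};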

Execute the engine and use the argument ArtifactScope to specify the unit for which you want to collect results. The engine runs the metrics on only the artifacts that trace to the model that you specify. Collecting results for these metrics requires a Simulink® Test™ license, a Simulink Requirements™ license, and a Simulink Coverage™ license.

execute(metric_engine, metric_Ids, 'ArtifactScope', unit)

Collect Results for Each Unit in the Project

To collect the results for each unit in the project, execute the engine without the argument for ArtifactScope.

execute(metric_engine, metric_Ids)

For more information on collecting metric results, see execute.

Access Results

Generate a report file that contains the results for all units in the project. For this example, specify the HTML file format and name the report MetricResultsReport.html.

reportLocation = fullfile(pwd, 'MetricResultsReport.html');
generateReport(metric_engine,'Type','html-file','Location',reportLocation);

Open the HTML report from the root folder of the project. To open the table of contents and navigate to results for each unit, click the menu icon in the top-left corner of the report. For each unit in the report, there is an artifact summary table that displays the size and structure of that unit.

[Figure: Artifact Summary table listing the Number of Artifacts for each Artifact Type]
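
To open the report from the MATLAB command line instead of from the file browser, you can, for example, pass the report location to the web function:

web(reportLocation)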

Saving the metric results in a report file allows you to access the results without opening the project and the dashboard. Alternatively, you can open the dashboard to see the results and explore the artifacts.

modelTestingDashboard

To access the results programmatically, use the getMetrics function. The function returns the metric.Result objects that contain the result data for the specified unit and metrics. For this example, store the results for the metrics TestCaseStatus and TestCasesPerRequirementDistribution in corresponding arrays.

results_TestCasesPerReqDist = getMetrics(metric_engine, 'TestCasesPerRequirementDistribution');
results_TestStatus = getMetrics(metric_engine, 'TestCaseStatus');

View Distribution of Test Case Links per Requirement

The metric TestCasesPerRequirementDistribution returns a distribution of the number of test cases linked to each functional requirement for the unit. Display the bin edges and bin counts of the distribution, which are fields in the Value field of the metric.Result object. The left edge of each bin is the number of test case links, and the bin count is the number of requirements that have that number of linked test cases. The sixth bin edge is 18446744073709551615, the upper limit of the count of test cases per requirement, which means that the fifth bin contains the requirements that have four or more test cases.

disp(['Unit:  ', results_TestCasesPerReqDist(1).CollectionScope(1).Name])
disp(['  Tests per Requirement:  ', num2str(results_TestCasesPerReqDist(1).Value.BinEdges)])
disp(['  Requirements:  ', num2str(results_TestCasesPerReqDist(1).Value.BinCounts)])
Unit:  db_DriverSwRequest
  Tests per Requirement:  0   1   2   3  4  18446744073709551615
  Requirements:  12   9   0   0   0

This result shows that for the unit db_DriverSwRequest there are 12 requirements that are not linked to test cases and 9 requirements that are linked to one test case. Each requirement should be linked to at least one test case that verifies that the model meets the requirement. The distribution also allows you to check if a requirement has many more test cases than the other requirements, which might indicate that the requirement is too general and that you should break it into more granular requirements.
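
You can also read these counts programmatically. For example, this minimal sketch pulls the number of requirements without any test case links out of the first bin of the distribution; the variable name unlinkedReqCount is illustrative.

unlinkedReqCount = results_TestCasesPerReqDist(1).Value.BinCounts(1);   % requirements in the first bin (zero linked test cases)
disp(['Requirements without test cases: ', num2str(unlinkedReqCount)])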

View Test Case Status Results

The metric TestCaseStatus assesses the testing status of each test case for the unit and returns one of these numeric results:

  • 0 — Failed

  • 1 — Passed

  • 2 — Disabled

  • 3 — Untested

Display the name and status of each test case.

for n = 1:length(results_TestStatus)
   disp(['Test Case: ', results_TestStatus(n).Artifacts(1).Name])
   disp([' Status: ', num2str(results_TestStatus(n).Value)])
end
Test Case: Set button
 Status: 3
Test Case: Decrement button hold
 Status: 3
Test Case: Cancel button
 Status: 3
Test Case: Resume button
 Status: 3
Test Case: Decrement button short
 Status: 3
Test Case: Increment button short
 Status: 3
Test Case: Enable button
 Status: 3
Test Case: Increment button hold
 Status: 3

For this example, the tests have not been run, so each test case returns a status of 3.
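
As a sketch of the kind of check you might run in a continuous integration script, you can summarize the statuses and flag any test case that has not passed. The label array and the pass check below are illustrative and rely only on the numeric codes listed above.

statusLabels = ["Failed","Passed","Disabled","Untested"];
statusValues = [results_TestStatus.Value];   % numeric status for each test case
for s = 0:3
    fprintf('%s: %d test case(s)\n', statusLabels(s+1), nnz(statusValues == s));
end
if any(statusValues ~= 1)
    warning('Not all test cases have passed for this unit.');
end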
