
Explore Status and Quality of Testing Activities Using the Model Testing Dashboard

The Model Testing Dashboard collects metric data from the model design and testing artifacts in a project to help you assess the status and quality of your requirements-based model testing.

The dashboard analyzes the artifacts in a project, such as requirements, models, and test results. Each metric in the dashboard measures a different aspect of the quality of the testing of your model and reflects guidelines in industry-recognized software development standards, such as ISO 26262 and DO-178C.

This example shows how to assess the testing status of a unit by using the Model Testing Dashboard. If the requirements, models, or tests in your project change, use the dashboard to assess the impact on testing and update the artifacts to achieve your testing goals.

Explore Testing Artifacts and Metrics for a Project

Open the project that contains the models and testing artifacts. For this example, in the MATLAB® Command Window, enter:

dashboardCCProjectStart('incomplete')

Open the Model Testing Dashboard by using one of these approaches:

  • On the Project tab, click Model Testing Dashboard.

  • In the Command Window, enter:

modelTestingDashboard

The first time that you open the dashboard for a project, the dashboard must identify the artifacts in the project and collect traceability information.

The dashboard displays metric results for the unit you select in the Artifacts panel.

Click the unit db_DriverSwRequest to view its metric results. When you initially select a unit in the Artifacts panel, the dashboard automatically collects the metric results for the unit. If you want to collect metrics for each of the units in the project, click Collect > Collect All.

If metric data was previously collected for a unit, the dashboard populates with the existing data. Collecting data for a metric requires a license for the product that supports the underlying artifacts, such as Requirements Toolbox™, Simulink® Test™, or Simulink Coverage™. Once metric results have been collected, viewing the results requires only a Simulink® Check™ license. For more information, see Model Testing Metrics.
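
You can also collect the same metric data programmatically by using the metric.Engine API in Simulink Check, for example as part of a continuous integration job. The following is a minimal sketch that assumes the project is already open; the report file name is a placeholder.

metric_engine = metric.Engine();     % metric engine for the currently open project

% Collect every available model testing metric for the units in the
% project, similar to clicking Collect > Collect All in the dashboard.
metric_ids = getAvailableMetricIds(metric_engine);
execute(metric_engine, metric_ids);

% Retrieve the collected results and generate an HTML summary report.
results = getMetrics(metric_engine, metric_ids);
generateReport(metric_engine, 'Type', 'html-file', ...
    'Location', fullfile(pwd, 'ModelTestingReport.html'));   % placeholder location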

View Traceability of Design and Testing Artifacts

The Artifacts panel organizes the units in the project under the component models according to the model reference hierarchy. Expand a unit to see the artifacts in the project that trace to it.

For this example, in the Artifacts panel, expand the folder for the unit db_DriverSwRequest and its subfolders.

For each unit in the project, the traced artifacts include:

Functional Requirements

Requirements of Type Functional that are either implemented by or upstream of the unit. Use Requirements Toolbox to create or import the requirements in a requirements file (.slreqx); see the sketch after this list.

  • Implemented — Functional requirements that are directly linked to the unit with a link Type of Implements. The dashboard uses these requirements in the metrics for this unit.

  • Upstream — Functional requirements that are indirectly or transitively linked to the implemented requirements. The dashboard does not use these requirements in the metrics for this unit.
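
As a sketch of how such a requirements file might be created programmatically with Requirements Toolbox, the example below creates a requirement set and adds one functional requirement. The file name, identifier, and text are placeholders, and the Implements link itself is typically added from the model or the Requirements Editor.

rs = slreq.new('db_SoftwareReqs_example');   % new requirement set (.slreqx); placeholder name

% Add a requirement; new requirements typically default to Type Functional.
req = add(rs, 'Id', 'R1', ...                % placeholder ID and text
    'Summary', 'Cancel Switch Detection', ...
    'Description', 'Detect when the driver presses the cancel switch.');
save(rs);

% To make the requirement count as Implemented for a unit, add a link
% with Type Implements from the implementing block or subsystem to the
% requirement, for example from the Requirements Perspective in the
% model or with slreq.createLink.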

Design

The model file that contains the unit that you test, and the libraries and data dictionaries that the model uses.

Tests

Test cases and test harnesses that trace to the unit. Create the test cases in a test suite file by using Simulink Test, as in the sketch after this list.

  • Unit Tests — Test cases that the dashboard considers as unit tests. A unit test directly tests either the entire unit model or the model subsystems. The dashboard uses these tests in the metrics for this unit.

  • Others — Test cases that trace to the unit but that the dashboard does not consider to be unit tests. For example, the dashboard does not consider tests on a library to be unit tests. The dashboard does not use these tests in the metrics for this unit.

  • Test Harnesses — External test harnesses that trace to the unit or unit subsystems. Double-click a test harness to open it.
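
As a sketch of creating a unit test case programmatically with the Simulink Test test manager API, the example below creates a test file, a test suite, and a simulation test case that targets the unit model. The file, suite, and test names are placeholders.

tf = sltest.testmanager.TestFile('db_DriverSwRequest_Tests');   % placeholder file name
ts = createTestSuite(tf, 'Unit Tests');
tc = createTestCase(ts, 'simulation', 'Detect cancel switch press');

% Point the test case at the unit model so that the dashboard can
% trace the test to the unit.
setProperty(tc, 'Model', 'db_DriverSwRequest');
saveToFile(tf);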

Test Results

Results of the test cases for the unit. To use the results in the dashboard, run the unit tests, export the results, and save them as a results file; see the sketch after this list. The dashboard shows the latest saved results from the test cases.

  • Unit Simulation — Simulation results from unit tests. The dashboard uses these results in the metrics for this unit.

  • Others — Results that are not simulation results, are not from unit tests, or are only reports. For example, software-in-the-loop (SIL) results are not simulation results. The dashboard does not use these results in the metrics for this unit.
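
As a sketch of producing such a results file programmatically, the example below runs the test files that are open in the Test Manager and exports the result set; the results file path is a placeholder.

resultSet = sltest.testmanager.run;     % run the test files open in the Test Manager

% Export the results to a file inside the project so that the
% dashboard can trace them to the unit tests.
sltest.testmanager.exportResults(resultSet, ...
    fullfile(pwd, 'results', 'db_DriverSwRequest_results.mldatx'));  % placeholder path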

If there are changes to artifact files in the project, the dashboard detects the changes and automatically re-traces the artifacts to update the artifact traceability information shown in the Artifacts panel. You can turn the automatic re-tracing on or off by clicking Collect > Auto Trace.

An artifact appears under the Trace Issues folder if it has requirement links that are unexpected, broken, or not supported by the dashboard, or if the dashboard cannot trace it to a unit. If an artifact generates an error during traceability analysis, it appears under the Errors folder instead. For more information about artifact tracing issues and errors, see Trace Artifacts to Units for Model Testing Analysis.

Navigate to the requirement artifact for Cancel Switch Detection. Expand db_DriverSwRequest > Functional Requirements > Implemented > db_SoftwareReqs.slreqx and select the requirement Cancel Switch Detection.

At the bottom of the Artifacts panel, the Details pane displays the name of the artifact and the path to the artifact from the project root. You can scroll to a unit, collapse or expand the artifact list, or open a unit dashboard by right-clicking an artifact and selecting an action. You can also use the menu button to the right of the search bar to perform these actions, restore the default view of the artifacts list, or view a legend of the dashboard icons.

View Metric Results for a Unit

You can collect and view metric results for each unit in the Artifacts panel. To view the results for the unit db_DriverSwRequest, in the Artifacts panel, click db_DriverSwRequest. When you click a unit, the dashboard shows the Model Testing information for that unit. The top of the dashboard shows the name of the unit, the data collection timestamp, and the name of the user who collected the data. To open the results for multiple units at the same time, right-click a unit and click Open unit dashboard in new tab.

If artifacts in the project change after the results are collected, the dashboard detects this and shows a warning banner at the top of the dashboard to indicate that the metric results are stale.

The Stale icon appears on dashboard widgets that might show stale data that does not reflect the changes. If you see the warning banner, click Collect on the banner to re-collect the metric data and update the stale widgets with data from the current artifacts. You can also find the Collect button in the Metrics section of the dashboard toolstrip. For the unit in this example, the metric results in the dashboard are not stale.

The dashboard widgets summarize the metric data results and show testing issues you can address, such as:

  • Missing traceability between requirements and tests

  • Requirements or tests with a disproportionate number of links between them

  • Failed or disabled tests

  • Missing model coverage

You can use the overlays in the Model Testing Dashboard to see whether the metric results for a widget are compliant, are non-compliant, or generate a warning that the results should be reviewed. Results are compliant if they show full traceability, test completion, or model coverage. In the Overlays section of the toolstrip, check that the Compliant and Non-Compliant buttons are selected. The overlay icons appear on the widgets that have results in those categories. The top-right corner of the dashboard shows the total number of widgets in each compliance category.

To see the compliance thresholds for a metric, point to the overlay icon.

You can hide the overlay icons by clicking a selected category in the Overlays section of the toolstrip. For more information on the compliance thresholds for each metric, see Model Testing Metrics.

To explore the data in more detail, click an individual metric widget to open the Metric Details. For the selected metric, a table displays a metric value for each artifact. The table provides hyperlinks to open the artifacts so that you can get detailed results and fix the artifacts that have issues. When exploring the tables, note that:

  • You can filter the results by the value returned for each artifact. To filter the results, click the filter icon in the table header.

  • By default, some widgets apply a filter to the table. For example, for the Requirements Linked to Tests section, the table for the Unlinked widget is filtered to show only requirements that are missing linked test cases. Tables that have filters show a check mark in the bottom right corner of the filter icon.

  • To sort the results by artifact, source file, or value, click the corresponding column header.

Evaluate Testing and Traceability of Requirements

A standard measure of testing quality is the traceability between individual requirements and the test cases that verify them. To assess the traceability of your tests and requirements, use the metric data in the Test Case Analysis section of the dashboard. You can quickly find issues in the requirements and tests by using the data summarized in the widgets. Click a widget to view a table with detailed results and links to open the artifacts.

Requirements Missing Tests

In the Requirements Linked to Tests section, the Unlinked widget indicates how many requirements are missing links to test cases. To address unlinked requirements, create test cases that verify each requirement and link those test cases to the requirement. The Requirements with Tests gauge widget shows the linking progress as the percentage of requirements that have tests.
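
If you prefer to create the missing links programmatically, the sketch below loads a requirement and a test case and links them. It assumes that slreq.createLink accepts a Simulink Test test case object as the link source, which creates a verification link by default; the file names and property filters are placeholders, so confirm the supported argument types in the Requirements Toolbox documentation for your release.

rs  = slreq.load('db_SoftwareReqs.slreqx');      % placeholder file name
req = find(rs, 'Type', 'Requirement', ...
    'Summary', 'Cancel Switch Detection');       % placeholder property filter

tf     = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');  % placeholder file name
suites = getTestSuites(tf);
tests  = getTestCases(suites(1));

% Link the test case to the requirement that it verifies.
link = slreq.createLink(tests(1), req);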

Click any widget in the section to see the detailed results in the Requirement linked to test cases table. For each requirement artifact, the table shows the source file that contains the requirement and whether the requirement is linked to at least one test case. When you click the Unlinked widget, the table is filtered to show only requirements that are missing links to test cases.

Requirements with Disproportionate Numbers of Tests

The Tests per Requirement section summarizes the distribution of the number of tests linked to each requirement. For each value, a colored bin indicates the number of requirements that are linked to that number of tests. Darker colors indicate more requirements. If a requirement has too many tests, the requirement might be too broad. Consider breaking it down into multiple, more granular requirements and linking each of those requirements to its respective test cases. If a requirement has too few tests, consider adding more test cases and linking them to the requirement.

To see the requirements that have a certain number of test cases, click the corresponding number to open a filtered Test cases per requirement table. For each requirement artifact, the table shows the source file that contains the requirement and the number of linked test cases. To see the results for each of the requirements, in the Linked Test Cases column, click the filter icon, then select Clear Filters.

Tests Missing Requirements

In the Tests Linked to Requirements section, the Unlinked widget indicates how many tests are not linked to requirements. To address unlinked tests, add links from these test cases to the requirements they verify. The Tests with Requirements gauge widget shows the linking progress as the percentage of tests that link to requirements.

Click any widget in the section to see detailed results in the Test case linked to requirements table. For each test case artifact, the table shows the source file that contains the test and whether the test case is linked to at least one requirement. When you click the Unlinked widget, the table is filtered to show only test cases that are missing links to requirements.

Tests with Disproportionate Numbers of Requirements

The Requirements per Test widget summarizes the distribution of the number of requirements linked to each test. For each value, a colored bin indicates the number of tests that are linked to that number of requirements. Darker colors indicate more tests. If a test has too many or too few requirements, it might be more difficult to investigate failures for that test, and you may want to change the test or requirements so that they are easier to track. For example, if a test verifies many more requirements than the other tests, consider breaking it down into multiple smaller tests and linking them to the requirements.

To see the test cases that have a certain number of requirements, click the corresponding bin to open the Requirements per test case table. For each test case artifact, the table shows the source file that contains the test and the number of linked requirements. To see the results for each of the test cases, in the Linked Requirements column, click the filter icon, then select Clear Filters.

Disproportionate Number of Tests of One Type

The Tests by Type and Tests with Tags widgets show how many tests the unit has of each type and with each custom tag. In industry standards, tests are often categorized as normal tests or robustness tests. You can tag test cases with Normal or Robustness and see the total count for each tag by using the Tests with Tags widget. Use the Test Case Breakdown to decide whether you want to add tests of a certain type, or with a certain tag, to your project.

To see the test cases of one type, click the corresponding row in the Tests by Type table to open the Test case type table. For each test case artifact, the table shows the source file that contains the test and the test type. To see results for each of the test cases, in the Type column, click the filter icon, then select Clear Filters.

To see the test cases that have a tag, click the corresponding row in the Tests with Tag table to open the Test case tags table. For each test case artifact, the table shows the source file that contains the test and the tags on the test case. To see results for each of the test cases, in the Tags column, click the filter icon, then select Clear Filters.

Analyze Test Results and Coverage

To see a summary of the test results and coverage measurements, use the widgets in the Simulation Test Result Analysis section of the dashboard. Find issues in the tests and in the model by using the test result metrics. Find coverage gaps by using the coverage metrics and add tests to address missing coverage. Run the tests for the model and collect the dashboard metrics to check for model testing issues.

Tests That Have Not Passed

In the Model Test Status section, the Untested and Disabled widgets indicate how many tests for the unit have not been run. Run the tests by using the Simulink Test Manager and export the new results.

The Failed widget indicates how many tests failed. Click the Failed widget to view a table of the test cases that failed. Click the hyperlink for each failed test case artifact to open it in the Test Manager and investigate the artifacts that caused the failure. Fix the artifacts, re-run the tests, and export the results.

The Inconclusive widget indicates how many tests do not have pass/fail criteria, such as verify statements, custom criteria, baseline criteria, or logical or temporal assessments. If a test does not contain pass/fail criteria, it does not verify the functionality of the linked requirement. Add one or more of these pass/fail criteria to your test cases to help verify the functionality of your model, as in the sketch that follows.
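
For example, a verify statement in a Test Assessment or Test Sequence block might look like the following sketch; the signal names, identifier, and limits are placeholders.

% Pass/fail assessment in a Test Assessment or Test Sequence block.
verify(cruise_speed >= 0 && cruise_speed <= max_speed, ...
    'db_DriverSwRequest:speedRange', ...
    'Commanded speed must stay within the allowed range.')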

Click any widget in the section to open the Test case status table. For each test case artifact, the table shows the source file that contains the test and the status of the test result. When you click the Failed, Disabled, Untested, or Inconclusive widgets, the table is filtered to show only tests for those test case result statuses. The dashboard analyzes only the latest test result that it traces to each test case.

Missing Coverage

The Model Coverage widget shows whether there are model elements that are not covered by the tests. If one of the coverage types shows less than 100% coverage, you can click the dashboard widgets to investigate the coverage gaps. Add tests to cover the gaps or justify points that do not need to be covered. Then run the tests again and export the results. For more information on coverage justification, see Fix Requirements-Based Testing Issues.
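
To collect the coverage data that these widgets summarize, coverage must be enabled for the unit tests. The sketch below enables model coverage for a test file by using the Simulink Test coverage settings; the file name is a placeholder, and the metric codes assume decision, condition, and MCDC coverage.

tf  = sltest.testmanager.load('db_DriverSwRequest_Tests.mldatx');   % placeholder file name
cov = getCoverageSettings(tf);
cov.RecordCoverage = true;        % coverage for the system under test
cov.MdlRefCoverage = true;        % coverage for referenced models
cov.MetricSettings = 'dcm';       % decision, condition, and MCDC coverage
saveToFile(tf);

% Re-run the tests and export the results so that the dashboard picks
% up the new coverage data.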

To see the detailed results for one type of coverage, click the corresponding bar. For the model and test case artifacts, the table shows the source file and the achieved and justified coverage.

See Also

Related Topics