
Manage Requirements-Based Testing Artifacts for Analysis in the Model Testing Dashboard

When you develop and test software units using Model-Based Design, use the Model Testing Dashboard to assess the status and quality of your unit model testing activities. Requirements-based testing is a central element of model verification. By establishing traceability links between your requirements, model design elements, and test cases, you can measure the extent to which the requirements are implemented and verified. The Model Testing Dashboard analyzes this traceability information and provides detailed metric measurements on the traceability, status, and results of these testing artifacts.

Model Testing Dashboard showing results for unit db_DriverSwRequest

Each metric in the dashboard measures a different aspect of the quality of your unit testing and reflects guidelines in industry-recognized software development standards, such as ISO 26262 and DO-178C. To monitor the requirements-based testing quality of your models in the Model Testing Dashboard, maintain your artifacts in a project and follow these considerations. For more information on using the Model Testing Dashboard, see Explore Status and Quality of Testing Activities Using the Model Testing Dashboard.
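For example, you can open your project and the Model Testing Dashboard from the MATLAB command line. This is a minimal sketch: the project name is a placeholder, and the modelTestingDashboard function requires Simulink Check.

    % Open the project that contains the unit models and testing artifacts.
    openProject("CruiseControlProject");   % placeholder path to your project folder or .prj file

    % Open the Model Testing Dashboard for the current project (requires Simulink Check).
    modelTestingDashboard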

Manage Artifact Files in a Project

To analyze your requirements-based testing activities in the Model Testing Dashboard, store your design and testing artifacts in a MATLAB® project. The artifacts that the testing metrics analyze include:

  • Models

  • Libraries that the models use

  • Requirements that you create in Requirements Toolbox™

  • Test cases that you create in Simulink® Test™

  • Test results from the executed test cases

When your project contains many models and model reference hierarchies, you can track your unit testing activities by configuring the dashboard to recognize the different testing levels of your models. You can specify which entities in your software architecture are units or higher-level components by labeling them in your project and configuring the Model Testing Dashboard to recognize the labels. The dashboard organizes your models in the Artifacts panel according to their testing levels and the model reference hierarchy. For more information, see Categorize Models in a Hierarchy as Components or Units.
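For example, you can create the label category and labels programmatically and apply them to your model files. The category and label names shown here are placeholders; use the names that you map to the testing levels in the dashboard settings.

    % Create an example label category with labels for the testing levels.
    proj = currentProject;
    category = createCategory(proj, "TestingScope");   % placeholder category name
    createLabel(category, "Unit");
    createLabel(category, "Component");

    % Apply the labels to model files in the project (placeholder paths).
    unitFile = findFile(proj, "models/db_DriverSwRequest.slx");
    addLabel(unitFile, "TestingScope", "Unit");
    compFile = findFile(proj, "models/db_Controller.slx");
    addLabel(compFile, "TestingScope", "Component");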

Trace Dependencies Between Project Files and Identify Outdated Metric Results

When you use the Model Testing Dashboard, the dashboard creates a digital thread to capture the attributes and unique identifiers of the artifacts in your project. The digital thread is a set of metadata that describes the artifacts in a project, the artifact structure, and the traceability relationships between the artifacts.

The Model Testing Dashboard monitors and analyzes the digital thread to:

  • Detect when project files move and maintain the same universal unique identifiers (UUIDs) for the artifact files and the elements inside the artifact files

  • Capture traceability and listen to tools, such as the Test Manager in Simulink Test, to detect new tool outputs and the dependencies of the tool operations

  • Identify outdated tool outputs by analyzing the traceability and checksums of inputs to the tool operations

  • Create an index of your project and store a representation of each artifact, its inner structure, and its relationships with other artifacts

  • Provide a holistic analysis of your project artifacts to help you maintain traceability and up-to-date information on the requirements, units, test cases, and results impacting your design

The dashboard can store the results of the digital thread analysis and then perform traceability analysis across domains, tools, and artifacts, without needing to locally load or access the project artifacts.

As you modify your models and testing artifacts, check that you save the changes to the artifact files and store the files that you want to analyze in your project.
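For example, after you save a new results file, you can add it to the current project from the command line. The file path is a placeholder.

    % Add a newly saved artifact to the project so that the dashboard can analyze it.
    proj = currentProject;
    addFile(proj, fullfile("results", "db_DriverSwRequest_results.mldatx"));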

Trace Artifacts to Units for Model Testing Analysis

To determine which artifacts are in the scope of a unit, the Model Testing Dashboard analyzes the traceability links between the artifacts and the software unit models in the project. The Artifacts panel lists the units, organized by the components that reference them. Under each unit, the panel shows these artifacts that trace to the unit:

  • Functional Requirements

  • Design Artifacts

  • Tests

  • Test Results

Artifacts panel showing units and traced artifacts

To see the traceability path that the dashboard found from an artifact to its unit, right-click the artifact and click View trace to unit. A traceability graph opens in a new tab in the Model Testing Dashboard. The graph shows the connections and intermediate artifacts that the dashboard traced from the unit to the artifact. To see the type of traceability that connects two artifacts, place your cursor over the arrow that connects the artifacts. The traceability relationship is either one artifact containing the other or one artifact tracing to the other. For example, for the unit db_DriverSwRequest, expand Functional Requirements > Upstream > db_SystemReqs.slreqx. Right-click the requirement for Target speed increment and click View trace to unit. The trace view shows that the unit db_DriverSwRequest traces to the implemented functional requirement Switch precedence, which traces to the upstream functional requirement Target speed increment.

Dashboard trace view for a functional requirement

Under the list of components is the folder Trace Issues, which contains unexpected requirement links, requirement links that are broken or not supported by the dashboard, and artifacts that the dashboard cannot trace to a unit. To help identify the type of tracing issue, the folder Trace Issues contains subfolders for Unexpected Implementation Links, Unresolved and Unsupported Links, Untraced Tests, and Untraced Results. For more information, see Fix Requirements-Based Testing Issues.

If an artifact returns an error during traceability analysis, the panel includes the artifact in an Errors folder. Use the traceability information in these sections and in the units to check if the testing artifacts trace to the units that you expect. To see details about the warnings and errors that the dashboard finds during artifact analysis, at the bottom of the Model Testing Dashboard dialog, click Diagnostics.

As you edit and save the artifacts in your project, the dashboard tracks your changes and indicates if the traceability data in the Artifacts panel might be stale by showing a warning banner. To update the traceability data, click the Trace Artifacts button on the warning banner.

Functional Requirements

The folder Functional Requirements shows requirements of Type Functional that are either implemented by or upstream of the unit.

When you collect metric results for a unit, the dashboard analyzes only functional requirements that the unit directly implements. The folder Functional Requirements contains two subfolders to help identify which requirements are implemented by the unit or are upstream of the unit:

  • Implemented — Functional requirements that are directly linked to the unit with a link Type of Implements. The dashboard uses these requirements in the metrics for the unit.

  • Upstream — Functional requirements that are indirectly or transitively linked to the implemented requirements. The dashboard does not use these requirements in the metrics for the unit.

If a requirement does not trace to a unit, it appears in the Trace Issues folder. If a requirement does not appear in the Artifacts panel when you expect it to, see Requirement Missing from Artifacts Panel.

Use Requirements Toolbox to create or import requirements in a requirements file (.slreqx).
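For example, this sketch creates a requirement set, adds a functional requirement, and links a block in the unit model to that requirement. The file, requirement, and block names are placeholders, and the programmatic link type name ("Implement") is an assumption to verify against your Requirements Toolbox release.

    % Create a requirement set and add a functional requirement (placeholder names).
    rs = slreq.new("db_SystemReqs");
    req = add(rs, "Id", "R1", "Summary", "Target speed increment");
    req.Type = "Functional";
    save(rs);

    % Create an implementation link from a block in the unit model to the requirement.
    load_system("db_DriverSwRequest");
    blockHandle = getSimulinkBlockHandle("db_DriverSwRequest/Switch precedence");   % placeholder block
    link = slreq.createLink(blockHandle, req);
    link.Type = "Implement";   % assumption: set the link type explicitly if your default differs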

Design Artifacts

The folder Design Artifacts shows:

  • The model file that contains the block diagram for the unit.

  • Models that the unit references.

  • Libraries that are partially or fully used by the model.

  • Data dictionaries that are linked to the model.

Tests

The folder Tests shows test cases and test harnesses that trace to the unit. This includes test cases that run on the unit and test cases that run on subsystems in the unit model by using test harnesses.

When you collect metric results for a unit, the dashboard analyzes only test cases that run on the unit model or unit model subsystems. The folder Tests contains subfolders to help identify which test cases are testing the unit and which test harnesses trace to the unit:

  • Unit Tests — Test cases that the dashboard considers as unit tests. A unit test directly tests either the entire unit model or the model subsystems. The dashboard uses these tests in the metrics for the unit.

  • Others — Test cases that trace to the unit but that the dashboard does not consider as unit tests. For example, the dashboard does not consider tests on a library to be unit tests. The dashboard does not use these tests in the metrics for the unit.

  • Test Harnesses — External test harnesses that trace to the unit or unit subsystems. Double-click a test harness to open it.

If a test case does not trace to a unit, it appears in the Trace Issues folder. If a test case does not appear in the Artifacts panel when you expect it to, see Test Case Missing from Artifacts Panel. For troubleshooting test cases in metric results, see Fix a test case that does not produce metric results.

Create test cases in a test suite file by using Simulink Test.
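For example, this sketch creates a test file with a test suite and a baseline test case that runs on the unit model. The file, suite, test case, and model names are placeholders.

    % Create a test file, test suite, and test case for the unit model (placeholder names).
    tf = sltest.testmanager.TestFile(fullfile("tests", "db_DriverSwRequest_Tests.mldatx"));
    ts = createTestSuite(tf, "Unit Tests");
    tc = createTestCase(ts, "baseline", "Target speed increment test");
    setProperty(tc, "Model", "db_DriverSwRequest");
    saveToFile(tf);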

Test Results

When you collect metric results for a unit, the dashboard analyzes only the test results from unit tests. The folder Test Results contains two subfolders to help identify which test results are from unit tests:

  • Unit Simulation — Simulation results from unit tests. The dashboard uses these results in the metrics for the unit.

    The following types of test results are shown:

    • Saved test results — results that you have collected in the Test Manager and have exported to a results file.

    • Temporary test results — results that you have collected in the Test Manager but have not exported to a results file. When you export the results from the Test Manager, the dashboard analyzes the saved results instead of the temporary results. Additionally, the dashboard stops recognizing the temporary results when you close the project or close the result set in the Simulink Test Result Explorer. If you want to analyze the results in a subsequent test session or project session, export the results to a results file, as shown in the sketch at the end of this section.

  • Others — Results that are not simulation results, are not from unit tests, or are only reports. For example, software-in-the-loop (SIL) results are not simulation results. The dashboard does not use these results in the metrics for the unit.

If a test result does not trace to a unit, it appears in the Trace Issues folder. If a test result does not appear in the Artifacts panel when you expect it to, see Test Result Missing from Artifacts Panel. For troubleshooting test results in dashboard metric results, see Fix a test result that does not produce metric results.
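For example, this sketch runs the test cases that are open in the Test Manager and exports the result set to a results file so that the dashboard analyzes saved results instead of temporary results. The output path is a placeholder.

    % Run the open test files and export the result set to a saved results file.
    resultSet = sltest.testmanager.run;
    sltest.testmanager.exportResults(resultSet, fullfile("results", "db_DriverSwRequest_results.mldatx"));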

Trace Issues

The folder Trace Issues shows artifacts that the dashboard has not traced to units. Use this folder to check whether artifacts are missing traceability to the units. The folder contains subfolders to help identify the type of tracing issue:

  • Unexpected Implementation Links — Requirement links of Type Implements for a requirement of Type Container or Type Informational. The dashboard does not expect these links to be of Type Implements because container requirements and informational requirements do not contribute to the Implementation and Verification status of the requirement set that they are in. If a requirement is not meant to be implemented, you can change the link type, as shown in the sketch after this list. For example, you can change a requirement of Type Informational to have a link of Type Related to.

  • Unresolved and Unsupported Links — Requirements links that are either broken in the project or not supported by the dashboard. For example:

    • If a model block implements a requirement, but you delete the model block, the requirement link is now unresolved.

    • If a requirement links to or from a data dictionary, the link is not supported and the Model Testing Dashboard does not trace the link.

      The Model Testing Dashboard does not support traceability analysis for some artifacts and some links. If you expect a link to trace to a unit and it does not, see the troubleshooting solutions in Resolve Missing Artifacts, Links, and Results in the Model Testing Dashboard.

  • Untraced Tests — Tests that execute on models or subsystems that are not on the project path.

  • Untraced Results — Results that the dashboard cannot trace to a test case. For example, if a test case produces a result, but you delete the test case, the dashboard cannot trace the results to the test case.
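For example, this sketch retypes a link that the dashboard flags as an unexpected implementation link. The link set name is a placeholder, and the programmatic type names ("Implement" and "Relate") are assumptions to verify against your Requirements Toolbox release.

    % Load the link set for the unit model and inspect its links (placeholder name).
    linkSet = slreq.find("Type", "LinkSet", "Name", "db_DriverSwRequest");
    links = getLinks(linkSet);

    % Change the link that you identified in the Trace Issues folder from
    % Implements to Related to (assumed programmatic type names).
    links(1).Type = "Relate";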

When you add traceability to an artifact, the dashboard detects this and shows a warning banner at the top of the dashboard to indicate that the artifact traceability shown in the Artifacts panel is outdated. Click the Trace Artifacts button on the warning banner to refresh the data in the Artifacts panel.

The Model Testing Dashboard does not support traceability analysis for some artifacts and some links. If an artifact is untraced when you expect it to trace to a unit, see the troubleshooting solutions in Resolve Missing Artifacts, Links, and Results in the Model Testing Dashboard.

Artifact Errors

The folder Errors appears if artifacts returned errors when the dashboard performed artifact analysis. These are some errors that artifacts might return during traceability analysis:

  • An artifact returns an error if it has unsaved changes when traceability analysis starts.

  • A test results file returns an error if it was saved in a previous version of Simulink.

  • A model returns an error if it is not on the search path.

Open these artifacts and fix the errors. The dashboard detects changes to the artifacts and shows a warning banner at the top of the dashboard to indicate that the artifact traceability shown in the Artifacts panel is outdated. Click the Trace Artifacts button on the warning banner to refresh the data in the Artifacts panel.
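For example, you can often resolve these errors from the command line by saving unsaved changes and adding a missing folder to the project path. The model and folder names are placeholders.

    % Save unsaved changes so that traceability analysis can process the model.
    save_system("db_DriverSwRequest");

    % Add a missing folder to the project path so that the model can be found.
    proj = currentProject;
    addPath(proj, fullfile(proj.RootFolder, "models"));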

Diagnostics

To see details about artifacts that cause errors, warnings, and informational messages during analysis, at the bottom of the Model Testing Dashboard dialog, click Diagnostics. You can filter the diagnostic messages by their type: Error, Warning, and Info. You can also clear the messages from the viewer.

The diagnostic messages show:

  • Modeling constructs that the dashboard does not support

  • Links that the dashboard does not trace

  • Test harnesses or cases that the dashboard does not support

  • Test results missing coverage or simulation results

  • Artifacts that return errors when the dashboard loads them

  • Information about model callbacks that the dashboard deactivates

  • Files that have file shadowing or path traceability issues

  • Artifacts that are not on the path and are not considered during tracing

Collect Metric Results

The Model Testing Dashboard collects metric results for each unit listed in the Artifacts panel. Each metric in the dashboard measures a different aspect of the quality of your model testing and reflects guidelines in industry-recognized software development standards, such as ISO 26262 and DO-178C. For more information about the available metrics and the results that they return, see Model Testing Metrics.

As you edit and save the artifacts in your project, the dashboard detects changes to the artifacts and shows a warning banner at the top of the dashboard to indicate that the artifact traceability shown in the Artifacts panel is outdated. Click the Trace Artifacts button on the warning banner to refresh the data in the Artifacts panel.

After you update the traceability information, if the metric results might be affected by your artifact changes, the dashboard shows a warning banner at the top of the dashboard to indicate that the metric results are stale. Affected widgets display a gray staleness icon. To update the results, click the Collect button on the warning banner to re-collect the metric data and to update the stale widgets with data from the current artifacts. If you want to collect metrics for each of the units and components in the project, click Collect > Collect All.
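You can also collect the dashboard metrics programmatically by using the metric API in Simulink Check. This is a minimal sketch that collects every available metric for the current project; the identifiers that getAvailableMetricIds returns depend on your installed products and release.

    % Collect metric results for the current project by using the metric API.
    engine = metric.Engine();
    metricIds = getAvailableMetricIds(engine);   % identifiers of the available metrics
    execute(engine, metricIds);                  % collect the metric results
    results = getMetrics(engine, metricIds);     % query the collected results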

The dashboard does not indicate stale metric data for these changes:

  • After you run a test case and analyze the results in the dashboard, if you make changes to the test case, the dashboard indicates that the test case metrics are stale but does not indicate that the test result metrics are stale.

  • When you change a coverage filter file that your test results use, the coverage metrics in the dashboard do not indicate stale data or include the changes. After you save the changes to the filter file, rerun the tests and apply the filter file to the new results.
