Requirements-Based Verification with Simulink Test
Overview
Model-Based Design starts with informal textual requirements that may include complex timing-dependent signal logic, which is difficult to formalize during testing. This webinar explains how you can use Simulink Test to author temporal assessments with precise semantics in a natural language format. These assessments accurately model complex timing-dependent behavior with conditions, events, signal values, and delays. We will demonstrate the complete requirements-based testing workflow: authoring, managing, and executing tests.
Highlights
Learn how to:
- Automate requirements-based testing with Model-Based Design
- Translate informal text requirements into unambiguous assessments with clear, defined semantics
- Trace requirements to design and test
- Monitor internal signals without perturbing the design or generated code with the new Observer Block
About the Presenter
Paul Urban is a senior product marketing manager at MathWorks responsible for the verification and validation tools, specifically Requirements Toolbox, Simulink Test, and Simulink Check. His experience includes extensive work applying Model-Based Design to systems engineering, embedded software development, and test. Prior to joining MathWorks in 2016, Paul held roles in product marketing, business development, consulting, and development at IBM Watson IoT, Telelogic, and I-Logix, where he worked on solutions based on UML/SysML with Rhapsody.
Recorded: 13 Aug 2019
How often have you delivered your design to your customer only to find out that it wasn't what the customer expected? There are many industry examples of projects that failed due to miscommunicated requirements. The challenge is that initial requirements are primarily specified using informal text that is easy to misinterpret, incomplete, or inconsistent. My name is Paul Urban, and I work in the model verification and validation product group at MathWorks, where I manage the Simulink Requirements, Simulink Test, and Simulink Check products.
Today, I will show you new capabilities that allow you to verify and validate requirements earlier. Using temporal assessments, you can translate text requirements into unambiguous assessments that can be executed. The solution forms a digital thread that links requirements to their implementation in the model and to the tests that verify them. You can see from this quote from an automotive customer how Model-Based Design helped them analyze requirements faster and shorten the delivery of designs that meet their customers' needs.
There is an ever-increasing demand to include more software-based functionality in products. This increases size and complexity and puts added pressure on the development and testing processes. In the traditional development process, as you move from left to right, requirements are refined into specifications that ultimately get manually implemented in code. Errors are introduced at various stages of development, but studies show that most defects are introduced in the initial design. These errors are carried through the design phases; testing uncovers many of them, but not until late in the process, when rework and further testing are required. Even then, some latent errors may remain in the software and may only be detected in the field. These errors can result in safety issues or costly product recalls and are far more expensive to address.
Furthermore, studies also show that the cost of finding a bug increases over time. For example, finding a software bug while coding on your desktop PC is cheaper than finding that bug in production hardware in the field. The cost of fixing a defect increases the later it is found in the development cycle. We need to determine, as early as possible: are all the requirements implemented? Do I correctly understand each requirement? Is the design behaving correctly per the requirement? And we also want to be able to test the design without modifying it. With Simulink, graphical models replace ambiguous text designs with executable models that have precise meanings.
This allows you to validate your design earlier on the desktop, before hardware is available, where errors are cheaper and easier to identify. Simulink gives engineers a rich set of languages to describe complicated systems, from physical domains like drive trains and electrical machinery to software domains like block diagrams and state machines. These representations let you abstract away unneeded details to focus on what's most important.
Informal or ad hoc simulation testing can be done by building a virtual representation of your system that you can interact with on your desktop. This is a great way to gain an early understanding of your system and to evaluate design alternatives prior to building hardware. After refining your model, the Simulink design can be automatically converted into production code through code generation.
With this complete Model-Based Design process, you eliminate costly, error-prone manual steps as you refine the design model onto your target hardware. Simulation is a great way to validate behavior early, but it is mainly a manual and largely ad hoc process. To determine whether all the requirements are met and functioning correctly, we need a more rigorous approach: requirements-based testing, which systematically tests our models against the requirements. This is where the MathWorks verification and validation workflow helps automate the manual steps in testing the model and establishes a repeatable verification workflow. I will now step through the workflow to verify a requirement.
Typically, requirements start as informal ideas. They may come from Word, Excel, or more structured environments like IBM DOORS. Engineers need to interpret these informal ideas into a model-based design and then verify that the design meets those requirements. This can be very challenging to get right when data is viewed and managed in separate tools, because it is difficult to establish traceability between the requirements, the design, and the tests.
To work with requirements from other tools directly in Simulink, Simulink Requirements provides an import operation for Word, Excel, and DOORS, along with support for the standard Requirements Interchange Format, ReqIF, which most requirements management tools support. If requirements change at the source, an update operation synchronizes the changes. The Simulink user may also want to edit the requirements or add more details, such as custom attributes.
The default is to treat external requirements as read-only, but the R2019a release added the ability to unlock requirements so that fields within them can be updated. In addition to importing requirements, you can also create requirements directly in Simulink, where Simulink is the source. To include these Simulink-authored requirements in the overall project, an export operation using ReqIF was introduced in R2019a.
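For teams scripting this round trip, the Simulink Requirements API can drive the same operations from MATLAB. Here is a minimal sketch; the file and requirement set names are illustrative assumptions, and exact signatures may vary by release:

```matlab
% Import requirements from a ReqIF file exported by an external tool.
slreq.import('HeatPumpReqs.reqif');

% ... edit requirements, add custom attributes, author new ones ...

% Load the requirement set and export it back to ReqIF (R2019a+).
rs = slreq.load('HeatPumpReqs.slreqx');
slreq.export(rs, 'HeatPumpReqs_updated.reqif');
```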
Combining the import, update, and export operations with ReqIF enables a round-trip workflow with external requirements tools that also support the standard. There are two main views for working with requirements within Simulink: the editor and the perspective. The editor allows you to view and edit all of the requirement sets in your project. The perspective provides a combined view of the requirements and the design, allowing you to work with both at once.
Here's a closer look at the requirements perspective. A control at the right of the canvas turns the perspective on and off. Badges on the canvas show where links exist, and you can optionally display the requirement description. The browser shows a tabular summary of the requirements, including an implementation and verification status that shows the completeness of the design and test. The Property Inspector includes all the details of a requirement, such as its description, custom attributes, and links.
Having requirements included in the Simulink environment allows you to work with them alongside the design and quickly create traceability links. Simulation provides a great way to catch design errors early, but a more rigorous and automated method of testing is needed to increase confidence in the design. Simulink Test gives you a systematic way to test your models.
First, you can isolate the component under test using a test harness, which allows you to write tests without modifying the model. Test cases can be created to verify the design against a baseline or to check equivalence between model and code. You can use many formats to define the inputs, such as MAT files, Excel, the Signal Editor, or test sequences. To assess the results, you can compare against baseline outputs in a MAT or Excel file, write custom criteria using MATLAB unit tests, or use a Test Assessment block to define online pass/fail conditions. You can link test artifacts to requirements to complete the traceability between requirements, design, and test.
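As a sketch of how such a test might be set up programmatically (the file, suite, and model names below are illustrative assumptions; exact signatures may vary by release):

```matlab
% Create a test file and add a baseline test case to its default suite.
tf = sltest.testmanager.TestFile('HeatPumpTests.mldatx');
ts = getTestSuites(tf);
tc = createTestCase(ts(1), 'baseline', 'Activate Heat Pump');

% Point the test case at the model (or a test harness) under test.
setProperty(tc, 'Model', 'sltestHeatpumpExample');

% Capture the current simulation outputs as the baseline to compare against.
captureBaselineCriteria(tc, 'heatpump_baseline.mat', true);

% Run all loaded tests and collect results.
results = sltest.testmanager.run;
```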
Analysis of the links shows how complete the implementation and verification are across the hierarchy of requirements. A fully colored bar indicates complete requirements coverage; gaps identify missing implementation or testing, where further work is needed. I will present an example of this workflow using a heat pump. It starts with some textual requirements for a heat pump controller that live in an external tool. The Activate Heat Pump requirement includes timing logic: the pump should activate when the temperature difference is greater than two degrees for more than two seconds, and then stay active for a period of time.
Based on these requirements, the controller needs two inputs: one for the set temperature and another for the room temperature. It outputs three commands on a bus to control the pump: one turns the fan on, another turns the pump on, and the third specifies whether heating or cooling is needed. The behavior for the controller to turn on the fan and to activate the pump for heating or cooling is implemented using a Stateflow chart.
To ensure all the requirements are implemented in the model, they were imported into Simulink Requirements. In the perspective, I can see that the implementation status of Activate Heat Pump shows a gap: it does not have any links. To create a link to the model, all I need to do is drag and drop. I can then show a requirements annotation to view details of the requirement within the model.
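If you prefer scripting to drag and drop, a link can also be created with the requirements API. A minimal sketch, assuming a requirement summarized as 'Activate Heat Pump' and a controller block in the example model; names and the exact accepted source types are assumptions:

```matlab
% Find the imported requirement by its summary text.
req = slreq.find('Type', 'Requirement', 'Summary', 'Activate Heat Pump');

% Link the implementing model element (source) to the requirement (destination).
blk = 'sltestHeatpumpExample/Controller';
link = slreq.createLink(get_param(blk, 'Handle'), req);
```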
Either way, the link is created, and the implementation status updates to show the requirement is implemented. Adding links or annotations does not dirty the model; separate files are used to maintain these artifacts. Now I will move on to verifying the requirement. The controller is included within a plant model of the heater and the thermal properties of the house, which can simulate its operation. I want to test only the controller, without modifying the rest of the model. I will do this by creating a test harness.
I select the command to create the test harness, and there are a number of options you can set for the harness. In particular, you can choose different kinds of sources and sinks. We choose a Test Sequence block to drive the inputs of the controller. This creates a new test harness model that contains only the controller and a Test Sequence block. The component under test is kept in sync with the main model if there are any changes.
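Harness creation can also be scripted. A sketch along these lines, where the model path and harness name are assumptions:

```matlab
% Create a harness around the controller with a Test Sequence source.
sltest.harness.create('sltestHeatpumpExample/Controller', ...
    'Name',   'Controller_Harness', ...
    'Source', 'Test Sequence', ...
    'Sink',   'Scope');

% Open the harness to edit the generated Test Sequence block.
sltest.harness.open('sltestHeatpumpExample/Controller', 'Controller_Harness');
```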
The Test Sequence block allows you to generate test inputs and define assessments as a sequence of steps. You can think of it as a simplified version of Stateflow that lets you describe complex temporal test sequences. For the test harness created for the controller, a sequence of steps using a ramp function varies the room temperature over a range to exercise all modes of the controller.
The first step initializes the controller; then the ColdOutside step lowers the room temperature to put the controller in heating mode. The HotOutside step tests how the controller cools the room. Finally, the controller goes back to idle. The test sequence provides a way to create a complex sequence of test steps to drive the test inputs.
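Steps like these can also be added to the Test Sequence block from MATLAB. A sketch, assuming the harness created above and hypothetical step and signal names:

```matlab
blk = 'Controller_Harness/Test Sequence';

% Add a step that ramps the room temperature down to force heating mode.
sltest.testsequence.addStepAfter(blk, 'ColdOutside', 'Initialize', ...
    'Action', 'Troom = 75 - 5*ramp(et);');

% Leave Initialize for ColdOutside after initialization has run for 2 seconds.
sltest.testsequence.addTransition(blk, 'Initialize', 'after(2, sec)', 'ColdOutside');
```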
Let's look at the requirement to activate the heat pump. The concept is simple: when some condition is true, then some other condition must become true for some time. But formalizing assessments that capture temporal conditions can be hard. Let's look at how we can author this requirement using the new temporal assessment editor. In the Test Manager, I open a test case that uses the test harness for the heat pump model and add the assessment. I build the assessment using predefined patterns; this requirement calls for a trigger-response pattern. I name the assessment, then select a pattern for the trigger condition.
In my case, I want the test to be triggered when the temperature difference is above some threshold for more than two seconds. I type an expression corresponding to the trigger condition and specify the minimum time the condition must be true. Notice that all the symbols are initially marked as unresolved and automatically added to the symbol table; these are then mapped to signals in the model or to expressions. Next, I select a time reference corresponding to when the response condition must be evaluated. Finally, I choose a pattern for the response condition and fill in the condition and a minimum time.
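The form-based editor is the supported way to author these assessments; purely as an illustration of the underlying semantics, the same trigger-response logic could be approximated with temporal operators in a Test Sequence or Test Assessment block. The signal names Tdiff and pump_cmd are assumptions:

```matlab
% Step WaitForTrigger: wait until the trigger condition has held long enough.
%   Transition to CheckResponse when:  duration(Tdiff > 2) >= 2
%
% Step CheckResponse: the response condition must then hold.
verify(pump_cmd == true, 'HeatPump:activate', ...
    'Pump must activate after Tdiff > 2 holds for 2 seconds');
```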
I can collapse the assessment to see a readable yet precise description of the requirement, and I can create a link to the requirement being tested. To see the requirement, I can use the link to navigate directly to it. Notice that a Verified link is created, and the status shows yellow, indicating that the test hasn't been run yet. We've mapped our assessment symbols to the model and are now ready to run the test case, which we can launch directly from the requirement via the link.
The test runs, and the status is automatically updated to show that the test failed, with a link to navigate directly to the test results. The assessment result shows the expected behavior alongside the actual results where the assessment failed. A textual explanation tells us that the pump should have activated at 13 seconds. The expression tree explains more details about the failure and allows me to debug it. Using a data cursor, we can see that the pump did not turn on until later, which is incorrect.
To debug the error, we start by using the link to navigate to the implementation in the model. Upon examining the transition logic, we can quickly see that the condition is using the wrong threshold value. I correct this in the model, navigate back to the test, and rerun it. The test passes, and navigating back to the Requirements Editor, I can see that the verification result is automatically updated to show green for passing.
To summarize what we saw: using the temporal assessment editor, we translated a text requirement into a formal assessment using a form-based editor. The assessment could be viewed as a readable, English-like sentence. The assessment results window allowed me to review and debug the results, and the test result was traceable back to the requirement and the design. To see the overall status and identify gaps, you can display the implementation and verification status to measure the overall completeness of design and test.
For the implementation status, blue indicates that a link exists, light blue indicates there is a justification, and a blank indicates that a link is missing. For the verification status, green indicates that the test passed, red indicates a failure, yellow indicates there are no results yet, and a blank indicates that a link to a test is missing. A test harness is one way to isolate a component under test, but to verify a design, you may need to access signals buried deep inside the model hierarchy, and you don't want to modify the design or its interface just for testing purposes.
The new Observer blocks allow you to monitor the signals of your model while preserving the design's dynamic response and interfaces. You can separate verification logic from the design and access any signal at any level of the hierarchy without modifying the interfaces and without impacting the system's dynamic response. This helps avoid cluttering the model with additional signals required only for testing. You can place the verification logic in a separate model that co-simulates with your model under test.
Tests can be reused to perform equivalence testing for SIL, PIL, or HIL. SIL, or software-in-the-loop testing, refers to reusing tests from the model to execute the generated code on a desktop PC and then comparing the results against the simulation results. You can also measure code coverage to confirm that the generated code is completely tested. PIL, or processor-in-the-loop testing, refers to cross-compiling the generated code for the target processor, executing that code on the target, and then comparing the target results against the simulation results. There is also a third kind of in-the-loop testing, HIL, or hardware-in-the-loop, which checks the real-time behavior of the design and the code using Simulink Real-Time and Speedgoat hardware.
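A SIL equivalence test, for instance, might be set up along these lines; the names below are illustrative assumptions, and property names may vary by release:

```matlab
% Create an equivalence test comparing normal simulation against SIL.
tf = sltest.testmanager.TestFile('ControllerEquivalence.mldatx');
ts = getTestSuites(tf);
tc = createTestCase(ts(1), 'equivalence', 'SIL vs Normal');

% Simulation 1 runs in normal mode; simulation 2 runs the generated code.
setProperty(tc, 'Model', 'sltestHeatpumpExample', 'SimulationIndex', 1);
setProperty(tc, 'Model', 'sltestHeatpumpExample', 'SimulationIndex', 2, ...
    'SimulationMode', 'Software-in-the-Loop (SIL)');

results = sltest.testmanager.run;
```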
We have many customers using the V&V workflow. Here's one example: LS Automotive is one of the largest tier one suppliers of vehicle switches and components in Korea. They needed to meet increasing demand from OEMs for faster delivery of more features and for production systems that comply with the ISO 26262 standard. LS Automotive worked with MathWorks Consulting to adopt Model-Based Design with MATLAB and Simulink to reduce the development time of their mirror and power window controls.
They ran simulations in Simulink that revealed customer specification errors. Fortunately, these were relatively easy to resolve because they had been identified early in development. They were able to eliminate manual coding errors and reduce their development times. To learn more about LS Automotive and other customers, visit our website.
In summary, Model-Based Design enables you to hit the Play button to verify and validate designs earlier and catch errors before hardware is even available. The new test assessment language enables you to translate informal text requirements into unambiguous assessments with precise semantics, which can be viewed as natural language sentences for better understandability.
With this solution, we are able to trace from the requirements to the design and to the tests, allowing us to easily identify where a requirement is implemented and how it is being tested. This also enables us to determine what will be impacted if a requirement changes. To learn more about the MathWorks verification and validation workflow, check out the product pages for the products discussed today or the solution page on www.mathworks.com.
Featured Product
Simulink Test