Accelerate A-SPICE compliance with Model Based Design - Part 2/2 (Software Engineering Processes)
Overview
This webinar is the second part of a two-part series on using Model-Based Design to achieve compliance with Automotive SPICE. This part focuses on software engineering processes.
Automotive SPICE allows organizations across the automotive supply chain to assess and improve the capability levels of their own processes as well as those of their suppliers. Consequently, A-SPICE compliant processes allow suppliers to satisfy and even exceed customer expectations.
To achieve A-SPICE compliance effectively, organizations choose to use the Simulink family of products. This is partially because automation capabilities in Model-Based Systems Engineering allow engineers to focus on their state-of-the-art products and innovations while leveraging tooling support to achieve process quality aspects like traceability, consistency, and documentation.
Starting in R2022a, the IEC Certification Kit provides a mapping document between the base practices of engineering processes in Automotive SPICE and use cases of the Simulink family of products.
In this webinar, we will see how these products work together in a streamlined way to support your ASPICE compliance efforts when it comes to Software Engineering processes.
Mr. Peter Abowd from Kugler Maag will join the session to highlight some of the best practices in this space.
Highlights
- Achieve ASPICE compliance with MBD by performing base practices of software engineering processes
- Maintain consistency and traceability across your development and V&V artifacts with MBD
- Ensure continuity of your software development artifacts
- Shift your verification to the left and detect issues as soon as they are introduced throughout the development process
- Show evidence for compliance with automatically generated artifacts using MBD
About the Presenters
Mohammad Abu-Alqumsan, Product Manager at MathWorks. Mohammad focuses on quality, safety and cybersecurity and consults with industry participants on qualifying tools and developing workflows that comply with popular certification standards such as ISO 26262, SOTIF, IEC 61508, ISO 21434 and Automotive SPICE.
Peter Abowd, CEO of Kugler Maag Cie North America. Peter’s expertise ranges from building successful global embedded software organizations and providing organizational change leadership to guiding teams in implementing practices from ASPICE, functional safety, Agile, software product lines, and CMMI.
Recorded: 5 Oct 2022
Hello, everyone. Welcome to our second webinar on Model-Based Design and Automotive SPICE. My name is Mohammad Abu-Alqumsan, and I am the product marketing manager for the IEC Certification Kit here at MathWorks. My colleagues Alex Shin and Nukul Sehgal from our application engineering teams will join us later.
Peter Abowd is today's special guest from Kugler Maag North America. Peter will join later for a discussion and for the Q&A session.
This is the second webinar on Automotive SPICE with model-based design and will mainly focus on software engineering processes, following the first webinar, which covered systems engineering processes. In today's webinar, you will learn how to, first, quickly develop and realize software requirements, architecture, and design; second, simulate often and test early to gain confidence in your software implementation even before the availability of hardware; and third, quickly iterate over your designs and implementations after changes and modifications.
The direct result of these features is an A-SPICE-compliant process that allows you to develop high quality products to satisfy or even exceed the expectations of your customers. In the previous webinar, we introduced our case study of the battery system, which we used to show the effective support model-based design and model-based systems engineering have for A-SPICE processes. We have seen how you can use simulation to validate some system-level decisions, even before the start of product development at the software or hardware levels.
In this webinar, we will continue with the battery system example. Two main work products from the system level are necessary to kick off activities at the software level. These are the system requirements and the system architectural design.
System-level interface specifications typically define the hardware-software interfaces as well. The fully integrated software is later integrated with hardware and elements from other domains during the system-level integration.
Here is today's agenda. My colleagues Alex and Nukul will walk us through each process in the software engineering process group and will provide some highlights on relevant model-based design practices. I will then summarize model-based design support over some overarching A-SPICE concepts like traceability, consistency, and documentation before we have some concluding remarks.
Mr. Peter Abowd, CEO of Kugler Maag North America, will join us after that for a quick discussion. Then we open the floor for your questions. Now please, let me hand it over to my colleague, Alex.
Thank you, Mohammad. I'll start with SWE 1 and 2, and then hand it over to my colleague, Nukul, who will take us through the detailed design, unit construction, and unit verification, which are SWE 3 and 4. I'll circle back to go through SWE 5 and 6 on software integration and integration test and qualification test.
Now, let's start with 1 and 2. The first process of the software engineering process group, SWE, is software requirements analysis. The purpose is to define software requirements based on system requirements and architecture. Both functional and non-functional requirements need to be fully specified.
With model-based design, you can import existing software requirements into Simulink if these are already authored externally. Here is an example of software requirements written in Microsoft Word. In this example, we're using a user interface to input the requirements.
You can easily understand the structure of requirements using a requirements editor. All the necessary properties, such as requirements ID, summary, and description can be imported, including custom attributes. When the external requirements are updated, Requirements Toolbox helps you to identify these changes to remain in synchronization.
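The import step described above can also be scripted with the Requirements Toolbox API. A minimal sketch, where the document and requirement set names are illustrative placeholders:

```matlab
% Import requirements from an external Word document into a requirement set.
% File and requirement set names are placeholders for your own artifacts.
[count, reqSetFile] = slreq.import("SoftwareRequirements.docx", ...
    "ReqSet", "BMS_SW_Requirements", ...   % name of the created requirement set
    "RichText", true);                     % preserve formatted text and images
fprintf("Imported %d requirements into %s\n", count, reqSetFile);
```

After import, re-running `slreq.import` against an updated source document is one way to bring in changes; the Requirements Editor then flags items needing review.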
You can also exchange requirements written and updated in Simulink in the ReqIF format. And of course, you can directly modify requirements using Requirements Toolbox. Once the requirements are specified, reviews are conducted to ensure correctness, technical feasibility, and verifiability of software requirements.
The Requirements Table can be used to support this process by automatically checking the consistency and completeness of formal requirements. A tabular format is provided to easily capture the preconditions and postconditions of formal requirements. The automatic analysis capability will highlight flaws in your requirements, such as inconsistency or incompleteness.
Based on the findings, requirements should be improved for consistency and completeness. Here is an updated version of the formal requirements. This time, as can be seen from the analysis results, they are consistent and complete.
Demonstrating bidirectional traceability is an important part of A-SPICE activities. Links in the Requirements Editor can help you effectively establish and demonstrate bidirectional traceability from software requirements to system requirements and vice versa.
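Such links can also be created programmatically with the Requirements Toolbox API. A sketch, where the requirement set names and IDs are hypothetical:

```matlab
% Link a software requirement to the system requirement it derives from.
% Requirement set names and requirement IDs are illustrative placeholders.
swSet  = slreq.load("BMS_SW_Requirements");
sysSet = slreq.load("BMS_System_Requirements");
swReq  = find(swSet,  "Type", "Requirement", "Id", "SW_042");
sysReq = find(sysSet, "Type", "Requirement", "Id", "SYS_007");
link = slreq.createLink(swReq, sysReq);    % link direction: swReq -> sysReq
link.Type = "Derive";                      % software requirement derives from system requirement
```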
Traceability can also be visualized using requirements diagrams. If you're interested only in the upstream links, you can use the filters. You can review all the links in a project using the traceability matrix as well.
Model-based design provides proven tools to analyze traceability, address gaps, and quickly respond to changes. So in this process, software requirements are defined and analyzed. Here is the list of work products required; reports and artifacts are generated based on the activities described.
In the next process, the software architectural design is established to identify the allocation of software requirements to the elements of software and to evaluate the software architectural design against defined criteria. This is a software architecture diagram of the Battery Management Controller developed using System Composer. Elements of software are clearly represented with components, connections, and ports.
Interfaces of each software element are designed based on interface requirements. Complex interfaces can also be designed using the Interface Editor. As part of the architectural design, dynamic behaviors, including interactions between software elements, need to be described. To describe the required dynamic behavior of software elements, it's important to understand the interactions between them.
Let's look at the Main State Machine of the Battery Management Controller. Different architecture views can be used to visualize the required interactions between software elements. Allocated requirements can also be reviewed together. From this view, lifelines of sequence diagrams can be automatically generated, providing a great starting point.
This is the completed version of the sequence diagram with the Main State Machine, clearly describing the required interactions between software elements. And from release R2022b of MATLAB, you can execute sequence diagrams to verify the described dynamic behaviors.
During the software architectural design process, resource consumption objectives need to be defined for the software elements. Using System Composer, profile-based stereotypes can be defined and assigned to software elements. In this example, resource consumption objectives are defined for the SOC estimation component.
Using the analysis viewer, the overall resource consumption of the software can easily be visualized. MATLAB analysis functions can also be used to analyze and adjust resource consumption objectives. In the software architectural design process, the software architectural design, interface requirements specification, and traceability records are some of the main output work products required.
Now, I'd like to hand it over to my colleague, Nukul, to walk us through SWE 3 and 4. Over to you, Nukul.
Thank you, Alex. Now it is time to dive into software detail design, unit construction, and unit verification. The purpose of SWE 3 is to develop and evaluate the software detailed designs and to specify and produce the software units.
To start with, you can create a skeleton Simulink model for a software component in the architecture directly from System Composer. If the models are already available, you can link them with the respective software components in the architecture. You can use System Composer, Simulink, and Stateflow to develop the software detailed design.
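Both paths, creating a skeleton model and linking an existing one, are available from the System Composer API. A sketch, where the architecture model and component names are placeholders:

```matlab
% Create a skeleton Simulink behavior model for an architecture component,
% or link an already-available model. Names are illustrative placeholders.
arch = systemcomposer.loadModel("BatteryManagementController");
comp = lookup(arch, "Path", "BatteryManagementController/SOC_Estimation");
createSimulinkBehavior(comp, "SOC_Estimation");   % new skeleton model, ports match the component
% linkToModel(comp, "ExistingSOCModel");          % alternative: link an existing model
```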
Let's see an example using Stateflow to implement the detailed design for the BMS state machine. This state machine specifies states like standby, driving, charging, and fault.
While developing your detailed design models, you can establish traceability links between the functional requirements and the elements of the detailed design. This helps you establish bidirectional traceability of software requirements further down to the software units. With the help of data dictionaries, you can specify and document interface properties like data types and dimensions for your software units.
During software detail design activities, you should also consider evaluating the design for the parameters related to model and code quality. To this end, you can automatically analyze software units at the model level against modeling guidelines and generate a report as an artifact.
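This analysis can be automated with the Model Advisor API. A minimal sketch, where the model name is a placeholder and the check ID shown is one standard built-in Simulink check; substitute your own guideline check list:

```matlab
% Run Model Advisor checks on a design model and generate an HTML report.
% The model name is a placeholder for your own detailed design model.
checkIDs = {'mathworks.design.UnconnectedLinesPorts'};   % example built-in check
results = ModelAdvisor.run("BMS_StateMachine", checkIDs, ...
    "ReportFormat", "html", "ReportName", "ModelAdvisorReport");
```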
Model metrics, which are of interest at this step of the process, are mainly related to size, missing interface information, requirements linking, consistency, cyclomatic complexity, et cetera. The Model maintainability dashboard facilitates reviewing metrics related to the size, complexity, and architecture of your software detail design models, much like the Metrics dashboard does for a unit.
In model-based design, you can use production code generation to generate source code from your detailed design. To this end, you can configure Embedded Coder to generate production code. This may include setting code generation objectives for MISRA compliance and configuring other optimization settings.
In the report section, you can choose to generate reports on static code metrics, like the usage of global variables and function complexity. In the code perspective, you can open model and code side by side to view the bidirectional traceability between the software detail design, software source code, and other relevant software requirements.
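The configuration and build steps just described can be scripted as well. A sketch using standard model configuration parameters, with a placeholder model name:

```matlab
% Configure a unit model for production code generation and build it.
% The model name is a placeholder for your own detailed design model.
model = "SOC_Estimation";
load_system(model);
set_param(model, "SystemTargetFile", "ert.tlc");       % Embedded Coder target
set_param(model, "GenerateReport", "on");              % HTML code generation report
set_param(model, "GenerateCodeMetricsReport", "on");   % static code metrics section
slbuild(model);                                        % generate production code
```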
Let's see, at this stage, the output work products you can get from model-based design relevant to SWE 3. These are the detailed design of your software components, the software units, and the interface and traceability data.
The process SWE 4, or Software Unit Verification, starts, when using model-based design, as soon as your fully detailed design models are available. You can evaluate software detailed design models against high-integrity and MAB guidelines to check for design quality. The results details view gives a summary of all results regarding the checked components against the selected guidelines.
Once the Model Advisor completes the static analysis, you can generate static verification reports for archiving and for review. You can trace any violations you have in the design back to their origins in the model. These links and the recommended actions provided in the result details let you easily fix these violations if necessary. We are listing these activities in relevance to SWE 4.
But in practice, you can exercise these types of analysis before production code generation in SWE 3. In fact, this is the recommended practice. This lets you shift more of your A-SPICE verification activities to the left side of the V model.
For static code verification, it is common to have coding rules such as MISRA C. Polyspace products can help you do so for automatically generated code, as well as handwritten code. Polyspace products can also use formal methods to formally check the presence of common runtime errors like division by 0 or overflows.
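For code generated from a model, the Polyspace model link can also be driven from MATLAB. A sketch under the assumption that the Polyspace products are installed and the model name is a placeholder:

```matlab
% Run Polyspace on the code generated from a model via the model link API.
psOpts = pslinkoptions("SOC_Estimation");   % placeholder model name
psOpts.VerificationMode = "CodeProver";     % formal proof of absence of run-time errors
pslinkrun("SOC_Estimation", psOpts);        % analyze the generated code
```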
In SWE 4, you also need to dynamically test your software units against the detailed design and software requirements. For testing software units, you can use harness models, which let you isolate a unit from the main model for testing without changing the original model. Harness models also let you provide the inputs according to your test case specifications. These test cases are managed by the Test Manager. You can automatically generate a test specification report that contains all the details about the test cases, the test harness, and the test steps.
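Running the managed test cases and exporting the results report can be scripted through the Simulink Test API. A sketch, where the test file name is a placeholder for your own artifact:

```matlab
% Load a test file, run its test cases, and export a results report.
% The test file name is a placeholder for your own Simulink Test artifact.
sltest.testmanager.load("BMS_UnitTests.mldatx");
results = sltest.testmanager.run;                          % run all loaded test files
sltest.testmanager.report(results, "UnitTestResults.pdf"); % archive-ready report
```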
In Simulink Test, you can author test inputs in many different ways, including MAT files or Excel files, the Signal Editor block, the Test Sequence block, MATLAB scripts, Stateflow diagrams, and more. To assess the simulation results, you can compare them against baseline results that are saved in MAT or Excel files.
You can write custom assessment criteria using MATLAB code that is based on the MATLAB unit test framework. Or you can use the Test Assessment block to define pass/fail conditions during simulation. You can also test your models in different modes, including Software-in-the-Loop, Processor-in-the-Loop, and Hardware-in-the-Loop. For scalability, you can run your tests in parallel using the Parallel Computing Toolbox and schedule your tests to run in a continuous integration environment.
In SIL testing, you start with the test vectors used for simulation. You then perform desktop simulation with these test vectors and gather the results. Using Embedded Coder, you generate code and compile the code for the desktop PC.
This code is then executed on the PC to produce results. The results of the code execution are compared to the simulation results. The SIL process shows equivalence of model and code. You can also use this process to assess code execution time, as well as collect code coverage.
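One way to sketch this back-to-back comparison is by switching the top model's simulation mode, assuming a placeholder model configured for Embedded Coder:

```matlab
% Compare normal-mode simulation with SIL execution of the generated code.
% The model name is a placeholder; top-model SIL requires Embedded Coder.
model = "SOC_Estimation";
load_system(model);
set_param(model, "SimulationMode", "normal");
normalOut = sim(model);                                % desktop simulation
set_param(model, "SimulationMode", "software-in-the-loop (sil)");
silOut = sim(model);                                   % compiled code runs on the host PC
% Compare logged outputs in normalOut and silOut to show model/code equivalence.
```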
In PIL testing, you compile the code for the target using a cross compiler. This code is then executed on the target, and the produced results are collected back in Simulink. The results of the code execution on the target are compared to the simulation results.
The PIL process shows equivalence of model and code. You can also use this process to assess target code execution time, as well as collect on-target code coverage. From the Test Manager of Simulink Test, you can easily select and execute test cases. The Test Manager also lets you interactively analyze logged signals after test execution is completed.
It also lets you easily generate the result report as an artifact that contains all the results, along with the data that you are interested in documenting, like logged simulation outputs. You can configure test execution to collect coverage data. Let's see what coverage these tests have achieved on the Main State Machine of our Battery Management Controller.
One way to check coverage is to directly go from test manager to the models. We can see the coverage results highlighted over the model and along with the coverage details side by side. After the code generation, or when testing in SIL mode, you can similarly check for code coverage results in the coverage details.
The Simulink Test Manager allows you to manage your tests for simulation, baseline, or equivalence testing, regardless of simulation or testing mode. It also allows you to interactively investigate results. You can visualize the results and debug any failures. You can group the tests into test suites and run single tests, single suites, or all the tests.
As mentioned, when running tests, you can measure coverage to identify which parts of your model or generated code are exercised during testing.
Coverage results can be highlighted on Simulink, Stateflow, and generated code. Coverage analysis for integrated C, C++, or MATLAB code in the design is also supported. You can use missing coverage data to find gaps in testing, missing requirements, unintended functionality, or dead logic.
You can also perform coverage analysis for C and C++ code generated by Embedded Coder during Software-in-the-Loop and Processor-in-the-Loop simulations to identify untested portions of the generated code prior to the software integration testing. Simulink Coverage lets you generate coverage reports, which help in understanding the covered code and missing coverage with respect to configured coverage criteria like statement coverage or MC/DC.
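Coverage collection and reporting can also be driven from the Simulink Coverage command-line API. A sketch, where the model name is a placeholder:

```matlab
% Collect decision and MC/DC coverage from a simulation run and report it.
% The model name is a placeholder for your own unit model.
testObj = cvtest("BMS_StateMachine");   % coverage test specification
testObj.settings.decision = 1;          % enable decision coverage
testObj.settings.mcdc = 1;              % enable MC/DC coverage
covData = cvsim(testObj);               % simulate and record coverage
cvhtml("CoverageReport", covData);      % generate an HTML coverage report
```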
Throughout your testing activities, you can use the Model Testing Dashboard to collect metric data on the status, quality, completeness, and consistency of your requirements-based testing. It builds upon the available digital thread in model-based design to retrieve relevant information, like requirements with missing tests or designs with insufficient coverage.
In software unit verification, key work products will be test specification, test result reports, along with the traceability. Now, Alex will continue to discuss how model-based design can be used for SWE 5 and 6. Over to you, Alex.
Thank you, Nukul. Now, let's look at SWE 5 and 6. In the next process, software units are integrated into larger software items based on the software architectural design. Integrated software items are tested to provide evidence of compliance with the architectural design, including their interfaces.
Architectural design can support integration of software units. Simulink models can be linked to the architecture diagram to form software items in System Composer. Correct interpretation of data by software items can also be checked in the integrated model. You can update the integrated model to check any interface mismatches.
When mismatches are found, the integrated model will provide a convenient link to the issue. This can be reviewed, addressed, and validated based on requirements within the same modeling environment.
Now, the next activity is to develop and perform integration tests based on the test specification and strategy. Similar to unit verification, Simulink Test can support your test specification activities. As part of test specification, you can develop test harnesses for your integration. At this stage, test inputs should be focused on testing the interactions of software units within an appropriate test environment.
Developed test harnesses can be registered for use in your test cases in the Test Manager. By running test cases, dynamic interactions between software units in the integrated software item can be verified. Coverage information is also available to check the execution of software units. Simulation results can be reviewed against the expected results.
Test manager also provides built-in capability to easily capture baseline results. After a simulation, test manager can also automatically compare the simulation results with pre-specified baseline results. You can also define formalized verification criteria based on your requirements. This can also be used to automate the verification of test results.
At the integrated software code level, Polyspace static analysis can also support the integration test activities. Data flow analysis results can be analyzed to show correct data flow between software items. You can also check for data race conditions, including the cause of any potential issues.
Polyspace provides complete static analysis capabilities for code-level testing. The overall quality of the software can also be tracked using the dashboard capabilities. The integrated software, test specifications and results, and traceability records are some of the key work products in this process.
Now, the last process in the software engineering process group is the software qualification test. In this process, integrated software is tested to provide evidence for compliance with the software requirements. Test cases are developed, based on software requirements, and test results are recorded.
Hardware-in-the-Loop testing is often used for the software qualification test. The Battery Management Controller software is deployed to an embedded controller. The battery model, including the necessary test environment models, runs on a real-time target machine, such as a Speedgoat Performance real-time target machine. The embedded controller and the Speedgoat target machine are then wired together to fully test the deployed software according to the qualification test specification.
Hardware-in-the-Loop testing can validate your embedded controller without requiring the real battery on the test. This allows injection of faults and test conditions without damaging the real hardware.
Speedgoat provides hardware options specific to Battery Management System development, such as a battery simulation I/O module, a fault insertion I/O unit, and a battery cell emulator. Using a combination of these hardware options, you can create an ideal Hardware-in-the-Loop simulation environment for Battery Management Systems.
Qualification tests created in the Simulink Test environment can be used to automatically run multiple test cases on the Hardware-in-the-Loop setup. Once the tests are complete, we can use the generated reports as output work products. If you're using a continuous integration platform, the test environment is compatible with platforms such as Jenkins and GitHub Actions. Test specifications, test results, and traceability records are the expected work products in this process. All right, now I would like to bring back Mohammad to discuss the overarching concepts in A-SPICE.
Thank you, Alex. I'm going, in the next slide, to summarize some of the features that are relevant to two important A-SPICE overarching concepts. The first is documentation. Here is a summary of the output work products you can automatically generate using model-based design features presented today.
Here are some examples-- software requirements, software architectural design, detailed design, and test results. The second concept is using traceability to assess consistency and completeness. Throughout this presentation, you have seen several ways to investigate up and downstream traceability links. For example, from software architectural elements to requirements, or from detailed design to requirements.
One can also use traceability diagrams to show a summary of all upward and downward links associated with a specific requirement. Traceability matrices, on the other hand, give you a high-level view of existing links, which you can easily update.
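A traceability matrix between specific artifacts can be generated programmatically too. A sketch, where the artifact file names are illustrative placeholders:

```matlab
% Generate a traceability matrix between design models and requirement sets.
% Artifact file names are illustrative placeholders.
opts = slreq.getTraceabilityMatrixOptions;
opts.leftArtifacts  = {'BMS_StateMachine.slx'};         % design models
opts.rightArtifacts = {'BMS_SW_Requirements.slreqx'};   % requirement sets
slreq.generateTraceabilityMatrix(opts);                 % opens the interactive matrix view
```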
The Model Testing Dashboard builds on the digital thread across all artifacts to assist you in your efforts to guarantee correctness and completeness. Based on the features we have shown today, it is safe to conclude the following: model-based design offers a wide spectrum of features that let you quickly develop your software architectures and designs.
With model-based design, you can have the source code implementation with a click using production code generation. Shifting your verification to the left side of the V with model-based design simulations and static analysis features is a great asset, not only in defect detection but also in defect prevention.
Furthermore, changes and modifications in complex software systems are inevitable. Tool support for traceability and dependency analysis can greatly support your impact analysis after each change. This contributes to faster and more efficient development iterations.
All this can contribute only to a more efficient automotive SPICE-compliant process. Our intention today was to highlight some of the features in model-based design that ease your journey with automotive SPICE software engineering processes. For a more detailed mapping between the base practices of engineering processes and model-based design activities, please refer to the IEC Certification Kit.
For your quick reference, this slide shows a broad view of the MathWorks solution to automotive SPICE. Don't worry if you are not able to read through. We are going to share these slides after today's event.
Here is the same information in a workflow style that shows the system- and software-level activities. You want to develop products according to ISO 26262 or ISO 21434, or Automotive SPICE as well? No problem. Together, this reference workflow and the IEC Certification Kit have the support you need. Feel free to check that out.
Model-based design is proven in use for A-SPICE and ISO 26262. On mathworks.com, you may find several user stories on this. Here is a selected list from Robert Bosch and Volkswagen for your quick reference. The last link is for a talk on effective strategies for A-SPICE and safety compliance by Peter Abowd, our guest for today.
So last time, we discussed systems processes. We talked about system requirements, system architecture, and testing. And today, we see parallel concepts in software.
Like, we need to author our requirements, then design our architecture, go to detailed design and unit construction, and then go to the right side of the V. So I have one starting question: why do we need software requirements when we have system-level requirements?
Yeah. That's a common question. And admittedly, the standard doesn't express itself so clearly on this. But really, they complement each other-- the system and software requirements. And the standard also requires that there be electrical and mechanical requirements in addition to the software ones.
And why is there this relationship? And it goes back to the idea that the system requirement is what the system needs to achieve. And when you take a system architecture into consideration, this now takes the goal of the system requirement and gives responsibilities to the different disciplines by imposing the constraints of the design.
So I started my career in radio design in the '80s and have used this analogy to explain system and software requirements for quite a while now. And what we have is a system requirement that says tune 101.1 FM.
OK. And that requirement has existed for a long time. But those of you who have maybe been around for a while might remember that early tuners had capacitive plates and mechanical knobs and had some electric circuitry. So the system requirement, tune 101.1 FM, was realized by a system architecture in which there were a lot of mechanical requirements, some electrical requirements, and no software requirements.
Whereas, if we kind of fast forward to modern day radio, we have an antenna and we have some circuitry, brick wall filter circuitry maybe to sample the RF right off the antenna into a DSP and control software to basically demod and pull the signal off, as well as the digital information describing the signal-- and really, the signal is digital now even. So it's a lot of software responsibility, but still the same system requirement. So you can see how the system requirements and the system architecture are used to derive the individual responsibilities of software, electrical, and mechanical requirements.
Great example, Peter. I think most of our attendees today, probably coming from automotive, may wonder whether this is also an automotive example, because we have radio systems in our cars. So it's a great example.
Absolutely.
So again, I want to follow the left side of the V. That's the processes at the left side. So we talked about requirements a little bit.
So your talk at the MathWorks Automotive Conference this year was about architecture and detailed design: when to start and when to stop each. So where do you see a good practice to draw the line between architecture and design in general?
Yeah, this is, like the question about software and system requirements, one that comes up a lot: when does architecture stop and detailed design begin, or what's really the pragmatic relationship between the two? And in that video, which I think is still available from your site, I go into more detail than I'll be able to go into now.
But in A-SPICE, it talks about software architecture as being, as you saw today, a decomposition of elements. But it also describes a termination of architecture in something that's called a software component. So you can decompose and decompose elements of elements. And then you decide to stop.
And the reasons for stopping are in the head of the architect. Why stop decomposing this piece any further? It's because at this point, we have no purpose as an architect to abstract any further or hide any further. Those are decisions that the architects will have to justify. But they stop architecting, and this is a software component.
And now the software component is what becomes the subject of a detailed design description. And the detailed design description-- this is now SWE 3-- describes how that software component will be realized in units, which are software.
Now, in the MathWorks toolset, those elements can be Simulink blocks on a sheet. They can decompose to have more Simulink blocks. And you can end this decomposition in Simulink with a model reference block.
And that model reference block is now the atomic software component of that architecture. And when you open up the model reference block, its description is the detailed design. And when you generate the code, that generated code is the unit.
So there's a very clean path towards A-SPICE compliance there that you can easily justify and be successful with. And of course, it's great with these tools because you can actually simulate the design and evaluate it for correctness while you're on the left-hand side of the V in SWE 3, and even simulate the architecture in SWE 2.
So that makes the verification on the right-hand side of the V, software integration and software unit testing, just that much easier. So I hope that helps clarify it.
It does. A great job in doing that. And I think you mentioned this already. Like, if you go with Simulink or model-based design tooling, really you just browse into one component and then you go to the next level of detail.
So you go from the-- it's kind of continuum between the architecture and the detail design. But you have that boundary with those model references out there. This is a great segue to the next question, which is about traceability.
So establishing bidirectional traceability in A-SPICE is almost a common best practice for all engineering processes. Why do you think this is the case? Like, why do we emphasize this bidirectional traceability all the time in A-SPICE?
Yeah. Well, because the model itself has traceability just all over the place. And as you get further down the V, the amount of traceability specified in there gets, what I call, exhaustive and nauseating. And it's really lost in the model, and in the understanding of the model, that from a philosophical perspective, traceability is a means to an end.
In other words, it has no inherent value in and of itself. You don't get anything for having traceability. The important part of the model is the consistency. All right, consistency we can interpret pretty effectively as being completeness and correctness.
So when we talk about consistency in the A-SPICE world, we're talking about a sense of completeness and a sense of correctness, which, from an engineering perspective, are characteristics we should seek. I want to have all of my requirements resolved in my design, and I want my design to correctly resolve those requirements. That's the important part.
So in order to do that, in order to measure the completeness and the consistency, or to help support the consistency, we need traceability. But we don't need insanity of traceability. We need enough traceability, we need enough means to achieve the end.
So how are you going to determine that your design is correct? Well, if you're using the MathWorks tools, you got Simulink on the left hand side, and you've got maybe even System Composer identifying your system elements, you can allocate requirements to those system elements in a number of ways without necessarily having to link tools.
But furthermore, you can go ahead and prove correctness about your design using simulation, using MIL. And you should spend your energy there rather than trying to make, I don't know, 70,000 connection links between MATLAB and DOORS or Polarion or any other tool, and 10,000 connections between-- or try to import a design into another tool just for traceability. This is the sort of insanity we see that really loses people in traceability, when they should really be spending the energy on correctness, using the traceability.
Right. Like, Model-Based Design offers this tool support for traceability, but, again, it's the engineer in the loop who should ensure this consistency and correctness. It is a great way to do so. Absolutely. It should be taken advantage of far more than it currently is.
OK. No, I cannot disagree. So this is great. The last question I have here for you, Peter, is you've been assisting projects using model-based design and projects that aim to comply with A-SPICE. So where do you see the biggest advantage when people adopt model-based design?
Well, we've touched on it a little bit already, but I'll reemphasize it. OK. Unfortunately, what we see a lot happening, especially on the software side, is kind of non-value-add compliance to the left hand side of the V when it comes to software architecture and software designs. And it's not the automotive industry's fault.
The world of software engineering has broken down in terms of the effectiveness of software design. I mean, think about it. Some of the tools that people are using for software design are decades old and haven't really been modified or helped their users substantially since then. Whereas a suite like the MathWorks tools has been evolving and growing for 25-plus years, others have just been stagnant and reintroduced into the market.
And so-- and the point is that these architectures and designs are there on the left hand side of the V to prevent failure, to avoid making defects, because on the right hand side of the V, verification that we built, what we intended to build, or qualification that we built what the customer wants, these are supposed to be proofs. And really with qualification, proof that the requirement is achieved in a lot of different contexts.
They're not supposed to be exercises in finding defects, because it's expensive over there. And this is well-studied data in the universe of software engineering. You can find lots of studies on the cost of finding and fixing a defect later and later in the life cycle. And none of that cost beats preventing the defect in the first place.
And when it comes to preventing it, great requirements and great architecture and design simulation are state-of-the-art. And I'd love to say they're state-of-the-practice, but it's not enough out there. It's still state-of-the-art, whereas it could be state-of-the-practice.
OK. This is awesome, Peter. I think we are reaching the end of this part in our discussion. But it would be great if you stay for the Q&A session with our attendees, because now I need to welcome Giovanni and Mark, our colleagues, to this Q&A session.
Giovanni will be moderating the questions that we have received so far. So to you, Giovanni.
Thank you, Mohammad. So then let's start with the first question that I see. And I believe that's for you, Mohammad. So what is the recommended practice to define the interfaces in model-based design? Can you say something about this?
Yes. I think there are different ways to define interfaces. If you go to the Simulink canvas, then for every input or output you can define its properties, data types and so on. But this is not a scalable solution.
I think the common or the recommended best practice here is to use data dictionaries because, again, you can reuse those interface definitions across models. So again, we talked about architecture, detailed design, and the different abstraction levels here. Those data dictionaries would be helpful at the architecture level, at the detailed design level, and at the end, at the implementation model, the model that you use for code generation.
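The data-dictionary practice described above can be sketched as follows. This is an illustrative example only: the dictionary name, bus name, and signal names are hypothetical.

```matlab
% Hypothetical sketch: define interface types once in a data dictionary
% and share them across architecture, design, and implementation models.

% Create the dictionary and get its Design Data section
dictObj = Simulink.data.dictionary.create('ComponentInterfaces.sldd');
designData = getSection(dictObj, 'Design Data');

% Define a bus object describing one interface (names are illustrative)
elems(1) = Simulink.BusElement;
elems(1).Name = 'speed';
elems(1).DataType = 'single';
elems(2) = Simulink.BusElement;
elems(2).Name = 'valid';
elems(2).DataType = 'boolean';
speedBus = Simulink.Bus;
speedBus.Elements = elems;

% Store the definition in the dictionary and save
addEntry(designData, 'SpeedBus', speedBus);
saveChanges(dictObj);

% Any model at any abstraction level can then link to the same
% definitions:
% set_param('MyModel', 'DataDictionary', 'ComponentInterfaces.sldd');
```

Because every model links to the same dictionary file, changing an interface in one place keeps architecture, detailed design, and the code-generation model consistent.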
Thank you, Mohammad. Next question then. And Mark, that's for you. So I find it a really good question. So our customers do not accept a Simulink model with algorithms as detailed design. So what can we do in that case?
Yeah. Thanks, Giovanni. That's something that we hear from time to time, that some people out there think that the design description needs to be on a more abstract level, more descriptive, maybe using diagrams like sequence diagrams or other diagrams that just describe the behavior, especially in between the different software units, because one software component can consist of several software units, of course. And the interaction between those is often non-trivial.
But here we have a simple solution. You can use System Composer for that level to restrict yourself to the structural description of all the units talking to each other and the interface definitions, and use, for example, sequence diagrams to exactly describe the behavior on that level and the interaction of the different units with each other. Or you can also use other diagrams for descriptive purposes on this level.
And this doesn't prevent you from later on using simulation, for example, once you have the software units defined in Simulink. Of course, you can still do that also in System Composer. And even later on, code generation would be possible, provided you already have the Simulink descriptions on the unit level. System Composer also supports these use cases.
So to summarize it, use System Composer for the detailed design description, and only later on, when it comes to unit implementation, start using Simulink and Stateflow. Peter, is there anything you want to add to this question?
I just want to add how we can interpret, from the model as well as from the assessor guidelines, that a detailed design or an architecture using model-based design is acceptable. If we go simply to the model-based environment, there are rules saying you need to have defined semantics in order to qualify as an architecture or a detailed design.
And certainly, the MathWorks tools have a very specific and clearly defined set of semantics for expressing the behavior of the intended code. And so we accept these aspects for both architectures and detailed designs quite often. And there are even some opinions that the expression of a model reference in Simulink could actually be considered implementation.
So there's a range of interpretations. But definitely, we can support with the guidelines, the rating guidelines, that the Simulink abstractions are acceptable design notation.
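The System Composer approach Mark describes -- a purely structural description of units, ports, and interfaces, without algorithmic detail -- can be sketched like this. All model, component, interface, and port names here are hypothetical.

```matlab
% Hypothetical sketch: describe a software component structurally in
% System Composer, leaving algorithmic detail for later Simulink
% unit implementations.

model = systemcomposer.createModel('ComponentDesign');
arch  = model.Architecture;

% Two software units inside the component (names are illustrative)
unitA = addComponent(arch, 'SignalConditioning');
unitB = addComponent(arch, 'ControlLaw');

% A shared interface definition for the connection between the units
intf = addInterface(model.InterfaceDictionary, 'FilteredSignal');
addElement(intf, 'value', 'Type', 'double');

% Ports on each unit, typed by the shared interface, then connected
outPort = addPort(unitA.Architecture, 'filtered', 'out');
inPort  = addPort(unitB.Architecture, 'filtered', 'in');
setInterface(outPort, intf);
setInterface(inPort, intf);
connect(outPort, inPort);

save(model);
```

A description like this stays at the level many customers expect for detailed design, and the units can later be linked to Simulink behavior models when implementation starts.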
Thank you, Peter. And Mark, since you mentioned System Composer-- or Mohammad, whoever wants to answer here. So how do you specify the design of software components? Simulink or System Composer?
So in the presentation, we mentioned Simulink and Stateflow. But is the door here also open for System Composer?
I think that question has been answered already. That was the discussion between Mark and Peter. So actually, there is nothing to add on this.
Maybe it was a similar question, but I think we covered this as well.
OK. Then the next one. And that's for Peter. Peter, why is the industry not trying to use CMMI instead of A-SPICE?
Well, the industry did use CMMI, and quite exhaustively. And unfortunately, it got a bit abused. Maybe not abused as much as misunderstood. So the short story is, what happened is that organizations, business entities, were getting certifications of appraisals from CMMI as being level 5. And it really exploded.
In the first 10 years of the CMMI, only one project globally was level 5, and that was the space shuttle. When automotive started to adopt the CMMI in the late '90s and early 2000s, suddenly every Indian service company was getting level 5. And it's not to say that those weren't necessarily legitimate.
But what was happening was those weren't the teams of people that were working on projects for clients of those service industries. So it wasn't accurately reflecting the project focus. And while A-SPICE isn't an improvement model like the CMMI, A-SPICE compares rather to something like SCAMPI, which was an appraisal approach for doing CMMI-based improvement.
A-SPICE was at least only project-focused. And that meant, I don't care how the company is rated. I want to know how the project team that's working on my product is doing. And so that narrower focus really helped both the customers and the suppliers focus their improvement in the appropriate perspective. And that's why it's taken off more, and also because the VDA could purchase the rights to it and make it legitimately automotive. They couldn't do that with CMMI. But that's the short story.
OK. Thanks for telling us the short story, Peter, because we are at the top of the hour. And I'm so sorry for those who posed questions we were not able to answer. But we will make sure to answer you offline and reach out. We have, hopefully, your contact information, and we will do this in the coming days.
Really, thanks for attending. You see here several links for how you can reach out to us, how you can request a free trial of MATLAB or find example code on MATLAB Central, and how you can contact our sales representatives.
There is one last thing I would like to kindly ask you to participate in, which is answering those poll questions. We are always looking at areas and ways to help you, not only to comply with A-SPICE, but with all your engineering activities. So please feel free to provide this feedback to us, and we will make sure to address it in upcoming releases and make your A-SPICE journey as easy as possible.
And with this, I really would like to thank our guest, Peter. It was a great pleasure having you here in person today, Peter. And I look forward to more collaboration with you on the topic of model-based design and automotive SPICE.
Well, thanks for having me. And nice to be here again with Giovanni and Mark.
Yeah. And thanks, Giovanni. Thanks, Mark. Thanks to the extended team, Alex and Nukul, for all the great information that we have learned during these two webinars. And thanks, everyone, for attending.