Test Exceptions and Mask Constraints within the Test Manager

Hi,
I want to test my module under test using a test harness and Test Manager combined with a test assessment. In addition to "normal" test cases, I also want to run out-of-range tests, invalid mask inputs (protected by constraints), or set invalid inputs for signals.
I found the openExample('matlab/VerifyErrorTestInputValidationOfAFunctionExample') example, which is roughly what I want to do, but I want to integrate it into my Test Manager to automatically run several different tests.
As an example, I have a Simulink subsystem with a mask parameter called a, which needs to be a positive number and is protected by a mask constraint. In my first test step I enter the value 5 for this mask parameter; in the next test I enter -5. I expect the mask constraint exception to be thrown, and it is thrown, but I want to register this expected exception in the Test Manager as a valid test that passes when the correct exception occurs.
How can I do that?

Answers (2)

Umar
Umar on 2 Jul 2024
Hi Stefanie,
You can use the verifyError function within your test case. For example, after running the simulation with the negative value (-5) for the mask parameter, you can use verifyError to confirm that the expected exception was thrown. Here is an example of how to implement it:
testCase = sltest.TestCase('MyTest');
verifyError(testCase, @() yourSimulationFunction(-5), 'expectedExceptionIdentifier');
Please make sure to replace yourSimulationFunction with the function that runs your simulation with the negative value. Make sure to provide the correct expectedExceptionIdentifier that matches the exception you expect to be thrown when the constraint is violated.
For more information regarding verifyError function, please refer to
https://www.mathworks.com/help/matlab/ref/matlab.unittest.qualifications.verifiable.verifyerror.html
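For reference, a minimal sketch of what yourSimulationFunction could look like, assuming a model named 'myModel' containing a masked subsystem 'MySubsystem' whose mask parameter a is the one from the question (all of these names are placeholders for your own model):

```matlab
% Hypothetical wrapper; 'myModel', 'MySubsystem', and 'a' are placeholder
% names standing in for your actual model, block, and mask parameter.
function simOut = yourSimulationFunction(value)
    load_system('myModel');                          % load without opening the editor
    % Writing a value into the mask parameter; an invalid value should
    % trigger the mask constraint error.
    set_param('myModel/MySubsystem', 'a', num2str(value));
    simOut = sim('myModel');
end
```

Note that whether the constraint error is raised at set_param time or only when the simulation starts depends on how the mask constraint is configured, so the identifier you pass to verifyError should be taken from the actual error you observe (for example by reproducing the failure once at the command line and inspecting MException.last).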
Hope this will help resolve your problem.
Hello, I’d like to join in on the question. I have exactly the same use case.
The first sub-problem — evaluating the corresponding error — I was able to solve using the Custom Criteria field in the Test Manager as follows:
test.verifyEqual(test.sltest_simout.SimulationMetadata.ExecutionInfo.ErrorDiagnostic.Diagnostic.identifier,'Simulink:Masking:ConstraintErrorMessageHeader')
But I still have the issue that the test fails overall due to the (expected) error, even though the custom criterion is fulfilled.
My question therefore is: Is there a way to override the entire test result? I understand that TestResult is read-only — but maybe there’s a workaround?
Best regards

3 comments

Hi @Kris Schweikert ,

Thanks for jumping in!

You're absolutely right—the custom criteria field allows you to validate specific diagnostics, like checking the exception identifier, but the overall test will still be marked as failed if an error is thrown during execution, even if it's expected and verified in your custom logic.

One workaround is to structure your test such that the error is anticipated and doesn’t cause the test to fail prematurely. Instead of relying on the simulation to fail naturally (which triggers a failure at the test level), you can run the simulation inside a function and explicitly catch the error using a `try-catch` block. Then you can validate the error details in your custom criteria or using `verifyEqual`, without the test itself being marked as failed by default behavior.

Here's an example of how you might do this:

function out = runSimulationWithExpectedError()
  try
      simOut = yourSimulationFunction(-5); % or however you call your model
      out = simOut;
  catch ME
      out = ME; % Return the exception object for verification
  end
end

Then in your test case:

testCase = sltest.TestCase.forInteractiveUse;
err = runSimulationWithExpectedError();
testCase.verifyEqual(err.identifier, 'Simulink:Masking:ConstraintErrorMessageHeader');

This way, you're explicitly handling the error and verifying it, so the test framework won’t automatically mark it as a failure just because an error occurred.

Unfortunately, you're right that `TestResult` objects are read-only, so you can’t override the final status. But by avoiding unhandled exceptions, you can prevent the test from failing at the framework level.

Hope this helps!

Thank you very much for your reply. Inspired by your response, I ended up doing something quite similar by using a try-catch statement in the custom criteria field of my test. Instead of running the simulation, I just set the parameter of the mask to an invalid value and evaluate the thrown error:
try
% inject false value of parameter
set_param(strcat(test.sltest_bdroot{1},'/<myMaskedSubsystem>'), 'myParameter', 'myFalseValue');
test.verifyTrue(false); % fail test if no error was thrown after false injection
catch ME
% check error
test.verifyEqual(ME.identifier,'Simulink:Masking:InvalidParameterSettingWithPrompt');
end
When added to a valid test (i.e. one containing some other logical & temporal assessments), this works without failing the whole test due to the thrown error.
Cheers
Kris

Hi @Kris Schweikert,

That’s a great solution — I’m glad my response helped spark the idea! Using set_param directly in the custom criteria to inject an invalid value and then catching the expected error is a smart way to bypass simulation-level failures while still validating error handling. I like how you explicitly fail the test when no error is thrown — that makes the intent very clear.

As you pointed out, wrapping this in a broader test with other logical or temporal assessments ensures that the expected error doesn’t inadvertently fail the entire test run. It's a clean workaround for the limitations of TestResult mutability.

Thanks for sharing your approach — I’m sure others will find it helpful too!


Products

Version

R2023a

Asked: 2 Jul 2024

Commented: 19 Sep 2025
