Interpretation of warning message from fitglme: "Warning: Final linear predictor from PL iterations is not feasible."

Hi all,
I'm looking to understand the meaning of a warning message returned by the fitglme function. The warning is "Warning: Final linear predictor from PL iterations is not feasible". I see that there have been two other posts on these forums asking the same question, but they have yet to receive any response, i.e. here and here.
Thank you for your assistance,
JMD
  3 comments
Joseph DeCunha on 9 Jan 2025
@Walter Roberson, thanks for making me aware. I've corrected the post. It was a typo.
Joseph DeCunha on 11 Jan 2025
As an additional detail, I've found that using the "Laplace" fitting method also results in the same warning message. However, Laplace uses maximum likelihood rather than pseudolikelihood for optimization, so the meaning of the warning message is not entirely clear in this instance. Is it possible the warning message is being issued errantly?


Answers (1)

Umar on 8 Jan 2025

Hi @Joseph DeCunha,

After reviewing your comments, the data in your MATLAB data file, and the relevant documentation:

https://www.mathworks.com/help/stats/generalizedlinearmixedmodel-class.html

https://www.mathworks.com/help/stats/fitglme.html

This issue arises specifically when you include a random slope term in your generalized linear mixed-effects model. You are fitting nested data with a binary outcome using a probit (cumulative Gaussian) link function.

In your case, the first model (with just a random intercept) runs without issues, but the second model (with a random slope for `Ratio`) triggers this warning. This suggests that, once the random slope is added, the optimization cannot find a set of parameters that keeps the linear predictor feasible.

Detailed Explanation and Solutions

1. Understanding Feasibility Issues: The feasibility issue often occurs when the model is overparameterized, meaning there are too many parameters relative to the available data, or the variability in your data does not support the complexity of the model. For instance, adding a random slope significantly increases the number of parameters, which can lead to convergence problems, especially if there are not enough observations to reliably estimate them.
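
For instance, as a small sketch (this assumes a model object `glme` that has already been fitted as in the code under point 5, not code from your actual script), you can inspect how many covariance parameters a given random-effects structure implies:

   % (1|Subject) implies a single covariance parameter (the intercept
   % variance); (1+Ratio|Subject) adds a slope variance and an
   % intercept-slope correlation, all estimated from the same subjects.
   [psi, dispersion, stats] = covarianceParameters(glme);
   stats{1}   % estimates with confidence intervals for the Subject grouping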

2. Potential Solutions:

Simplify Your Model: Start by reducing the complexity of your model. You might try removing some interaction terms or fixed effects to see if the model can converge with fewer parameters.

Check Data Variability: Examine how much `Ratio` varies within each subject. If `Ratio` varies little (or not at all) within subjects, it is not appropriate to include it as a random slope; a quick check is sketched just after this list.

Initial Values: Provide a starting value for the conditional mean using the `MuStart` name-value pair argument; this can help guide the optimization process (also sketched below).

Adjust Fit Method: If you're currently using `'Laplace'`, consider trying other fitting methods such as `'MPL'` or `'REMPL'`, which handle parameter estimation differently. Note that even with `'Laplace'`, fitglme initializes the parameters with pseudo likelihood iterations (see the `'InitPLIterations'` name-value pair), so a PL-related warning can still appear with that fit method.

Examine Random Effects Structure: Review how you specify random effects. Sometimes, specifying them as `(1 + Ratio | Subject)` can lead to issues if there isn't enough data to estimate these slopes. Consider starting with just `(1 | Subject)` and gradually introducing complexity.
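
As a rough sketch of the variability check and name-value pairs mentioned above (this assumes your table is named `T` and uses the variables from the example under point 5, with `Response` coded as 0/1), something along these lines could be a starting point:

   % Within-subject spread of Ratio: if these values are all (near) zero,
   % a random slope for Ratio is probably not estimable
   ratioSpread = grpstats(T, 'Subject', 'std', 'DataVars', 'Ratio');
   disp(ratioSpread)

   % Refit with pseudo likelihood and a rough starting value for the
   % conditional mean; for a binomial response, MuStart must lie strictly
   % between 0 and 1
   mu0 = min(max(mean(T.Response), 0.01), 0.99) * ones(height(T), 1);
   glme = fitglme(T, 'Response ~ 1 + Ratio*ASD*Modality*Range + (1|Subject)', ...
       'Distribution','binomial', 'Link','probit', ...
       'FitMethod','MPL', 'MuStart', mu0);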

3. Diagnostics and Iterative Fitting: Use diagnostic functions like `randomEffects` and `fixedEffects` on your initial, simpler models to understand how well they fit before adding complexity. You can also adjust fitting options such as `Verbose`, which reports each iteration's progress and can help identify where things go awry.
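
For example (again a sketch that assumes the table `T` and formula from the example under point 5), the simpler model can be inspected and then refit with iteration-level output:

   % Inspect the simpler fit before adding complexity
   fixedEffects(glme)      % fixed-effects estimates
   randomEffects(glme)     % empirical Bayes predictions of the random effects

   % Refit with per-iteration output to see where the PL iterations go wrong
   glme_verbose = fitglme(T, 'Response ~ 1 + Ratio*ASD*Modality*Range + (1|Subject)', ...
       'Distribution','binomial', 'Link','probit', 'Verbose', 2);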

4. Data Check: Ensure that your dataset does not contain missing values or extreme outliers that could affect convergence. Check for multicollinearity among predictors, especially with interaction terms, which can also lead to instability in parameter estimates.
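
A few quick checks along these lines (assuming `T` contains the variables from the example under point 5, with `ASD`, `Modality` and `Range` as categorical factors and `Ratio` numeric) can rule out data problems:

   % Missing values anywhere in the table?
   any(ismissing(T))

   % Extreme values of the continuous predictor
   sum(isoutlier(T.Ratio))

   % Cell counts for the categorical factors: empty or very small cells
   % make the high-order interactions hard to estimate
   groupsummary(T, {'ASD','Modality','Range'})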

5. Example Code Adjustment: If you decide to try simplifying your model first, here’s an example of how you might adjust your code:

   % Model with only a random intercept
   glme = fitglme(T,'Response ~ 1 + Ratio*ASD*Modality*Range + (1|Subject)', ...
       'Distribution','binomial','Link','probit','FitMethod','Laplace');

   % If successful, add complexity gradually, e.g. add the random slope
   % only after ensuring the previous model works
   glme_with_slope = fitglme(T,'Response ~ 1 + Ratio*ASD*Modality*Range + (1+Ratio|Subject)', ...
       'Distribution','binomial','Link','probit','FitMethod','Laplace');

Documentation and Community Support: Since you've noted that similar queries were raised on these forums without responses, consider checking MATLAB's official documentation or the user community for updates on known issues in specific releases (such as R2015a).

Modeling Strategy: When dealing with complex mixed models, consider using simulation studies or bootstrapping techniques to validate your model’s assumptions and robustness before finalizing results.
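
A minimal sketch of a parametric bootstrap, assuming a successfully fitted random-intercept model `glme` (as in the code under point 5) and that refitting it a few dozen times is affordable:

   nBoot = 50;                          % keep small; every refit is costly
   beta0 = fixedEffects(glme);
   betaBoot = zeros(nBoot, numel(beta0));
   for b = 1:nBoot
       Tsim = T;
       Tsim.Response = random(glme);    % simulate responses from the fitted model
       glmeBoot = fitglme(Tsim, 'Response ~ 1 + Ratio*ASD*Modality*Range + (1|Subject)', ...
           'Distribution','binomial', 'Link','probit');
       betaBoot(b,:) = fixedEffects(glmeBoot)';
   end
   % A large spread of the bootstrap estimates (relative to the original
   % standard errors) or frequent warnings during the refits point to an
   % unstable model
   std(betaBoot)

If the warning reappears in most of these refits even for the simpler model, that points to the data rather than the random-effects structure.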

By following these strategies and considerations, you should be able to troubleshoot and potentially resolve the warning issue you're encountering with your mixed-effects model in MATLAB.

Hope this helps.

  3 comments
Walter Roberson on 10 Jan 2025
Neither of those has comments as such, just the Questions asked. They were asked by two different people, neither of whom is @Joseph DeCunha, in June 2016 and November 2017.


Version

R2023b
