[HELP] Statistics in MATLAB - Two-way mixed ANOVA or something else?

Hello everyone,
I would like to ask some questions about the appropriate statistical test for our study design.
This study aims to evaluate postural balance among parkinsonian patients.
We have 3 groups, with 4, 4, and 3 subjects respectively, randomly assigned, and 3 different interventions for each group.
We perform a pre-evaluation and a post-evaluation after 6 weeks of intervention.
We have several dependent variables.
As the Laerd Statistics test selector advised, we used a two-way mixed ANOVA. We have some concerns about our statistical power due to the small sample size in each group.
We would like to see group differences, but also, if possible, associations and correlations. Is a two-way mixed ANOVA the right test, or is linear regression a better one? Or neither?
Thank you everyone

5 comments

I'm not sure I understand the study design. I assume that you measure the same dependent variables in all subjects before and after all interventions. For example, Measure 1=standard deviation of center of pressure (measured on a force plate for 30 seconds), Measure 2=timed up and go, Measure 3=4 square step test. All three measures are made in each subject pre and post each intervention. If that is not true, then please say so.
Do you do the same 3 interventions in each of the three groups? If so, then I don't understand why you call it three groups of 4, 4, and 3. Why not call it one group of 11? It seems like they are all doing the same things and being evaluated the same way.
Do you do Intervention 1 in Group 1, Intervention 2 in Group 2, and Intervention 3 in Group 3? Then it seems like you have performed 3 independent experiments, and those experiments can and should be analyzed separately.
Do you do Interventions 1 and 2 in Group 1, Interventions 2 and 3 in Group 2, and Interventions 3 and 1 in Group 3? I would have to think about that one.
Or do you do something different than any of the above?
Let's be very clear on the experimental design before getting into the statistical analysis more.
Thanks for your answer.
You are right, we measure the same dependent variables in all participants (around 15-20).
It is just like your example.
"Do you do the same 3 interventions in each of the three groups?"
No, we perform a different exercise program in each of the 3 different groups.
In the short time we had, we could only recruit 11 participants, randomly assigned to 3 groups, which is why we have 4, 4, and 3.
We have one between-subjects factor with 3 levels (the 3 groups), and one within-subjects factor (pre and post evaluations) with a continuous variable; we can't determine covariates, and we evaluate 1 dependent variable at a time.
"Do you do Intervention 1 in Group 1, Intervention 2 in Group 2, and Intervention 3 in Group 3? Then it seems like you have performed 3 independent experiments." Yes.
I hope that helps!
Kind regards folks !
I am not a statistician. I recommend that you consult someone who is.
When you say "we perform a different exercise program in each of the 3 different groups", I assume this means you do one intervention in each group, and the intervention is different for each group. If that is not true, disregard the following.
I understand that you measure 15-20 dependent variables in each subject. You plan to analyze the treatment effect separately for each of these dependent variables. More on that later.
A mixed two-factor ANOVA could be appropriate for this problem. Your design definitely mimics the "Study Design 1" described by Laerd here, for which Laerd proposes a mixed two-factor ANOVA. However, I am not sure I trust Laerd's advice, because Laerd proposes using a mixed two-factor ANOVA for "Study Design 2", whose description sounds to me like a clear "Model I" (two fixed factors). Another reason I am hesitant to use a mixed, or Model III, analysis is this comment from Prof. Zelmer at UCSD: "In practice, very few people employ model II or mixed-model ANOVA, because they use less stringent criteria for determining the difference between fixed effects and random effects." The only difference in the analysis, when you choose Model I (two fixed factors) versus Model III (mixed: one fixed factor, one random), is in how you compute the F statistic for testing the factor that is fixed in both models, which is the treatment. See Zar, Biostatistical Analysis, 5th edition, Table 12.3, p. 262.
You could do a single-factor ANOVA on the change in the dependent variable from pre to post. In this analysis, you would tabulate the change in the dependent variable from pre to post for each subject. The single factor is treatment (or, equivalently, group), and there are three possible values. The null hypothesis for the ANOVA is that the pre-to-post change in the dependent variable is the same for all three groups. The alternative is that the pre-to-post changes are not the same for all three groups.
The difference between doing a two-factor ANOVA (whether Model I or Model III) on the scores, versus a single-factor ANOVA on the change in scores, is that with the two-factor analysis you test 3 hypotheses, and with the one-factor analysis you test only one. The tests with the two-factor ANOVA are: "H0: the means of the groups are the same"; "H0: the pre and post scores are the same"; and "H0: the effect of time is the same for different groups". For the single factor, the one hypothesis tested is "H0: the change in score is the same for different treatments". It seems to me that only the last hypothesis is of interest to you, so why not do a one-way ANOVA on the change in scores?
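To illustrate the one-way-ANOVA-on-change-scores idea, here is a minimal sketch in Python with scipy (MATLAB's anova1 with a grouping vector does the same thing). The change scores below are invented numbers, not your data:

```python
# Sketch (invented data): one-way ANOVA on pre-to-post change scores
# for three groups of n = 4, 4, 3, as suggested above.
from scipy import stats

# Hypothetical pre-to-post changes in one dependent variable
change_g1 = [5.1, 4.8, 6.0, 5.5]   # group 1 (n = 4)
change_g2 = [1.2, 0.8, 1.5, 1.1]   # group 2 (n = 4)
change_g3 = [0.9, 1.4, 1.0]        # group 3 (n = 3)

# H0: the mean pre-to-post change is the same in all three groups
f_stat, p_value = stats.f_oneway(change_g1, change_g2, change_g3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

In MATLAB, `anova1(changes, group)` with `changes` as an 11-element vector and `group` as a matching label vector gives the equivalent F and p.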
Again, consult a professional for the pros and cons of two factor ANOVA on the scores, versus one factor ANOVA on the change in scores. (As in any field, some are better than others. I have met and worked with some professional statisticians whose advice and counsel has not impressed me, because they could not articulate the theoretical and/or practical reason for favoring one approach over another. Maybe they were operating on a level beyond my comprehension.)
Now back to the issue of 15-20 dependent variables. I think there is a significant multiple comparisons problem, i.e. a significant risk of a Type I error, with this experimental design. See here, for example. If I got a manuscript with this design for review, I would ask the authors to address the issue. If I were the author, I would try the Holm-Bonferroni method.
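For concreteness, the Holm-Bonferroni step-down procedure can be sketched in a few lines of plain Python (the p-values in the example are invented for illustration):

```python
# Sketch of the Holm-Bonferroni step-down correction for the
# multiple-comparisons problem with 15-20 dependent variables.

def holm_bonferroni(p_values, alpha=0.05):
    """Return a list of booleans: True where H0 is rejected."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # Compare the (rank+1)-th smallest p-value to alpha / (m - rank)
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: stop at the first non-rejection
    return reject

p_vals = [0.001, 0.01, 0.03, 0.04, 0.20]
print(holm_bonferroni(p_vals))  # [True, True, False, False, False]
```

Note that 0.03 is not rejected even though it is below 0.05, because its step-down threshold is 0.05/3.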
A useful place to seek help with statistics is the CrossValidated StackExchange site.
You say you're not a statistician, but you seem to be a very good amateur!
I will follow your advice: ask someone who is used to doing statistics, consult CrossValidated, and also consult the book you quote.
Yes, in fact we only want to see if 1 group (intervention/treatment) is better than the other 2.
Thanks for your answers !


Accepted Answer

William Rose on 15 Aug 2022
@John Doe, You're welcome. Thank you for your kind comment.
To determine if treatment 1 is better than treatment 2 or treatment 3, proceed in two steps.
Step 1. Test if treatments 2 and 3 are comparable. If they are comparable (more on that below), then combine groups 2 and 3, and proceed to step 2A. If treatments 2 and 3 are not comparable, proceed to step 2B.
Step 2A: One-sided test performed on the pre-to-post change in score. The null hypothesis is H0: the change in score with Treatment 1 is less than or equal to the change in the combined treatment 2-or-3 group. You will have a better chance of detecting that Treatment 1 is superior if you use the one-sided test, since the critical value for rejecting H0 is lower with a 1-sided test. Use Welch's version of the t test (in MATLAB, ttest2 with 'Vartype','unequal'), which is good, since it does not depend on equal variances. See here for more on the benefits of Welch's t test.
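Step 2A can be sketched as follows, here in Python with scipy (MATLAB's equivalent is ttest2 with 'Vartype','unequal' and 'Tail','right'); the change scores are invented for illustration:

```python
# Sketch (invented data): one-sided Welch's t test for Step 2A,
# comparing group 1's change scores to the combined group 2+3 changes.
from scipy import stats

change_g1 = [5.1, 4.8, 6.0, 5.5]                   # treatment 1 (n = 4)
change_g23 = [1.2, 0.8, 1.5, 1.1, 0.9, 1.4, 1.0]   # groups 2+3 combined (n = 7)

# H0: mean change in group 1 <= mean change in combined group 2+3
t_stat, p_value = stats.ttest_ind(change_g1, change_g23,
                                  equal_var=False,        # Welch's version
                                  alternative='greater')  # one-sided
print(f"t = {t_stat:.2f}, one-sided p = {p_value:.4f}")
```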
Step 2B: Do a one-sided test on the change in scores in group 1 versus group 2. The null hypothesis is H0: The change in scores is the same for treatments 1 and 2. Do a one-sided test on group 1 versus group 3, with an analogous null hypothesis.
Now let's return to step 1, deciding if groups 2 and 3 produce comparable results. If we fail to combine groups that are in fact similar (let's call this error A), we will lose power to detect potential superiority of treatment 1, because the numbers in groups 2 and 3 are very low. If we combine groups that are in fact not similar (error B), we will also lose power, because the variance of the combined group will be high, and this high variance will make it harder to detect potential superiority of treatment 1. We don't want to make error A or error B, so it is not obvious which null hypothesis is preferable when testing the equality of groups 2 and 3.

We can go with H0a: groups 2 and 3 are the same. With this null hypothesis, we are unlikely to make error A, because we will only split groups 2 and 3 if there is strong evidence that they are unequal. Or we can go with H0b: groups 2 and 3 are not the same. With H0b, we are unlikely to make error B, because we will only combine groups if there is strong evidence that they are equal. You will have to decide which null hypothesis you prefer.

If you choose H0a, then use a two-sided Welch's t test, discussed above, applied to the change in scores in group 2 versus group 3. If you choose H0b, then use two one-sided tests (TOST), described here (2800 citations!). An advantage of H0a compared to H0b is that with H0a you do not have to choose a "range of equivalence" or justify that choice, as you must with TOST.
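A TOST equivalence test can be built from two one-sided Welch's t tests, one against each equivalence bound. Here is a sketch in Python with scipy; the equivalence bounds (+/- 1.0) and the change scores are invented, and in practice you must justify the bounds:

```python
# Sketch (invented data and bounds): TOST for equivalence of the
# change scores in groups 2 and 3, via two one-sided Welch's t tests.
from scipy import stats

change_g2 = [1.2, 0.8, 1.5, 1.1]   # group 2 (n = 4)
change_g3 = [0.9, 1.4, 1.0]        # group 3 (n = 3)
low, high = -1.0, 1.0              # chosen range of equivalence for the mean difference

# Test 1, H0: (mean2 - mean3) <= low  -> shift group 2 by low, test 'greater'
_, p_low = stats.ttest_ind([x - low for x in change_g2], change_g3,
                           equal_var=False, alternative='greater')
# Test 2, H0: (mean2 - mean3) >= high -> shift group 2 by high, test 'less'
_, p_high = stats.ttest_ind([x - high for x in change_g2], change_g3,
                            equal_var=False, alternative='less')

# Equivalence is declared only if BOTH one-sided tests reject,
# so the overall TOST p-value is the larger of the two.
p_tost = max(p_low, p_high)
print(f"TOST p = {p_tost:.4f}")
```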
The small sample sizes in this study (n = 4, 4, 3 for the groups) make me suspect that no matter how you test, you will be hard-pressed to find significance, at least if you deal properly with the multiple comparisons problem associated with 15-20 dependent variables.
Good luck.

More Answers (1)

William Rose on 14 Aug 2022
@John Doe, Please consider my second comment below my answer. I meant to post it as an answer.
