I have been meaning to write a paper about MANOVA (and in particular why it should be avoided) for some time, but never got round to it. However, I recently discovered an excellent article by Francis Huang that pretty much sums up most of what I'd cover. In this blog post I'll just run through the main issues and refer you to Francis' paper for a more in-depth critique, or to the section on MANOVA in Serious Stats (Baguley, 2012).

I have three main issues with MANOVA:

1) It doesn't do what people think it does

2) It doesn't offer Type I error protection for subsequent univariate tests (even though many textbooks say it does)

3) There are generally better approaches available if you really are interested in multivariate research questions

Let's start with the first point. People think MANOVA analyses multiple outcome variables (DVs). This isn't really correct. It creates a composite DV by combining the outcome variables in an atheoretical way, and the analysis then proceeds on that composite. The composite is in a sense 'optimal' because the weights are selected to maximise the variance explained by the set of predictors in the model. However, this optimisation capitalises on chance. Furthermore, the composite is unique to your sample, which invalidates (or at least makes difficult) comparisons between studies. It is also hard to interpret. This has knock-on implications for things like standardised effect sizes, as effect size metrics for MANOVA generally relate to the composite DV rather than the original outcome variables. For further discussion see Grayson (2004).
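
To make the point concrete, here is a small illustration using R's built-in iris data (purely for demonstration, not a recommended analysis). The 'composite' behind the MANOVA tests can be recovered from the hypothesis and error SSCP matrices, and its weights are entirely sample-specific:

```r
# Toy demonstration with built-in data: the MANOVA tests relate to a
# weighted composite of the DVs, not to the DVs themselves
fit <- manova(cbind(Sepal.Length, Sepal.Width) ~ Species, data = iris)
summary(fit, test = "Pillai")

# The leading discriminant weights (the 'optimal' composite) can be
# recovered from the hypothesis and error SSCP matrices; they are
# specific to this sample and have no theoretical motivation
sscp <- summary(fit)$SS
w <- Re(eigen(solve(sscp$Residuals) %*% sscp$Species)$vectors[, 1])
w
```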

In relation to the second point, the issue is one that is fairly well known in other contexts. In ANOVA one can use an omnibus test of a factor to decide whether to proceed with post hoc pairwise comparisons. This is the logic behind the Fisher LSD test, and it is well known that this test doesn't protect Type I error very well if more than three means are being compared: specifically, it protects against the complete null hypothesis but not the partial null hypothesis (see Serious Stats, pp. 495-501). For adequate Type I error protection it would be better to use something like the Holm or Hochberg correction (the latter having greater statistical power if the univariate test statistics are correlated, which they generally are if MANOVA is being considered). That said, if you do just want a test of the omnibus null hypothesis, that there are no effects on any of the DVs, MANOVA may be a convenient way to summarise a large set of non-significant univariate tests.
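
For a concrete illustration, here is a minimal sketch (the p-values are made up) of applying the Holm and Hochberg corrections to a set of univariate p-values using base R's p.adjust():

```r
# Made-up p-values from, say, five separate univariate ANOVAs
p_raw <- c(0.004, 0.019, 0.032, 0.048, 0.260)

# Holm: valid under any dependence structure among the tests
p.adjust(p_raw, method = "holm")

# Hochberg: a little more powerful when the test statistics are
# positively correlated (as they typically are in this setting)
p.adjust(p_raw, method = "hochberg")
```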

Last but not least, there exist multivariate regression (and other) approaches that are better suited to genuinely multivariate research questions (see also Huang, 2020). However, I've rarely seen MANOVA used for multivariate research questions. In fact, I've rarely if ever seen a MANOVA reported that actually aided interpretation of the data.
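
As a rough sketch of what that can look like in R (the data frame dat and the names y1, y2, group and x are hypothetical), a multivariate linear model can be fitted directly and interrogated outcome by outcome:

```r
# Hypothetical data frame 'dat' with outcomes y1 and y2 and predictors
# 'group' (a factor) and 'x' (a covariate); names are illustrative only
fit <- lm(cbind(y1, y2) ~ group + x, data = dat)

# Coefficients and tests reported separately for each outcome
summary(fit)

# Multivariate tests for each predictor (requires the car package)
car::Anova(fit)
```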

References

Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan. (see pages 647-650)
Grayson, D. (2004). Some Myths and Legends in Quantitative Psychology. Understanding Statistics, 3(2), 101–134. https://doi.org/10.1207/s15328031us0302_3
Huang, F. L. (2020). MANOVA: A procedure whose time has passed? Gifted Child Quarterly, 64(1), 56–60. https://doi.org/10.1177/0016986219887200
Huberty, C. J., & Morris, J. D. (1989). Multivariate analysis versus multiple univariate analyses. Psychological Bulletin, 105(2), 302–308. https://doi.org/10.1037/0033-2909.105.2.302

I wrote a brief introduction to logistic regression aimed at psychology students. You can take a look at the pdf here:  

A more comprehensive introduction in terms of the generalised linear model can be found in my book:

Baguley, T. (2012). Serious stats: a guide to advanced statistics for the behavioral sciences. Palgrave Macmillan.
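
As a minimal illustration of the idea (with simulated data, not an example taken from the introduction or the book), logistic regression is just a GLM with a binomial family and a logit link:

```r
# Simulated binary outcome, purely to illustrate the model
set.seed(1)
x <- rnorm(100)
y <- rbinom(100, size = 1, prob = plogis(-0.5 + 0.8 * x))

# Logistic regression as a GLM: binomial family, logit link
fit <- glm(y ~ x, family = binomial(link = "logit"))
summary(fit)

# Exponentiated coefficients are odds ratios
exp(coef(fit))
```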

I wrote a short blog post (with R code) on how to calculate corrected CIs for rho and tau using the Fisher z transformation.
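
The post has the details and the code; purely as a rough sketch, one common approach applies Fieller-type variance approximations after the z transformation. Treat the constants below as my assumption rather than a restatement of the post:

```r
# Sketch of Fisher z CIs for rho and tau. The variance approximations
# (1.06/(n - 3) for Spearman's rho, 0.437/(n - 4) for Kendall's tau) are
# the Fieller-type corrections as I recall them; check the post and the
# original sources before relying on them.
rank_cor_ci <- function(est, n, stat = c("rho", "tau"), conf = 0.95) {
  stat <- match.arg(stat)
  z <- atanh(est)  # Fisher z transformation
  se <- if (stat == "rho") sqrt(1.06 / (n - 3)) else sqrt(0.437 / (n - 4))
  crit <- qnorm(1 - (1 - conf) / 2)
  tanh(c(lower = z - crit * se, upper = z + crit * se))
}

rank_cor_ci(0.4, n = 50, stat = "rho")
rank_cor_ci(0.3, n = 50, stat = "tau")
```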

I have written a short article on Type II versus Type III SS in ANOVA-like models on my Serious Stats blog:

https://seriousstats.wordpress.com/2020/05/13/type-ii-and-type-iii-sums-of-squares-what-should-i-choose/
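
The article discusses when each makes sense; as a minimal sketch (with a hypothetical data frame dat containing outcome y and factors a and b), both types can be obtained via the car package:

```r
# Hypothetical data frame 'dat' with outcome y and factors a and b
fit2 <- lm(y ~ a * b, data = dat)
car::Anova(fit2, type = 2)   # Type II SS

# Type III SS are only meaningful with sum-to-zero contrasts
fit3 <- lm(y ~ a * b, data = dat,
           contrasts = list(a = "contr.sum", b = "contr.sum"))
car::Anova(fit3, type = 3)
```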

I have just published a short blog post on the Egon Pearson correction for the chi-square test. This includes links to an R function that runs the corrected test (and also provides residual analyses for contingency tables).

The blog post is here and the R function here.
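
The linked function is the one to use; purely as an illustration of the idea (as I understand it, the correction scales the ordinary Pearson X² statistic by (N − 1)/N), a bare-bones version might look like this:

```r
# Illustration of the 'N - 1' idea: scale the ordinary Pearson X^2 by (N - 1)/N
pearson_n1_test <- function(tab) {
  N <- sum(tab)
  x2 <- unname(chisq.test(tab, correct = FALSE)$statistic) * (N - 1) / N
  df <- (nrow(tab) - 1) * (ncol(tab) - 1)
  c(X2 = x2, df = df, p = pchisq(x2, df, lower.tail = FALSE))
}

pearson_n1_test(matrix(c(12, 5, 7, 16), nrow = 2))
```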

Bayesian Data Analysis in the Social Sciences Curriculum

Supported by the ESRC’s Advanced Training Initiative

Venue: Bowden Room, Nottingham Conference Centre, Burton Street, Nottingham, NG1 4BU

Booking information online

Provisional schedule:

Organizers:

Thom Baguley   twitter: @seriousstats

Mark Andrews  twitter: @xmjandrews

The third and (possibly) final round of our introductory workshops was overbooked in April, but we have managed to arrange some additional dates in June.

There are still places left on these. More details at: http://www.priorexposure.org.uk/

As with the last round, we are planning a free R workshop beforehand (recommended if you need a refresher or have never used R before).

On my Serious Stats blog I have a new post on constructing CIs for a difference between independent R² coefficients.

You can find the post there or go directly to the function hosted on RPubs. I have been experimenting with knitr but can't yet get the HTML from R Markdown to work with my Blogger or WordPress blogs.
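
The RPubs function is the one described in the post; purely as a generic alternative sketch (not the approach from the post), a percentile bootstrap for the difference between two independent R² values might look like this, with hypothetical data frames dat1 and dat2 and model formulas f1 and f2:

```r
# Generic percentile bootstrap for the difference between two independent
# R^2 values; dat1/dat2 and f1/f2 are hypothetical data frames and formulas
boot_r2_diff <- function(dat1, f1, dat2, f2, B = 2000, conf = 0.95) {
  r2 <- function(d, f) summary(lm(f, data = d))$r.squared
  diffs <- replicate(B, {
    r2(dat1[sample(nrow(dat1), replace = TRUE), , drop = FALSE], f1) -
      r2(dat2[sample(nrow(dat2), replace = TRUE), , drop = FALSE], f2)
  })
  quantile(diffs, c((1 - conf) / 2, 1 - (1 - conf) / 2))
}

# e.g. boot_r2_diff(dat1, y ~ x1 + x2, dat2, y ~ x1 + x2)
```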