  1. I have been meaning to write a paper about MANOVA (and, in particular, why it should be avoided) for some time, but never got round to it. However, I recently discovered an excellent article by Francis Huang that pretty much sums up most of what I'd cover. In this blog post I'll just run through the main issues and refer you to Francis's paper for a more in-depth critique, or to the section on MANOVA in Serious Stats (Baguley, 2012).

    I have three main issues with MANOVA:

    1) It doesn't do what people think it does

    2) It doesn't offer Type I error protection for subsequent univariate tests (even though many text books say it does)

    3) There are generally better approaches available if you really are interested in multivariate research questions

    Let's start with the first point. People think MANOVA analyses multiple outcome variables (DVs). This isn't really correct. It creates a composite DV by combining the outcome variables in an atheoretical way, and the analysis then proceeds on that composite. The composite is in a sense 'optimal' because the weights are selected to maximise the variance explained by the set of predictors in the model. However, this optimisation capitalises on chance. Furthermore, the composite is unique to your sample, which invalidates (or at least complicates) comparisons between studies, and it is hard to interpret. This has knock-on implications for things like standardised effect sizes, as effect size metrics for MANOVA generally relate to the composite DV rather than the original outcome variables. For further discussion see Grayson (2004).

    In relation to the second point, the issue is fairly well known in other contexts. In ANOVA one can use an omnibus test of a factor to decide whether to proceed with post hoc pairwise comparisons. This is the logic behind the Fisher LSD test, and it is well known that this test doesn't protect Type I error well if more than three means are being compared: specifically, it protects against the complete null hypothesis but not the partial null hypothesis (see Serious Stats, pp. 495-501). For adequate Type I error protection it would be better to use something like the Holm or Hochberg correction (the latter having greater statistical power if the univariate test statistics are correlated, which they generally are if MANOVA is being considered); a quick sketch of both is given below. That said, if you do just want a test of the omnibus null hypothesis, that there are no effects on any of the DVs, MANOVA may be a convenient way to summarise a large set of non-significant univariate tests.
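    For illustration only, here is a minimal R sketch of applying the Holm and Hochberg adjustments to a set of univariate p-values (the p-values themselves are invented for the example):

        # p-values from separate univariate tests on each DV (illustrative values)
        p_univariate <- c(0.012, 0.034, 0.048, 0.21)

        # Holm correction: controls familywise Type I error under any dependence
        p.adjust(p_univariate, method = "holm")

        # Hochberg correction: a little more powerful, valid when the tests are
        # independent or positively dependent (typical when the DVs are correlated)
        p.adjust(p_univariate, method = "hochberg")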

    Last but not least, there exist multivariate regression (and other) approaches that are more appropriate for multivariate research questions (see also Huang, 2020). However, I've rarely seen MANOVA used for multivariate research questions. In fact, I've rarely if ever seen a MANOVA reported that actually aided interpretation of the data.

    References

    Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan. (see pages 647-650)
    Grayson, D. (2004). Some myths and legends in quantitative psychology. Understanding Statistics, 3(2), 101–134. https://doi.org/10.1207/s15328031us0302_3
    Huang, F. L. (2020). MANOVA: A procedure whose time has passed? Gifted Child Quarterly, 64(1), 56–60. https://doi.org/10.1177/0016986219887200
    Huberty, C. J., & Morris, J. D. (1989). Multivariate analysis versus multiple univariate analyses. Psychological Bulletin, 105(2), 302–308. https://doi.org/10.1037/0033-2909.105.2.302

  2. I wrote a brief introduction to logistic regression aimed at psychology students. You can take a look at the pdf here:  



    A more comprehensive introduction in terms of the generalised linear model can be found in my book:
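    For readers who just want to see the model in code, here is a minimal sketch of fitting a logistic regression as a generalised linear model in R (using simulated data, not an example from the pdf):

        set.seed(1)
        n <- 200
        x <- rnorm(n)
        # simulate a binary outcome whose log odds depend on x
        y <- rbinom(n, size = 1, prob = plogis(-0.5 + 0.8 * x))

        # logistic regression is a GLM with a binomial family and logit link
        fit <- glm(y ~ x, family = binomial(link = "logit"))
        summary(fit)
        exp(coef(fit))  # coefficients expressed as odds ratios
        confint(fit)    # profile likelihood CIs on the log odds scale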



  3. I wrote a short blog (with R code) on how to calculate corrected CIs for rho and tau using the Fisher z transformation.
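    The blog post has the details and an R function; as a rough illustration of the idea, the sketch below computes Fisher z based CIs assuming a Bonett and Wright style standard error for Spearman's rho and a Fieller-type standard error for Kendall's tau (these particular standard errors are my assumption here, so check the post for the exact formulas it uses):

        # approximate CI for Spearman's rho via the Fisher z transformation
        rho_ci <- function(rho, n, conf = 0.95) {
          z    <- atanh(rho)                        # Fisher z transform
          se   <- sqrt((1 + rho^2 / 2) / (n - 3))   # corrected SE for rho (assumed)
          crit <- qnorm(1 - (1 - conf) / 2)
          tanh(z + c(-1, 1) * crit * se)            # back-transform to the rho scale
        }

        # analogous CI for Kendall's tau (Fieller-type SE, also an assumption)
        tau_ci <- function(tau, n, conf = 0.95) {
          z    <- atanh(tau)
          se   <- sqrt(0.437 / (n - 4))
          crit <- qnorm(1 - (1 - conf) / 2)
          tanh(z + c(-1, 1) * crit * se)
        }

        rho_ci(0.45, n = 50)
        tau_ci(0.30, n = 50)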




  4. I have written a short article on Type II versus Type III SS in ANOVA-like models on my Serious Stats blog:


    https://seriousstats.wordpress.com/2020/05/13/type-ii-and-type-iii-sums-of-squares-what-should-i-choose/
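    As a quick illustration of the distinction (not taken from the post), the car package can compute both types of test; note that Type III sums of squares are only meaningful with sum-to-zero contrasts:

        library(car)  # for Anova(); assumes the car package is installed

        # Type III tests require sum-to-zero (effect) coding of factors
        options(contrasts = c("contr.sum", "contr.poly"))

        # illustrative two-factor model using a built-in data set
        fit <- lm(breaks ~ wool * tension, data = warpbreaks)

        Anova(fit, type = 2)  # Type II sums of squares
        Anova(fit, type = 3)  # Type III sums of squares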



  5. I have just published a short blog on the Egon Pearson correction for the chi-square test. This includes a link to an R function that runs the corrected test (and also provides residual analyses for contingency tables).

    The blog is here and the R function here.
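    The linked function does rather more than this, but the core of the correction can be sketched in a few lines of R, assuming it simply rescales the usual Pearson statistic by (N - 1)/N:

        # Egon Pearson ('N - 1') corrected chi-square test for a contingency table
        # (a minimal sketch; the linked R function is more complete)
        ep_chisq <- function(tab) {
          n     <- sum(tab)
          x2    <- chisq.test(tab, correct = FALSE)$statistic  # uncorrected Pearson X^2
          x2_ep <- x2 * (n - 1) / n                            # 'N - 1' correction
          df    <- (nrow(tab) - 1) * (ncol(tab) - 1)
          p     <- pchisq(x2_ep, df, lower.tail = FALSE)
          c(X2_EP = unname(x2_ep), df = df, p = p)
        }

        ep_chisq(matrix(c(12, 5, 7, 16), nrow = 2))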





  6. Bayesian Data Analysis in the Social Sciences Curriculum

    Supported by the ESRC’s Advanced Training Initiative


    Venue: Bowden Room, Nottingham Conference Centre, Burton Street, Nottingham, NG1 4BU


    Provisional schedule:

    9.30   Registration (and coffee!)
    9.50   Thom Baguley: Introduction and Welcome
    10.00  Mark Andrews & Thom Baguley: Teaching Bayesian Data Analysis to Social Scientists
    10.50  Zoltan Dienes: Principles for Teaching and Using Bayes Factors
    11.40  Coffee
    12.00  Colin Foster: Bayes Factors Show Equivalence Between Two Contrasting Approaches to Developing School Pupils’ Mathematical Fluency
    12.20  Helen Hodges: Towards a Bayesian Approach in Criminology: A Case Study of Risk Assessment in Youth Justice
    12.40  Lunch
    1.40   Jayne Pickering, Matthew Inglis & Nina Attridge: Does Pain Affect Performance on the Attentional Networking Task?
    2.00   Oliver Clark: First Steps Towards a Bayesian Model of Video Game Avatar Influence
    2.20   Coffee
    2.40   Richard Morey: The Fallacy of Placing Confidence in Confidence Intervals
    3.30   Daniel Lakens: Learning Bayes as a Frequentist: A Personal Tragedy in Three Parts
    4.20   Close and farewell


    Organizers:
    Thom Baguley (twitter: @seriousstats)
    Mark Andrews (twitter: @xmjandrews)


  7. The third and (possibly) final round of our introductory workshops was overbooked in April, but we have managed to arrange some additional dates in June.

    There are still places left on these. More details at: http://www.priorexposure.org.uk/

    As with the last round we are planning a free R workshop beforehand (recommended if you need a refresher or have never used R before). Unfortunately we can't offer bursaries for these additional workshops (as this wasn't part of the original ESRC funding).

    They are primarily (but not exclusively) aimed at UK social science PhD students (so not just Psychology or Neuroscience, but very much also Sociology, Criminology, Politics and other social science disciplines). We hope the workshops will also appeal to early career researchers and others doing quantitative social science research (but with little or no Bayesian experience).

    The registration cost for each workshop is £20 (for postgrads) and £30 (for others).


  8. On my Serious Stats blog I have a new post on providing CIs for a difference between independent R-squared coefficients.

    You can find the post there or go directly to the function hosted on RPubs. I have been experimenting with knitr, but can't yet get the HTML from R Markdown to work with my Blogger or WordPress blogs.
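    The post and the RPubs function give the details. As a rough alternative illustration (not the method used in the post), a percentile bootstrap for the difference between two independent R-squared values could look like this, using simulated data:

        set.seed(123)

        # two independent samples, each with its own regression model (simulated)
        d1 <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))
        d2 <- data.frame(y = rnorm(120), x1 = rnorm(120), x2 = rnorm(120))

        # R-squared for a resampled version of a data set
        r2 <- function(data, idx) summary(lm(y ~ x1 + x2, data = data[idx, ]))$r.squared

        # percentile bootstrap CI for the difference in R-squared between the samples
        boot_diff <- replicate(2000, {
          r2(d1, sample(nrow(d1), replace = TRUE)) -
            r2(d2, sample(nrow(d2), replace = TRUE))
        })
        quantile(boot_diff, c(0.025, 0.975))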
