One fascinating thing about working in the area of psychological statistics is how hard it is to move people away from reliance on bad, inefficient or otherwise problematic methods. My own view (informed to some extent by the literature, by experience and by anecdote) is that it isn't sufficient merely to establish that the standard approach is wrong. It isn't even sufficient to provide an obviously superior alternative. You also need to do three other things: i) get the message out to the people using the method, ii) reduce barriers to implementing the method (provide user-friendly software, easy-to-understand tutorials and so forth), and iii) get the new method taught at undergraduate or masters level. A good illustration is the need to provide confidence intervals (CIs) as well as point estimates of statistics. This has been advocated for decades and has only gradually trickled through to standard practice. In addition, CIs are commonly reported only where popular software such as SPSS reports them by default. For instance, few psychology papers report a CI for the correlation coefficient r (probably because it isn't in many introductory texts and isn't part of the default SPSS output).
A case in point is the problem of internal reliability estimation. There are dozens of papers in the psychometrics literature showing that the most popular internal consistency reliability measure, coefficient alpha (Cronbach's alpha), is seriously flawed. A number of alternative approaches or measures have been proposed that are relatively easy to estimate and have good properties when applied to scales in psychology. However, these measures rarely get used in practice. The main barriers here are probably lack of awareness of the problem and the limited availability of appropriate software. My guess is that once these barriers are reduced, alternatives to alpha will also get into textbooks and be more widely taught.
Tom Dunn (a former PhD student) has just written a paper (co-authored with me and Viv Brunsden) aiming to change people's attitudes to coefficient alpha. This has just been accepted in the British Journal of Psychology. In it we try to summarize, with as little jargon as possible, the criticisms of coefficient alpha and recommend a simple alternative: McDonald's coefficient omega (McDonald, 1999). Crucially, we also provide a mini-tutorial on calculating omega using R. We chose R mainly because it is free, open source and runs on Mac, PC and Linux systems. A further, major advantage is that the MBESS package will estimate a bootstrap CI for omega. A reliability estimate (of any kind) is pretty useless if presented only as a point estimate, because it could be measured very imprecisely. In many cases the lower bound of the 95% CI is a more useful guide to whether a test is reliable. The lower bound will usually be conservative, but it is better to be safe than sorry in most cases.
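For a flavour of what this looks like in practice, here is a minimal sketch using the ci.reliability function in MBESS (the file name is a placeholder, and the exact arguments should be checked against the MBESS documentation; the full worked example is in the script linked below):

```r
# Minimal sketch: point estimate and bootstrap CI for McDonald's omega via MBESS.
# The file name below is a placeholder; the worked example uses the data linked further down.
# install.packages("MBESS")   # if MBESS is not already installed
library(MBESS)

items <- read.csv("omega_example.csv")   # one column per scale item, one row per person

# Omega with a bootstrapped 95% CI (B bootstrap resamples)
ci.reliability(data = items, type = "omega",
               interval.type = "bca", B = 1000, conf.level = 0.95)
```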
A pre-print of the paper (links to the online version will be added as soon as they are available) can be found here.
The R script that runs the example in the paper can be accessed here.
The data sets (in a zipped folder called "omega example") can be downloaded here. Unzip this folder and put it on your desktop. (If you move it elsewhere you need to specify the path in the R code or change the R working directory to the folder where the data files are located.) You can also download the .csv formatted data file directly from here.
References
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.
Dunn, T., Baguley, T., & Brunsden, V. (2013, in press). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology.
McDonald, R. P. (1999). Test theory: A unified approach. Mahwah, NJ: Lawrence Erlbaum Associates.
Aug 17
I Will Not Ever, NEVER Run a MANOVA
I have been meaning to write a paper about MANOVA (and in particular why it should be avoided) for some time, but never got round to it. However, I recently discovered an excellent article by Francis Huang that pretty much sums up most of what I'd cover. In this blog post I'll just run through the main issues and refer you to Francis's paper for a more in-depth critique, or to the section on MANOVA in Serious Stats (Baguley, 2012).
Jan 19
A brief introduction to logistic regression
I wrote a brief introduction to logistic regression aimed at psychology students. You can take a look at the pdf here:
A more comprehensive introduction in terms of the generalised linear model can be found in my book:
Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan.
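As a quick taster, here is a minimal (and entirely artificial) example of fitting a logistic regression in R with glm; the variable names and simulated data are purely illustrative:

```r
# Minimal logistic regression sketch with simulated (illustrative) data
set.seed(1)
dat <- data.frame(hours = rnorm(100, mean = 10, sd = 2))
dat$pass <- rbinom(100, size = 1, prob = plogis(-5 + 0.5 * dat$hours))

fit <- glm(pass ~ hours, data = dat, family = binomial)
summary(fit)                                        # coefficients on the log-odds scale
exp(cbind(OR = coef(fit), confint.default(fit)))    # odds ratios with Wald-type 95% CIs
```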
May 18
Serious Stats: Obtaining CIs for Spearman's rho or Kendall's tau
I wrote a short blog post (with R code) on how to calculate corrected CIs for rho and tau using the Fisher z transformation.
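The gist, for those who just want a sketch: apply the Fisher z transformation to the sample coefficient, use an inflated standard error, and back-transform the interval. The corrections below are Fieller-style adjustments and may differ in detail from those used in the blog post:

```r
# Sketch: approximate CIs for Spearman's rho and Kendall's tau via Fisher z
# with inflated standard errors (Fieller-type corrections; these may differ
# in detail from the approach in the blog post).
rho.ci <- function(rho, n, conf.level = 0.95) {
  z <- atanh(rho)                          # Fisher z transform
  se <- sqrt(1.06 / (n - 3))               # corrected SE for Spearman's rho
  crit <- qnorm(1 - (1 - conf.level) / 2)
  tanh(z + c(-1, 1) * crit * se)           # back-transform to the correlation scale
}

tau.ci <- function(tau, n, conf.level = 0.95) {
  z <- atanh(tau)
  se <- sqrt(0.437 / (n - 4))              # corrected SE for Kendall's tau
  crit <- qnorm(1 - (1 - conf.level) / 2)
  tanh(z + c(-1, 1) * crit * se)
}

rho.ci(0.5, 40)   # e.g. observed rho = .5 with n = 40
tau.ci(0.3, 40)
```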
May 13
Serious stats: Type II versus Type III Sums of Squares
I have written a short article on Type II versus Type III SS in ANOVA-like models on my Serious Stats blog:
https://seriousstats.wordpress.com/2020/05/13/type-ii-and-type-iii-sums-of-squares-what-should-i-choose/
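For reference, Type II and Type III SS can be obtained in R with the car package's Anova function. The toy data below are purely illustrative; note that meaningful Type III tests require sum-to-zero contrasts:

```r
# Illustrative sketch: Type II vs Type III SS via car::Anova (toy data)
# install.packages("car")
library(car)

set.seed(1)
dat <- expand.grid(a = factor(1:2), b = factor(1:3), rep = 1:10)
dat$y <- rnorm(nrow(dat))

# Sum-to-zero contrasts so that Type III tests are interpretable
options(contrasts = c("contr.sum", "contr.poly"))
fit <- lm(y ~ a * b, data = dat)

Anova(fit, type = 2)   # Type II sums of squares
Anova(fit, type = 3)   # Type III sums of squares
```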
Sep 5
Egon Pearson correction for Chi-Square
I have just published a short blog on the Egon Pearson correction for the chi-square test. This includes links to an R function to run the corrected test (and also provides residual analyses for contingency tables).
The blog is here and the R function here.
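For those who just want the idea: as I understand it, the Egon Pearson ('N − 1') correction simply rescales the ordinary Pearson chi-square statistic by (N − 1)/N. A rough sketch (this is not the linked function, which also provides residual analyses):

```r
# Rough sketch of the 'N - 1' (Egon Pearson) corrected chi-square for a
# contingency table. This is NOT the linked function; it just illustrates
# rescaling the ordinary Pearson statistic by (N - 1)/N.
n1.chisq <- function(tab) {
  fit <- chisq.test(tab, correct = FALSE)            # ordinary Pearson chi-square
  N <- sum(tab)
  stat <- unname(fit$statistic) * (N - 1) / N        # apply the N - 1 correction
  df <- unname(fit$parameter)
  c(statistic = stat, df = df,
    p.value = pchisq(stat, df, lower.tail = FALSE))
}

n1.chisq(matrix(c(12, 5, 7, 15), nrow = 2))   # toy 2 x 2 table
```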
Sep 15
Provisional programme: ESRC-funded conference: Bayesian Data Analysis in the Social Sciences Curriculum (Nottingham, UK, 29th Sept 2017)
Bayesian Data Analysis in the Social Sciences Curriculum
Supported by the ESRC’s Advanced Training Initiative
Venue: Bowden Room, Nottingham Conference Centre
Burton Street, Nottingham, NG1 4BU
Booking information online
Provisional schedule:
Organizers:
Thom Baguley twitter: @seriousstats
Mark Andrews twitter: @xmjandrews
Jun 13
STOP PRESS Introductory Bayesian data analysis workshops for social scientists (June 2017 Nottingham UK)
The third and (possibly) final round of our introductory workshops was overbooked in April, but we have managed to arrange some additional dates in June.
There are still places left on these. More details at: http://www.priorexposure.org.uk/
As with the last round, we are planning a free R workshop beforehand (recommended if you need a refresher or have never used R before).
May 25
Serious Stats blog: CI for differences in independent R square coefficients
In my Serious Stats blog I have a new post on providing CIs for a difference between independent R square coefficients.
You can find the post there or go directly to the function hosted on RPubs. I have been experimenting with knitr but can't yet get the html from R Markdown to work with my Blogger or WordPress blogs.
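For anyone who wants a rough idea of one way to approach this, the sketch below uses a simple percentile bootstrap on the difference of the two R² values; this is a generic illustration, not necessarily the method in the RPubs function, and the model formulas and data are hypothetical:

```r
# Generic percentile-bootstrap sketch for the difference in R^2 between two
# independent samples. This is NOT necessarily the method in the linked RPubs
# function; model formulas and data are hypothetical.
boot.r2.diff <- function(data1, formula1, data2, formula2,
                         B = 2000, conf.level = 0.95) {
  diffs <- replicate(B, {
    d1 <- data1[sample(nrow(data1), replace = TRUE), , drop = FALSE]
    d2 <- data2[sample(nrow(data2), replace = TRUE), , drop = FALSE]
    summary(lm(formula1, data = d1))$r.squared -
      summary(lm(formula2, data = d2))$r.squared
  })
  quantile(diffs, probs = c((1 - conf.level) / 2, 1 - (1 - conf.level) / 2))
}

# Example with simulated data
set.seed(1)
g1 <- data.frame(x = rnorm(50)); g1$y <- 0.6 * g1$x + rnorm(50)
g2 <- data.frame(x = rnorm(60)); g2$y <- 0.3 * g2$x + rnorm(60)
boot.r2.diff(g1, y ~ x, g2, y ~ x)
```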