A claim I see quite often in mediation analysis is that there is full (or complete) mediation if the indirect effect (ab) is statistically significant and the direct effect (c') is not. That never really made sense to me, as full mediation is essentially a claim about the size of the mediation effect. For instance, the overall effect c could be 10, ab could be 5.01 and c' could be 4.99, and the proportions of the total effect accounted for by ab and by c' would be about equal at 0.5 or 50%. (If using common approaches such as bootstrapping, it's feasible, albeit unlikely, for c' to be larger than ab and yet for ab to be statistically significant while c' is not.)
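To make the arithmetic concrete, here is a small Python sketch using the hypothetical numbers above (the variable names are my own, just for illustration):

```python
# Hypothetical numbers from the example above: the indirect and direct
# effects are nearly equal, so neither path dominates.
ab = 5.01        # indirect effect (a * b)
c_prime = 4.99   # direct effect (c')

# In a simple linear mediation model the total effect decomposes as
# c = ab + c'.
c = ab + c_prime

prop_indirect = ab / c       # proportion of the total effect mediated
prop_direct = c_prime / c    # proportion of the total effect not mediated

print(round(prop_indirect, 3))  # 0.501
print(round(prop_direct, 3))    # 0.499
```

Despite ab being (in this scenario) significant and c' not, the two paths each carry about half the total effect, so "full mediation" would be a misleading label.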
In my view, claiming full mediation requires that the ab effect is large and accounts for most, and perhaps nearly all, of the total effect. Some effect size metrics have been proposed to determine whether mediation is full or substantial. For example, one proposal includes, as part of the condition, the proportion of the effect accounted for by ab being > 0.80. My personal view is that it's more helpful to report a mediation effect size that can range from 0 to 1 (or 0 to 100%) and simply interpret that. In practice, full or complete mediation expressed as 100% of the effect is likely to be rare, and it's more useful to know how substantial the effect is. You could also set a threshold of, say, 90% or 95% in advance if you pre-registered the study (and if being able to declare full mediation really mattered).
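A minimal sketch of this threshold idea in Python (the function names and the decision rule are illustrative, not a standard API; the 0.80 default mirrors the criterion mentioned above):

```python
def proportion_mediated(ab, c_prime):
    """Proportion of the total effect carried by the indirect path.

    Assumes consistent mediation (ab and c' have the same sign),
    so the result lies between 0 and 1.
    """
    return ab / (ab + c_prime)

def label_mediation(ab, c_prime, threshold=0.80):
    """Illustrative rule: call mediation 'full' only if the indirect path
    carries at least `threshold` of the total effect. A pre-registered
    study might set 0.90 or 0.95 instead."""
    return "full" if proportion_mediated(ab, c_prime) >= threshold else "partial"

print(label_mediation(5.01, 4.99))  # partial (proportion ~ 0.50)
print(label_mediation(9.5, 0.5))    # full (proportion = 0.95)
```

In practice I would report the proportion itself rather than the label, for the reasons given above.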
Having determined the need for an effect size estimate, what should you use? Until recently I don't think the choice was obvious. I always recommended the simple proportion measure above because it is simple and easy to interpret, and I value that more than other statistical properties. However, a 2026 paper by Yuan, Wang and Liu reviews a range of different metrics and proposes a new R-square measure. Unlike many other R-square type measures, it looks solely at the variance accounted for by the total effect of X on Y (c). It therefore takes the value 1 if the variance accounted for by ab is 100% of this value (and 0 if it explains none). I am usually wary of R-square measures, but this one seems reasonable (and in fact it behaves similarly to the proportion of effect measure described above). So one can see it as a refinement of the simpler approach, using variance rather than the magnitude of the effect.
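As a rough sketch of the idea, and emphatically not the exact formula from Yuan et al. (which you should take from their paper and code), one plausible reading is: take the outcome variance explained by the indirect path as a fraction of the variance explained by the total effect. In a simple linear model both explained variances are proportional to the squared coefficient times Var(X), so under that assumption the ratio reduces to (ab / c) squared:

```python
def r2_mediated_sketch(ab, c):
    """Sketch (my assumption, not the published formula): variance
    explained by the indirect path, as a fraction of the variance
    explained by the total effect c. In a simple linear model both
    are proportional to coefficient**2 * Var(X), so the Var(X) terms
    cancel and the ratio reduces to (ab / c) ** 2."""
    return (ab / c) ** 2

# With the numbers from the earlier example:
print(r2_mediated_sketch(5.01, 10.0))  # ~ 0.251
```

Note how this behaves like the squared proportion measure, which is consistent with the observation above that the two track each other closely.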
All of the above holds for consistent mediation, but not for inconsistent mediation (where the direct and indirect effects have opposite signs). Yuan et al. do describe a version of their measure for the inconsistent case, but while interesting I don't think it is useful in practice. To me it makes more sense to generalise the proportion measure to the inconsistent case.
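One common way to generalise the proportion measure (not necessarily the generalisation I adopt in my function below, which you should check against the linked note) is to use absolute values, so the result stays in [0, 1] even when the direct and indirect effects oppose each other:

```python
def proportion_mediated_general(ab, c_prime):
    """One common generalisation of the proportion-mediated measure to
    inconsistent mediation (ab and c' of opposite sign): use absolute
    values so the result stays in [0, 1]. With opposite signs the naive
    ab / (ab + c') ratio can exceed 1 or go negative."""
    return abs(ab) / (abs(ab) + abs(c_prime))

# Inconsistent mediation: direct and indirect effects oppose each other.
print(proportion_mediated_general(5.0, -3.0))  # 0.625
```

Here the naive ratio would be 5.0 / 2.0 = 2.5, which is uninterpretable as a proportion, while the absolute-value version gives a sensible 0.625.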
I think the paper is well worth a read, as it looks at many different measures and also includes R code to calculate most of them. I ended up adapting their code to make it easier to use and to add to the range of statistics provided. I also include my attempt to generalise the proportion measure to the inconsistent case. You can find my function, examples of how to use it, and a longer note on my thoughts about effect size for simple mediation here.