Psychological Statistics

A quick overview of the egocentric relational event model (EREM) with linked examples in R: part 1 (8 September 2021)

This post is based on an interdisciplinary collaboration (between anthropologists and psychologists) with Kate Ellis-Davies and Sheina Lew-Levy (who initiated the project), Eleanor Fleming and Adam Boyette. The work was recently published in Field Methods and is available here: Demonstrating the Utility of Egocentric Relational Event Modeling Using Focal Follow Data from Congolese BaYaka Children and Adolescents Engaging in Work and Play (https://journals.sagepub.com/doi/full/10.1177/1525822X20987073). I led on the statistical modeling of the data, including adapting the implementation for focal follow data (which I'll explain more about in part 2), with the other authors contributing knowledge of the theory and literature (as we were keen to ensure that we demonstrated the approach using real data addressing a substantive research question). My only regret about the project is that, as of writing, I've only been able to meet one of my co-authors in person.

The purpose of this blog post is to give a bit more context to the approach and to make it easier to learn about the egocentric relational event model (EREM) by providing access to the R code and data. A relational event is a discrete event involving an actor and one or more targets. For example, you might observe children in a play group and code their interactions as discrete events in time, such as child 1 approaching child 2, child 2 offering child 1 a toy, and so on. Relational event models are one approach to modeling such data (and fall under the broader umbrella of network methods). However, not all relational event data involve such complete and detailed network information. In some cases you are only interested in, or only have access to, data involving one actor at a time in relation to their environment (which may include interactions with other actors). Such data lend themselves to analysis using the egocentric relational event model. Although this sort of data might seem limiting, it is often very rich. In particular, it lends itself to analysing patterns of sequences among discrete, non-overlapping events.

The model comes in two flavours: ordinal and interval. In the ordinal version you only have access to information about the order of events. In the interval version you have (or can infer) start and finish times for each event. This means that you can model not only patterns in the sequences of events but also their duration.
For example, does a particular type of event increase the frequency or duration of another event?

The Marcum and Butts example uses the American Time Use Survey (ATUS), which "measures the amount of time people spend doing various activities, such as paid work, childcare, volunteering, and socializing". Using these data we can answer questions such as how often sleep gets interrupted by other activities and which activities cause more sleep interruptions (or more prosaic questions such as how much time people spend on a particular activity). We can also look at how covariates impact the frequency or duration of events. For example, are some sleep interruptions more common for men than for women?
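
To make this concrete, here is a minimal sketch in R of the kind of egocentric event stream involved. The data are made up for illustration (they are not the real ATUS files): for each respondent ("ego") we have a sequence of non-overlapping activity spells with start and stop times, from which we can count how often sleep is interrupted, and by what.

    # Toy egocentric event stream for two respondents (illustrative, not real ATUS data)
    events <- data.frame(
      ego      = c(1, 1, 1, 1, 2, 2, 2),
      activity = c("sleep", "childcare", "sleep", "work",
                   "sleep", "socializing", "sleep"),
      start    = c(0, 360, 380, 540, 0, 400, 430),   # minutes from start of diary
      stop     = c(360, 380, 540, 1020, 400, 430, 560)
    )
    events$duration <- events$stop - events$start

    # Which activities interrupt sleep (i.e. fall between two sleep spells)?
    interruptions <- do.call(rbind, lapply(split(events, events$ego), function(d) {
      d <- d[order(d$start), ]
      idx <- which(d$activity != "sleep" &
                   c(FALSE, head(d$activity, -1) == "sleep") &
                   c(tail(d$activity, -1) == "sleep", FALSE))
      d[idx, ]
    }))
    table(interruptions$activity)                   # frequency of each interrupting activity
    tapply(events$duration, events$activity, sum)   # total time spent per activity

An EREM goes beyond this kind of descriptive summary by modeling the rate of each event type as a function of the preceding event history and of covariates.
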
Please note that the following examples assume you have some familiarity with linear regression models (and ideally generalised linear models for discrete data, such as logistic or Poisson regression). You'll also need a working knowledge of R (or at least of similar statistical programming environments). If R is new to you, I'd suggest finding a tutorial on using R with RStudio first. (There are a lot of these online, including many videos.)

In this first part I'm going to focus not on our data but on a paper by Marcum and Butts (2015) (https://doi.org/10.18637/jss.v064.i05). This introduces the egocentric relational event model and is a fantastic resource for many of the technical details of the model. You can run an EREM with the relevent package in R, but setting up and running EREMs is more than a little fiddly. Marcum and Butts have made this easier with the informR package, essentially a helper package for making EREM models easier to run. They include a really useful tutorial with R code in the paper, so if you want to learn about the EREM I'd suggest working through the relevant parts of the paper (no pun intended). To make this easier (I hope) I've added some commentary and made some minor tweaks to their example.

You can access the Marcum and Butts (2015) worked example here: https://rpubs.com/seriousstats/erem1

In part 2 I'll focus on our Field Methods paper.

References

Ellis-Davies, K., Lew-Levy, S., Fleming, E., Boyette, A. H., & Baguley, T. (2021). Demonstrating the utility of egocentric relational event modeling using focal follow data from Congolese BaYaka children and adolescents engaging in work and play. Field Methods, 33(3), 287–304. https://doi.org/10.1177/1525822X20987073

Marcum, C. S., & Butts, C. T. (2015). Constructing and modifying sequence statistics for relevent using informR in R. Journal of Statistical Software, 64(5). https://doi.org/10.18637/jss.v064.i05


I Will Not Ever, NEVER Run a MANOVA (17 August 2021)

I have been thinking of writing a paper about MANOVA (and in particular why it should be avoided) for some time, but never got round to it. However, I recently discovered an excellent article by Francis Huang that pretty much sums up most of what I'd cover. In this blog post I'll just run through the main issues and refer you to Francis' paper for a more in-depth critique, or to the section on MANOVA in Serious Stats (Baguley, 2012).

I have three main issues with MANOVA:

1) It doesn't do what people think it does.

2) It doesn't offer Type I error protection for subsequent univariate tests (even though many textbooks say it does).

3) There are generally better approaches available if you really are interested in multivariate research questions.

Let's start with the first point. People think MANOVA analyses multiple outcome variables (DVs). This isn't really correct.
It creates a composite DV by combining the outcome variables in an atheoretical way; the analysis then proceeds on that composite. The composite is in a sense 'optimal' because the weights are selected to maximise the variance explained by the set of predictors in the model. However, this optimisation capitalises on chance. Furthermore, it is unique to your sample, invalidating (or at least making difficult) comparisons between studies. It is also hard to interpret. This has knock-on implications for things like standardised effect sizes, as effect size metrics for MANOVA generally relate to the composite DV rather than the original outcome variables. For further discussion see Grayson (2004).

In relation to the second point, the issue is one that is fairly well known in other contexts. In ANOVA one can use an omnibus test of a factor to decide whether to proceed with post hoc pairwise comparisons. This is the logic behind the Fisher LSD test, and it is well known that this test doesn't protect Type I error well when more than three means are being compared: specifically, it protects against the complete null hypothesis but not the partial null hypothesis (see Serious Stats, pp. 495–501). For adequate Type I error protection it would be better to use something like the Holm or Hochberg correction (the latter having greater statistical power if the univariate test statistics are correlated, which they generally are if MANOVA is being considered). That said, if you do just want a test of the omnibus null hypothesis (that there are no effects on any of the DVs), MANOVA may be a convenient way to summarise a large set of univariate tests that are non-significant.

Last but not least, there exist multivariate regression (and other) approaches that are more appropriate for multivariate research questions (see also Huang, 2020). However, I've rarely seen MANOVA used for multivariate research questions. In fact, I've rarely if ever seen a MANOVA reported that actually aided interpretation of the data.
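
Both alternatives are easy to sketch in R. The example below uses simulated data and made-up variable names (it is not taken from Huang's paper): separate univariate tests with a Hochberg adjustment keep conclusions in the original units, and a multivariate linear model can be fitted directly with lm() if a genuinely multivariate question is of interest.

    set.seed(1)
    n <- 60
    group <- factor(rep(c("control", "treatment"), each = n / 2))
    y1 <- rnorm(n, mean = ifelse(group == "treatment", 0.5, 0))
    y2 <- 0.6 * y1 + rnorm(n)          # correlated outcomes

    # Separate univariate tests with a Hochberg (or Holm) adjustment
    p_raw <- c(y1 = t.test(y1 ~ group)$p.value,
               y2 = t.test(y2 ~ group)$p.value)
    p.adjust(p_raw, method = "hochberg")

    # A multivariate linear model on the original outcomes
    fit <- lm(cbind(y1, y2) ~ group)
    summary(fit)   # per-outcome coefficients, interpretable in the original units
    anova(fit)     # omnibus multivariate test of the group term (Pillai by default)

The adjusted univariate route gives per-outcome conclusions in the original units, which is exactly what the optimised composite from a MANOVA does not provide.
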
References

Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan. (See pages 647–650.)

Grayson, D. (2004). Some myths and legends in quantitative psychology. Understanding Statistics, 3(2), 101–134. https://doi.org/10.1207/s15328031us0302_3

Huang, F. L. (2020). MANOVA: A procedure whose time has passed? Gifted Child Quarterly, 64(1), 56–60. https://doi.org/10.1177/0016986219887200

Huberty, C. J., & Morris, J. D. (1989). Multivariate analysis versus multiple univariate analyses. Psychological Bulletin, 105(2), 302–308. https://doi.org/10.1037/0033-2909.105.2.302


A brief introduction to logistic regression (19 January 2021)

I wrote a brief introduction to logistic regression aimed at psychology students. You can take a look at the PDF here: https://drive.google.com/file/d/1ZIuvzen6kztKomY6Brir3-8gsSdmgUoi/preview
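
As a flavour of the sort of model the handout covers, here is a minimal logistic regression in R. The data and variable names are simulated purely for illustration (they are not from the handout):

    set.seed(42)
    n <- 200
    study_hours <- rnorm(n, mean = 5, sd = 2)
    # Simulate a pass/fail outcome whose probability increases with study hours
    pass <- rbinom(n, size = 1, prob = plogis(-2 + 0.5 * study_hours))

    fit <- glm(pass ~ study_hours, family = binomial)
    summary(fit)       # coefficients are on the log-odds (logit) scale
    exp(coef(fit))     # odds ratios
    predict(fit, newdata = data.frame(study_hours = 8), type = "response")  # predicted probability
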
A more comprehensive introduction in terms of the generalised linear model can be found in my book:

Baguley, T. (2012). Serious stats: A guide to advanced statistics for the behavioral sciences. Palgrave Macmillan.


Serious Stats: Obtaining CIs for Spearman's rho or Kendall's tau (18 May 2020)

I wrote a short blog post (with R code) on how to calculate corrected CIs for rho and tau using the Fisher z transformation:

https://seriousstats.wordpress.com/2020/05/18/cis-for-spearmans-rho-and-kendalls-tau/
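
The basic idea is easy to sketch in base R: transform the rank correlation with Fisher's z, build an interval on the z scale using an adjusted standard error, and back-transform. The sketch below uses commonly cited large-sample approximations for the standard errors (along the lines of Bonett & Wright, 2000); the function name is made up, and the blog post gives the fuller treatment, which may differ in detail.

    # Minimal sketch: Fisher z CI for Spearman's rho or Kendall's tau (illustrative only)
    rank_cor_ci <- function(x, y, method = c("spearman", "kendall"), conf = 0.95) {
      method <- match.arg(method)
      n <- length(x)
      r <- cor(x, y, method = method)
      z <- atanh(r)                                   # Fisher z transformation
      se <- if (method == "spearman") {
        sqrt((1 + r^2 / 2) / (n - 3))                 # common approximation for Spearman's rho
      } else {
        sqrt(0.437 / (n - 4))                         # common approximation for Kendall's tau
      }
      crit <- qnorm(1 - (1 - conf) / 2)
      tanh(z + c(lower = -1, upper = 1) * crit * se)  # back-transform to the correlation scale
    }

    set.seed(3)
    x <- rnorm(40); y <- x + rnorm(40)
    rank_cor_ci(x, y, "spearman")
    rank_cor_ci(x, y, "kendall")
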
Serious Stats: Type II versus Type III Sums of Squares (13 May 2020)

I have written a short article on Type II versus Type III SS in ANOVA-like models on my Serious Stats blog:

https://seriousstats.wordpress.com/2020/05/13/type-ii-and-type-iii-sums-of-squares-what-should-i-choose/


Egon Pearson correction for Chi-Square (5 September 2019)

I have just published a short blog post on the Egon Pearson correction for the chi-square test. This includes links to an R function to run the corrected test (which also provides residual analyses for contingency tables).

The blog post is here: https://seriousstats.wordpress.com/2019/09/05/chi-square-and-the-egon-pearson-correction/ and the R function is here: http://rpubs.com/seriousstats/epcs_test
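
The core of the correction is simple: the usual Pearson chi-square statistic is multiplied by (N − 1)/N, where N is the total sample size, and referred to the same chi-squared distribution. A minimal sketch is below; the function name is made up and this is not the linked epcs_test function, which does considerably more (including the residual analyses).

    # Egon Pearson ('N - 1') correction for a contingency table chi-square test
    # Minimal sketch only; the linked R function is more complete
    ep_chisq_test <- function(tab) {
      N <- sum(tab)
      std <- chisq.test(tab, correct = FALSE)        # uncorrected Pearson chi-square
      stat <- unname(std$statistic) * (N - 1) / N    # apply the (N - 1)/N correction
      p <- pchisq(stat, df = std$parameter, lower.tail = FALSE)
      c(X2_EP = stat, df = unname(std$parameter), p = p)
    }

    tab <- matrix(c(12, 5, 7, 15), nrow = 2)         # example 2 x 2 table
    ep_chisq_test(tab)
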
Palatino;">Time<o:p></o:p></span></i></b></div> </td> <td style="background: #D9D9D9; border-left: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 217; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div align="center" class="MsoNormal" style="text-align: center;"> <b style="mso-bidi-font-weight: normal;"><i style="mso-bidi-font-style: normal;"><span lang="EN-US" style="font-family: Palatino;">Speaker<o:p></o:p></span></i></b></div> </td> <td style="background: #D9D9D9; border-left: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 217; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div align="center" class="MsoNormal" style="text-align: center;"> <b style="mso-bidi-font-weight: normal;"><i style="mso-bidi-font-style: normal;"><span lang="EN-US" style="font-family: Palatino;">Title<o:p></o:p></span></i></b></div> </td> </tr> <tr style="mso-yfti-irow: 1;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">9.30<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <br /></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <i style="mso-bidi-font-style: normal;"><span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Registration (and coffee!)<o:p></o:p></span></i></div> <div class="MsoNormal"> <br /></div> </td> </tr> <tr style="mso-yfti-irow: 2;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">9.50<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext 
.5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Thom Baguley<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Introduction And Welcome<o:p></o:p></span></div> <div class="MsoNormal"> <br /></div> </td> </tr> <tr style="mso-yfti-irow: 3;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">10.00<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Mark Andrews<o:p></o:p></span></div> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Thom Baguley<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Teaching Bayesian Data Analysis To Social Scientists<o:p></o:p></span></div> </td> </tr> <tr style="mso-yfti-irow: 4;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">10.50<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div 
class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Zoltan Dienes<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Principles For Teaching And Using Bayes Factors<o:p></o:p></span></div> <div class="MsoNormal"> <br /></div> </td> </tr> <tr style="mso-yfti-irow: 5;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">11.40<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <br /></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <i style="mso-bidi-font-style: normal;"><span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Coffee<o:p></o:p></span></i></div> <div class="MsoNormal"> <br /></div> </td> </tr> <tr style="mso-yfti-irow: 6;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">12.00<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Colin Foster<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; 
mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Bayes Factors Show Equivalence Between Two Contrasting Approaches To Developing School Pupils’ Mathematical Fluency<o:p></o:p></span></div> </td> </tr> <tr style="mso-yfti-irow: 7;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">12.20<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Helen Hodges<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Towards A Bayesian Approach In Criminology:<o:p></o:p></span></div> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">A Case Study Of Risk Assessment In Youth Justice<o:p></o:p></span></div> </td> </tr> <tr style="mso-yfti-irow: 8;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">12.40<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <br /></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid 
windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <i style="mso-bidi-font-style: normal;"><span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Lunch<o:p></o:p></span></i></div> <div class="MsoNormal"> <br /></div> </td> </tr> <tr style="mso-yfti-irow: 9;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">1.40<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Jayne Pickering<o:p></o:p></span></div> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Matthew Inglis<o:p></o:p></span></div> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Nina Attridge<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Does Pain Affect Performance On The Attentional Networking Task?<o:p></o:p></span></div> <div class="MsoNormal"> <br /></div> </td> </tr> <tr style="mso-yfti-irow: 10;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">2.00<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Oliver Clark<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; 
mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">First Steps Towards A Bayesian Model Of Video Game Avatar Influence<o:p></o:p></span></div> </td> </tr> <tr style="mso-yfti-irow: 11;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">2.20<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <br /></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <i style="mso-bidi-font-style: normal;"><span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Coffee<o:p></o:p></span></i></div> <div class="MsoNormal"> <br /></div> </td> </tr> <tr style="mso-yfti-irow: 12;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">2.40<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Richard Morey<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; 
font-size: 11.0pt;">The Fallacy Of Placing Confidence In Confidence Intervals<o:p></o:p></span></div> <div class="MsoNormal"> <br /></div> </td> </tr> <tr style="mso-yfti-irow: 13;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">3.30<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Daniel Lakens<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Learning Bayes As A Frequentist: A Personal Tragedy In Three Parts<o:p></o:p></span></div> </td> </tr> <tr style="mso-yfti-irow: 14; mso-yfti-lastrow: yes;"> <td style="background: #F2F2F2; border-top: none; border: solid windowtext 1.0pt; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 49.65pt;" valign="top" width="50"> <div align="right" class="MsoNormal" style="text-align: right;"> <span lang="EN-US" style="font-family: Palatino;">4.20<o:p></o:p></span></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 92.15pt;" valign="top" width="92"> <div class="MsoNormal"> <br /></div> </td> <td style="background: #F2F2F2; border-bottom: solid windowtext 1.0pt; border-left: none; border-right: solid windowtext 1.0pt; border-top: none; mso-background-themecolor: background1; mso-background-themeshade: 242; mso-border-alt: solid windowtext .5pt; mso-border-left-alt: solid windowtext .5pt; mso-border-top-alt: solid windowtext .5pt; padding: 0cm 5.4pt 0cm 5.4pt; width: 347.25pt;" valign="top" width="347"> <div class="MsoNormal"> <i style="mso-bidi-font-style: normal;"><span lang="EN-US" style="font-family: Palatino; font-size: 11.0pt;">Close and farewell<o:p></o:p></span></i></div> </td> </tr> </tbody></table> <div class="MsoNormal"> <br /></div> <div class="MsoNormal"> <br /></div> <div class="MsoNormal"> <br /></div> <div class="MsoNormal"> <i 
style="mso-bidi-font-style: normal;"><span lang="EN-US" style="font-family: Palatino;">Organizers</span></i><span lang="EN-US" style="font-family: Palatino;">:<o:p></o:p></span></div> <div class="MsoNormal"> <br /></div> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino;"><span style="mso-tab-count: 1;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; </span>Thom Baguley</span><span lang="EN-US" style="font-family: Palatino;"><span style="mso-tab-count: 1;">&nbsp; &nbsp;</span>twitter: @seriousstats<o:p></o:p></span></div> <div class="MsoNormal"> <br /></div> <!--[if gte mso 9]><xml> <o:DocumentProperties> <o:Revision>0</o:Revision> <o:TotalTime>0</o:TotalTime> <o:Pages>1</o:Pages> <o:Words>234</o:Words> <o:Characters>1339</o:Characters> <o:Company>NTU</o:Company> <o:Lines>11</o:Lines> <o:Paragraphs>3</o:Paragraphs> <o:CharactersWithSpaces>1570</o:CharactersWithSpaces> <o:Version>14.0</o:Version> </o:DocumentProperties> <o:OfficeDocumentSettings> <o:AllowPNG/> </o:OfficeDocumentSettings> </xml><![endif]--> <!--[if gte mso 9]><xml> <w:WordDocument> <w:View>Normal</w:View> <w:Zoom>0</w:Zoom> <w:TrackMoves/> <w:TrackFormatting/> <w:PunctuationKerning/> <w:ValidateAgainstSchemas/> <w:SaveIfXMLInvalid>false</w:SaveIfXMLInvalid> <w:IgnoreMixedContent>false</w:IgnoreMixedContent> <w:AlwaysShowPlaceholderText>false</w:AlwaysShowPlaceholderText> <w:DoNotPromoteQF/> <w:LidThemeOther>EN-US</w:LidThemeOther> <w:LidThemeAsian>JA</w:LidThemeAsian> <w:LidThemeComplexScript>X-NONE</w:LidThemeComplexScript> <w:Compatibility> <w:BreakWrappedTables/> <w:SnapToGridInCell/> <w:WrapTextWithPunct/> <w:UseAsianBreakRules/> <w:DontGrowAutofit/> <w:SplitPgBreakAndParaMark/> <w:EnableOpenTypeKerning/> <w:DontFlipMirrorIndents/> <w:OverrideTableStyleHps/> <w:UseFELayout/> </w:Compatibility> <m:mathPr> <m:mathFont m:val="Cambria Math"/> <m:brkBin m:val="before"/> <m:brkBinSub m:val="&#45;-"/> <m:smallFrac m:val="off"/> <m:dispDef/> <m:lMargin m:val="0"/> <m:rMargin m:val="0"/> <m:defJc m:val="centerGroup"/> <m:wrapIndent m:val="1440"/> <m:intLim m:val="subSup"/> <m:naryLim m:val="undOvr"/> </m:mathPr></w:WordDocument> </xml><![endif]--><!--[if gte mso 9]><xml> <w:LatentStyles DefLockedState="false" DefUnhideWhenUsed="true" DefSemiHidden="true" DefQFormat="false" DefPriority="99" LatentStyleCount="276"> <w:LsdException Locked="false" Priority="0" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Normal"/> <w:LsdException Locked="false" Priority="9" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="heading 1"/> <w:LsdException Locked="false" Priority="9" QFormat="true" Name="heading 2"/> <w:LsdException Locked="false" Priority="9" QFormat="true" Name="heading 3"/> <w:LsdException Locked="false" Priority="9" QFormat="true" Name="heading 4"/> <w:LsdException Locked="false" Priority="9" QFormat="true" Name="heading 5"/> <w:LsdException Locked="false" Priority="9" QFormat="true" Name="heading 6"/> <w:LsdException Locked="false" Priority="9" QFormat="true" Name="heading 7"/> <w:LsdException Locked="false" Priority="9" QFormat="true" Name="heading 8"/> <w:LsdException Locked="false" Priority="9" QFormat="true" Name="heading 9"/> <w:LsdException Locked="false" Priority="39" Name="toc 1"/> <w:LsdException Locked="false" Priority="39" Name="toc 2"/> <w:LsdException Locked="false" Priority="39" Name="toc 3"/> <w:LsdException Locked="false" Priority="39" Name="toc 4"/> <w:LsdException Locked="false" Priority="39" Name="toc 5"/> <w:LsdException Locked="false" 
Priority="39" Name="toc 6"/> <w:LsdException Locked="false" Priority="39" Name="toc 7"/> <w:LsdException Locked="false" Priority="39" Name="toc 8"/> <w:LsdException Locked="false" Priority="39" Name="toc 9"/> <w:LsdException Locked="false" Priority="35" QFormat="true" Name="caption"/> <w:LsdException Locked="false" Priority="10" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Title"/> <w:LsdException Locked="false" Priority="1" Name="Default Paragraph Font"/> <w:LsdException Locked="false" Priority="11" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtitle"/> <w:LsdException Locked="false" Priority="22" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Strong"/> <w:LsdException Locked="false" Priority="20" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Emphasis"/> <w:LsdException Locked="false" Priority="59" SemiHidden="false" UnhideWhenUsed="false" Name="Table Grid"/> <w:LsdException Locked="false" UnhideWhenUsed="false" Name="Placeholder Text"/> <w:LsdException Locked="false" Priority="1" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="No Spacing"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 1"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 1"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 1"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 1"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 1"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 1"/> <w:LsdException Locked="false" UnhideWhenUsed="false" Name="Revision"/> <w:LsdException Locked="false" Priority="34" SemiHidden="false" 
UnhideWhenUsed="false" QFormat="true" Name="List Paragraph"/> <w:LsdException Locked="false" Priority="29" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Quote"/> <w:LsdException Locked="false" Priority="30" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Quote"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 1"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 1"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 1"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 1"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 1"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 1"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 1"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 1"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 2"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 2"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 2"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 2"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 2"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 2"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 2"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 2"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 2"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 2"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 2"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 2"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 2"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 2"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 3"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 3"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 3"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 3"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 3"/> <w:LsdException Locked="false" 
Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 3"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 3"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 3"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 3"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 3"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 3"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 3"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 3"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 3"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 4"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 4"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 4"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 4"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 4"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 4"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 4"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 4"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 4"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 4"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 4"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 4"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 4"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 4"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 5"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 5"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 5"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 5"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 5"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 5"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 5"/> <w:LsdException 
Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 5"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 5"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 5"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 5"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 5"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 5"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 5"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 6"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 6"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 6"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 6"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 6"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 6"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 6"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 6"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 6"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 6"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 6"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 6"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 6"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 6"/> <w:LsdException Locked="false" Priority="19" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtle Emphasis"/> <w:LsdException Locked="false" Priority="21" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Emphasis"/> <w:LsdException Locked="false" Priority="31" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtle Reference"/> <w:LsdException Locked="false" Priority="32" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Reference"/> <w:LsdException Locked="false" Priority="33" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Book Title"/> <w:LsdException Locked="false" Priority="37" Name="Bibliography"/> <w:LsdException Locked="false" Priority="39" QFormat="true" Name="TOC Heading"/> </w:LatentStyles> </xml><![endif]--> <!--[if gte mso 10]> <style> /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-parent:""; mso-padding-alt:0cm 5.4pt 
<br /> <div class="MsoNormal"> <span lang="EN-US" style="font-family: Palatino;">&nbsp; &nbsp; &nbsp; &nbsp; Mark Andrews &nbsp;twitter: @xmjandrews</span></div> thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-52150582503694701162017-07-27T20:36:00.003+01:002017-07-27T20:36:57.062+01:00Announcement: ESRC funded conference: Bayesian Data Analysis in the Social Sciences Curriculum (29th Sept 2017)<br /> I am pleased to announce that booking is now open for the end-of-grant Prior Exposure conference on <i>Bayesian Data Analysis in the Social Sciences Curriculum</i> on 29th September. We are still finalising the programme but have confirmed contributions from Richard Morey (University of Cardiff), Zoltan Dienes (University of Sussex) and Daniel Lakens (Eindhoven University of Technology), as well as presentations from Mark Andrews and myself.<br /> <br /> We are also seeking submissions from PhD students and others on using and teaching Bayes.
Booking for the final round of workshops will also open shortly.<br /> <br /> <a href="https://www.ntu.ac.uk/about-us/events/events/2017/09/esrc-conference-bayesian-data-analysis-in-the-social-sciences-curriculum" rel="noopener noreferrer" target="_blank">https://www.ntu.ac.uk/about-us/events/events/2017/09/esrc-conference-bayesian-data-analysis-in-the-social-sciences-curriculum</a><br /> <br /> You can book via this link:<br /> <br /><a href="http://onlinestore.ntu.ac.uk/conferences-events/school-of-social-sciences/events/bayesian-data-analysis-in-the-social-sciences-curriculum" rel="noopener noreferrer" target="_blank">http://onlinestore.ntu.ac.uk/conferences-events/school-of-social-sciences/events/bayesian-data-analysis-in-the-social-sciences-curriculum</a><br /> <br /> Delegate Registration: Registration is £20 (Early Bird fee of £10 if booked before Friday 1 September 2017). The cost includes lunch and coffee in the Nottingham Conference Centre (Newton building, Nottingham Trent University).thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-23022747848601451962017-06-13T15:50:00.003+01:002017-06-13T15:50:48.996+01:00STOP PRESS Introductory Bayesian data analysis workshops for social scientists (June 2017 Nottingham UK)<span style="font-family: 'trebuchet ms', sans-serif;"><br /></span> <span style="font-family: 'trebuchet ms', sans-serif;">The third and (possibly) final round of our introductory workshops was overbooked in April, but we have managed to arrange some additional dates in June.</span><br /> <span style="font-family: 'trebuchet ms', sans-serif;"><br /></span> <span style="font-family: 'trebuchet ms', sans-serif;">There are still places left on these. More details at: <a href="http://www.priorexposure.org.uk/">http://www.priorexposure.org.uk/</a></span><br /> <span style="font-family: 'trebuchet ms', sans-serif;"><br /></span> <span style="font-family: 'trebuchet ms', sans-serif;">As with the last round we are planning a free R workshop beforehand (recommended if you need a refresher or have never used R before).
Unfortunately we can't offer bursaries for these additional workshops (as this wasn't part of the original ESRC funding).</span><br /> <span style="font-family: 'trebuchet ms', sans-serif;"><br /></span> <div style="text-align: start;"> <span style="font-family: Trebuchet MS, sans-serif;">They are primarily (but not exclusively) aimed at UK social science PhD students (so not just Psychology or Neuroscience, but very much also Sociology, Criminology, Politics and other social science disciplines). We hope the workshops will also appeal to early career researchers and others doing quantitative social science research (but with little or no Bayesian experience).</span></div> <span style="font-family: 'trebuchet ms', sans-serif;"><br />The registration cost for each workshop is £20 (for postgrads) and £30 (for others).</span>thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-48320783805749105462017-05-25T22:30:00.001+01:002017-05-25T22:33:32.981+01:00Serious Stats blog: CI for differences in independent R square coefficients<br /> <span style="font-family: 'trebuchet ms', sans-serif;">In my <a href="https://seriousstats.wordpress.com/">Serious Stats blog</a> I have a new post on providing CIs for a difference between independent R square coefficients.</span><br /> <br /> <span style="font-family: 'trebuchet ms', sans-serif;">You can find the post <a href="https://seriousstats.wordpress.com/2017/05/25/ci-for-difference-between-independent-r-square-coefficients/">there</a> or go directly to the function hosted on <a href="http://rpubs.com/seriousstats/ci_diff_rsq">RPubs</a>. I have been experimenting with knitr but can't yet get the HTML from R Markdown to work with my Blogger or WordPress blogs.</span> thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-63043030321736480742017-01-24T21:16:00.004+00:002017-01-24T21:16:59.587+00:00ESRC funded Bayesian data analysis workshops for social scientists<span style="font-family: Trebuchet MS, sans-serif;"><br /></span> <span style="font-family: Trebuchet MS, sans-serif;">The third and (possibly) final round of the workshops is open for booking.
As with the last round we are planning a free R workshop beforehand (recommended if you need a refresher or have never used R before), but can't offer bursaries for this.</span><br /> <span style="font-family: Trebuchet MS, sans-serif;"><br /></span> <span style="font-family: Trebuchet MS, sans-serif;">More details at: <a href="http://www.priorexposure.org.uk/">http://www.priorexposure.org.uk/</a></span><br /> <span style="font-family: Trebuchet MS, sans-serif;"><br /></span> <span style="font-family: Trebuchet MS, sans-serif;">This is part of the <a href="http://www.esrc.ac.uk/skills-and-careers/studentships/doctoral-training-centres/advanced-training/esrc-advanced-training-initiative-awards-2014/">ESRC Advanced Training Initiative</a>.<br /><br />The first two workshops are available for booking now (though places are filling up quite fast). They are primarily (but not exclusively) aimed at UK social science PhD students (so not just Psychology, but very much also Sociology, Criminology, Politics and other social science disciplines). We hope the workshops will also appeal to early career researchers and others doing quantitative social science research (but with little or no Bayesian experience).<br /><br />The ESRC is supporting us with bursary funding for travel and subsistence (see web site for details). These are open to all UK social science PhD students (not just those with ESRC funding), but such funded places are limited.<br /><br />If the demand is sufficient we may try and put on additional workshops this year (though maybe I'm being too optimistic!). We ran extra workshops last June for this reason.<br /><br /></span><span style="font-family: Trebuchet MS, sans-serif;"><span style="background-color: #fafafa; color: #333333;">The registration cost for each workshop is £10 (for postgrads) and £20 (for others) - the information is buried in the booking link but we'll try and make that clearer ...
The workshops are non-profit so this fee is to cover basic running costs (e.g., lunch etc.).</span></span>thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-76355424242577656752016-09-02T15:17:00.002+01:002016-09-02T15:18:31.160+01:00ESRC Prior Exposure workshops: advanced Bayesian data analysisThere are still a few places left on our September Bayesian Data analysis workshops&nbsp;<span style="background-color: white; font-family: &quot;calibri&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif; font-size: 16px;">held in Nottingham Trent University on September 15 and 16, 2016.</span><br /> <br style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 16px;" /> <span style="background-color: white; font-family: &quot;calibri&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif; font-size: 16px;">These are part of the ESRC's Advanced Training Initiative and are aimed at PhD students and researchers (postdocs, lecturers, etc.) in social sciences.</span><br /> <br style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 16px;" /> <span style="background-color: white; font-family: &quot;calibri&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif; font-size: 16px;">The fees are £20 per workshop (£10 for PhD students). A limited number of bursaries to cover travel expenses for students are also available.</span><br /> <br style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 16px;" /> <span style="background-color: white; font-family: &quot;calibri&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif; font-size: 16px;">Full details about the workshops, as well as the online booking system can be found <a href="http://www.priorexposure.org.uk/">here</a>.</span><br /> <br style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 16px;" /> <span style="background-color: white; font-family: &quot;calibri&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif; font-size: 16px;">Announcements and news about these workshops are also made using our twitter account: @priorexposure</span><br /> <span style="background-color: white; font-family: &quot;calibri&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif; font-size: 16px;"><br /></span> <span style="background-color: white;"><span style="font-family: &quot;calibri&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;">Apologies for the delay in announcing these here - owing to&nbsp;building work over the Summer and the usual holiday absences I wasn't able to post details earlier</span></span><br /> <span style="background-color: white; font-family: &quot;calibri&quot; , &quot;arial&quot; , &quot;helvetica&quot; , sans-serif; font-size: 16px;"><br /></span> thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-45131092619724433832016-01-29T15:59:00.002+00:002016-05-09T20:26:40.621+01:00Stop Press: Additional dates for the 2016 Prior Exposure Bayesian Data Analysis workshops<div style="font-family: Helvetica; font-size: 12px; line-height: normal; min-height: 14px;"> <span style="font-kerning: none;"></span><br /></div> <div style="font-family: Helvetica; font-size: 12px; line-height: normal; min-height: 14px;"> <span style="font-kerning: none;"></span><br /></div> <div style="font-family: Times; font-size: 16px; line-height: normal;"> <span style="font-kerning: none;">Places on the Easter <i>Prior 
Exposure&nbsp;</i>(introductory) workshops filled up very quickly and we had to turn away quite a few people. In response we’ve managed to arrange another set of events on 16 and 17 June (again with an optional R bootcamp on June 15th). Booking is now open:</span></div> <div style="font-family: Times; font-size: 16px; line-height: normal; min-height: 19px;"> <span style="font-kerning: none;"></span><br /></div> <div style="color: #551a8b; font-family: Times; font-size: 16px; line-height: normal;"> <span style="font-kerning: none; text-decoration: underline;"><a href="http://www.priorexposure.org.uk/schedule">http://www.priorexposure.org.uk/schedule</a></span></div> <div style="font-family: Times; font-size: 16px; line-height: normal; min-height: 19px;"> <span style="font-kerning: none;"></span><br /></div> <div style="font-family: Times; font-size: 16px; line-height: normal;"> <span style="font-kerning: none;">(Details of the R bootcamp are <a href="http://www.priorexposure.org.uk/media/schedule_R_workshop.pdf"><span style="color: #551a8b; line-height: normal;">here</span></a>)</span></div> <div style="font-family: Times; font-size: 16px; line-height: normal; min-height: 19px;"> <span style="font-kerning: none;"></span><br /></div> <div style="font-family: Times; font-size: 16px; line-height: normal;"> <span style="font-kerning: none;">Unfortunately these extra dates aren't covered by the ESRC funding so we are not able to offer bursaries and have had to raise the booking fee slightly (£15 for students and £25 for others). Workshops 3 and 4 (the more advanced topics) will run later in the year (September) and will offer some bursaries (for UK doctoral students). We are also running everything again in 2017 (our final year before funding runs out).</span></div> thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-66659092739277214292015-07-23T22:29:00.001+01:002015-07-23T22:29:42.497+01:00PLS think twice about partial least squaresOne of the great things about writing a statistics book was finding an excuse to read about dozens of topics that I knew a little about but hadn't got around to studying in depth. Even so, there were a number of topics I ended up missing out on completely (apparently once the book gets to over 900 pages or so they make you leave stuff out). One of those topics is partial least squares (PLS).<br /> <br /> I knew a bit about the technique (but it turns out even less than I thought). I recently came across <a href="https://www.researchgate.net/publication/280083330_On_the_Adoption_of_Partial_Least_Squares_in_Psychological_Research_Caveat_Emptor">an excellent paper on partial least squares</a> by Mikko Rönkkö, Cameron McIntosh and John Antonakis. The main thrust of the paper is simple - partial least squares is a widely used technique outside psychology, and it has been suggested that it should be more widely used within psychology. Rönkkö et al., however, argue that this is probably a bad idea. A very bad idea. Their case rests on two main points. First, that partial least squares is equivalent to a regression model using indicator variables to create weighted composite predictors. Second, that the benefits of partial least squares have been greatly overstated.
In particular, the claim that PLS can deal with measurement error seems simply to be false (as just creating composites from indicator variables can't do this). Worryingly, some implementations of PLS seem to have dangerous properties (notably one with a 100% false positive rate) and PLS generally seems to inflate Type I error for small effects. The latter property may give the impression of attenuating measurement error (but merely provides a bias that may sometimes counteract attenuation arising from measurement error).<br /> <br /> Rönkkö et al.'s paper is, I think, a model of clarity and implies that PLS is going to be of limited value to psychologists. I found the paper particularly interesting because I have mostly seen PLS advocated as a way of dealing with multicollinearity. This makes sense as multicollinearity can reasonably be handled by replacing predictors with composites. The main drawback of PLS, however, is that the composites are derived automatically by the PLS algorithm. This sort of 'black box' solution produces good prediction but can overcapitalise on quirks in the sample and thus may not generalise (especially for small samples). More importantly, the composites may well be uninterpretable. For most psychological applications I'd rather use an interpretable but 'non-optimal' composite (e.g., a simple average of highly correlated predictors) than go down this route.<br /> <br /> For the same reason I'd generally rather not use MANOVA (which finds an optimum linear combination of DVs in your sample). Of the common analytic methods, MANOVA is one of the least well understood in psychology (and I have rarely seen a published application of MANOVA that wouldn't be enhanced by using a different, often simpler, technique).<br /> <br />thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-53354252041697461152015-07-17T09:17:00.001+01:002015-07-21T14:01:03.176+01:00Prior exposure workshops 3 and 4 (Bayesian data analysis for social scientists)<div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> Booking is now open for workshops three and four of our Prior Exposure Bayesian data analysis training (all taking place in Nottingham). The dates are 22 and 23 September 2015.</div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> <br /></div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> These follow on from the first two workshops, but if you have some training in regression (especially multilevel regression) and familiarity with Bayesian statistics, this is roughly where workshop three will start.</div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> <br /></div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> Workshop 3: Introduction to advanced Bayesian data analysis. 
This workshop focuses on advanced probabilistic modeling in Bayesian data analysis, and in particular, Bayesian data analysis using multilevel regression models.</div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> <br /></div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> Workshop 4: Nonlinear and latent variable models. This final workshop focuses on Bayesian latent variable modeling, particularly using mixture models.</div> <div class="p4" style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> </div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> <br /></div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> Further details can be found here:</div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> <br /></div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> <a href="http://www.priorexposure.org.uk/" id="LPlnk639258" title="http://www.priorexposure.org.uk Cmd+Click or tap to follow the link">http://www.priorexposure.org.uk</a></div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> <br /> Fees are £20 per workshop (£10 for PhD students) and some ESRC bursary funding is available for UK social sciences PhD students.<br /> <br /></div> <div style="font-family: Calibri, Arial, Helvetica, sans-serif, 'Apple Color Emoji', 'Segoe UI Emoji', NotoColorEmoji, 'Segoe UI Symbol', 'Android Emoji', EmojiSymbols; font-size: 16px; widows: 1;"> Thom</div> thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-15757152176547958072015-02-09T13:41:00.001+00:002015-02-09T14:48:13.936+00:00Prior exposure: Bayesian data analysis workshops (ESRC Advanced Training Initiative)Mark Andrews and I have just launched the web site for our <i>Prior Exposure </i>Bayesian Data Analysis workshop series. This is part of the <a href="http://www.esrc.ac.uk/funding-and-guidance/funding-opportunities/29788/advanced-training-initiative-2014.aspx">ESRC Advanced Training Initiative</a>.<br /> <br /> Further details are available <a href="http://www.priorexposure.org.uk/">here</a>.<br /> <br /> The first two workshops are available for booking now (though places are filling up quite fast). They are primarily (but not exclusively) aimed at UK social science PhD students (so not just Psychology, but very much also Sociology, Criminology, Politics and other social science disciplines). 
We hope the workshops will also appeal to early career researchers and others doing quantitative social science research (but with little or no Bayesian experience).<br /> <br /> <br /> The ESRC is supporting us with bursary funding for travel and subsistence (see web site for details). These are open to all UK social science PhD students (not just those with ESRC funding), but such funded places are limited.<br /> <br /> We will run similar workshops next year, and if the demand is sufficient we may try and put on additional workshops this year (though maybe I'm being too optimistic!).<br /> <br /> Update: As of writing, the registration cost for each workshop is £10 (for postgrads) and £20 (for others) - the information is buried in the booking link but we'll try and make that clearer ... The workshops are non-profit so this fee is to cover basic running costs (e.g., lunch etc.) and we will try and keep these costs low for subsequent workshops.<br /> <br />thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-41080945190227251942014-07-04T22:38:00.000+01:002014-07-04T22:40:41.930+01:00Guest post: PNAS, facebook and the ethics of online experimentation<i><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">This is a guest blog post by <a href="https://applications.bathspa.ac.uk/staff-profiles/profile.asp?user=academic%5Cmarg1" target="_blank">Gerry Markopoulos</a>. I'm posting it because I think it is an important topic that deserves wider discussion.</span></i><br /> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">Recently, an article was published in the prestigious journal ‘<i>Proceedings of the National Academy of Sciences</i><span style="font-style: normal;">’ (PNAS), titled ‘Experimental evidence of massive-scale emotional contagion through social networks’.</span></span>
The article was published online on the 2<sup>nd</sup> of June, 2014, and it is available <a href="http://www.pnas.org/content/111/24/8788.full" target="_blank">here</a>.</span></span><br /> <div class="MsoNormal"> <span style="font-family: Georgia, 'Times New Roman', serif; font-style: normal;"><br /></span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">I would like to argue that the article needs to be retracted on the basis of violating fundamental ethical principles, that it should not have been considered for publication in the first place (on the basis of the journal's stated principles), and that it could damage the reputation of psychology on an international level. The scientific community's disapproval needs to be made explicit in order to safeguard the public's trust in its work and procedures, especially when human participants are involved.</span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">I am very happy to report that the BPS has responded to the publication in a very timely and unambiguous fashion via a letter to <a href="http://www.theguardian.com/technology/2014/jul/01/facebook-socially-irresponsible" target="_blank">The Guardian</a></span><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">. The letter makes clear what ethical principles were violated, and how. It would have been perhaps more practical to list the principles that were not violated. (It might have been a far shorter list!) Understandably, I presume the BPS cannot go any further than merely condemning the article, considering the authors are based in the USA, and PNAS is a US journal. I will not pretend to know anything about the legal aspect of the situation, but legality is largely irrelevant. When psychologists and other scientists propose projects to ethics committees, they are not looking for legal loopholes. They are looking to protect their participants from any harm whether or not there is a legal provision for it. 
That is partly the role of ethics committees, to anticipate and try to predict how research could violate wide ethical principles such as ‘maximising benefits and minimising harm’, especially where innovative research is concerned.</span><br /> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">The only mention of ethical issues in the PNAS article is the following:</span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">“<i>…<span style="background: white; color: #333333;">it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research</span></i><span style="background: white; color: #333333; font-style: normal;">.</span>”</span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">One can easily see that this constitutes consent, but certainly not informed consent. It is my understanding that participants were not aware they were taking part in a psychological study, they had not been informed of the nature of the study, they had not been informed of their right to withdraw their data, and they were not debriefed. Furthermore, no steps were taken to ensure the continuing wellbeing of the participants considering the sensitive nature of the experimental manipulation – according to the article, the manipulation led to a successful induction of negative emotions. There were no exclusion criteria protecting vulnerable populations (such as depressive or emotionally unstable participants). Such issues would be raised by any informed ethics committee. Why weren’t these issues raised? One can only assume that no ethics committee approved the project. This is the only logical explanation available. I personally contacted the first author on the 29<sup>th</sup> of June requesting clarifications, but – perhaps not surprisingly - I never received a response. At this point, I should concede that fully informed consent could compromise the outcome of the study, but such a (perhaps) necessary omission ought to be counteracted with an extensive and carefully-worded debrief minimising the risk of potential harm to participants. Having said that, in the quote above, the authors claim that accepting the terms and conditions constitutes informed consent, which it certainly does not.</span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">The issue of the ethics committee approval (or lack thereof) leads me to what we can do as individuals to protect psychology and the reputation of the scientific community. 
According to the <a href="http://www.pnas.org/site/authors/journal.xhtml" target="_blank">PNAS website</a>&nbsp;when research with human participants is involved:</span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">“<span style="background: white; color: #333333;"><i>Authors must include in the Methods section a brief statement identifying the institutional and/or licensing committee approving the experiments. For experiments involving human participants, authors must also include a statement confirming that informed consent was obtained from all participants</i></span><span style="background: white; color: #333333;">.</span>”</span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">In this case, PNAS appears to have ignored its own rules. On this basis, I contacted PNAS through their <a href="http://intl.pnas.org/site/misc/contact.xhtml)" target="_blank">contact page</a>&nbsp;<a href="https://www.blogger.com/blogger.g?blogID=27862247" name="_GoBack"></a>firmly requesting the retraction of the article.</span></div> <div class="MsoNormal"> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></div> <span class="Apple-style-span" style="color: #333333; font-family: Georgia, 'Times New Roman', serif; line-height: 18px;">A few days after my email, I received a response from PNAS directing me to an <a href="http://www.pnas.org/content/early/2014/07/02/1412469111.full.pdf+html" target="_blank">editorial piece </a>on this issue. The editorial confirmed earlier suspicions that no committee had scrutinized the research proposal. There had never been a research proposal to begin with. When the article was submitted for publication, the authors stated:</span><br /> <div> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></span> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">“<i>Because this experiment was conducted by Facebook, Inc. 
for internal purposes, the Cornell University IRB [Institutional Review Board] determined that the project did not fall under Cornell’s Human Research Protection Program</i><span style="font-style: normal;">”.</span></span></span></div> <div> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span style="font-style: normal;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></span></span></div> <div> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span style="font-style: normal;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">To summarise the editorial response, it says:</span></span></span></div> <div> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span style="font-style: normal;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></span></span></div> <div> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span style="font-style: normal;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">“We were aware that no ethics committee had approved the project or the data collection method, we were aware that participants were not given the opportunity to opt out, but the company that collected the data is not obligated to adhere to such rules. Therefore, we published the data. However, we are concerned”.</span></span></span></div> <div> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span style="font-style: normal;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></span></span></div> <div> <span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span style="font-style: normal;">At this stage, I would like to reiterate this is not an issue of legality. It is an issue of ethics where loopholes have no place.&nbsp;</span></span><span class="Apple-style-span" style="color: #333333; line-height: 18px;">Now more than ever it is obvious that the article needs to be retracted.</span></span></div> <div> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;"><br /></span></span></div> <div> <span class="Apple-style-span" style="color: #333333; line-height: 18px;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">Unfortunately, we cannot undo the harm that potentially has been caused by this research. Considering the sample size (over 600,000) and the reported significant effect of the experimental manipulation, it is possible that vulnerable participants were harmed. What we can do is demonstrate to the public that this type of research is not representative of what we do, and that we are as indignant as they are. Can we stop private companies from conducting research in secret? I would think this is unlikely. Secret research cannot be overseen by definition. However, scientists and scientific journals should actively stay away from data collected under questionable circumstances. Publication means condoning the research process from the design stage to the write-up stage. The condoning of unethical data collection methods (through publication) only encourages such practices. 
This is where a difference can be made, and that is why retraction of the specific article is essential.</span></span></div> thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-86934254323835766102013-11-09T14:06:00.000+00:002017-05-31T22:18:01.119+01:00Multicollinearity and collinearity (in multiple regression) - a tutorial<em>This blog post was written for undergraduate research methods teaching. I have therefore tried to keep everything relatively simple and equation-free. The content is based loosely on more detailed material in my book <a href="http://www.amazon.co.uk/gp/product/0230577180/ref=as_li_tf_tl?ie=UTF8&amp;camp=1634&amp;creative=6738&amp;creativeASIN=0230577180&amp;linkCode=as2&amp;tag=psychologic05-21" target="_blank">Serious stats</a>. </em> <br /> <em><br /></em> <br /> <h2> What are collinearity and multicollinearity?</h2> Collinearity occurs when two predictor variables (e.g., <em>x</em><sub>1</sub> and <em>x</em><sub>2</sub>) in a multiple regression have a non-zero correlation. Multicollinearity occurs when more than two predictor variables (e.g., <em>x</em><sub>1</sub>, <em>x</em><sub>2</sub> and <em>x</em><sub>3</sub>) are inter-correlated. <br /> <h2> How common is collinearity or multicollinearity?</h2> If you collect observational data, or data from a non-experimental or quasi-experimental study, collinearity or multicollinearity will nearly always be present. The only studies where it won’t tend to occur (unless you are very, very lucky) are certain designed experiments – notably fully balanced designs such as a factorial ANOVA with equal <em>n</em> per cell. Thus the most important issue is not whether multicollinearity or collinearity is present but what impact it has on your analysis. <br /> <h2> Do collinearity and multicollinearity matter?</h2> I find it helpful to break down collinearity and multicollinearity into three situations, of which only the third is common.<br /> <em><br /></em> <em>[Note: From here on I’ll just use the terms collinearity and multicollinearity more or less interchangeably for convenience. This is also common practice in the literature.]</em><br /> <em><br /></em> <em>1. Perfect collinearity</em>. If two or more predictors are perfectly collinear (you can perfectly predict one from some combination of the others) then your multiple regression software will either not run (e.g., return an error) or it will drop one or more predictors (and possibly also return an error). Perfect collinearity happens a lot by accident (e.g., if you enter two versions of an identical variable such as the mean score and total score of a scale, or dummy codes for every category of a categorical predictor).<br /> <em><br /></em> <em>2. Almost perfect collinearity</em>. If the correlation between predictors isn’t quite perfect (but is very close to <em>r</em> = 1) then this can sometimes cause “estimation problems” (meaning that the software you are using might not be able to run the analysis or might generate incomplete output). Most modern software can cope with this situation just fine – but even then estimates will be hard to interpret (e.g., be implausibly high or low and have very large standard errors). 
If your software can cope with this situation then technically the estimates will be correct and you just have an extreme form of situation 3 below.<br /> <em><br /></em> <em>3. Multicollinearity</em>. As already noted, most situations in which you would use regression (apart from certain designed experiments) involve a degree of multicollinearity. So if a method section ever claims that “multicollinearity is not present”, generally this will be untrue. A better statement to make is something along the lines of “there were no problems with multicollinearity”. However, generally this will also be untrue. For this to be true, the degree of multicollinearity needs to be very small or the sample size very large (or both). Neither is common in psychological research.<br /> <br /> To understand why, it is necessary to consider what impact multicollinearity has on:<br /> <br /> i) the overall regression model,<br /> ii) estimates of the effects of individual predictors. <br /> <h3> The good news</h3> Multicollinearity has no impact on the overall regression model and associated statistics such as <em>R</em><sup>2</sup>, <em>F</em> ratios and <em>p</em> values. It also should not generally have an impact on predictions made using the overall model. (The latter might not be true if the predictor correlations in the sample don’t reflect the correlations in the situation you are making predictions for – but that isn’t really a multicollinearity issue, rather a consequence of having an unrepresentative sample.) <br /> <h3> The bad news</h3> Multicollinearity is a problem if you are interested in the effects of individual predictors. This turns out to be a major issue in psychology because this is probably the main reason that psychologists use multiple regression: to tease apart the effects of different predictors. There are two main (albeit related) issues here: the first is a philosophical problem and the second is a statistical one.<br /> <br /> <i>The philosophical issue</i>. If two or more predictors are correlated then it is inherently difficult to tease apart their effects. For instance, imagine a study that looks at the effect of happiness and depression on alcohol consumption. If happiness is highly correlated with depression (e.g., <em>r</em> = -.90) then regression commands in packages such as SPSS or R will come up with estimates of the unique effect of happiness on alcohol consumption (by holding depression constant). This estimate of the effect of happiness ignores their shared variance. However, depression isn’t generally constant if happiness varies; they tend to vary together.<br /> <br /> The philosophical issue is this: is it meaningful to interpret the unique impact of happiness if happiness and depression are so intimately related? Although this philosophical issue is potentially important, researchers often tend to ignore it. The main advice here is to think carefully before trying to interpret individual effects if there is a high level of multicollinearity in your model.<br /> <br /> <i>The statistical issue.</i> The underlying statistical issue with multicollinearity is fairly simple. The unique effects of individual predictors are estimated by holding all other predictors constant and thus ignoring any shared variance between predictors. A regression model uses information about the variation in the predictors and the associated variation in the outcome (the <em>y</em> variable) to calculate estimates. 
The larger <i>n</i> (the number of participants or cases, i.e., the sample size), the more information you have and the greater the statistical power of the analysis. You also get more information from cases or participants that are more variable relative to each other. So a participant who is more extreme on a predictor has a bigger impact on the analysis than one that is less extreme. If multicollinearity is present then each data point tends to contribute less information to the estimate of individual effects than it does to the overall analysis. (Holding the effects of other predictors constant effectively reduces the variability of a predictor and thus reduces its influence.)<br /> <br /> Multicollinearity therefore reduces the effective amount of information available to assess the unique effects of a predictor. You can also think of it as reducing the effective sample size of the analysis. For instance, in the happiness and depression example, happiness and depression (where <em>r</em> = -.90) share (-.90)<sup>2</sup> = .81 or 81% of their variance. Thus the tests of their unique effects use only (100 – 81) = 19% of the information (about a fifth) in the overall model and thus the effective sample size is over 5 times smaller.<br /> <br /> Thus the fundamental statistical impact of multicollinearity is to reduce effective sample size and thus statistical power for estimates of individual predictors. It is worth looking at each of the main statistics in turn: <blockquote> <em>b</em> (the unstandardized slope) – this parameter estimate remains unbiased, but is estimated less accurately when multicollinearity is present (i.e., its standard error is larger)</blockquote> <blockquote> <em>β</em> (the standardized slope) – this parameter estimate remains unbiased, but is estimated less accurately when multicollinearity is present (i.e., its standard error is larger)</blockquote> <blockquote> <em>t</em> (the <em>t</em> test statistic) – this is the ratio of the estimate to its standard error and thus will be smaller (and further from statistical significance)</blockquote> <blockquote> 95% CI (the 95% confidence interval) – this is the estimate plus or minus approximately two standard errors, thus the CI will be wider (reflecting greater uncertainty in the estimate)</blockquote> <i>Stability of estimates</i>. Many textbooks refer to problems with the stability of estimates when multicollinearity is present. What this means is that estimates will jump around a lot if you add or drop predictors, or between fits of the same model in different data sets. This isn’t really a separate issue – just a logical consequence of having a smaller effective sample size. Any estimate based on a small effective sample size will be unstable in this sense. Statistics from small (effective) samples tend to be less similar to the population values than those from large samples. <br /> <h2> Detecting problems with multicollinearity</h2> <em>Two predictors.</em> A natural starting point is to look at the simple correlations between predictors. If you have only two predictors this is sufficient to detect any problems with collinearity: if the simple correlation between the two predictors is zero then there is no problem. If the correlation is low then collinearity is probably just a minor nuisance – but will still reduce statistical power (meaning that you are less likely to detect an effect and the effect will be measured less accurately). A larger correlation indicates a more serious problem. 
Working out how severe the problem is, however, is not that easy and it is generally a good idea to use a collinearity diagnostic such as tolerance or VIF for this purpose.<br /> <em><br /></em> <em>More than two predictors.</em> With more than two predictors the simple correlations between predictors can be misleading. Even if they are all very low (and unless they are exactly zero) they could conceal important multicollinearity problems. This will happen if the predictors’ correlations don’t overlap – and thus have a cumulative effect (e.g., if the variance that <em>x</em><sub>1</sub> shares with <em>x</em><sub>2</sub> is a different bit of the variance in <em>x</em><sub>1</sub> from the bit it shares with <em>x</em><sub>3</sub>).<br /> <br /> Fortunately there are a number of multicollinearity diagnostics that can help detect problems. I will focus on perhaps the simplest of these: <em>tolerance</em> and <em>VIF</em>.<br /> <em><br /></em> <em>Tolerance.</em> One way to think of tolerance is that it is the proportion of unique information that a predictor provides in the regression analysis. To calculate the tolerance you first obtain the proportion of predictor variance that overlaps with the other predictors. You then subtract this number from 1. For example, if the other predictors explain 60% of the variance in <em>x</em><sub>1</sub> then the tolerance of <em>x</em><sub>1</sub> (in a model with those predictors) is 1 – .6 = .4. Tolerance of 1 indicates no multicollinearity (for that predictor) and tolerance values approaching 0 indicate a severe multicollinearity problem.<br /> <br /> Tolerance indicates how much information multicollinearity has cost the analysis. Thus tolerance of .4 indicates that parameter estimates, confidence intervals and significance tests for a predictor are only using 40% of the available information.<br /> <em><br /></em> <em>VIF. </em> The VIF statistic of a predictor in a model is merely the reciprocal of its tolerance (i.e., VIF = 1/tolerance). So if tolerance is .4 then the VIF is 1/.4 = 2.5. VIF stands for&nbsp;<i>variance inflation factor</i>. This number indicates how much larger the error variance for the unique effect of a predictor is (relative to a situation where there is no multicollinearity). The VIF can also be thought of as the factor by which your sample size needs to be increased to match the efficiency of an analysis with no multicollinearity. So a VIF of 2.5 implies that you’d need a sample size 2.5 times larger than the one you actually have to overcome the degree of multicollinearity in your analysis.<br /> <h2> Remedies</h2> The best remedy for multicollinearity is either: i) to design a study to avoid it (e.g., using an appropriate experimental design), or ii) to increase your sample size to make your estimates sufficiently accurate. If these are not feasible there are other options that may be helpful (but which can also be harmful).<br /> <br /> <em>Dropping a predictor.</em> Generally this is a bad option (though many text books recommend it). The reason it is usually a bad idea is that it hides the problem rather than solving it. For instance, if <em>x</em><sub>1</sub> and <em>x</em><sub>2</sub> are moderately correlated it is quite possible that each of them significantly predicts <em>y</em> on its own but neither unique effect is statistically significant when both are in the model. 
Dropping <em>x</em><sub>1</sub> will thus make it look as though <em>x</em><sub>2</sub> is predicting <em>y</em> on its own (and vice versa). The true state of affairs is that they are jointly predicting <em>y</em> and that their precise individual contribution to this joint prediction is unknown.<br /> <br /> Worse still, dropping a predictor can be actively misleading (e.g., if you select the predictor you drop so that the final model supports your favoured hypothesis or theory). Sometimes dropping a predictor is a somewhat reasonable thing to do. One situation is when two variables are measuring more or less the same thing. For instance, if you have two measures of trait anxiety and they are highly correlated, it may well be reasonable to drop one of them (though in this case there are still other, better options). Another situation is when you believe one variable is just a proxy for the other. For instance, age and school year are highly correlated and both are predictors of arithmetic ability. In this case age may just be a proxy for school year (on the assumption that arithmetic is taught rather than acquired spontaneously as you age).<br /> <em><br /></em> <em>Combining or transforming predictors.</em> If you have highly correlated predictors it is usually better to combine them in some way rather than drop them from the analysis. There are many ways that predictors could be combined (and statistical procedures such as factor analysis exist that are designed to do exactly this). However, even crude methods such as adding predictors together (or averaging them) can be surprisingly effective (though it may be necessary to rescale them if they are not on the same scale). Other options may also suggest themselves (e.g., using the difference between predictors or some weighted combination) depending on the theory motivating your model.<br /> <em><br /></em> <em>Do nothing.</em> Sometimes the best thing to do is nothing. You may just wish to honestly report that a set of predictors jointly predicts some outcome and that more data are required to tease their individual effects apart. Alternatively, you may not care that some of your predictors are highly correlated. For instance, if you have some predictors of theoretical interest and some that are not (e.g., because they are potential confounding variables), as long as the predictors you are interested in have high tolerance it won’t matter if the other predictors have low tolerance. 
Such predictors are sometimes called nuisance variables – and what matters is that you have dealt with them in some way (not whether you have estimated them accurately).<br /> <h2> Conclusions</h2> There are four main conclusions to take from this tutorial:<br /> <br /> <blockquote class="tr_bq"> 1) Multicollinearity is nearly always a problem in multiple regression models</blockquote> <blockquote class="tr_bq"> 2) Even small degrees of multicollinearity can cause serious problems for an analysis if you are interested in the effects of individual predictors</blockquote> <blockquote class="tr_bq"> 3) Small samples are particularly vulnerable to multicollinearity problems because multicollinearity reduces your effective sample size for the effects of individual predictors</blockquote> <blockquote class="tr_bq"> 4) There are no ‘easy’ solutions (e.g., dropping predictors is generally a bad idea) </blockquote> <br /> <h2> Further reading</h2> More detail on multicollinearity can be found in: <a href="http://www.amazon.co.uk/gp/product/0230577180/ref=as_li_tf_tl?ie=UTF8&amp;camp=1634&amp;creative=6738&amp;creativeASIN=0230577180&amp;linkCode=as2&amp;tag=psychologic05-21" target="_blank">Baguley, T. (2012). <em>Serious stats: A guide to advanced statistics for the behavioral sciences</em>. Basingstoke: Palgrave.</a> <br /> <br /> <h2> <b>Update</b></h2> I have a short note on my book <a href="https://seriousstats.wordpress.com/2017/05/31/vif-and-multicollinearity-diagnostics/">blog about getting multicollinearity diagnostics in R</a>.<br /> <br /> <br /> <br />thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-27328200208547321602013-08-06T19:58:00.001+01:002013-08-06T20:06:55.898+01:00Cronbach to the futureOne fascinating thing about working in the area of psychological statistics is how hard it is to move people away from reliance on bad, inefficient or otherwise problematic methods. My own view - informed to some extent by the literature, by experience and by anecdote - is that it isn't sufficient merely to establish that the standard approach is wrong. It isn't even sufficient to provide an obviously superior alternative. You also need to do three other things: i) get the message out to the people using the method, ii) reduce barriers to implementing the method (provide user-friendly software, easy-to-understand tutorials and so forth), and iii) get the new method taught at undergraduate or masters level. A good illustration is the need to provide confidence intervals (CIs) as well as point estimates of statistics. This has been advocated for decades and has only relatively gradually trickled through to standard practice. In addition, CIs are commonly reported only where popular software such as SPSS reports them by default. For instance, few psychology papers report a CI for the correlation coefficient <em>r</em> (probably because it isn't in many introductory texts and isn't part of the default SPSS output).<br /> <br /> A case in point is the problem of internal reliability estimation. There are dozens of papers in the psychometrics literature that have shown that the most popular internal consistency reliability measure, coefficient alpha (or Cronbach's alpha), is seriously flawed. A number of alternative approaches or measures have been proposed that are relatively easy to estimate and have good properties when applied to scales in psychology. However, these measures rarely get used in practice. 
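<br /> <br /> This is a shame, because very little code is needed to obtain a better estimate. As a rough sketch in R (assuming a data frame of item scores called <i>items</i>; the ci.reliability() call follows the approach taken with the MBESS package discussed below, though the exact arguments may differ across package versions):<br /> <br /> <code># Sketch: point estimate and bootstrap CI for McDonald's omega<br /># install.packages("MBESS") # if not already installed<br />library(MBESS)<br /><br /># 'items' is assumed to be a data frame with one column per scale item<br />ci.reliability(data = items, type = "omega", interval.type = "bca", B = 1000, conf.level = 0.95)</code><br /> <br /> 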
The main barriers here are probably awareness of the problem and availability of appropriate software. My guess is that once these barriers are reduced then alternatives to alpha will also get into text books and be more widely taught.<br /> <br /> Tom Dunn (a former PhD student) has just written a paper (co-authored with myself and Viv Brunsden) aiming to change people's attitude to coefficient alpha. This has just been accepted in the British Journal of Psychology. In it we try to summarize with as little jargon as possible the criticisms of coefficient alpha and recommend a simple alternative: McDonald's coefficient omega (McDonald, 1999). Crucially, we also provide a mini-tutorial on calculating omega using R. We chose R mainly because it is free, open source and runs on Mac, PC and Linux systems. A further, major advantage is that the MBESS package will estimate a bootstrap CI for omega. A reliability estimate (of any kind) is pretty useless if presented as a point estimate because it could be measured very imprecisely. In many cases the lower bound of the 95% CI is a more useful guide to whether a test is reliable. The lower bound will usually be conservative but it is better to be safe than sorry in most cases.<br /> <br /> A pre-print of the paper (links to the online version will be added as soon as they are available) can be found <a href="http://www.academia.edu/4182709/From_alpha_to_omega_A_practical_solution_to_the_pervasive_problem_of_internal_consistency_estimation" target="_blank">here</a>. The R script that runs the example in the paper can be accessed <a href="http://www2.ntupsychology.net/~baguley/R%20code%20for%20omega.R" target="_blank">here</a>. The data sets (in a zipped folder called "omega example") can be downloaded <a href="http://www2.ntupsychology.net/~baguley/omega%20example.zip" target="_blank">here</a>. Unzip this folder and put it on your desktop. (If you move it elsewhere you need to specify the path in the R code or change the R working directory to the folder where the data files are located.) You can also download the .csv formatted data file directly from <a href="http://www2.ntupsychology.net/~baguley/SES.csv" target="_blank">here</a>.<br /> <br /> <em>References</em><br /> <em><br /></em> Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. <i>Psychometrika, 16</i>, 297–334.<br /> <br /> Dunn, T., Baguley, T., &amp; Brunsden, V. (2013, in press). <a href="http://onlinelibrary.wiley.com/doi/10.1111/bjop.12046/abstract" target="_blank">From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation</a>. <em>British Journal of Psychology</em>.<br /> <br /> McDonald, R. P. (1999). <i>Test theory: A unified approach</i>. Mahwah, NJ: Lawrence Erlbaum Associates.<br /> <br />thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-10374316899409913572013-06-21T16:32:00.003+01:002013-11-08T11:09:45.839+00:00Why faking data is bad ...It never occurred to me until today to write a post about why faking data is bad. However, I noticed an interesting exchange on <a href="https://en.wikipedia.org/wiki/Murray_Gell-Mann" target="_blank">Andrew Gelman's</a> blog (see the <a href="http://andrewgelman.com/2013/06/16/evilicious-why-we-evolved-a-taste-for-being-bad/" target="_blank">comments on this post about Marc Hauser</a>). 
One commenter argued that it was not clear that Hauser had faked his data (though I don't think that is plausible given the results of the investigations and Hauser's dismissal from Harvard), and - more interestingly - that any data fraud was not serious because his supposedly fraudulent work has been replicated. This argument is in my opinion deeply flawed.<br /> <br /> Andrew Gelman's response was:<br /> <blockquote class="tr_bq"> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;">To a statistician, the data are substance, not form.</span></blockquote> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;">I would generalize that to all of science. We'd certainly be better off thinking about data collection and analysis as integral to doing science rather than merely a necessary step in publishing papers, getting tenure or generating outputs for government research assessments.</span><br /> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;"><br /></span> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;">Replication in this context just means getting the direction of effect correct.&nbsp;</span><span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;">When you fake data you mess up the scientific record in multiple ways. A replication doesn't solve this or remove the distortion. For instance, a common problem in meta-analysis is that people republish the same data two or more times (e.g., writing it up for different journals or through publishing interim analyses or salami slicing). This can be very hard to spot through accidental or deliberate obscuring of data sources. The upshot is that any biases or quirks of the data are magnified in the meta-analysis. Publishing fake data is worse than this because the biases, quirks, effect sizes and moderator variables are made up. Even publishing an incorrect effect size could be hugely damaging. In fact, most problems with medical and other applied research are related to effect size rather than presence of an effect.</span><br /> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;"><br /></span> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;">Furthermore, the replication defense (albeit flawed in practice) has additional problems. One is that the replication probably isn't independent of the fake result. It is hard to publish failed replications - and researchers will be more lenient in their criteria to decide that they have replicated an established effect (e.g., using a one tailed test or re-running a failed replication on the assumption that they were unlucky). The most obvious problem is that you can't be sure in advance that the effect is real unless you run the experiment in the first place. I have run several experiments that have failed to show an effect or have gone in the opposite direction from what I believe.</span><br /> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;"><br /></span> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;">Faking data is a bad idea - even if you are remarkably insightful (and undoubtedly Hauser was clever) - the real data are a necessary part of the scientific process. 
Making up data distorts the scientific record.</span><br /> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;"><br /></span> <span class="Apple-style-span" style="font-family: 'Lucida Grande', sans-serif; font-size: 13px;"><br /></span> <span class="Apple-style-span" style="color: #444444; font-family: 'Lucida Grande', sans-serif; font-size: 13px;"><br /></span>thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-86759126025969086212013-06-13T12:55:00.001+01:002013-06-13T12:55:24.463+01:00Serious stats: using multilevel models to get accurate inferences for repeated measures ANOVA This article from my other blog may be of interest to readers of this blog: <a href="http://seriousstats.wordpress.com/2013/04/18/using-multilevel-models-to-get-accurate-inferences-for-repeated-measures-anova-designs/">http://seriousstats.wordpress.com/2013/04/18/using-multilevel-models-to-get-accurate-inferences-for-repeated-measures-anova-designs/ </a> thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-57784031600680425622013-04-21T20:50:00.000+01:002013-05-05T21:06:26.185+01:00Neuroscience, statistical power and how to increase itThere has been quite a bit of buzz recently about the <a href="http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html">Button et al. Nature Reviews Neuroscience paper on statistical power</a>. Several similar reviews have been published in psychology and other disciplines and come to broadly the same conclusion - that most studies are underpowered.&nbsp;The main difference with the Button et al. study is that they don't just find that typical studies are underpowered to detect the average size of effect in a field, but they find extremely low power in neuroscience research (around 20%, and below 10% for some subfields). Contrast this with a typical review from psychology and related disciplines. Sedlmeier and Gigerenzer (1989, Table) report power to detect a medium effect size ranging from 37% to 89%. David Clark-Carter (1997) reviewed papers in the British Journal of Psychology and found power to detect a medium effect of 59%. Thus the power of typical research in psychology is not that high, but (if we make fairly reasonable assumptions about the size of typical effects in the discipline) estimates appear to be around 60% rather than the 20% found in the Button et al. paper. What caught my interest, however, was some of the responses to the publication in blogs and blog comments. For example, one of the comments on <a href="http://phenomena.nationalgeographic.com/2013/04/10/neuroscience-cannae-do-it-capn-it-doesnt-have-the-power/?preview=true">Ed Yong's piece</a> stated: <blockquote>Another argument for parallel recording. Traditional, one-neuron-at-a-time neurophysiological papers study 10s of neurons. Multi-electrode studies have 100s or 1000s of neurons. Enough power? 
Maybe not, but way more power than single neuron recording.</blockquote> A similar sentiment arises in <a href="http://computingforpsychologists.wordpress.com/2013/04/11/comment-on-the-button-et-al-2013-neuroscience-power-failure-article-in-nrn/">Matt Wall's piece</a>: <blockquote>MRI scanners have significantly improved in the last ten years, with 32 or even 64-channel head-coils becoming common, faster gradient switching, shorter TRs, higher field strength, and better field/data stability all meaning that the signal-to-noise has improved considerably. This serves to cut down one source of noise in fMRI data – intra-subject variance. The inter-subject variance of course remains the same as it always was, but that’s something that can’t really be mitigated against, and may even be of interest in some (between-group) studies. On the analysis side, new multivariate methods are much more sensitive to detecting differences than the standard mass-univariate approach.</blockquote> <p>Matt's piece is thoughtful and I would agree with much of what he writes, but the idea that increasing observations within a person will do much to resolve the problem is probably not correct (and for reasons that Matt mentions). To understand why, consider the typical nature of the experimental designs being used. As I understand it there are essentially two main types of design: a nested repeated measures design or a factorial design with fully crossed random effects. There are many variants (e.g., additional layers of nesting, additional fully crossed random factors), but the aforementioned characteristics capture the characteristics of most of the designs I'm familiar with in cognitive neuroscience (and possibly in many other areas of neuroscience).</p> <p>In a nested repeated measures design there are <em>m</em> measurements within each of <em>n</em> persons. The multiple measurements are correlated in some way so - in general - the design has an effective sample size that is less than <em>N</em> (where <em>N</em> = <em>n</em> * <em>m</em>). It turns out that for most such designs the limiting factor in power and precision is <em>n</em> and not <em>m</em> or <em>N</em>. </p> <p>This isn't always true, but generally experimental designs get refined quite quickly to reduce the impact of sources of error in the repeated measurements. This could be by increasing the number of trials, by tightening up the experimental procedures (e.g., instructions, quality of materials) or by technical advances that reduce measurement error for each measurement occasion. Once you get measurement error per trial moderately low, improving measurement error further has very little impact on power. That's because the error at each measurement occasion includes transient error that can't really be eliminated (many behaviours are just inherently variable from occasion to occasion) and because as you reduce these errors the other sources of error in the study become the main limiting factors on power.</p> <p>For example, when I was a PhD student many reaction time experiments used computers with dodgy clocks that couldn't time more accurately than 1/60th of a second or around 17 ms (and perhaps many still do). If you are looking for a priming effect of, say, 30 milliseconds this would seem like a major problem. However, you can get pretty accurate inferences without much bias or loss of power as long as the variability of the RTs is fairly large - which it generally is (Ulrich & Giray, 1989). 
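<br /><br />A back-of-the-envelope calculation makes the general point. Under a simple two-level model the variance of a condition mean is the between-person variance divided by <em>n</em> plus the trial-level variance divided by <em>n</em> * <em>m</em>, so adding trials only ever shrinks the second term. A minimal R sketch (the variance components are invented purely for illustration):<br /><br /><code># Sketch: precision of a condition mean in a two-level (trials within people) design<br /># var.person and var.trial are made-up values for illustration<br />se.mean &lt;- function(n, m, var.person = 1, var.trial = 4) {<br />&nbsp;&nbsp;sqrt(var.person / n + var.trial / (n * m))<br />}<br /><br />se.mean(n = 20, m = 10) # baseline<br />se.mean(n = 20, m = 1000) # 100 times the trials: only a modest gain<br />se.mean(n = 80, m = 10) # 4 times the people: roughly halves the standard error</code><br /><br />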
For most neuroscience work involving humans the limiting factors in power (once you are dealing with a reasonably refined experimental set-up) are therefore related to <em>n</em>. A further consideration is that top level <em>n</em> generally needs to be in the 30-50 range or (preferably) greater just to get vaguely reasonable estimates of the variances and covariances if you are dealing with data sampled from approximately normal distributions. Smaller samples also make the study more vulnerable to an atypical 'outlier' at the person level (e.g., a participant using a weird strategy or responding randomly) or to selective bias by the experimenters (dropping a 'noisy' participant because they go against the hypothesis). Having small <em>n</em> at the top level may also make a focus on statistical significance rather than on interval estimates of effects more attractive (because it reduces precision of measurement). In other words it encourages studies that find 'evidence' of an effect and discourages focus on accurate estimates of the size of an effect.</p> <p>For fully crossed random factor designs the situation is worse. In these designs you sample both people and stimuli (e.g., faces, words, etc.) from a large (conservatively assumed to be infinite) population. The limiting factor on power now probably depends not on <em>n</em><sub>1</sub> (the number of people) or <em>n</em><sub>2</sub> (the number of stimuli) but on the smaller of <em>n</em><sub>1</sub> and <em>n</em><sub>2</sub> (assuming you want to make inferences that generalise to people and stimuli not in your experiment). Thus having 1000 people has little effect on power if your study uses only two faces (and you want to make general inferences about face perception rather than perception of those two faces). This is a slight oversimplification - as it assumes that the stimuli and people are equally variable in terms of what you measure - however it is a good rule of thumb unless variability in either people or stimuli is large enough to swamp the other source.</p> <p>There is also an important caveat here - I'm assuming that you do the statistics correctly. Many, many studies still analyse fully crossed random factor designs as if they are nested, resulting in spuriously high power (see <a href="http://psychologicalstatistics.blogspot.co.uk/2012/06/stimuli-as-fixed-effect-fallacy.html">here for an earlier blog post on this</a>). </p> <p>This analysis should hold whenever: i) the basic experimental procedure is fairly well-refined, ii) variability between people (or stimuli in appropriate designs) on the measures of interest is non-negligible. Thus it should hold more often than not in psychology and related areas of neuroscience. There are undoubtedly subfields in which it won't hold (e.g., some areas of vision research where <em>n</em> = 2 studies are common because individual differences on the crucial effects are low). </p> <em>Postscript</em> <p>One objection to my conclusion is this: if neuroscience power is limited by the number of participants and the number of stimuli, why do small samples persist? This is a good question. I offer three main answers: i) As with psychology (where power is also generally low, remember) you can have low power for each test if you have multiple tests. Maxwell (2004) pointed out that a typical 2 x 2 factorial design might only have 50% power per test but that means an 87.5% chance of at least one significant result. 
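<br /><br />The arithmetic behind that figure (assuming the three tests in a 2 x 2 design - the two main effects and the interaction - are independent, which is only an approximation) is simply:<br /><br /><code>1 - (1 - 0.5)^3 # = 0.875, i.e., an 87.5% chance of at least one significant test</code><br /><br />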
Thus low power generally produces something statistically significant (though it also predicts that replications will generally fail to show consistent patterns of statistical significance), ii) researcher degrees of freedom (see Simmons et al., 2011), and iii) many research teams run many small studies (e.g., undergraduate and masters projects) so (in some cases) there are many unreported studies with null results.</p> <em>References</em> <p>Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: causes, consequences, and remedies. <em>Psychological Methods, 9</em>, 147–63.</p><p>Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. <em>Psychological Science, 22</em>, 1359-66.</p> <p>Ulrich, R., & Giray, M. (1989). Time resolution of clocks: Effects on reaction time measurement - Good news for bad clocks. <em>British Journal of Mathematical & Statistical Psychology, 42</em>, 1-12.</p>thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-71623930950335643082013-04-10T21:38:00.002+01:002013-04-10T21:39:21.828+01:00Reflecting on the end of history illusion illusion<span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">A while back Jon Sutton at <a href="http://www.bps.org.uk/publications/psychologist/psychologist" target="_blank">The Psychologist</a> asked my opinion on the end of history illusion. This was sparked by&nbsp;an interesting&nbsp;<a href="http://www.wjh.harvard.edu/~dtg/Quoidbach%20et%20al%202013.pdf" target="_blank">Science paper by Quoidbach, Gilbert and Wilson</a>. Blogger and mathematician Jordan Ellenberg had written a blog post arguing that the paper makes a mistake: </span><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">"</span><span class="Apple-style-span" style="line-height: 22px;"><span class="Apple-style-span" style="font-family: Georgia, 'Times New Roman', serif;">a somewhat subtle mistake, but a bad mistake, and one which kills a big chunk of the paper"</span><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">.</span></span><br /> <span class="Apple-style-span" style="line-height: 22px;"><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><br /></span></span><span class="Apple-style-span" style="line-height: 22px;"><span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">Jon wanted a second opinion, and after a bit of reading I replied that Ellenberg's criticisms were valid. I meant to blog about it at the time but got caught up in other things. Consequently I missed the <a href="http://bps-research-digest.blogspot.co.uk/2013/01/the-end-of-history-illusion-illusion.html" target="_blank">BPS research digest piece</a> on it.&nbsp;</span></span><br /> <span class="Apple-style-span" style="color: #555555; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; font-size: 13px; line-height: 22px;"><br /><span class="Apple-style-span" style="color: black; font-family: 'Trebuchet MS', sans-serif; font-size: small;">The reason for writing this blog post is because the flaw that Ellenberg spotted is quite interesting in its own right and because both the description by Ellenberg and the description in the Research Digest article probably don't explain it clearly enough for some readers to appreciate. 
Ellenberg's piece is (I hasten to add) crystal clear but relies on a reader being comfortable with the formal, mathematical approach he takes (which many psychologists won't be). The Research Digest description just gives the brief gist (with a link to Ellenberg for the full picture). Here is my belated attempt at a psychologist-friendly interpretation with no formal notation - and as little maths as possible.</span></span><br /> <br /> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif; line-height: 22px;">According to the end of history illusion people underestimate how much they will change in the future. For example, someone asked to predict how their personality would change in the next ten years would come up with a prediction closer to their original position than their actual position. Quoidbach et al. tested this mainly by asking people to predict future values on some psychological variable (e.g., a personality test score) and then showing that actual change is much greater than the difference between the original and predicted scores. This seems highly plausible, but Ellenberg pointed out that the difference in the predicted and original scores is a different quantity from the expected (absolute) change in scores.</span><br /> <br /> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif; line-height: 22px;">Why is this? Perhaps the easiest way to understand is to work through a simple example. Imagine that my extraversion score is 50 on a scale that goes from 0 (extremely introverted) to 100 (extremely extraverted). A researcher then asks me to predict my extraversion score in 10 years time. I, being a keen observer of human nature (bear with me on this if you know me - it is just an example), am aware that personality is not fixed and judge that I am likely to change quite a bit - say 15 points - on the scale. However, I might get more extraverted or I might get more introverted (depending on how life treats me over the next ten years). Given that I'm in the middle of the scale, I could end with a score of 35 or a score of 65. Thus I predict that my extraversion score after 10 years will be (35 + 65)/2 = 50. It looks as though I've predicted zero change, when what I've done is give the best prediction I can (one that minimizes my prediction error). Had I instead been asked to give the absolute change I expected, my answer would have been different. It would have been (15 + 15)/2 = 15 (not zero).</span><br /> <br /> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif; line-height: 22px;">Although the example is simple it captures the essence of the problem. Commenters on Ellenberg's blog looked again at the raw data that Quoidback et al. provided. According to their analyses the end of history illusion largely disappears when analyzed correctly (though only some of the data sets support such a reanalysis). Thus if the end of history illusion effect exists (and the basic premise seems highly plausible) it is quite probably a much smaller and more fragile effect than originally thought. That makes sense to me - because I'm not sure that such a bias could be both pervasive and large in the face of the counter-evidence available to people about past change in themselves and change in others.</span><br /> <br /> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif; line-height: 22px;">My continued interest in the effect is slightly different. 
There seems to be a cognitive illusion at work here - one that makes the difference between the original score and predicted score appear to be a good measure of an entirely different quantity - the expected absolute change in score ...</span><br /> <br /> <br /> <br /> <br /> <br /> <br />thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-46292603752184873712013-01-28T21:27:00.001+00:002013-01-28T21:31:32.321+00:00The growth of Bayesian methods in psychology<span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><a href="http://onlinelibrary.wiley.com/doi/10.1111/bmsp.2013.66.issue-1/issuetoc" target="_blank">The British Journal of Mathematical and Statistical Psychology </a>has published a target article (with commentaries and reply) by Andrew Gelman and Cosma Shalizi on philosophy and the practice of Bayesian statistics.</span><br /> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><br /></span> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">Mark Andrews and I introduce the target article with an editorial aimed at providing some background to psychologists who are interested in Bayesian statistics but need a little back story. Our main aim was to try and indicate that the debate about Bayesian statistics has moved on from the frequentist vs. Bayesian argument and on to more interesting territory - illustrated both by the target article and the commentaries.</span><br /> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><br /></span> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">Also I believe that as of writing access is free to the target article and commentary ...</span><br /> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><br /></span> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;"><span class="Apple-style-span" style="line-height: 21px;"></span></span><br /> <div style="margin-left: 24pt; text-indent: -24.0pt;"> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif;">Andrews, M., &amp; Baguley, T. (2013). <a href="http://onlinelibrary.wiley.com/doi/10.1111/bmsp.12004/abstract" target="_blank">Prior approval: The growth of Bayesian methods in psychology</a>. <i>British Journal of Mathematical and Statistical Psychology</i>, <i>66</i>, 1–7. doi:10.1111/bmsp.12004</span></div> <br /> <span class="Apple-style-span" style="line-height: 21px;"></span><br /> <div style="font: normal normal normal 12px/normal Times; margin-bottom: 12px; margin-left: 32px; margin-right: 0px; margin-top: 0px; text-indent: -32px;"> <span class="Apple-style-span" style="font-family: 'Trebuchet MS', sans-serif; font-size: small;">Gelman, A., &amp; Shalizi, C. R. (2013). <a href="http://onlinelibrary.wiley.com/doi/10.1111/j.2044-8317.2011.02037.x/abstract" target="_blank">Philosophy and the practice of Bayesian statistics</a>. <i>British Journal of Mathematical and Statistical Psychology</i>, <i>66</i>, 8–38. 
doi:10.1111/j.2044-8317.2011.02037.x</span></div> <div style="color: #494848; font-family: ff-dagny-web-pro, 'Helvetica Neue', Arial, sans-serif; font-size: 13px; font: normal normal normal 12px/normal Times; margin-bottom: 12px; margin-left: 32px; margin-right: 0px; margin-top: 0px; text-indent: -32px;"> <br /></div> <br /> <span class="Apple-style-span" style="color: #494848; font-family: ff-dagny-web-pro, 'Helvetica Neue', Arial, sans-serif; font-size: 13px; line-height: 21px;"><br /></span> <span class="Apple-style-span" style="color: #494848; font-family: ff-dagny-web-pro, 'Helvetica Neue', Arial, sans-serif; font-size: 13px; line-height: 21px;"><br /></span> <span class="Apple-style-span" style="color: #494848; font-family: ff-dagny-web-pro, 'Helvetica Neue', Arial, sans-serif; font-size: 13px; line-height: 21px;"><br /></span>thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0tag:blogger.com,1999:blog-27862247.post-84185035353442148532012-09-25T23:10:00.001+01:002012-09-26T12:27:51.835+01:00Guest post: Visualizing data using a 3D printer<em>In a break from my usual obsessions and interests here is a guest blog post by <a href="http://www.drianwalker.com/work.html">Ian Walker</a>. I'm posting it because I think it is rather cool and hope it will be of interest to some of my regular readers. Ian is perhaps best known (in the blogosphere) for his work on transport psychology - particularly cycling - but is also an expert on psychological statistics.</em><br /> <em><br /></em> Some time ago, I had some data that lent themselves to a three-dimensional surface plot. The problem was, the plot was quite asymmetrical, and finding the right viewing angle to see it effectively on a computer screen was extremely difficult. I spent ages tweaking angles and every possible view seemed to involve an unacceptable compromise.<br /> <br /> Of course, displaying fundamentally three-dimensional items in two dimensions is an ancient problem, as any cartographer will tell you. That night, as I lay thinking in bed, a solution presented itself. I had recently been reading about the work of a fellow University of Bath researcher, Adrian Bowyer, and his <a href="http://www.reprap.org/wiki/RepRap">RepRap project</a>, to produce an open-source three-dimensional printer. The solution was obvious: I had to find a way to print R data on one of these printers!<br /> <br /> I managed to meet up with Adrian back in May 2012, and he explained to me the structure of the STL (stereolithography) files commonly used for three-dimensional printing. These describe an object as a large series of triangles. I decided I'd have a go at writing R code to produce valid STL files.<br /> <br /> I'm normally a terrible hacker when it comes to programming; I usually storm in and try to make things work as quickly as possible then fix all the mistakes later. This time, I was much more methodical. As a little lesson to us all, the methodical approach worked: I had the core code producing valid STL files in under 3 hours. <br /> <br /> Unfortunately, it then took until September 2012 before I could get hold of somebody with a 3D printer who'd let me test my code. 
A few days ago the first prototype was produced, as you can see in this photograph:<br /> <br /> <div style="text-align: center;"> <img alt="3dfunctionr.jpg" border="0" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhyQ6Xn7AYbNY_t84QLAR2ZtWKp7UEOSzc7M37dPUajr-_3oUg7u9ECB9sOIY0I17tdsGdDfgK_hRogIAcGCYTJE3QS5jS4FJc-QDuFIZG51m92bm3ljtaCOdNbE9B1T71JVSE8/?imgmax=800" width="300" /></div> <br /> <br /> So now I'd like to share the code under a Creative Commons BY-NC-SA licence, in case anybody else finds it useful. You can download the code <a href="http://drianwalker.com/r2stl.r" target="_blank">here</a>, in a file called <span style="font-family: monospace;">r2stl.r</span>. One day, when I learn how, I might try to make this a library, but for now you can just call this code with R's <span style="font-family: monospace;">source()</span> command. All that is in the file is the function <span style="font-family: monospace;">r2stl()</span>, and having once called the file with <span style="font-family: monospace;">source()</span>, you can then use the r2stl function to generate your STL files. The command is:<br /> <br /> <code>r2stl(x, y, z, filename='3d-R-object.stl', object.name='r2stl-object', z.expand=FALSE, min.height=0.008, show.persp=FALSE, strict.stl=FALSE)</code><br /> <br /> <br /> <br /> <br /> <ul><br /> <li><b>x</b>, <b>y</b> and <b>z</b> should be vectors of numbers, exactly as with R's normal <span style="font-family: monospace;">persp()</span> plot. x and y represent a flat grid and z represents heights above this grid</li> <br /> <li><b>filename</b> is pretty obvious, I hope</li> <br /> <li><b>object.name</b> The STL file format requires the object that is being described to have a name specified inside the file. It's unlikely anybody will ever see this, so there's probably no point changing it from the default</li> <br /> <li><b>z.expand</b> By default, r2stl() normalizes each axis so it runs from 0 to 1 (this is an attempt to give you an object that is agnostic with regard to how large it will eventually be printed). Normally, the code then rescales the z axis back down so its proportions relative to x and y are what they were originally. If, for some reason, you want your 3D plot to touch all six faces of the imaginary cube that surrounds it, set this parameter to TRUE</li> <br /> <li><b>min.height</b> Your printed model would fall apart if some parts of it had z values of zero, as this would mean zero material is laid down in those parts of the plot. This parameter therefore provides a minimum height for the printed material. The default of 0.008 ensures that, when printed, no part of your object is thinner than around 0.5 mm, assuming that it is printed inside a 60 mm x 60 mm x 60 mm cube. Recall that the z axis gets scaled from 0 to 1. If you are printing a 60mm-tall object then a z-value of 1 represents 60mm. The formula is min.height=min.mm/overall.mm, so if we want a minimum printed thickness of 0.5mm and the overall height of your object will be 60mm, 0.5/60 = 0.008, which is the default. If you want the same minimum printed thickness of 0.5mm but want to print your object to 100mm, this parameter would be set to 0.5/100 = 0.005</li> <br /> <li><b>show.persp</b> Do you want to see a <span style="font-family: monospace;">persp()</span> plot of this object on your screen as the STL is being generated? 
Default is FALSE</li> <br /> <li><b>strict.stl</b> To make files smaller, this code cheats and simply describes the entire rectangular base of your object as two huge triangles. This seems to work fine for printing, but isn't strictly proper STL format. Set this to TRUE if you want the base of your object described as a large number of triangles and don't mind larger files</li> <br /> </ul> <br /> <br /> <br /> To view and test your STL files before you print them, you can use various programs. I have had good experiences with the free, open-source <a href="http://meshlab.sourceforge.net/">Meshlab</a>, which even has iPhone and Android versions so you can let people interact with your data even when you're in the pub. Even if all you ever do is show people your 3D plots using Meshlab, I believe <span style="font-family: monospace;">r2stl()</span> still offers a useful service, as it makes viewing data far more interactive than static <span style="font-family: monospace;">persp()</span> plots. To actually get your hands on a printer, you might try your local school - apparently lots of schools have got rapid prototypers these days.<br /> <br /> <br /> <b>Demo</b><br /> <br /> <code><br />source('r2stl.r')<br /><br /># Let's do the classic <span style="font-family: monospace;">persp()</span> demo plot, as shown in the photograph above<br /><br />x &lt;- seq(-10, 10, length= 100)<br /><br />y &lt;- x<br /><br />f &lt;- function(x,y) { r &lt;- sqrt(x^2+y^2); 10 * sin(r)/r }<br /><br />z &lt;- outer(x, y, f)<br /><br />z[is.na(z)] &lt;- 1<br /><br />r2stl(x, y, z, filename="lovelyfunction.stl", show.persp=TRUE)<br /><br /><br /><br /># Now let's look at R's Volcano data<br /><br />z &lt;- volcano<br /><br />x &lt;- 1:dim(volcano)[1]<br /><br />y &lt;- 1:dim(volcano)[2]<br /><br />r2stl(x, y, z, filename="volcano.stl", show.persp=TRUE)</code><br /> <br /> <br /> I hope you might find this code useful. Any questions or suggestions, then please get in touch.<br /> <br /> <br /> September 2012 - <a href="http://www.drianwalker.com/work.html">Ian Walker</a>, Department of Psychology, University of Bath.<br /> <br />thomhttp://www.blogger.com/profile/00392478801981388165noreply@blogger.com0