---
title: "Bayesian estimation of anti-vaccination belief changes"
author: "Eamon Caddigan"
date: "September 3, 2015"
output: html_document
---

```{r global_options, include=FALSE}
knitr::opts_chunk$set(cache=TRUE, echo=FALSE, warning=FALSE, message=FALSE,
                      fig.width=9, fig.align="center")
```

```{r setup_data, results="hide"}
# Required libraries and external files ---------------------------------------

library(readxl)
library(tidyr)
library(dplyr)
library(ggplot2)
library(gridExtra)
library(rjags)
library(runjags)
source("DBDA2E-utilities.R")
source("ggPostPlot.R")

# Clean and process the data --------------------------------------------------

# Generates warnings for the Ps who didn't do day 2
suppressWarnings(expData <- read_excel("Vacc_HPHH_publicDataset.xlsx", sheet = 2))

# Exclude Ps who didn't do day 2 or failed the attention checks
expData.clean <- expData %>%
  # It's good to add a subject number so we can go back to the original data
  mutate(subject_number = 1:nrow(.)) %>%
  filter(Returned == 1,
         `AttentionCheck_PostTest (if = 4 then include)` == 4,
         `AttentionChecks_Sum(include if = 4)` == 4,
         Paid_Attention == 1)

# Get all the dependent measures into a DF
questionnaireData <- expData.clean %>%
  # Pull out the columns and use consistent names
  select(subject_number,
         intervention = Condition,
         pretest.healthy = Healthy_VaxscalePretest,
         posttest.healthy = Healthy_VaxscalePosttest,
         pretest.diseases = Diseases_VaxScalePretest,
         posttest.diseases = Diseases_VaxScalePosttest,
         pretest.doctors = Doctors_VaxScalePreTest,
         posttest.doctors = Doctors_VaxScalePostTest,
         pretest.side_effects = Sideeffects_VaxScalePreTest,
         posttest.side_effects = Sideeffects_VaxScalePostTest,
         pretest.plan_to = Planto_VaxScalePreTest,
         posttest.plan_to = Planto_VaxScalePostTest) %>%
  # Reverse-code the appropriate columns
  mutate(pretest.diseases = 7 - pretest.diseases,
         posttest.diseases = 7 - posttest.diseases,
         pretest.side_effects = 7 - pretest.side_effects,
         posttest.side_effects = 7 - posttest.side_effects) %>%
  # Tidy the data
  gather("question", "response", -subject_number, -intervention) %>%
  separate(question, c("interval", "question"), sep = "\\.") %>%
  mutate(intervention = factor(intervention,
                               c("Control", "Autism Correction", "Disease Risk")),
         interval = factor(interval,
                           c("pretest", "posttest"), ordered = TRUE),
         question = factor(question,
                           c("healthy", "diseases", "doctors", "side_effects", "plan_to")))
# -----------------------------------------------------------------------------
```

## Introduction

How easy is it to change people's minds about vaccinating their children? According to a recent study ([Horne, Powell, Hummel & Holyoak, 2015](http://www.pnas.org/content/112/33/10321.abstract)), a simple intervention -- which consisted of showing participants images, an anecdote, and some short warnings about diseases -- made participants more likely to support childhood vaccinations. [Here's a good writeup](https://news.illinois.edu/blog/view/6367/234202) of the article if you're unable to read the original.

The authors [placed their data online](https://osf.io/nx364/); it comprises pre- and post-intervention survey responses for three groups of participants:

1. A control group
2. An "autism correction" group that was shown evidence that vaccines don't cause autism.
3. A "disease risk" group that was shown images, an anecdote, and some short warnings about the diseases (such as rubella and measles) that the vaccines prevent.
I chose to look over this data for a couple of reasons. First, I'm friends with two of the authors (University of Illinois psychologists Zach Horne and John Hummel) and it's good to see them doing cool work. Second, my own research has given me little opportunity to work with survey data, and I wanted more experience with the method. I was excited to try a Bayesian approach because it makes it possible to perform post hoc comparisons without inflating the "type I" (false positive) error rate (see below).

Participants were given a survey with five questions and asked to rate their level of agreement with each on a six-point scale.

code | question
-------------|-------------
healthy | Vaccinating healthy children helps protect others by stopping the spread of disease.
diseases | Children do not need vaccines for diseases that are not common anymore. *reverse coded*
doctors | Doctors would not recommend vaccines if they were unsafe.
side_effects | The risk of side effects outweighs any protective benefits of vaccines. *reverse coded*
plan_to | I plan to vaccinate my children.

```{r plot_responses, dependson="setup_data"}
# Calculate the change-in-attitude for each subject on each question
questionnaireData <- questionnaireData %>%
  group_by(subject_number, question) %>%
  spread(interval, response) %>%
  mutate(change = posttest - pretest) %>%
  gather("interval", "response", pretest, posttest)

p2 <- ggplot(questionnaireData, aes(x = interval, y = response,
                                    group = subject_number, color = change)) +
  geom_line(alpha = 0.2, position = position_jitter(w = 0.1, h = 0.1)) +
  facet_grid(intervention ~ question) +
  scale_color_gradient2(low = "red", mid = "grey20", high = "blue")
print(p2)
```

The figure above shows the data. Each line represents a single participant's responses before and after the intervention, organized by intervention group and question. Lines are colored by the direction and size of the change in response; blue lines indicate an increase in agreement (toward a more pro-vaccine stance) and red lines indicate a reduction in agreement (a more anti-vaccine stance).

The JAGS code for the model is part of the source of this document, which is [available on GitHub](https://github.com/eamoncaddigan/antivax-attitudes). It uses a Bayesian analog to a three-factor ANOVA, with a thresholded cumulative normal distribution serving as a link function. Such models fit ordinal responses (such as those obtained from surveys) well. The thresholds and variance of the link function were fit independently for each question. The mean of the function was estimated for each response using a linear combination of the levels of the question, the interval (pre-test vs. post-test), the intervention group, and all interactions between these factors.
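To make the link function concrete, here's a small illustrative snippet, separate from the model itself (which is specified in JAGS in the next section). The `ordinalProbs()` helper and its example values are hypothetical: given a latent mean and standard deviation for a question, the thresholds carve the normal distribution into probabilities for the six response options.

```r
# Illustration only: map a latent normal (mu, sigma) onto six ordinal response
# probabilities using thresholds placed between adjacent response options.
ordinalProbs <- function(mu, sigma, thresholds = c(1.5, 2.5, 3.5, 4.5, 5.5)) {
  cumProbs <- c(0, pnorm(thresholds, mean = mu, sd = sigma), 1)
  diff(cumProbs)  # probability of responding 1, 2, ..., 6
}

round(ordinalProbs(mu = 4.0, sigma = 1.0), 2)
# A higher latent mean shifts probability mass toward stronger agreement:
round(ordinalProbs(mu = 4.5, sigma = 1.0), 2)
```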
## Results

```{r run_model, dependson="setup_data"}
# Get the data ready for JAGS
x1 <- as.numeric(as.factor(questionnaireData[["question"]]))
Nx1Lvl <- max(x1)
x2 <- as.numeric(as.factor(questionnaireData[["intervention"]]))
Nx2Lvl <- max(x2)
x3 <- as.numeric(as.factor(questionnaireData[["interval"]]))
Nx3Lvl <- max(x3)
y <- as.numeric(questionnaireData[["response"]])
Ntotal <- length(y)
nYlevels <- max(y)

# Thresholds 1 and nYlevels-1 are fixed; the other thresholds are estimated.
# This allows all parameters to be interpretable on the response scale.
thresh <- matrix(data = NA, nrow = Nx1Lvl, ncol = nYlevels-1)
thresh[, 1] <- 1 + 0.5
thresh[, nYlevels-1] <- nYlevels-1 + 0.5

# Specify the data in a list, for later shipment to JAGS:
dataList <- list(
  x1 = x1,
  Nx1Lvl = Nx1Lvl,
  x2 = x2,
  Nx2Lvl = Nx2Lvl,
  x3 = x3,
  Nx3Lvl = Nx3Lvl,
  y = y,
  NyLvl = nYlevels,
  thresh = thresh,
  Ntotal = Ntotal
)

# Prepare the model for JAGS
modelString <- "
model {
  for (i in 1:Ntotal) {
    # Thresholded cumulative normal distribution
    y[i] ~ dcat(pr[i,1:NyLvl])
    pr[i,1] <- pnorm(thresh[x1[i], 1], mu[i], 1/sigma[x1[i]]^2)
    for (k in 2:(NyLvl-1)) {
      pr[i,k] <- max(0, pnorm(thresh[x1[i], k], mu[i], 1/sigma[x1[i]]^2) -
                        pnorm(thresh[x1[i], k-1], mu[i], 1/sigma[x1[i]]^2))
    }
    pr[i,NyLvl] <- 1 - pnorm(thresh[x1[i], NyLvl-1], mu[i], 1/sigma[x1[i]]^2)

    # mu ~ x1*x2*x3
    mu[i] <- a0 + a1[x1[i]] + a2[x2[i]] + a3[x3[i]] +
             a1a2[x1[i], x2[i]] + a1a3[x1[i], x3[i]] + a2a3[x2[i], x3[i]] +
             a1a2a3[x1[i], x2[i], x3[i]]
  }

  a0 ~ dnorm((1+NyLvl)/2, 1/(NyLvl)^2)

  for (j1 in 1:Nx1Lvl) {
    # Constant sigma for beta1; we're treating all Qs as independent
    a1[j1] ~ dnorm(0.0, 1/(NyLvl)^2)

    # Sigma for the normal CDF, unique for each x1.
    sigma[j1] ~ dunif(NyLvl/1000, NyLvl*10)

    # Threshold distributions. 1 and NyLvl-1 are fixed, not stochastic
    for (k in 2:(NyLvl-2)) {
      thresh[j1, k] ~ dnorm(k+0.5, 1/2^2)
    }
  }

  # Constant sigma for beta2; the interventions are independent
  for (j2 in 1:Nx2Lvl) {
    a2[j2] ~ dnorm(0.0, 1/(NyLvl)^2)
  }

  # Constant sigma for beta3
  for (j3 in 1:Nx3Lvl) {
    a3[j3] ~ dnorm(0.0, 1/(NyLvl)^2)
  }

  # Interaction terms also have homogeneous variance
  for (j1 in 1:Nx1Lvl) {
    for (j2 in 1:Nx2Lvl) {
      a1a2[j1, j2] ~ dnorm(0.0, 1/(NyLvl)^2)
    }
  }
  for (j1 in 1:Nx1Lvl) {
    for (j3 in 1:Nx3Lvl) {
      a1a3[j1, j3] ~ dnorm(0.0, 1/(NyLvl)^2)
    }
  }
  for (j2 in 1:Nx2Lvl) {
    for (j3 in 1:Nx3Lvl) {
      a2a3[j2, j3] ~ dnorm(0.0, 1/(NyLvl)^2)
    }
  }
  for (j1 in 1:Nx1Lvl) {
    for (j2 in 1:Nx2Lvl) {
      for (j3 in 1:Nx3Lvl) {
        a1a2a3[j1, j2, j3] ~ dnorm(0.0, 1/(NyLvl)^2)
      }
    }
  }

  # Compute cell means
  for (j1 in 1:Nx1Lvl) {
    for (j2 in 1:Nx2Lvl) {
      for (j3 in 1:Nx3Lvl) {
        m[j1, j2, j3] <- a0 + a1[j1] + a2[j2] + a3[j3] +
                         a1a2[j1, j2] + a1a3[j1, j3] + a2a3[j2, j3] +
                         a1a2a3[j1, j2, j3]
      }
    }
  }

  # Convert a0, a1[], a2[], &c. to sum-to-zero b0, b1[], b2[], &c.
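  # The data constrain only the cell means m[, , ], not the individual a-terms:
  # a constant can be shifted between a0 and any set of deflections without
  # changing mu[i]. Expressing the model as deflections from the grand mean that
  # sum to zero at each level makes b0 and the b-terms directly interpretable;
  # these are the quantities monitored and compared below.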
  b0 <- mean(m[1:Nx1Lvl, 1:Nx2Lvl, 1:Nx3Lvl])
  for (j1 in 1:Nx1Lvl) {
    b1[j1] <- mean(m[j1, 1:Nx2Lvl, 1:Nx3Lvl]) - b0
  }
  for (j2 in 1:Nx2Lvl) {
    b2[j2] <- mean(m[1:Nx1Lvl, j2, 1:Nx3Lvl]) - b0
  }
  for (j3 in 1:Nx3Lvl) {
    b3[j3] <- mean(m[1:Nx1Lvl, 1:Nx2Lvl, j3]) - b0
  }
  for (j1 in 1:Nx1Lvl) {
    for (j2 in 1:Nx2Lvl) {
      b1b2[j1, j2] <- mean(m[j1, j2, 1:Nx3Lvl]) - (b0 + b1[j1] + b2[j2])
    }
  }
  for (j1 in 1:Nx1Lvl) {
    for (j3 in 1:Nx3Lvl) {
      b1b3[j1, j3] <- mean(m[j1, 1:Nx2Lvl, j3]) - (b0 + b1[j1] + b3[j3])
    }
  }
  for (j2 in 1:Nx2Lvl) {
    for (j3 in 1:Nx3Lvl) {
      b2b3[j2, j3] <- mean(m[1:Nx1Lvl, j2, j3]) - (b0 + b2[j2] + b3[j3])
    }
  }
  for (j1 in 1:Nx1Lvl) {
    for (j2 in 1:Nx2Lvl) {
      for (j3 in 1:Nx3Lvl) {
        b1b2b3[j1, j2, j3] <- m[j1, j2, j3] - (b0 + b1[j1] + b2[j2] + b3[j3] +
                                               b1b2[j1, j2] + b1b3[j1, j3] + b2b3[j2, j3])
      }
    }
  }
}
" # close quote for modelString

# Write out modelString to a text file
writeLines(modelString, con="TEMPmodel.txt")

# Tell JAGS which parameters to return
parameters <- c("b0", "b1", "b2", "b3", "b1b2", "b1b3", "b2b3", "b1b2b3",
                "sigma", "thresh")

# JAGS parameters. We'll let it initialize itself
initsList <- NULL
adaptSteps <- 500            # Number of steps to "tune" the samplers
burnInSteps <- 1000
numSavedSteps <- 15000
thinSteps <- 10
nChains <- 4
fileNameRoot <- "antivax-mcmc"

# Since running JAGS takes forever, we'll skip redoing it every time we knit.
saveName <- paste0(fileNameRoot, "-coda.Rdata")
if (file.exists(saveName)) {
  load(saveName)
} else {
  runJagsOut <- run.jags(method="parallel",
                         model="TEMPmodel.txt",
                         monitor=parameters,
                         data=dataList,
                         #inits=initsList,
                         n.chains=nChains,
                         adapt=adaptSteps,
                         burnin=burnInSteps,
                         sample=ceiling(numSavedSteps/nChains),
                         thin=thinSteps,
                         summarise=FALSE,
                         plots=FALSE)
  codaSamples <- as.mcmc.list(runJagsOut)
  save(codaSamples, file=saveName)
}
mcmcMat <- as.matrix(codaSamples)
```

### A "risk" intervention changes attitudes toward vaccination

When fitting model parameters using Monte Carlo methods, it's important to inspect the posterior samples to make sure the chains converged. Here's an example of one parameter, the intercept for the mean of the cumulative normal.

```{r plot_diag, dependson="run_model", fig.width=5, fig.height=5}
diagMCMC(codaObject = codaSamples,
         parName = "b0",
         saveName = NULL)
```

It's also important to check the predictions made by a model against the data being fit, as "[we cannot really interpret the parameters of the model very meaningfully when the model doesn't describe the data very well](http://doingbayesiandataanalysis.blogspot.com/2015/08/a-case-in-which-metric-data-are-better.html)". Here are response histograms for each question, averaged across the levels of the other factors. Model predictions are superimposed on the histograms, along with the 95% HDI for each response.
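Before the plots: they are produced by `ggPosteriorPredictive()` from the external `ggPostPlot.R`, which isn't shown in this document. The hypothetical helper below is only a rough sketch of the kind of calculation involved, computing the average predicted response probabilities for a single cell of the design from the posterior samples, and it assumes `mcmcMat` contains columns for every monitored parameter (including the fixed first and last thresholds, which appear as constants).

```r
# Rough sketch (not the ggPostPlot.R implementation): average predicted
# probability of each response option for one cell of the design.
predCellProbs <- function(mcmcMat, j1, j2, j3, nYlevels = 6) {
  mu <- mcmcMat[, "b0"] +
    mcmcMat[, paste0("b1[", j1, "]")] +
    mcmcMat[, paste0("b2[", j2, "]")] +
    mcmcMat[, paste0("b3[", j3, "]")] +
    mcmcMat[, paste0("b1b2[", j1, ",", j2, "]")] +
    mcmcMat[, paste0("b1b3[", j1, ",", j3, "]")] +
    mcmcMat[, paste0("b2b3[", j2, ",", j3, "]")] +
    mcmcMat[, paste0("b1b2b3[", j1, ",", j2, ",", j3, "]")]
  sigma <- mcmcMat[, paste0("sigma[", j1, "]")]
  thresholds <- sapply(seq_len(nYlevels - 1),
                       function(k) mcmcMat[, paste0("thresh[", j1, ",", k, "]")])
  # Cumulative normal at each threshold, padded with 0 and 1, then differenced
  # to get per-category probabilities for every posterior draw.
  cumProbs <- cbind(0, pnorm((thresholds - mu) / sigma), 1)
  colMeans(t(apply(cumProbs, 1, diff)))
}

# e.g., predicted response distribution for the "healthy" question (j1 = 1) in
# the "Disease Risk" group (j2 = 3) at post-test (j3 = 2):
# predCellProbs(mcmcMat, j1 = 1, j2 = 3, j3 = 2)
```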
```{r plot_ppc, dependson="run_model", fig.height=6}
plots <- list()
for (x1Level in seq_along(levels(questionnaireData$question))) {
  p <- ggPosteriorPredictive(questionnaireData, codaSamples, x1Level = x1Level)
  p <- p + ggtitle(levels(questionnaireData$question)[x1Level])
  p <- p + theme_classic()
  plots[[length(plots)+1]] <- p
}
do.call(grid.arrange, c(plots, ncol=3))
```

Since the sampling procedure was well-behaved and the model describes the data well, we can use the parameter estimates to judge the size of the effects. Here is the estimate of the change in attitude (post-test minus pre-test) for each intervention group.

```{r plot_change, dependson="run_model", fig.height=3}
par(mfrow = c(1, 3), mar=c(2, 1, 1, 1), oma=c(0, 0, 4, 0))
for (x2Level in seq_along(levels(questionnaireData$intervention))) {
  plotPost((mcmcMat[, "b3[2]"] + mcmcMat[, paste0("b2b3[", x2Level, ",2]")]) -
             (mcmcMat[, "b3[1]"] + mcmcMat[, paste0("b2b3[", x2Level, ",1]")]),
           main = "",
           compVal = 0.0, ROPE = c(-0.05, 0.05),
           xlab = "")
  mtext(levels(questionnaireData$intervention)[x2Level], side=3, line=1)
}
title("Attitude change", outer=TRUE)
```

These plots highlight the 95% highest density interval (HDI) for the posterior distributions of the parameters. Also highlighted are a comparison value, which in this case is a pre- vs. post-test difference of 0, and a "range of practical equivalence" (ROPE) around the comparison value. The HDI of the posterior distribution of attitude shifts for the "disease risk" group (but no other group) falls completely outside this ROPE, so we can reasonably conclude that this intervention changes participants' attitudes toward vaccination.

We can also use the posterior distributions to directly estimate the shifts relative to the control group. Here is the difference between the attitude change observed for the "autism correction" and "disease risk" groups and the attitude change in the control group.

```{r plot_change_rel, dependson="run_model", fig.width=6, fig.height=3}
controlLevel = which(levels(questionnaireData$intervention) == "Control")

par(mfrow = c(1, 2), mar=c(2, 1, 1, 1), oma=c(0, 0, 4, 0))
for (x2Level in which(levels(questionnaireData$intervention) != "Control")) {
  plotPost((mcmcMat[, paste0("b2b3[", x2Level, ",2]")] - mcmcMat[, paste0("b2b3[", x2Level, ",1]")]) -
             (mcmcMat[, paste0("b2b3[", controlLevel, ",2]")] - mcmcMat[, paste0("b2b3[", controlLevel, ",1]")]),
           compVal = 0.0, ROPE = c(-0.05, 0.05),
           main = "",
           xlab = "")
  mtext(levels(questionnaireData$intervention)[x2Level], side=3, line=1)
}
title("Change relative to control", outer=TRUE)
```

The posterior distributions above show that "disease risk" participants shifted their responses about half an interval relative to the control group following the intervention. The "autism correction" participants, however, did not show a credible change in vaccination attitudes. Bayesian estimation replicates the conclusions drawn by Horne and colleagues.
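The same contrast can also be summarized numerically, straight from the samples. A brief sketch, reusing `controlLevel` from the chunk above and `HDIofMCMC()` from `DBDA2E-utilities.R` (sourced at the top of this document):

```r
# Numerical summary of the "disease risk" vs. control contrast.
diseaseRiskLevel <- which(levels(questionnaireData$intervention) == "Disease Risk")
contrast <- (mcmcMat[, paste0("b2b3[", diseaseRiskLevel, ",2]")] -
             mcmcMat[, paste0("b2b3[", diseaseRiskLevel, ",1]")]) -
            (mcmcMat[, paste0("b2b3[", controlLevel, ",2]")] -
             mcmcMat[, paste0("b2b3[", controlLevel, ",1]")])
mean(contrast)       # posterior mean of the shift relative to control
HDIofMCMC(contrast)  # 95% highest density interval
```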
### Post hoc comparisons

An analysis following the tradition of null-hypothesis significance testing (NHST) attempts to minimize the risk of "type I" errors, which occur when the "null" hypothesis (i.e., that there is no effect) is erroneously rejected. The more tests performed in the course of an analysis, the more likely that such an error will occur due to random variation. The [Wikipedia article on the "Multiple Comparisons Problem"](https://en.wikipedia.org/wiki/Multiple_comparisons_problem) is an approachable read on the topic and explains many of the corrections that are applied when making multiple comparisons in an NHST framework.

Instead of focusing on type I error, the goal of Bayesian estimation is to estimate values of the parameters of a model of the data. The posterior distribution provides a range of credible values that these parameters can take. Inferences are made on the basis of these estimates; e.g., we see directly that the "disease risk" intervention shifts participants' attitude toward vaccination by about half an interval. Since a single model was fit to all the data, additional comparisons of parameter distributions don't increase the chance of generating false positives. [Gelman, Hill, and Yajima (2008)](http://www.stat.columbia.edu/~gelman/research/unpublished/multiple2.pdf) is a great resource on this.

For example, we can look at the size of the shift in attitude toward each question for each group. If we used an NHST approach, these 15 additional comparisons would either seriously inflate the type I error rate (using a p-value of 0.05 on each test would result in an overall error rate of `r round(1 - (1 - 0.05)^15, 2)`), or require much smaller nominal p-values for each test.

```{r plot_posthoc, dependson="run_model", fig.height=9}
# 5 x 3 grid of plots. So understandable!
par(mfrow = c(5, 3), mar=c(2, 1, 1, 1), oma=c(0, 0, 2, 4))
for (x1Level in seq_along(levels(questionnaireData$question))) {
  for (x2Level in seq_along(levels(questionnaireData$intervention))) {
    plotPost((mcmcMat[, "b3[2]"] +
                mcmcMat[, paste0("b1b2[", x1Level, ",", x2Level, "]")] +
                mcmcMat[, paste0("b1b3[", x1Level, ",2]")] +
                mcmcMat[, paste0("b2b3[", x2Level, ",2]")] +
                mcmcMat[, paste0("b1b2b3[", x1Level, ",", x2Level, ",2]")]) -
               (mcmcMat[, "b3[1]"] +
                  mcmcMat[, paste0("b1b2[", x1Level, ",", x2Level, "]")] +
                  mcmcMat[, paste0("b1b3[", x1Level, ",1]")] +
                  mcmcMat[, paste0("b2b3[", x2Level, ",1]")] +
                  mcmcMat[, paste0("b1b2b3[", x1Level, ",", x2Level, ",1]")]),
             main = "",
             compVal = 0.0, ROPE = c(-0.05, 0.05),
             xlab = "")

    # Label the top row with the name of the intervention and the right column
    # with the question
    if (x1Level == 1) {
      mtext(levels(questionnaireData$intervention)[x2Level], side=3, line=1)
    }
    if (x2Level == length(levels(questionnaireData$intervention))) {
      mtext(levels(questionnaireData$question)[x1Level], side=4, line=2)
    }
  }
}
```

The only credible differences for single questions both occur for participants in the "disease risk" group. The "healthy" ("Vaccinating healthy children helps protect others by stopping the spread of disease.") and "diseases" ("Children do not need vaccines for diseases that are not common anymore.") questions show a reliable positive shift, which makes a lot of sense given the nature of the intervention. However, it's important to note that the HDIs are very wide for these posteriors compared to the ones shown earlier. This is driven primarily by the fact that this comparison relies on a three-way interaction, which has greater variance (as is typical in traditional ANOVA models). The posterior mode of the change for the "plan_to" question ("I plan to vaccinate my children") is fairly large for the "disease risk" group, but the wide HDI spans the ROPE around 0.
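As with the group-level contrast earlier, any single cell of this grid can be reduced to a numerical summary. A sketch for the "healthy" question in the "disease risk" group, using the same parameter combination as the corresponding panel (the `b1b2` term in the plotted expression cancels in the post-test minus pre-test difference, so it is omitted here):

```r
# Numerical summary for one cell: the "healthy" question, "disease risk" group.
x1Level <- which(levels(questionnaireData$question) == "healthy")
x2Level <- which(levels(questionnaireData$intervention) == "Disease Risk")
healthyShift <- (mcmcMat[, "b3[2]"] +
                 mcmcMat[, paste0("b1b3[", x1Level, ",2]")] +
                 mcmcMat[, paste0("b2b3[", x2Level, ",2]")] +
                 mcmcMat[, paste0("b1b2b3[", x1Level, ",", x2Level, ",2]")]) -
                (mcmcMat[, "b3[1]"] +
                 mcmcMat[, paste0("b1b3[", x1Level, ",1]")] +
                 mcmcMat[, paste0("b2b3[", x2Level, ",1]")] +
                 mcmcMat[, paste0("b1b2b3[", x1Level, ",", x2Level, ",1]")])
mean(healthyShift)       # posterior mean of the shift
HDIofMCMC(healthyShift)  # 95% HDI
```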
### Expanding the models

My goal was to examine the conclusions made in the original report of these data. However, this is just one way to model the data, and different models are more appropriate for different questions. For instance, the standard deviation and threshold values were fit separately for each question here, but these could instead be based on a hyperparameter that could itself be modelled. I also excluded subject effects from the model; there were many subjects (over 300), so a full model with these included would take much longer to fit, but might produce more generalizable results. Bayesian estimation requires an investigator to be intentional about modelling decisions, which I consider to be an advantage of the method.

### Prior probabilities

A defining characteristic of Bayesian analyses is that prior information about the model parameters is combined with their likelihood (derived from the data) to produce posterior distributions. In this analysis, I used priors that put weak constraints on the values of the parameters. If an investigator has reason to assume that parameters will take on certain values (e.g., the results of a previous study), this prior information can -- and should -- be incorporated into the analysis. Again, I like that these decisions have to be made deliberately.

## Conclusions

Concern about a possible link between childhood vaccination and autism is causing some parents to skip childhood vaccinations, which is dangerous ([Calandrillo, 2004](http://www.ncbi.nlm.nih.gov/pubmed/15568260)). However, an intervention that exposes people to the consequences of the diseases that vaccinations prevent makes them respond more favorably toward childhood vaccination. A separate group of participants did not change their attitudes after being shown information discrediting the vaccination-autism link, nor did a group of control participants.

### Acknowledgements

[Zach Horne](http://www.zacharyhorne.com/) made the data available for analysis (by anyone!), and gave useful feedback on an earlier version of this write-up. Much of the code for Bayesian estimation was cobbled together from programs distributed with Doing Bayesian Data Analysis (2nd ed.) by [John K. Kruschke](http://www.indiana.edu/~kruschke/).