class: center, middle, inverse, title-slide .title[ # Week 3: Mixed-effect models ] .author[ ### Dr Nemanja Vaci ] .institute[ ### University of Sheffield ] .date[ ### 09/03/2022 ] --- <style type="text/css"> body, td { font-size: 15px; } code.r{ font-size: 15px; } pre { font-size: 20px } .huge .remark-code { /*Change made here*/ font-size: 200% !important; } .tiny .remark-code { /*Change made here*/ font-size: 80% !important; } </style> ## Press record --- ## Intended aims - Motivate the use of mixed-effect models, explain the differences between types of pooling, and show how multilevel models differ from linear regression models - Build a mixed-effect model in the R environment with different random adjustments, then estimate and interpret the coefficients - Derive group-specific estimates and evaluate the fit of the model --- ## Regression __Gaussian distribution__: `\(y_{i} \sim \mathcal{N}^{iid}(\mu_{i},\sigma)\)` <br/> <br/> `\(\mu_{i} = \alpha+\beta*x_i\)` <br/><br/> We model the mean and the standard deviation <br/> <br/> <br/> <br/> a) Errors: `\(\mathcal{N}^{iid}(0,\sigma)\)` <br/> <br/> b) Linearity and additivity <br/> <br/> c) Validity <br/> <br/> d) Lack of perfect multicollinearity <br/> <br/> ??? We have talked about what happens when the outcome does not follow a normal distribution or the linearity assumption. This time we will talk about the IID assumption and what to do when it is violated. --- ## Correlated observations __Nested structures__: observations nested hierarchically under general groups (e.g.
answers of pupils in the UK) <br/> <br/> - Pupils are nested in their classes < schools < counties <br/> <br/> __Repeated measurements__: observations clustered within participants <br/> <br/> - Participants answering a number of questions in an experiment <br/> <br/> - NBA players contributing multiple observations of their performance throughout their careers <br/><br/> __Clustering of observations__ that are not nested: <br/> <br/> - A questionnaire that records people's profession and their country <br/> <br/> --- ## Multilevel models Generalization of regression `\((y_i=\alpha+\beta*x_i+\epsilon_i)\)`, where we allow adjustments of the: <br/><br/> - __Intercept__: `\(y_i = \alpha_{j[i]}+\beta*x_i+\epsilon_i\)` <br/><br/> - __Slope__: `\(y_i=\alpha+\beta_{j[i]}*x_i+\epsilon_i\)` <br/><br/> - __Both__: `\(y_i=\alpha_{j[i]}+\beta_{j[i]}*x_i+\epsilon_i\)`<br/><br/> <br/><br/> Visualisation of the idea: [Link](http://mfviz.com/hierarchical-models/) --- ## How to understand multilevel models 1. __Complete pooling__ - take into account all observations, without categorical information <br/><br/> 2. __No pooling__ - adjust the regression for each category <br/> <br/> 3.
__Partial pooling__ - use categories to adjust individual estimates <br/> <br/> ```r knitr::kable(head(Babies), format = 'html') ``` <table> <thead> <tr> <th style="text-align:right;"> Babies_id </th> <th style="text-align:right;"> T_0s </th> <th style="text-align:right;"> T_1s </th> <th style="text-align:right;"> Age </th> <th style="text-align:right;"> Surounding </th> <th style="text-align:right;"> Weight </th> <th style="text-align:right;"> CalorieIntake </th> </tr> </thead> <tbody> <tr> <td style="text-align:right;"> 1 </td> <td style="text-align:right;"> 27.47432 </td> <td style="text-align:right;"> -40.30564 </td> <td style="text-align:right;"> 10 </td> <td style="text-align:right;"> -5.375273 </td> <td style="text-align:right;"> 14.65666 </td> <td style="text-align:right;"> 665.2148 </td> </tr> <tr> <td style="text-align:right;"> 1 </td> <td style="text-align:right;"> 27.47432 </td> <td style="text-align:right;"> -40.30564 </td> <td style="text-align:right;"> 13 </td> <td style="text-align:right;"> 3.168467 </td> <td style="text-align:right;"> 12.91480 </td> <td style="text-align:right;"> 781.1619 </td> </tr> <tr> <td style="text-align:right;"> 1 </td> <td style="text-align:right;"> 27.47432 </td> <td style="text-align:right;"> -40.30564 </td> <td style="text-align:right;"> 2 </td> <td style="text-align:right;"> 43.613152 </td> <td style="text-align:right;"> 11.96138 </td> <td style="text-align:right;"> 850.2196 </td> </tr> <tr> <td style="text-align:right;"> 1 </td> <td style="text-align:right;"> 27.47432 </td> <td style="text-align:right;"> -40.30564 </td> <td style="text-align:right;"> 5 </td> <td style="text-align:right;"> 14.991462 </td> <td style="text-align:right;"> 30.50425 </td> <td style="text-align:right;"> 777.9477 </td> </tr> <tr> <td style="text-align:right;"> 1 </td> <td style="text-align:right;"> 27.47432 </td> <td style="text-align:right;"> -40.30564 </td> <td style="text-align:right;"> 3 </td> <td style="text-align:right;"> 12.254607 </td> 
<td style="text-align:right;"> 29.12190 </td> <td style="text-align:right;"> 703.8290 </td> </tr> <tr> <td style="text-align:right;"> 1 </td> <td style="text-align:right;"> 27.47432 </td> <td style="text-align:right;"> -40.30564 </td> <td style="text-align:right;"> 3 </td> <td style="text-align:right;"> 4.587097 </td> <td style="text-align:right;"> 22.35996 </td> <td style="text-align:right;"> 741.4472 </td> </tr> </tbody> </table> --- ## Complete pooling Linear model where we __ignore information__ about the groups (here, the individual babies): ```r mod1CP<-lm(CalorieIntake~Age, data=Babies) summary(mod1CP) ``` ``` ## ## Call: ## lm(formula = CalorieIntake ~ Age, data = Babies) ## ## Residuals: ## Min 1Q Median 3Q Max ## -1544.49 -226.93 79.93 295.13 1035.00 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 470.846 20.356 23.13 <2e-16 *** ## Age 49.603 1.143 43.38 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 428.1 on 1998 degrees of freedom ## Multiple R-squared: 0.485, Adjusted R-squared: 0.4848 ## F-statistic: 1882 on 1 and 1998 DF, p-value: < 2.2e-16 ``` <style type="text/css"> pre { max-height: 300px; overflow-y: auto; } pre[class] { max-height: 100px; } </style> <style type="text/css"> .scroll-100 { max-height: 100px; overflow-y: auto; background-color: inherit; } </style> --- ## Complete pooling Looking at the residuals: ```r par(mfrow=c(1,1), bty='n',mar = c(5, 4, .1, .1), cex=1.1, pch=16) plot(resid(mod1CP), ylab='Residuals', xlab='Index') ``` <img src="Week3_files/figure-html/unnamed-chunk-6-1.png" width="100%" style="display: block; margin: auto;" /> --- ## No pooling Linear model where we estimate parameters for __each group__ separately: ```r mod1NP<-lm(CalorieIntake~Age+factor(Babies_id)-1, data=Babies) summary(mod1NP) ``` ``` ## ## Call: ## lm(formula = CalorieIntake ~ Age + factor(Babies_id) - 1, data = Babies) ## ## Residuals: ## Min 1Q Median 3Q Max ## -809.29 -160.35 -6.41 162.10
855.07 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## Age 50.2790 0.6783 74.124 < 2e-16 *** ## factor(Babies_id)1 102.5872 20.9058 4.907 9.99e-07 *** ## factor(Babies_id)2 684.2652 20.6108 33.199 < 2e-16 *** ## factor(Babies_id)3 862.3240 21.2219 40.634 < 2e-16 *** ## factor(Babies_id)4 -305.3031 20.8831 -14.620 < 2e-16 *** ## factor(Babies_id)5 487.4414 20.9251 23.295 < 2e-16 *** ## factor(Babies_id)6 132.7962 21.0883 6.297 3.72e-10 *** ## factor(Babies_id)7 659.7740 20.8901 31.583 < 2e-16 *** ## factor(Babies_id)8 656.3745 20.7328 31.659 < 2e-16 *** ## factor(Babies_id)9 679.1863 20.4171 33.266 < 2e-16 *** ## factor(Babies_id)10 642.8545 20.7242 31.019 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 253.2 on 1989 degrees of freedom ## Multiple R-squared: 0.9668, Adjusted R-squared: 0.9666 ## F-statistic: 5259 on 11 and 1989 DF, p-value: < 2.2e-16 ``` --- ## No pooling ```r par(mfrow=c(1,1), bty='n',mar = c(5, 4, .1, .1), cex=1.1, pch=16) plot(resid(mod1NP), ylab='Residuals',xlab='Index') ``` <img src="Week3_files/figure-html/unnamed-chunk-8-1.png" width="100%" style="display: block; margin: auto;" /> --- ## Trade-off between the two approaches _Complete pooling_ ignores the variation between the groups <br/> <br/> _No pooling_ can overfit the data within each group, especially when there are few observations per group or extreme values <br/> <br/> <br/> <br/> __Multilevel model (partial pooling)__<br/> The multilevel estimate for a given group `\(j\)` can be approximated as a __weighted average__ of the mean of the observations in that group (the unpooled estimate, `\(\bar{y}_j\)`) and the mean over all groups (the completely pooled estimate, `\(\bar{y}_{all}\)`) <br/> <br/> --- ## Shrinkage estimator Averages from groups with **.red[smaller sample sizes]** carry **.green[less information]**, and the weighting pulls the multilevel estimates closer to the **.red[overall average]**.
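A rough numeric sketch of this weighted average: the two variances below are taken from the random-intercept model fitted in this deck, while the group size, group mean, and overall mean are made-up illustration values.

```r
# Precision-weighted (shrinkage) estimate for group j, following the
# weighted-average formula for partial pooling (Gelman & Hill).
# sigma2_y and sigma2_a come from the random-intercept model in this deck;
# n_j, y_j, and y_all are made-up values for illustration only.
n_j      <- 20      # number of observations in group j (illustrative)
sigma2_y <- 64118   # within-group (residual) variance
sigma2_a <- 132275  # between-group (intercept) variance
y_j      <- 700     # mean of group j (illustrative)
y_all    <- 600     # overall mean (illustrative)

w       <- (n_j / sigma2_y) / (n_j / sigma2_y + 1 / sigma2_a)
alpha_j <- w * y_j + (1 - w) * y_all
alpha_j  # lies between y_all and y_j; pulled towards y_j as n_j grows
```

Increasing `n_j` pushes the weight `w` towards 1 (the group average); setting `n_j` near 0 pushes it towards 0 (the overall average).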
<br/> If `\(n_j=0\)`, the estimate is the overall average <br/><br/> Averages from groups with **.red[larger sample sizes]** carry **.green[more information]**, and the multilevel estimates are close to the group averages. <br/> If `\(n_j \to \infty\)`, the estimate approaches the group average <br/><br/> Otherwise, the multilevel estimates fall between the two extremes <br/> <img src="Multi.png" width="40%" style="display: block; margin: auto;" /> --- ## Multilevel model: intercept `$$y_i \sim N(\alpha_{j[i]}+\beta*x_i, \sigma^2_y)$$` Estimation: `\(\alpha_j \sim N(\mu_\alpha, \sigma^2_\alpha)\)` <br/> -- ```r #install.packages('lme4') require(lme4) mult1<-lmer(CalorieIntake~Age+(1|Babies_id), data=Babies) print(summary(mult1), cor=FALSE) ``` ``` ## Linear mixed model fit by REML ['lmerMod'] ## Formula: CalorieIntake ~ Age + (1 | Babies_id) ## Data: Babies ## ## REML criterion at convergence: 27858.6 ## ## Scaled residuals: ## Min 1Q Median 3Q Max ## -3.1994 -0.6327 -0.0245 0.6410 3.3694 ## ## Random effects: ## Groups Name Variance Std.Dev. ## Babies_id (Intercept) 132275 363.7 ## Residual 64118 253.2 ## Number of obs: 2000, groups: Babies_id, 10 ## ## Fixed effects: ## Estimate Std. Error t value ## (Intercept) 460.2558 115.6422 3.98 ## Age 50.2773 0.6783 74.12 ``` --- ## Multilevel model: intercept and slope adjustments `$$y_i \sim N(\alpha_{j[i]}+\beta_{j[i]}*x_i, \sigma^2_y)$$` <img src="insl.png" width="40%" style="display: block; margin: auto;" /> -- ```r mult2<-lmer(CalorieIntake~Age+(1+Age|Babies_id), data=Babies) print(summary(mult2), cor=FALSE) ``` ``` ## Linear mixed model fit by REML ['lmerMod'] ## Formula: CalorieIntake ~ Age + (1 + Age | Babies_id) ## Data: Babies ## ## REML criterion at convergence: 25474.9 ## ## Scaled residuals: ## Min 1Q Median 3Q Max ## -3.7037 -0.6672 0.0194 0.6576 4.0965 ## ## Random effects: ## Groups Name Variance Std.Dev.
Corr ## Babies_id (Intercept) 35937.8 189.57 ## Age 701.9 26.49 -0.50 ## Residual 18938.3 137.62 ## Number of obs: 2000, groups: Babies_id, 10 ## ## Fixed effects: ## Estimate Std. Error t value ## (Intercept) 469.579 60.307 7.786 ## Age 50.048 8.386 5.968 ``` --- ## Comparison of residuals ```r par(mfrow=c(1,3), bty='n',mar = c(5, 4,1, .1), cex=1.1, pch=16) plot(resid(mod1CP)[1:200], ylab='Residuals',xlab='Index', main='Complete pooling', ylim=c(-1200,1200)) plot(resid(mod1NP)[1:200], ylab='Residuals',xlab='Index', main='No pooling', ylim=c(-1200,1200)) plot(resid(mult2)[1:200], ylab='Residuals',xlab='Index', main='Partial pooling', ylim=c(-1200,1200)) ``` <img src="Week3_files/figure-html/unnamed-chunk-13-1.png" width="100%" style="display: block; margin: auto;" /> --- ## Fixed and random effects Many different definitions: [LINK](https://stats.stackexchange.com/questions/4700/what-is-the-difference-between-fixed-effect-random-effect-and-mixed-effect-mode) <br/> ```r fixef(mult2) ``` ``` ## (Intercept) Age ## 469.57946 50.04775 ``` ```r ranef(mult2) ``` ``` ## $Babies_id ## (Intercept) Age ## 1 273.96266 -40.083989 ## 2 -75.56727 19.518621 ## 3 15.73010 22.659262 ## 4 -130.81904 -40.356098 ## 5 362.47248 -21.416831 ## 6 -129.73763 -12.337377 ## 7 -105.14316 18.855336 ## 8 94.77461 6.176257 ## 9 -209.91822 29.259636 ## 10 -95.75452 17.725183 ## ## with conditional variances for "Babies_id" ``` --- ## Intra-class correlation (ICC) How strongly units within the same group correlate with each other: `\(\frac{\sigma^2_{\alpha}}{\sigma^2_{\alpha}+\sigma^2_y} = \frac{132275}{132275+64118}=0.673522\)` <br/><br/> This decomposition works only for random-intercept models ```r mod1<-lmer(CalorieIntake~Age+(1|Babies_id), data=Babies) performance::icc(mod1) ``` ``` ## # Intraclass Correlation Coefficient ## ## Adjusted ICC: 0.674 ## Unadjusted ICC: 0.354 ``` --- ## Why multilevel modelling 1. Helps us make better inferences about fixed effects <br/> 2.
Interest in between-group effects <br/> 3. Better inferences about group levels not included in the sample <br/> 4. Deals better with non-normality of outcomes <br/> 5. Deals better with outliers (shrinks them towards the mean) <br/> 6. Deals better with unequal numbers of observations per group <br/><br/> <br/><br/> When is a multilevel model equivalent to linear regression? <br/> - Very little group-level variation - complete pooling <br/> - Very large group-level variation - no pooling ??? How many groups/observations? - understanding partial pooling as a compromise between complete and no pooling shows that debates about the exact number of groups and observations are somewhat misguided. With few groups, we approach classical regression - complete pooling - and our multilevel model should be at least as good as linear regression. Having only two observations per group, or even one observation in most of the groups, may result in an imprecise estimate of the between-group variance, but it still provides partial information that allows better estimation of the parameters. --- class: inverse, middle, center # Random effect structure --- ## Specification of the structure In all R packages, the random effect structure can be specified using: <br/><br/> object <- lmer(Dependent ~ Fixed (predictor) + __(1 + Predictor | Factor)__, data=data) <br/><br/> - (1|Factor) = intercept adjustment for each level of the factor <br/><br/> - (1+Predictor|Factor) = intercept and slope adjustment for Predictor for each level of the factor <br/><br/> - (0 + Predictor|Factor) or (Predictor - 1|Factor) = slope adjustment for Predictor for each level of the factor <br/><br/> ???
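Written out as a minimal lme4 sketch on this deck's Babies data, the three specifications above look like this (the model names are hypothetical):

```r
require(lme4)
# Intercept adjustment for each baby
m_int   <- lmer(CalorieIntake ~ Age + (1 | Babies_id),       data = Babies)
# Intercept and Age-slope adjustment for each baby (correlated)
m_both  <- lmer(CalorieIntake ~ Age + (1 + Age | Babies_id), data = Babies)
# Slope-only adjustment for Age for each baby
m_slope <- lmer(CalorieIntake ~ Age + (0 + Age | Babies_id), data = Babies)
```

<br/><br/>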
lmer cheat sheets: <br/><br/> - [LINK 1](https://stats.stackexchange.com/a/13173) <br/><br/> - [LINK 2](https://stats.stackexchange.com/a/61466) --- ## Nested effects <img src="nested.png" width="80%" style="display: block; margin: auto;" /> <br/><br/> Random effect specification: (1|Country/Region) <br/><br/> Responses are nested within regions, which are further nested within countries <br/><br/> --- ## Crossed random effects <img src="crossed2.png" width="40%" style="display: block; margin: auto;" /> Random effect specification: (1|Country) + (1|Region) <br/> Responses have multi-membership status: they are nested within a combination of country and region <br/> In experimental settings (e.g. psycholinguistics) this is often the case: responses in an experimental paradigm are nested in two random structures simultaneously, participants and stimuli --- ## How to decide what to include There are no recipes, except that you need to think about your model and what it does <br/> The advice from the experts is to keep the random structure maximal! [LINK](https://talklab.psy.gla.ac.uk/KeepItMaximalR2.pdf) <br/> <br/> However, this is not always possible <br/> ??? What happens when you have singular fits or convergence issues: [link](https://stats.stackexchange.com/questions/110004/how-scared-should-we-be-about-convergence-warnings-in-lme4) <br/> <br/> Additional information on random effects: [link](https://rpsychologist.com/r-guide-longitudinal-lme-lmer) --- ## Significance testing: fixed effects ```r #install.packages('lmerTest') require(lmerTest) mult2<-lmer(CalorieIntake~Age+(1+Age|Babies_id), data=Babies) print(summary(mult2), cor=F) ``` ``` ## Linear mixed model fit by REML.
t-tests use Satterthwaite's method [ ## lmerModLmerTest] ## Formula: CalorieIntake ~ Age + (1 + Age | Babies_id) ## Data: Babies ## ## REML criterion at convergence: 25474.9 ## ## Scaled residuals: ## Min 1Q Median 3Q Max ## -3.7037 -0.6672 0.0194 0.6576 4.0965 ## ## Random effects: ## Groups Name Variance Std.Dev. Corr ## Babies_id (Intercept) 35937.8 189.57 ## Age 701.9 26.49 -0.50 ## Residual 18938.3 137.62 ## Number of obs: 2000, groups: Babies_id, 10 ## ## Fixed effects: ## Estimate Std. Error df t value Pr(>|t|) ## (Intercept) 469.579 60.307 8.995 7.786 2.75e-05 *** ## Age 50.048 8.386 9.001 5.968 0.000211 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ``` ??? REML: [LINK](https://towardsdatascience.com/maximum-likelihood-ml-vs-reml-78cf79bef2cf) <br/><br/> Satterthwaite approximation of the degrees of freedom: [LINK](https://link.springer.com/article/10.3758/s13428-016-0809-y) --- ## Significance testing: random effects We are usually not interested in exactly which levels of the random factors differ from the mean <br/><br/> We are interested in making correct estimates of our coefficients <br/><br/> However, we can test whether the random structure adds information by comparing nested models <br/> --- class: inverse, middle, center # Practical aspect --- ## Data Relation between digital-screen use and the mental well-being of adolescents: [LINK](https://journals.sagepub.com/doi/10.1177/0956797616678438) <br/> <br/> .pull-left[ Outcome: - Mental well-being: mwbi <br/><br/><br/> Predictors: - TV weekdays: Watchwk_adj <br/> - TV weekends: Watchwe_adj <br/> - Computer weekdays: Compwk_adj <br/> - Computer weekends: Compwe_adj <br/> - Smartphones weekdays: Smartwk_adj <br/> - Smartphones weekends: Smartwe_adj <br/> - Total: sum of those ] .pull-right[ Ethnicity: Ethnicg <br/> Region: Region <br/> Local area: LANAME <br/><br/><br/> Factors: male <br/> minority <br/> deprived <br/> ] --- ## Reading the data in R
```r #install.packages('foreign') require(foreign) mwell<-read.spss('data.sav', to.data.frame = T) dim(mwell) ``` ``` ## [1] 120115 74 ``` ```r mwell$Total=mwell$Watchwe_adj+mwell$Watchwk_adj+mwell$Comphwe_adj+mwell$Comphwk_adj+mwell$Smartwe_adj+mwell$Smartwk_adj ``` --- ## Important aspects: theory - Why and when we would want to use multilevel models <br/> - What we can include in our random structure - types of adjustments <br/> - Multilevel models versus complete and no pooling <br/> - Understanding how to specify the random structure --- ## Important aspects: practice - Building a linear mixed-effect model - Specifying random intercepts and slopes - Comparing nested models - Interpreting coefficients from linear mixed-effect models --- ## Literature Chapters 11, 12, and 13 of "Data Analysis Using Regression and Multilevel/Hierarchical Models" by Andrew Gelman and Jennifer Hill <br/> <br/> --- # Thank you for your attention