The use of both linear and generalized linear mixed-effects models (LMMs and GLMMs) has become popular not only in social and medical sciences, but also in biological sciences, especially in the field of ecology and evolution. Information criteria, such as the Akaike Information Criterion (AIC), are usually presented as model comparison tools for mixed-effects models.
The presentation of ‘variance explained’ (R2) as a relevant summarizing statistic of mixed-effects models, however, is rare, even though R2 is routinely reported for linear models (LMs) and also generalized linear models (GLMs). R2 has the extremely useful property of providing an absolute value for the goodness-of-fit of a model, which cannot be given by the information criteria. As a summary statistic that describes the amount of variance explained, R2 can also be a quantity of biological interest.
One reason for the under-appreciation of R2 for mixed-effects models lies in the fact that R2 can be defined in a number of ways. Furthermore, most definitions of R2 for mixed-effects models have theoretical problems (e.g. decreased or negative R2 values in larger models) and/or their use is hindered by practical difficulties (e.g. implementation).
Here, we make a case for the importance of reporting R2 for mixed-effects models. We first provide the common definitions of R2 for LMs and GLMs and discuss the key problems associated with calculating R2 for mixed-effects models. We then recommend a general and simple method for calculating two types of R2 (marginal and conditional R2) for both LMMs and GLMMs, which are less susceptible to common problems.
This method is illustrated by examples and can be widely employed by researchers in any fields of research, regardless of software packages used for fitting mixed-effects models. The proposed method has the potential to facilitate the presentation of R2 for a wide range of circumstances.
Many biological datasets have multiple strata due to the hierarchical nature of the biological world: for example, cells within individuals, individuals within populations, populations within species and species within communities. Therefore, we need statistical methods that explicitly model the hierarchical structure of real data. Linear mixed-effects models (LMMs; also referred to as multilevel or hierarchical models) and their extension, generalized linear mixed-effects models (GLMMs), form a class of models that incorporate multilevel hierarchies in data. Indeed, LMMs and GLMMs are becoming part of the standard methodological tool kit in biological sciences (Bolker et al. 2009), as well as in social and medical sciences (Gelman & Hill 2007; Congdon 2010; Snijders & Bosker 2011). Given the widespread use of GLMMs, a statistic that summarizes the goodness-of-fit of a mixed-effects model to the data would be of great value, yet there is currently no such summary statistic that is widely accepted for mixed-effects models.
Many scientists have traditionally used the coefficient of determination, R2 (ranging from 0 to 1), as a summary statistic to quantify the goodness-of-fit of fixed-effects models such as multiple linear regressions, ANOVA, ANCOVA and generalized linear models (GLMs). The concept of R2 as 'variance explained' is intuitive. Because R2 is unitless, it is extremely useful as a summary index for statistical models: one can objectively evaluate the fit of models and, under some circumstances (e.g. models with the same response and a similar set of predictors), compare R2 values across studies in a manner similar to standardized effect size statistics; in other words, R2 can be utilized for meta-analysis (Nakagawa & Cuthill 2007).
In Table 1, we briefly summarize 12 properties of R2 (based on Kvålseth 1985 and Cameron & Windmeijer 1996; compilation adopted from Orelien & Edwards 2008) that will provide the reader with a good sense of what a ‘traditional’ R2 statistic should be and also provide a benchmark for generalizing R2 to mixed-effects models. Generalizing R2 from linear models (LMs) to LMMs and GLMMs turns out to be a difficult task. A number of ways of obtaining R2 for mixed models have been proposed (e.g. Snijders & Bosker 1994; Xu 2003; Liu, Zheng & Shen 2008; Orelien & Edwards 2008). These proposed methods, however, share some theoretical problems or practical difficulties (discussed in detail below), and consequently, no consensus for a definition of R2 for mixed-effects models has emerged in the statistical literature. Therefore, it is not surprising that R2 is rarely reported as a model summary statistic when mixed models are used.
| Property of R2 | Source |
| --- | --- |
| R2 must represent a goodness-of-fit and have an intuitive interpretation | Kvålseth (1985) |
| R2 must be unit free, that is, dimensionless | Kvålseth (1985) |
| R2 should range from 0 to 1, where 1 represents a perfect fit | Kvålseth (1985) |
| R2 should be general enough to apply to any type of statistical model | Kvålseth (1985) |
| R2 values should not be affected by different model-fitting techniques | Kvålseth (1985) |
| R2 values from different models fitted to the same data should be directly comparable | Kvålseth (1985) |
| Relative R2 values should be comparable to other accepted goodness-of-fit measures | Kvålseth (1985) |
| All residuals (positive and negative) should be weighted equally by R2 | Kvålseth (1985) |
| R2 values should always increase as more predictors are added (without degrees-of-freedom correction) | Cameron & Windmeijer (1996) |
| R2 values based on residual sums of squares and those based on explained sums of squares should match | Cameron & Windmeijer (1996) |
| R2 values and the statistical significance of slope parameters should show correspondence | Cameron & Windmeijer (1996) |
| R2 should be interpretable in terms of the information content of the data | Cameron & Windmeijer (1996) |
In the absence of R2, information criteria are often used and reported as comparison tools for mixed models. Information criteria are based on the likelihood of the data given a fitted model (the 'likelihood'), penalized by the number of estimated parameters of the model. Commonly used information criteria include the Akaike Information Criterion (AIC; Akaike 1973), the Bayesian information criterion (BIC; Schwarz 1978) and the more recently proposed deviance information criterion (DIC; Spiegelhalter et al. 2002; reviewed in Claeskens & Hjort 2009; Grueber et al. 2011; Hamaker et al. 2011). Information criteria are used to select the 'best' or 'better' models, and they are indeed useful for selecting the most parsimonious models from a candidate model set (Burnham & Anderson 2002). There are, however, at least three important limitations to the use of information criteria in relation to R2: (i) while information criteria provide an estimate of the relative fit of alternative models, they do not tell us anything about the absolute model fit (cf. evidence ratio; Burnham & Anderson 2002), (ii) information criteria do not provide any information on the variance explained by a model (Orelien & Edwards 2008) and (iii) information criteria are not comparable across datasets, because they are highly dataset specific; in other words, they are not standardized effect statistics that can be used for meta-analysis (Nakagawa & Cuthill 2007).
In this paper, we start by providing the most common definitions of R2 in LMs and GLMs. We then review previously proposed definitions of R2 measures for mixed-effects models and discuss the problems and difficulties associated with these measures. Finally, we explain a general and simple method for calculating variance explained by LMMs and GLMMs and illustrate its use by simulated ecological datasets.
Definitions of R2
We have deliberately left −2 in both the numerator and the denominator so that R2_D ('D' signifies 'deviance') can be compared with eqn 3. For a LM (eqn 1), the −2 log-likelihood statistic (sometimes referred to as the deviance) is equal to the residual sum of squares based on OLS of this model (Menard 2000; see a series of formulas for non-Gaussian responses in Table 1 of Cameron & Windmeijer 1997). There are several other likelihood-based definitions of R2 (reviewed in Cameron & Windmeijer 1997; Menard 2000), but we do not review these definitions here, as they are less relevant to our approach below. We will instead discuss the generalization of R2 to LMMs and GLMMs, and the problems associated with this process, in the next section.
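As a brief illustration (ours, not from the article's supplements), the deviance-based R2_D can be computed from the log-likelihoods of a fitted model and an intercept-only null model; the two values below are hypothetical:

```python
def r2_deviance(loglik_model, loglik_null):
    """Likelihood-ratio (deviance-based) R2: ratio of -2*logLik of the
    fitted model to -2*logLik of the intercept-only null model."""
    return 1.0 - (-2.0 * loglik_model) / (-2.0 * loglik_null)

# hypothetical log-likelihoods from a fitted and a null model
print(r2_deviance(-120.5, -150.0))
```

Note that this measure requires both models to be fitted by the same (maximum likelihood) method, which is one of the obstacles for mixed models discussed below.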
Common problems when generalizing R2
where y_ij is the ith response of the jth individual, x_hij is the ith value of the jth individual for the hth predictor, β0 is the intercept, β_h is the slope (regression coefficient) of the hth predictor, α_j is the individual-specific effect, drawn from a normal distribution of individual-specific effects with mean zero and variance σ²_α (the between-individual variance), and ε_ij is the residual associated with the ith value of the jth individual, drawn from a normal distribution of residuals with mean zero and variance σ²_ε (the within-individual variance). As seen in the previous equations, LMMs have by definition more than one variance component (in this case two: σ²_α and σ²_ε), while LMs have only one (eqns 1 and 2).
One of the earliest definitions of R2 for mixed-effects models is based on the reduction of each variance component when fixed-effect predictors are included; in other words, a separate R2 for each random effect and for the residual variance (Raudenbush & Bryk 1986; Bryk & Raudenbush 1992; we detail this measurement in the section 'Related issues'). This approach is analogous to eqn 7. As pointed out by Snijders & Bosker (1994), however, it is not uncommon that some predictors reduce one variance component (say σ²_ε) while simultaneously increasing another (σ²_α), and vice versa, even though the total sum of the variance components is usually reduced (for an example, see Table 1 in Snijders & Bosker 1994). Such behaviour of variance components can sometimes result in negative R2 values, because the estimated variance components in a model with predictors can be larger than the corresponding variance components in the intercept-only model.
where R2_2 is the variance explained at the individual level (i.e. level 2; between-individual variance explained), ȳ_j is the mean observed value for the jth individual, ŷ_j is the fitted value for the jth individual, k is the harmonic mean of the number of replicates per individual, m_j is the number of replicates for the jth individual, M is the total number of individuals, and other notations are as above. An advantage of using R2_1 and R2_2 is that we can evaluate how much variance is explained at each level of the analysis. However, there are at least three problems with this approach: (i) R2_1 and R2_2 can decrease in larger models (note that R2 for LMs can only increase when more predictors are added without a degrees-of-freedom adjustment; see Table 1), (ii) it is not clear how R2_1 and R2_2 can be extended to more than two levels (i.e. more than one random factor) and (iii) it is also not obvious how R2_1 and R2_2 are to be generalized to GLMMs.
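A minimal sketch of these two level-specific measures, under our reading of Snijders & Bosker (1994): the unit-level R2_1 compares the total variance (residual plus between-individual) between null and full models, while the individual-level R2_2 downweights the residual variance by the harmonic mean of the replicates per individual. The function name, dictionary layout and variance values are hypothetical:

```python
from statistics import harmonic_mean

def snijders_bosker_r2(null, full, n_per_individual):
    """R2_1 (unit level) and R2_2 (individual level) following
    Snijders & Bosker (1994). 'null' and 'full' hold the residual
    variance 's2_e' and the between-individual variance 's2_a';
    n_per_individual lists the replicates for each individual."""
    k = harmonic_mean(n_per_individual)  # harmonic mean of replicates
    r2_1 = 1 - (full['s2_e'] + full['s2_a']) / (null['s2_e'] + null['s2_a'])
    r2_2 = 1 - (full['s2_e'] / k + full['s2_a']) / (null['s2_e'] / k + null['s2_a'])
    return r2_1, r2_2

# hypothetical variance components from a null and a full model
null = {'s2_e': 2.0, 's2_a': 1.0}
full = {'s2_e': 1.2, 's2_a': 0.8}
print(snijders_bosker_r2(null, full, [4, 4, 5, 5]))
```

Because each measure is one minus a ratio of estimated variances, nothing prevents the ratio from exceeding 1 in a larger model, which is exactly the negative-R2 problem discussed next.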
The first problem arises because the estimated total variance of a model with more predictors can be larger than that of a model with fewer predictors, so that R2_1 and R2_2 can also take negative values (Snijders & Bosker 1994). In other words, the variance estimates from the full model can exceed the corresponding estimates from the intercept-only model. Snijders & Bosker (1999) offer two explanations for decreases in R2 and/or negative R2 in a larger model: (i) chance fluctuation (or sampling variance), which is most prominent when the sample size is small, or (ii) misspecification of the model, when a new predictor is redundant in relation to one or more other predictors in the model. Snijders & Bosker (1999) suggest that decreases in R2_1 and R2_2 (changes in the 'wrong' direction) can be used as a diagnostic in model selection. However, such misspecification need not be the cause of an increase in the estimated variance components (and consequently of decreases in R2_1 and R2_2).
The second problem, extending R2_1 and R2_2 to models with more than two levels, was addressed by Gelman & Pardoe (2006), who provide a solution for any arbitrary number of levels (or random factors) in a Bayesian framework. However, its general implementation is rather difficult, and we therefore refer readers interested in this method to the original publication.
The third problem, generalizing R2_1 and R2_2 to GLMMs, is particularly profound because the residual variance, σ²_ε, cannot be easily defined for non-Gaussian responses (see also below). At first glance, adopting likelihood-based R2 measures such as those in eqns 8–10 could resolve this problem, although such a method only provides R2 at the unit level (i.e. level 1); indeed, this type of solution has been recommended before (Edwards et al. 2008). Unfortunately, there are three obstacles to using a likelihood-based R2 such as R2_D for generalized models: (i) likelihoods cannot be compared when models are fitted by restricted maximum likelihood (REML), the standard way to estimate variance components in LMMs (Pinheiro & Bates 2000), (ii) it is not clear whether we should use the likelihood from a null model such as y_ij = β0 + ε_ij (excluding random factors) or from a null model such as y_ij = β0 + α_j + ε_ij (including random factors; see eqn 10) and (iii) likelihood-based R2 measures applied to LMMs and GLMMs are also subject to the problem of decreased or even negative R2 with the introduction of additional predictors. We are not aware of a solution to this last obstacle, but partial solutions to obstacles (i) and (ii) have been suggested and warrant separate discussion.
The first obstacle, fitting models with REML, only applies to LMMs and can be resolved by using ML estimates instead of REML. However, it is well known that ML estimates of variance components are biased when models are fitted this way (e.g. Pinheiro & Bates 2000).
With respect to the second obstacle, the choice of null model, it seems that both options are permitted and accepted in the literature (e.g. Xu 2003; Orelien & Edwards 2008). Including random factors in the intercept-only model, however, can certainly change the likelihood of the null model that is used as a reference, and thus changes the resulting R2 values. This relates to an important distinction: for mixed-effects models, R2 can be categorized loosely into two types, marginal R2 and conditional R2 (Vonesh, Chinchilli & Pu 1996). Marginal R2 is concerned with the variance explained by fixed factors, whereas conditional R2 is concerned with the variance explained by both fixed and random factors. So far, we have concentrated only on the former, marginal R2, but we expand on the distinction between the two types in the next section.
Although we do not review all proposed definitions of R2 for mixed-effects models here (see Menard 2000; Xu 2003; Orelien & Edwards 2008; Roberts et al. 2011), it appears that all alternative definitions of R2 suffer from one or more of the aforementioned problems, and their implementations may not be straightforward. In the next section, we introduce a definition of R2 that is simple and common to both LMMs and GLMMs and probably less prone to the aforementioned problems than previously proposed definitions.
General and simple R2 for GLMMs
where σ²_f is the variance calculated from the fixed-effect components of the LMM (cf. Snijders & Bosker 1999), so that the marginal R2 is the fixed-effect variance divided by the total variance, σ²_f/(σ²_f + σ²_α + σ²_ε); the 'm' in the parentheses indicates marginal R2 (i.e. variance explained by fixed factors; see below for conditional R2). Estimating σ²_f can, in principle, be carried out by predicting fitted values based on the fixed effects alone (equivalent to multiplying the design matrix of the fixed effects by the vector of fixed-effect estimates) and then calculating the variance of these fitted values (Snijders & Bosker 1999). Note that σ²_f should be estimated without a degrees-of-freedom correction.
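The estimation of σ²_f described above can be sketched as follows; the design matrix and coefficient vector are hypothetical, and in practice one would extract them from the fitted model object (e.g. with `model.matrix` and `fixef` in R):

```python
def var_fixed(X, beta):
    """Variance of the fixed-effect predictions: multiply the
    fixed-effects design matrix X (rows = observations) by the
    coefficient vector beta, then take the variance of the fitted
    values WITHOUT a degrees-of-freedom correction."""
    fitted = [sum(x * b for x, b in zip(row, beta)) for row in X]
    mean = sum(fitted) / len(fitted)
    # population (n-denominator) variance, as recommended in the text
    return sum((f - mean) ** 2 for f in fitted) / len(fitted)

# hypothetical design matrix (intercept + one centred predictor)
X = [[1, -1.0], [1, 0.0], [1, 1.0], [1, 2.0]]
beta = [0.5, 2.0]
print(var_fixed(X, beta))
```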
An obvious advantage of this formulation is that R2_GLMM(m) can never be negative. It is possible for R2_GLMM(m) to decrease with the addition of predictors (remember that the random-effect variance components need not decrease with more predictors), but this is unlikely, because σ²_f should always increase when predictors are added to the model (compare eqns 16 and 26).
As one can see in eqn 30, conditional R2 (R2_GLMM(c)), despite its somewhat confusing name, can be interpreted as the variance explained by the entire model. Both marginal and conditional R2_GLMM convey unique and interesting information, and we recommend that both be presented in publications.
In the case of a Gaussian response and an identity link (as used in LMMs), the link-scale variance and the original-scale variance are the same and the distribution-specific variance σ²_d is zero. Thus, R2_GLMM reduces to R2_LMM in eqns 29 and 30. For other GLMMs, the link-scale variance will differ from the original-scale variance. We here present R2 calculated on the link scale because of its generality: eqns 29 and 30 can be applied to different families of GLMMs, given knowledge of the distribution-specific variance and a model that fits additive overdispersion (e.g. MCMCglmm; Hadfield 2010). Importantly, when the denominators of eqns 29 and 30 include σ²_d (i.e. for GLMMs), neither type of R2_GLMM can ever reach 1, in contrast to traditional R2 (see also Table 1). Table 2 summarizes the specifications for binary/proportion data and count data, which are equivalent to eqns 22–25. The GLMM formulations presented in Table 2 for binomial GLMMs were first presented by Snijders & Bosker (1999), who also show that this approach can be extended to multinomial GLMMs, where the response is categorical with more than two levels (Snijders & Bosker 1999; see also Dean, Nakagawa & Pizzari 2011). However, to our knowledge, equivalent formulas for Poisson GLMMs (i.e. count data) have not been previously described (for a derivation, see Appendix 1).
| | Binary and proportion data | Count data |
| --- | --- | --- |
| Link function | Logit link or probit link | Log link or square-root link |
| Description | Y_ijk is the number of 'successes' in m_ijk trials by the jth individual in the kth group at the ith occasion (for binary data, m_ijk is 1); p_ijk is the underlying (latent) probability of success for the jth individual in the kth group at the ith occasion. | Y_ijk is the observed count for the jth individual in the kth group at the ith occasion; μ_ijk is the underlying (latent) mean for the jth individual in the kth group at the ith occasion. |
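Putting eqns 29 and 30 together, the two R2_GLMM types are simple ratios of variance components on the link scale. The sketch below is ours, with hypothetical variance components; the distribution-specific variance σ²_d is 0 for Gaussian models with an identity link, π²/3 for binomial models with a logit link and 1 for a probit link:

```python
import math

def r2_glmm(var_fixed, var_random, var_resid, var_dist=0.0):
    """Marginal and conditional R2 for (G)LMMs on the link scale.
    var_random is a list of random-effect variance components;
    var_resid is the residual/additive-overdispersion variance;
    var_dist is the distribution-specific variance."""
    total = var_fixed + sum(var_random) + var_resid + var_dist
    r2_marginal = var_fixed / total                       # fixed effects only
    r2_conditional = (var_fixed + sum(var_random)) / total  # fixed + random
    return r2_marginal, r2_conditional

# hypothetical components for a logit-link binomial GLMM
# (additive overdispersion fixed at 1 for binary data; see below)
print(r2_glmm(var_fixed=1.2, var_random=[0.5, 0.3], var_resid=1.0,
              var_dist=math.pi ** 2 / 3))
```

Because σ²_d enters the denominator but never the numerator, both values stay strictly below 1 for non-Gaussian families, as noted in the text.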
As a technical note, we mention that for binary data the additive overdispersion is usually fixed to 1 for computational reasons, as additive dispersion is not identifiable (see Goldstein, Browne & Rasbash 2002). Furthermore, some of the R2 formulae include the intercept β0 (as in the case of Poisson models for count data). In such cases, R2 values will be more easily interpreted when fixed effects are centred or otherwise have meaningful zero values (see Schielzeth 2010; see also Appendix 1). We further note that for Poisson models with a square-root link and a mean of Y_ijk < 5, the given formula is likely to be inaccurate, because the variance of square-root-transformed count data then substantially exceeds 0·25 (Table 2; Nakagawa & Schielzeth 2010).
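Following the Appendix 1 derivation referred to above, the distribution-specific variance for Poisson models can be sketched as follows (the function name is ours; the log-link expression ln(1/exp(β0) + 1) and the square-root-link constant 0·25 are the quantities the text refers to, and β0 is assumed to come from a model with centred predictors or from an intercept-only model):

```python
import math

def poisson_dist_var(beta0, link='log'):
    """Distribution-specific variance (sigma^2_d) for a Poisson GLMM.
    For the log link this is ln(1/exp(beta0) + 1); for the square-root
    link it is approximately 0.25 (reliable only for mean counts >= 5)."""
    if link == 'log':
        return math.log(1.0 / math.exp(beta0) + 1.0)
    if link == 'sqrt':
        return 0.25
    raise ValueError('unsupported link: ' + link)

# hypothetical intercept on the log scale (e.g. from a centred model)
print(poisson_dist_var(1.23))
```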
where Cγ, Cα and Cε are the PCVs at the level of groups, individuals and units (observations), respectively, and the reference variances are the corresponding variance components from the intercept-only model (i.e. eqn 22; PCV for the additive dispersion can be calculated analogously). Proportion change in variance is in fact one of the earliest proposed R2 measures for LMMs (Raudenbush & Bryk 1986; Bryk & Raudenbush 1992), although it can take negative values (Snijders & Bosker 1994). We think, however, that presenting PCV along with R2_GLMM will prove very useful, because PCV monitors changes specific to each variance component, that is, how the inclusion of additional predictor(s) has reduced (or increased) the variance component at each level. For example, if Cγ = 0·12, Cα = −0·05 and Cε = 0·23, the negative estimate of Cα shows that the variance at the individual level has increased after adding predictors. Additionally, we refer the reader to Hössjer (2008), who describes an alternative approach for quantifying variance explained at different levels using variance components from a single model.
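For each variance component, PCV is simply one minus the ratio of the full-model variance to the null-model variance; a short sketch with hypothetical components (chosen to reproduce the signs in the example above):

```python
def pcv(var_null, var_full):
    """Proportion change in variance for one variance component:
    positive when the full model reduces the component, negative
    when the component has increased after adding predictors."""
    return 1.0 - var_full / var_null

# hypothetical group-, individual- and unit-level components
print(pcv(0.88, 0.77), pcv(0.40, 0.42), pcv(1.30, 1.00))
```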
So far, we have only discussed random intercept models (e.g. eqn 22), not random-slope models, in which slopes are fitted for each group (usually along with random intercepts at each level; see Schielzeth & Forstmeier (2009) on the necessity of fitting random-slope models when the main interest is in data-level fixed-effect predictors). Snijders & Bosker (1999) point out that calculating R2 measures such as R2_1 and R2_2 is easy for random intercept models but tedious for random-slope models (as variance components for slopes cannot be easily integrated with other variance components; e.g. Schielzeth & Forstmeier 2009). Snijders & Bosker (1999) also note that R2_1 and R2_2 obtained from random-slope models are usually very similar to those obtained from random intercept models in which the same fixed effects are fitted. Therefore, for random-slope models, we recommend calculating R2_GLMM (both marginal and conditional) from the corresponding random intercept models, although PCV should be calculated from the random-slope models of interest.
We illustrate the calculation of R2_GLMM along with PCV using simulated datasets. Consider a hypothetical species of beetle with the following life cycle: larvae hatch and grow in the soil until they pupate, and adult beetles then feed and mate on plants. The species is a generalist and is widely distributed. We are interested in the effect of extra nutrients during the larval stage on subsequent morphology and reproductive success. Larvae are sampled from 12 different populations ('Population'; see Fig. 1). Within each population, larvae are collected from two different microhabitats ('Habitat'): dry and wet areas, as determined by soil moisture. Larvae are exposed to two different dietary treatments ('Treatment'): nutrient rich and control. The species is sexually dimorphic and can easily be sexed at the pupal stage ('Sex'). Male beetles have two different colour morphs: one dark and the other reddish brown ('Morph', labelled A and B in Fig. 1), and the morphs are supposedly subject to sexual selection. Sexed pupae are housed in standard containers until they mature ('Container'). Each container holds eight same-sex animals from a single population, but with a mix of individuals from the two habitats (N[container] = 120; N[animal] = 960). Three traits are measured after maturation: (i) body length of adult beetles (Gaussian distribution), (ii) frequencies of the two distinct male colour morphs (binomial or Bernoulli distribution) and (iii) the number of eggs laid by each female (Poisson distribution) after random mating (Fig. 1).
Data for this hypothetical example were created in R 2.15.0 (R Development Core Team 2012). We used the function lmer in the R package lme4 (version 0.999375-42; Bates, Maechler & Bolker 2011) for fitting LMMs and GLMMs. We modelled three response variables (see also Table 3): (i) body length with Gaussian errors ('Size models'), (ii) the two male morphs with binomial errors (logit-link function; 'Morph models') and (iii) female egg numbers with Poisson errors (log-link function; 'Fecundity models'). For each dataset, we fitted the null (intercept-only) model and the 'full' model; all models contained 'Population' and 'Container' as random factors, and we included an additive dispersion term (see Table 2) in the Fecundity models. The full models all included 'Treatment' and 'Habitat' as fixed factors; 'Sex' was added as a fixed factor to the Size models. The two kinds of R2_GLMM and the PCVs for the three variance components were calculated as explained above. The results of modelling the three datasets are summarized in Table 3; all datasets and an R script are provided as online supplements (Data S1–S4).
Size models: Gaussian mixed models; Morph models: binary mixed models (logit link); Fecundity models: Poisson mixed models (log link).

| Fixed effects, b [95% CI] | Size (null) | Size (full) | Morph (null) | Morph (full) | Fecundity (null) | Fecundity (full) |
| --- | --- | --- | --- | --- | --- | --- |
| Intercept | 14·08 [13·41, 14·76] | 15·22 [14·53, 15·91] | −0·38 [−0·96, 0·21] | −1·25 [−1·96, −0·54] | 1·54 [1·22, 1·86] | 1·23 [0·91, 1·56] |
| Treatment (experiment) | – | 0·31 [0·18, 0·45] | – | 1·01 [0·60, 1·43] | – | 0·51 [0·41, 0·26] |
| Habitat (wet) | – | 0·09 [−0·05, 0·23] | – | 0·68 [0·27, 1·09] | – | 0·10 [0·001, 0·20] |
| Sex (male) | – | −2·66 [−2·89, −2·45] | – | – | – | – |
| Residuals (additive dispersion) | 1·224 | 1·197 | – | – | 0·171 | 0·100 |
- CI, confidence interval; PCV, proportion change in variance; NA, not applicable/available; AIC, Akaike Information Criterion; BIC, Bayesian information criterion; ML, maximum likelihood; REML, restricted maximum likelihood; VC, variance components.
- For full models, the intercept represents control, dry and female. The 95% CI was estimated by assuming infinitely large degrees of freedom (i.e. t = 1·96). For Size models, AIC and BIC values were calculated using ML, but other parameters were taken from REML estimations (see the text for the reason).
In all three model sets, some variance components in the full models were larger than the corresponding variance components in the null models. In the Morph models, the sum of all random-effect variance components in the full model was greater than the total variance in the null model (see above; Snijders & Bosker 1994). All these patterns result in negative PCV values (see Table 3), while R2_GLMM values never become negative. In the Morph and Fecundity models, marginal R2_GLMM values are relatively minor (8–10%) compared with conditional R2_GLMM values. In the Size models, on the other hand, marginal R2_GLMM was nearly 40%. This was due to a very large effect of 'Sex' in the body size model; in this model, the 'Treatment' and 'Habitat' effects together accounted for only c. 1% of the variance (not shown in Table 3). The variance among containers in the null Size model was conflated with the variance caused by differences between the sexes, as 'Sex' and 'Container' are confounded by the experimental design (single sex in each container; Fig. 1). A part of the variation assigned to 'Container' in the null model was therefore explained by the fixed effect 'Sex' in the full model. Finally, it is important to note that the 'Treatment' and 'Habitat' effects were statistically significant in most cases (five out of six). Much of the variability in the data, however, resided in the random effects, the residuals (additive dispersion) and the distribution-specific variance. Note that the difference between corresponding marginal and conditional R2_GLMM values reflects how much variability is captured by the random effects. We believe that comparing the different variance components, including that of the fixed factors, within as well as between models could give researchers extra insights into their datasets (Merlo et al. 2005a,b). We also note that in some cases, calculating a variance component for each fixed factor separately may prove useful.
Here, we have provided a general measure of R2 that we label R2_GLMM. Both marginal and conditional R2_GLMM can be easily calculated, regardless of the statistical package used to fit the models. While we do not claim that R2_GLMM is a perfect summary statistic, it is less susceptible to the common problems that plague alternative measures of R2. We further believe that R2_GLMM can be used as a quantity of biological interest and hence might be thought of as being estimated from the data rather than merely calculated for a particular dataset. The empirical usefulness of R2_GLMM as an estimator of explained variance should still be tested in future studies. As with every estimator of biological interest, it is desirable to quantify the uncertainty around the estimate (e.g. a 95% confidence interval, which could be approximated by parametric bootstrapping or MCMC sampling). As far as we are aware, such uncertainty estimates have not been considered for traditional R2. Perhaps future studies can also investigate the usefulness of uncertainty estimates for R2_GLMM and other R2 measures.
We finish with a cautionary note that R2 should not replace model assessments such as diagnostic checks for heteroscedasticity, validation of assumptions on the distribution of random effects, and outlier analyses. Above, we presented R2 with the motivation of summarizing the amount of variance explained by a model that is suitable for the specific research questions and datasets; it should only be used on models whose quality has been checked by other means. It is also important to realize that R2 can be large due to predictors that are not of direct interest in a particular study (Tjur 2009), such as the sex effect on body size in our example. Despite these limitations, when used along with other statistics such as AIC and PCV, R2_GLMM will be a useful summary statistic of mixed-effects models for both biologists and other scientists alike.
We thank S. English, C. Grueber, F. Korner-Nievergelt, E. Santos, A. Senior and T. Uller for comments on earlier versions and M. Lagisz for help in preparing Fig. 1. We are also grateful to the Editor R. O'Hara and two anonymous referees, whose comments improved this paper. T. Snijders provided guidance on how to calculate variance for fixed effects. H.S. was supported by an Emmy-Noether fellowship of the German Research Foundation (SCHI 1188/1-1).
Derivation of the distribution-specific variance (σ²_d) for Poisson distributions
Simulations (unpublished data, the authors) show that as E(x) approaches 0, this approximation becomes unreliable. Also, exp(β0) should be obtained either from a model with centred or scaled variables (sensu Schielzeth 2010) or from an intercept-only model that includes all random effects. Note that the former approach may be of limited use when a model includes categorical variables.
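The related caution about the square-root link (see the main text) is easy to check by simulation: for Poisson means well below 5, the variance of the square-root-transformed counts clearly exceeds 0·25. A quick Monte Carlo sketch (ours, with arbitrary seed and sample size):

```python
import math
import random
from statistics import pvariance

random.seed(42)

def var_sqrt_poisson(lam, n=20_000):
    """Monte Carlo estimate of Var(sqrt(X)) for X ~ Poisson(lam),
    drawn with Knuth's multiplication algorithm; the result should be
    near 0.25 for lam >= 5 but noticeably larger for small means."""
    def draw():
        l, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= l:
                return k
            k += 1
    return pvariance([math.sqrt(draw()) for _ in range(n)])

print(var_sqrt_poisson(1.0), var_sqrt_poisson(10.0))
```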
- Data S1. Data file for Size models (BeetlesBody.csv).
- Data S2. Data file for Morph models (BeetlesMale.csv).
- Data S3. Data file for Fecundity models (BeetlesFemale.csv).
- Data S4. R code for examples (R_code_examples.R).
- Akaike (1973) Information theory as an extension of the maximum likelihood principle. 2nd International Symposium on Information Theory (eds B.N. Petrov & F. Csaki), pp. 267–281. Akademiai Kiado, Budapest.
- Bates, Maechler & Bolker (2011) lme4: linear mixed-effects models. R package, version 0.999375-42. http://CRAN.R-project.org/package=lme4.
- Bolker et al. (2009) Generalized linear mixed models: a practical guide for ecology and evolution. Trends in Ecology & Evolution, 24, 127–135.
- Bryk & Raudenbush (1992) Hierarchical Linear Models. Sage, Newbury Park, CA.
- Burnham & Anderson (2002) Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, 2nd edn. Springer-Verlag, Berlin.
- Cameron & Windmeijer (1996) R-squared measures for count data regression models with applications to health-care utilization. Journal of Business & Economic Statistics, 14, 209–220.
- Cameron & Windmeijer (1997) An R-squared measure of goodness of fit for some common nonlinear regression models. Journal of Econometrics, 77, 329–342.
- Claeskens & Hjort (2009) Model Selection and Model Averaging. Cambridge University Press, Cambridge.
- Congdon (2010) Applied Bayesian Hierarchical Methods. CRC, Boca Raton, FL.
- Dean, Nakagawa & Pizzari (2011) The risk and intensity of sperm ejection in female birds. American Naturalist, 178, 343–354.
- Draper & Smith (1998) Applied Regression Analysis, 3rd edn. Wiley, New York.
- Edwards et al. (2008) An R2 statistic for fixed effects in the linear mixed model. Statistics in Medicine, 27, 6137–6157.
- Gelman & Hill (2007) Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, Cambridge.
- Gelman & Pardoe (2006) Bayesian measures of explained variance and pooling in multilevel (hierarchical) models. Technometrics, 48, 241–251.
- Goldstein, Browne & Rasbash (2002) Partitioning variation in multilevel models. Understanding Statistics, 1, 223–231.
- Grueber et al. (2011) Multimodel inference in ecology and evolution: challenges and solutions. Journal of Evolutionary Biology, 24, 699–711.
- Hadfield (2010) MCMC methods for multi-response generalised linear mixed models: the MCMCglmm R package. Journal of Statistical Software, 33, 1–22.
- Hamaker et al. (2011) Model selection based on information criteria in multilevel modeling. Handbook of Advanced Multilevel Analysis (eds J. Hox & J.K. Roberts), pp. 231–255. Routledge, New York.
- Hössjer (2008) On the coefficient of determination for mixed regression models. Journal of Statistical Planning and Inference, 138, 3022–3038.
- Kvålseth (1985) Cautionary note about R2. American Statistician, 39, 279–285.
- Liu, Zheng & Shen (2008) Goodness-of-fit measures of R2 for repeated measures mixed effect models. Journal of Applied Statistics, 35, 1081–1092.
- Maddala (1983) Limited-Dependent and Qualitative Variables in Econometrics. Cambridge University Press, Cambridge.
- Menard (2000) Coefficients of determination for multiple logistic regression analysis. American Statistician, 54, 17–24.
- Merlo et al. (2005a) A brief conceptual tutorial on multilevel analysis in social epidemiology: interpreting neighbourhood differences and the effect of neighbourhood characteristics on individual health. Journal of Epidemiology and Community Health, 59, 1022–1028.
- Merlo et al. (2005b) A brief conceptual tutorial on multilevel analysis in social epidemiology: investigating contextual phenomena in different groups of people. Journal of Epidemiology and Community Health, 59, 729–736.
- Nagelkerke (1991) A note on a general definition of the coefficient of determination. Biometrika, 78, 691–692.
- Nakagawa & Cuthill (2007) Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews, 82, 591–605.
- Nakagawa & Schielzeth (2010) Repeatability for Gaussian and non-Gaussian data: a practical guide for biologists. Biological Reviews, 85, 935–956.
- Orelien & Edwards (2008) Fixed-effect variable selection in linear mixed models using R2 statistics. Computational Statistics & Data Analysis, 52, 1896–1907.
- Pinheiro & Bates (2000) Mixed-effects Models in S and S-Plus. Springer, New York.
- R Development Core Team (2012) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
- Raudenbush & Bryk (1986) A hierarchical model for studying school effects. Sociology of Education, 59, 1–17.
- Roberts et al. (2011) Explained variance in multilevel models. Handbook of Advanced Multilevel Analysis (eds J. Hox & J.K. Roberts), pp. 219–230. Routledge, New York.
- Schielzeth (2010) Simple means to improve the interpretability of regression coefficients. Methods in Ecology and Evolution, 1, 103–113.
- Schielzeth & Forstmeier (2009) Conclusions beyond support: overconfident estimates in mixed models. Behavioral Ecology, 20, 416–420.
- Schielzeth & Nakagawa (2012) Nested by design: model fitting and interpretation in a mixed model era. Methods in Ecology and Evolution, doi: 10.1111/j.2041-210x.2012.00251.x.
- Schwarz (1978) Estimating the dimension of a model. Annals of Statistics, 6, 461–464.
- Snijders & Bosker (1994) Modeled variance in two-level models. Sociological Methods & Research, 22, 342–363.
- Snijders & Bosker (1999) Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling. Sage, London.
- Snijders & Bosker (2011) Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling, 2nd edn. Sage, London.
- Spiegelhalter et al. (2002) Bayesian measures of model complexity and fit. Journal of the Royal Statistical Society, Series B, 64, 583–616.
- Tjur (2009) Coefficients of determination in logistic regression models – a new proposal: the coefficient of discrimination. American Statistician, 63, 366–372.
- Vonesh, Chinchilli & Pu (1996) Goodness-of-fit in generalized nonlinear mixed-effects models. Biometrics, 52, 572–587.
- Xu (2003) Measuring explained variation in linear mixed effects models. Statistics in Medicine, 22, 3527–3541.