A general and simple method for obtaining R^{2} from generalized linear mixed-effects models
Summary

The use of both linear and generalized linear mixed-effects models (LMMs and GLMMs) has become popular not only in social and medical sciences, but also in biological sciences, especially in the field of ecology and evolution. Information criteria, such as the Akaike Information Criterion (AIC), are usually presented as model comparison tools for mixed-effects models.

The presentation of ‘variance explained’ (R^{2}) as a relevant summary statistic of mixed-effects models, however, is rare, even though R^{2} is routinely reported for linear models (LMs) and generalized linear models (GLMs). R^{2} has the extremely useful property of providing an absolute value for the goodness-of-fit of a model, which information criteria cannot provide. As a summary statistic describing the amount of variance explained, R^{2} can also be a quantity of biological interest.

One reason for the under-appreciation of R^{2} for mixed-effects models lies in the fact that R^{2} can be defined in a number of ways. Furthermore, most definitions of R^{2} for mixed-effects models have theoretical problems (e.g. decreased or negative R^{2} values in larger models) and/or their use is hindered by practical difficulties (e.g. implementation).

Here, we make a case for the importance of reporting R^{2} for mixed-effects models. We first provide the common definitions of R^{2} for LMs and GLMs and discuss the key problems associated with calculating R^{2} for mixed-effects models. We then recommend a general and simple method for calculating two types of R^{2} (marginal and conditional R^{2}) for both LMMs and GLMMs, which are less susceptible to common problems.

This method is illustrated by examples and can be employed by researchers in any field of research, regardless of the software package used for fitting mixed-effects models. The proposed method has the potential to facilitate the presentation of R^{2} for a wide range of circumstances.
Introduction
Many biological datasets have multiple strata due to the hierarchical nature of the biological world, for example, cells within individuals, individuals within populations, populations within species and species within communities. Therefore, we need statistical methods that explicitly model the hierarchical structure of real data. Linear mixed-effects models (LMMs; also referred to as multilevel/hierarchical models) and their extension, generalized linear mixed-effects models (GLMMs), form a class of models that incorporate multilevel hierarchies in data. Indeed, LMMs and GLMMs are becoming part of the standard methodological tool kit in biological sciences (Bolker et al. 2009), as well as in social and medical sciences (Gelman & Hill 2007; Congdon 2010; Snijders & Bosker 2011). The widespread use of GLMMs suggests that a statistic summarizing the goodness-of-fit of a mixed-effects model to the data would be of great value, yet no such summary statistic is currently widely accepted for mixed-effects models.
Many scientists have traditionally used the coefficient of determination, R^{2} (ranging from 0 to 1), as a summary statistic to quantify the goodness-of-fit of fixed-effects models such as multiple linear regressions, ANOVA, ANCOVA and generalized linear models (GLMs). The concept of R^{2} as ‘variance explained’ is intuitive. Because R^{2} is unitless, it is extremely useful as a summary index for statistical models: one can objectively evaluate the fit of models and, under some circumstances (e.g. models with the same response and a similar set of predictors), compare R^{2} values across studies in a similar manner to standardized effect size statistics; in other words, R^{2} can be utilized for meta-analysis (Nakagawa & Cuthill 2007).
In Table 1, we briefly summarize 12 properties of R^{2} (based on Kvålseth 1985 and Cameron & Windmeijer 1996; compilation adapted from Orelien & Edwards 2008) that will provide the reader with a good sense of what a ‘traditional’ R^{2} statistic should be and also provide a benchmark for generalizing R^{2} to mixed-effects models. Generalizing R^{2} from linear models (LMs) to LMMs and GLMMs turns out to be a difficult task. A number of ways of obtaining R^{2} for mixed models have been proposed (e.g. Snijders & Bosker 1994; Xu 2003; Liu, Zheng & Shen 2008; Orelien & Edwards 2008). These proposed methods, however, share some theoretical problems or practical difficulties (discussed in detail below), and consequently, no consensus for a definition of R^{2} for mixed-effects models has emerged in the statistical literature. Therefore, it is not surprising that R^{2} is rarely reported as a model summary statistic when mixed models are used.
Table 1. Twelve properties of a ‘traditional’ R^{2} statistic

| Property | Reference |
| --- | --- |
| R^{2} must represent a goodness-of-fit and have an intuitive interpretation | Kvålseth (1985) |
| R^{2} must be unit free, that is, dimensionless | Kvålseth (1985) |
| R^{2} should range from 0 to 1, where 1 represents a perfect fit | Kvålseth (1985) |
| R^{2} should be general enough to apply to any type of statistical model | Kvålseth (1985) |
| R^{2} values should not be affected by different model fitting techniques | Kvålseth (1985) |
| R^{2} values from different models fitted to the same data should be directly comparable | Kvålseth (1985) |
| Relative R^{2} values should be comparable to other accepted goodness-of-fit measures | Kvålseth (1985) |
| All residuals (positive and negative) should be weighted equally by R^{2} | Kvålseth (1985) |
| R^{2} values should always increase as more predictors are added (without a degrees-of-freedom correction) | Cameron & Windmeijer (1996) |
| R^{2} values based on residual sums of squares and those based on explained sums of squares should match | Cameron & Windmeijer (1996) |
| R^{2} values and the statistical significance of slope parameters should show correspondence | Cameron & Windmeijer (1996) |
| R^{2} should be interpretable in terms of the information content of the data | Cameron & Windmeijer (1996) |
In the absence of R^{2}, information criteria are often used and reported as comparison tools for mixed models. Information criteria are based on the likelihood of the data given a fitted model (the ‘likelihood’), penalized by the number of estimated parameters of the model. Commonly used information criteria include the Akaike Information Criterion (AIC; Akaike 1973), the Bayesian information criterion (BIC; Schwarz 1978) and the more recently proposed deviance information criterion (DIC; Spiegelhalter et al. 2002; reviewed in Claeskens & Hjort 2009; Grueber et al. 2011; Hamaker et al. 2011). Information criteria are used to select the ‘best’ or ‘better’ models, and they are indeed useful for selecting the most parsimonious models from a candidate model set (Burnham & Anderson 2002). There are, however, at least three important limitations to the use of information criteria in relation to R^{2}: (i) while information criteria provide an estimate of the relative fit of alternative models, they do not tell us anything about the absolute model fit (cf. evidence ratio; Burnham & Anderson 2002), (ii) information criteria do not provide any information on the variance explained by a model (Orelien & Edwards 2008), and (iii) information criteria are not comparable across different datasets, because they are highly dataset specific (in other words, they are not standardized effect statistics that can be used for meta-analysis; Nakagawa & Cuthill 2007).
In this paper, we start by providing the most common definitions of R^{2} in LMs and GLMs. We then review previously proposed definitions of R^{2} measures for mixed-effects models and discuss the problems and difficulties associated with these measures. Finally, we explain a general and simple method for calculating the variance explained by LMMs and GLMMs and illustrate its use with simulated ecological datasets.
Definitions of R^{2}
We have deliberately left −2 in the denominator and numerator so that R^{2}_{D} (‘D’ signifies ‘deviance’) can be compared with Equation eqn 3. For a LM (Equation eqn 1), the −2 log-likelihood statistic (sometimes referred to as the deviance) is equal to the residual sum of squares based on ordinary least squares (OLS) estimation of this model (Menard 2000; see a series of formulas for non-Gaussian responses in Table 1 of Cameron & Windmeijer 1997). There are several other likelihood-based definitions of R^{2} (reviewed in Cameron & Windmeijer 1997; Menard 2000), but we do not review these definitions here, as they are less relevant to our approach below. We will instead discuss the generalization of R^{2} to LMMs and GLMMs, and the problems associated with this process, in the next section.
Common problems when generalizing R^{2}
where y_{ij} is the ith response of the jth individual, x_{hij} is the ith value of the jth individual for the hth predictor, β_{0} is the intercept, β_{h} is the slope (regression coefficient) of the hth predictor, α_{j} is the individual-specific effect, drawn from a normal distribution of individual-specific effects with a mean of zero and a variance of σ^{2}_{α} (the between-individual variance), and ε_{ij} is the residual associated with the ith value of the jth individual, drawn from a normal distribution of residuals with a mean of zero and a variance of σ^{2}_{ε} (the within-individual variance). As seen in the previous equations, LMMs by definition have more than one variance component (in this case two: σ^{2}_{α} and σ^{2}_{ε}), while LMs have only one (Equations eqn 1 and eqn 2).
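Written out, the random-intercept LMM that these terms describe is (with p fixed-effect predictors):

```latex
y_{ij} = \beta_{0} + \sum_{h=1}^{p} \beta_{h} x_{hij} + \alpha_{j} + \varepsilon_{ij},
\qquad
\alpha_{j} \sim N\!\left(0, \sigma^{2}_{\alpha}\right), \qquad
\varepsilon_{ij} \sim N\!\left(0, \sigma^{2}_{\varepsilon}\right)
```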
One of the earliest definitions of R^{2} for mixed-effects models is based on the reduction of each variance component when fixed-effect predictors are included; in other words, a separate R^{2} for each random effect and for the residual variance (Raudenbush & Bryk 1986; Bryk & Raudenbush 1992; we detail this measure in the section ‘Related issues’). This approach is analogous to Equation eqn 7. As pointed out by Snijders & Bosker (1994), however, it is not uncommon that some predictors reduce one variance component (e.g. σ^{2}_{ε}) while simultaneously increasing another (e.g. σ^{2}_{α}), and vice versa, even though the total sum of the variance components is usually reduced (for an example, see Table 1 in Snijders & Bosker 1994). Such behaviour of variance components can sometimes result in negative R^{2} values, because the fitted σ^{2}_{α} and σ^{2}_{ε} can be larger than σ^{2}_{α0} and σ^{2}_{ε0}, respectively (i.e. the corresponding variance components in the intercept-only model).
where R^{2}_{2} is the variance explained at the individual level (i.e. level 2; between-individual variance explained), ȳ_{j} is the mean observed value for the jth individual, ŷ_{j} is the fitted value for the jth individual, k is the harmonic mean of the number of replicates per individual, m_{j} is the number of replicates for the jth individual, M is the total number of individuals, and other notations are as above. An advantage of using R^{2}_{1} and R^{2}_{2} is that we can evaluate how much variance is explained at each level of the analysis. However, there are at least three problems with this approach: (i) it turns out that R^{2}_{1} and R^{2}_{2} can decrease in larger models (note that R^{2} for LMs can only increase when more predictors are added without a degrees-of-freedom adjustment; see Table 1), (ii) it is not clear how R^{2}_{1} and R^{2}_{2} can be extended to more than two levels (i.e. more than one random factor) and (iii) it is also not obvious how R^{2}_{1} and R^{2}_{2} are to be generalized to GLMMs.
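For reference, the two Snijders & Bosker (1994) statistics take the following form in the notation above (subscript 0 marks variance components from the intercept-only model; this is our restatement of their definitions):

```latex
R^{2}_{1} = 1 - \frac{\hat{\sigma}^{2}_{\varepsilon} + \hat{\sigma}^{2}_{\alpha}}
                     {\hat{\sigma}^{2}_{\varepsilon 0} + \hat{\sigma}^{2}_{\alpha 0}},
\qquad
R^{2}_{2} = 1 - \frac{\hat{\sigma}^{2}_{\varepsilon}/k + \hat{\sigma}^{2}_{\alpha}}
                     {\hat{\sigma}^{2}_{\varepsilon 0}/k + \hat{\sigma}^{2}_{\alpha 0}}
```

where k is the harmonic mean number of replicates per individual, as defined above.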
The first problem arises because the estimated total variance (σ^{2}_{α} + σ^{2}_{ε}) of a model with more predictors can be larger than that of a model with fewer predictors, so R^{2}_{1} and R^{2}_{2} can also take negative values (Snijders & Bosker 1994). In other words, the estimate of σ^{2}_{α} + σ^{2}_{ε} can be larger than that of the intercept-only model (σ^{2}_{α0} + σ^{2}_{ε0}). Snijders & Bosker (1999) offer two explanations for decreases in R^{2} and/or negative R^{2} in a larger model: (i) chance fluctuation (or sampling variance), which is most prominent when the sample size is small, or (ii) misspecification of the model, when the new predictor is redundant in relation to one or more other predictors in the model. Snijders & Bosker (1999) suggest that decreases in R^{2}_{1} and R^{2}_{2} (changes in the ‘wrong’ direction) can be used as a diagnostic in model selection. However, such misspecification need not be the cause of an increase in (σ^{2}_{α} + σ^{2}_{ε}) (and consequently of decreases in R^{2}_{1} and R^{2}_{2}).
The second problem, extending R^{2}_{1} and R^{2}_{2} to models with more than two levels, was addressed by Gelman & Pardoe (2006), who provide a solution for extending them to an arbitrary number of levels (or random factors) in a Bayesian framework. However, its general implementation is rather difficult, and we therefore refer readers interested in this method to the original publication.
The third problem, generalizing R^{2}_{1} and R^{2}_{2} to GLMMs, is particularly profound because the residual variance, σ^{2}_{ε}, cannot be easily defined for non-Gaussian responses (see also below). At first glance, adopting likelihood-based R^{2} measures such as those in Equations eqn 8–eqn 10 could resolve this problem, although such a method only provides R^{2} at the unit level (i.e. level 1); indeed, this type of solution has been recommended before (Edwards et al. 2008). Unfortunately, there are three obstacles to using a likelihood-based R^{2} for generalized models: (i) the likelihoods cannot be compared when models are fitted by restricted maximum likelihood (REML) (the standard way to estimate variance components in LMMs; Pinheiro & Bates 2000), (ii) it is not clear whether we should use the likelihood from a null model such as y_{ij} = β_{0} + ε_{ij} (excluding random factors) or from a null model such as y_{ij} = β_{0} + α_{j} + ε_{ij} (including random factors; see Equation eqn 10) and (iii) likelihood-based R^{2} measures applied to LMMs and GLMMs are also subject to the problem of decreased or even negative R^{2} with the introduction of additional predictors. We are not aware of a solution to this last obstacle, but partial solutions to obstacles (i) and (ii) have been suggested and need separate discussion.
The first obstacle of fitting models with REML only applies to LMMs, and this can be resolved by using the ML estimates instead of REML. However, it is well known that variance components will be biased when models are fitted by ML (e.g. Pinheiro & Bates 2000).
With respect to the second obstacle, the choice of null model, it seems that both options are permitted and accepted in the literature (e.g. Xu 2003; Orelien & Edwards 2008). Inclusion of random factors in the intercept-only model, however, can certainly change the likelihood of the null model that is used as a reference, and thus it changes R^{2} values. This relates to an important matter: for mixed-effects models, R^{2} can be categorized loosely into two types, marginal R^{2} and conditional R^{2} (Vonesh, Chinchilli & Pu 1996). Marginal R^{2} is concerned with the variance explained by fixed factors, while conditional R^{2} is concerned with the variance explained by both fixed and random factors. So far, we have concentrated only on the former, marginal R^{2}, but we will expand on the distinction between the two types in the next section.
Although we do not review all proposed definitions of R^{2} for mixed-effects models here (see Menard 2000; Xu 2003; Orelien & Edwards 2008; Roberts et al. 2011), it appears that all alternative definitions suffer from one or more of the aforementioned problems, and their implementations may not be straightforward. In the next section, we introduce a definition of R^{2} that is simple, common to both LMMs and GLMMs, and probably less prone to these problems than previously proposed definitions.
General and simple R^{2} for GLMMs
where σ^{2}_{f} is the variance calculated from the fixed-effect components of the LMM (cf. Snijders & Bosker 1999) and the ‘m’ in the parentheses indicates marginal R^{2} (i.e. variance explained by fixed factors; see below for conditional R^{2}). Estimating σ^{2}_{f} can, in principle, be carried out by predicting fitted values based on the fixed effects alone (equivalent to multiplying the design matrix of the fixed effects by the vector of fixed-effect estimates) and then calculating the variance of these fitted values (Snijders & Bosker 1999). Note that σ^{2}_{f} should be estimated without a degrees-of-freedom correction.
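The two steps (predict from the fixed effects alone, then take the variance of those predictions without degrees-of-freedom correction) can be sketched as follows. This is an illustrative Python translation of the procedure, not the authors' R code, and the function and variable names are ours:

```python
import numpy as np

def r2_lmm(X, beta, var_random, var_residual):
    """Marginal and conditional R2 for a Gaussian mixed model.

    X            -- fixed-effects design matrix (n x p)
    beta         -- fixed-effect estimates (length p)
    var_random   -- list of random-effect variance components
    var_residual -- residual variance
    """
    # variance of the fitted values from the fixed effects alone,
    # without degrees-of-freedom correction (ddof=0)
    var_fixed = np.var(np.asarray(X) @ np.asarray(beta), ddof=0)
    total = var_fixed + sum(var_random) + var_residual
    r2_marginal = var_fixed / total                          # fixed effects only
    r2_conditional = (var_fixed + sum(var_random)) / total   # fixed + random
    return r2_marginal, r2_conditional

# toy example: intercept plus one covariate;
# fitted values are 1, 3, 5, 7, so var_fixed = 5
X = [[1, 0], [1, 1], [1, 2], [1, 3]]
beta = [1.0, 2.0]
r2m, r2c = r2_lmm(X, beta, var_random=[2.0], var_residual=3.0)
# r2m = 5/10 = 0.5; r2c = 7/10 = 0.7
```

In practice, the fitted values would come from the fitted model object (e.g. the design matrix and fixed-effect estimates extracted from an lmer fit), not from hand-coded matrices as here.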
An obvious advantage of this formulation is that R^{2}_{GLMM(m)} can never be negative. It is possible for R^{2}_{GLMM(m)} to decrease with the addition of predictors (remember that R^{2} for LMs can never decrease as more predictors are added), but this is unlikely, because σ^{2}_{f} should always increase when predictors are added to the model (compare Equations eqn 16 and eqn 26).
As one can see in Equation eqn 30, conditional R^{2} (R^{2}_{GLMM(c)}), despite its somewhat confusing name, can be interpreted as the variance explained by the entire model. Both marginal and conditional R^{2}_{GLMM} convey unique and interesting information, and we recommend that both be presented in publications.
In the case of a Gaussian response and an identity link (as used in LMMs), the link-scale variance and the original-scale variance are the same, and the distribution-specific variance σ^{2}_{d} is zero. Thus, for LMMs, Equations eqn 29 and eqn 30 reduce to their Gaussian counterparts. For other GLMMs, the link-scale variance will differ from the original-scale variance. We here present R^{2} calculated on the link scale because of its generality: Equations eqn 29 and eqn 30 can be applied to different families of GLMMs, given knowledge of the distribution-specific variance and a model that fits additive overdispersion (e.g. MCMCglmm; Hadfield 2010). Importantly, when the denominators of Equations eqn 29 and eqn 30 include σ^{2}_{d} (i.e. for GLMMs), neither type of R^{2}_{GLMM} can ever reach 1, in contrast to traditional R^{2} (see also Table 1). Table 2 summarizes the specifications for binary/proportion data and count data, which are equivalent to Equations eqn 22–eqn 25. The formulations presented in Table 2 for binomial GLMMs were first presented by Snijders & Bosker (1999), who also show that this approach can be extended to multinomial GLMMs, where the response is categorical with more than two levels (Snijders & Bosker 1999; see also Dean, Nakagawa & Pizzari 2011). However, to our knowledge, equivalent formulas for Poisson GLMMs (i.e. count data) have not been previously described (for derivation, see Appendix 1).
Table 2. Link functions, distribution-specific variances and R^{2}_{GLMM} specifications for binary/proportion data and count data

| | Binary and proportion data | Binary and proportion data | Count data | Count data |
| --- | --- | --- | --- | --- |
| Link function | Logit | Probit | Log | Square-root |
| Distribution-specific variance (σ^{2}_{d}) | π^{2}/3 | 1 | ln(1/exp(β_{0}) + 1) | 0·25 |
| Model specification | Y_{ijk} ~ Binomial(m_{ijk}, p_{ijk}); logit(p_{ijk}) = β_{0} + Σ_{h} β_{h}x_{hijk} + γ_{k} + α_{jk} + ε_{ijk} | as for the logit link, with probit(p_{ijk}) as the linear predictor | Y_{ijk} ~ Poisson(μ_{ijk}); ln(μ_{ijk}) = β_{0} + Σ_{h} β_{h}x_{hijk} + γ_{k} + α_{jk} + ε_{ijk} | as for the log link, with √μ_{ijk} as the linear predictor |
| Marginal R^{2}_{GLMM(m)} | σ^{2}_{f}/(σ^{2}_{f} + σ^{2}_{γ} + σ^{2}_{α} + σ^{2}_{ε} + σ^{2}_{d}) | (same form) | (same form) | (same form) |
| Conditional R^{2}_{GLMM(c)} | (σ^{2}_{f} + σ^{2}_{γ} + σ^{2}_{α})/(σ^{2}_{f} + σ^{2}_{γ} + σ^{2}_{α} + σ^{2}_{ε} + σ^{2}_{d}) | (same form) | (same form) | (same form) |

Description: for binary and proportion data, Y_{ijk} is the number of ‘successes’ in m_{ijk} trials by the jth individual in the kth group at the ith occasion (for binary data, m_{ijk} is 1), and p_{ijk} is the underlying (latent) probability of success (for binary data, the additive dispersion σ^{2}_{ε} is 0). For count data, Y_{ijk} is the observed count for the jth individual in the kth group at the ith occasion, and μ_{ijk} is the underlying (latent) mean. γ_{k}, α_{jk} and ε_{ijk} are the group, individual and additive-dispersion effects, with variances σ^{2}_{γ}, σ^{2}_{α} and σ^{2}_{ε}; σ^{2}_{f} is the fixed-effects variance, and σ^{2}_{d} takes its link-specific value given above.
As a technical note, we mention that for binary data the additive overdispersion variance is usually fixed to 1 for computational reasons, as additive dispersion is not identifiable (see Goldstein, Browne & Rasbash 2002). Furthermore, some of the R^{2} formulae include the intercept β_{0} (as in the case of Poisson models for count data). In such cases, R^{2} values will be more easily interpreted when fixed effects are centred or otherwise have meaningful zero values (see Schielzeth 2010; see also Appendix 1). We further note that for Poisson models with a square-root link and a mean of Y_{ijk} < 5, the given formula is likely to be inaccurate, because the variance of square-root transformed count data then substantially exceeds 0·25 (Table 2; Nakagawa & Schielzeth 2010).
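The link-scale quantities in Table 2 are simple to compute once the variance components are in hand. The sketch below (Python, our own illustration; the function names are ours) encodes the distribution-specific variances and the resulting R^{2}_{GLMM}:

```python
import math

def distribution_variance(link, beta0=None):
    """Distribution-specific variance on the link scale (cf. Table 2)."""
    if link == "logit":
        return math.pi ** 2 / 3   # variance of the standard logistic distribution
    if link == "probit":
        return 1.0                # variance of the standard normal distribution
    if link == "log":
        # Poisson with log link: depends on the intercept; beta0 should come
        # from a model with centred predictors (see the technical note above)
        return math.log(1.0 / math.exp(beta0) + 1.0)
    if link == "sqrt":
        return 0.25               # inaccurate when the mean count is below 5
    raise ValueError(f"unknown link: {link}")

def r2_glmm(var_fixed, var_random, var_additive, link, beta0=None):
    """Marginal and conditional R2_GLMM on the link scale."""
    var_d = distribution_variance(link, beta0)
    total = var_fixed + sum(var_random) + var_additive + var_d
    return var_fixed / total, (var_fixed + sum(var_random)) / total
```

Because σ^{2}_{d} is strictly positive for these links, both returned values stay below 1, exactly as noted above for GLMMs.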
Related issues
where C_{γ}, C_{α} and C_{ɛ} are the PCV at the level of groups, individuals and units (observations), respectively, and σ^{2}_{γ0}, σ^{2}_{α0} and σ^{2}_{ε0} are the variance components from the intercept-only model (i.e. Equation eqn 22); PCV for additive dispersion can also be calculated by replacing each variance component with its additive-dispersion counterpart. Proportion change in variance is in fact one of the earliest proposed R^{2} measures for LMMs (Raudenbush & Bryk 1986; Bryk & Raudenbush 1992), although it can take negative values (Snijders & Bosker 1994). We think, however, that presenting PCV along with R^{2}_{GLMM} will turn out to be very useful, because PCV monitors changes specific to each variance component, that is, how the inclusion of additional predictor(s) has reduced (or increased) the variance component at each level. For example, if C_{γ} = 0·12, C_{α} = −0·05 and C_{ɛ} = 0·23, the negative estimate shows that the variance at the individual level has increased (i.e. σ^{2}_{α} > σ^{2}_{α0}). Additionally, we refer the reader to Hössjer (2008), who describes an alternative approach for quantifying variance explained at different levels using variance components from a single model.
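In this notation, the PCV statistics compare each variance component of the full model with its counterpart in the intercept-only model (subscript 0):

```latex
C_{\gamma} = 1 - \frac{\sigma^{2}_{\gamma}}{\sigma^{2}_{\gamma 0}}, \qquad
C_{\alpha} = 1 - \frac{\sigma^{2}_{\alpha}}{\sigma^{2}_{\alpha 0}}, \qquad
C_{\varepsilon} = 1 - \frac{\sigma^{2}_{\varepsilon}}{\sigma^{2}_{\varepsilon 0}}
```

With this form, C_{α} = −0·05 in the example above corresponds to σ^{2}_{α} being 5% larger than σ^{2}_{α0}.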
So far, we have only discussed random-intercept models (e.g. Equation eqn 22), not random-slope models, in which slopes are fitted for each group (usually along with random intercepts at each level; see Schielzeth & Forstmeier (2009), who highlight the necessity of fitting random-slope models when the main interest is in data-level fixed-effect predictors). Snijders & Bosker (1999) point out that calculating R^{2} statistics such as R^{2}_{1} and R^{2}_{2} is easy for random-intercept models but tedious for random-slope models (as variance components for slopes cannot be easily integrated with other variance components; e.g. Schielzeth & Forstmeier 2009). Snijders & Bosker (1999) mention that R^{2}_{1} and R^{2}_{2} obtained from random-slope models are usually very similar to those obtained from random-intercept models in which the same fixed effects are fitted. Therefore, for random-slope models, we recommend calculating R^{2}_{GLMM} (both marginal and conditional) from the corresponding random-intercept models, although PCV should be calculated for the random-slope models of interest.
Worked examples
We illustrate the calculation of R^{2}_{GLMM} along with PCV using simulated datasets. Consider a hypothetical species of beetle with the following life cycle: larvae hatch and grow in the soil until they pupate, and adult beetles then feed and mate on plants. It is a generalist species and is therefore widely distributed. We are interested in the effect of extra nutrients during the larval stage on subsequent morphology and reproductive success. Larvae are sampled from 12 different populations (‘Population’; see Fig. 1). Within each population, larvae are collected at two different microhabitats (‘Habitat’): dry and wet areas, as determined by soil moisture. Larvae are exposed to two different dietary treatments (‘Treatment’): nutrient rich and control. The species is sexually dimorphic and can be easily sexed at the pupal stage (‘Sex’). Male beetles have two colour morphs: one dark and the other reddish brown (‘Morph’, labelled A and B in Fig. 1); the morphs are supposedly subject to sexual selection. Sexed pupae are housed in standard containers until they mature (‘Container’). Each container holds eight same-sex animals from a single population, but with a mix of individuals from the two habitats (N_{[container]} = 120; N_{[animal]} = 960). Three traits are measured after maturation: (i) body length of adult beetles (Gaussian distribution), (ii) frequencies of the two distinct male colour morphs (binomial or Bernoulli distribution) and (iii) the number of eggs laid by each female (Poisson distribution) after random mating (Fig. 1).
Data for this hypothetical example were generated in R 2.15.0 (R Development Core Team 2012). We used the function lmer in the R package lme4 (version 0.999375-42; Bates, Maechler & Bolker 2011) for fitting LMMs and GLMMs. We modelled three response variables (see also Table 3): (i) body length with Gaussian errors (‘Size models’), (ii) the two male morphs with binomial errors (logit-link function; ‘Morph models’) and (iii) female egg numbers with Poisson errors (log-link function; ‘Fecundity models’). For each dataset, we fitted the null (intercept-only) model and the ‘full’ model; all models contained ‘Population’ and ‘Container’ as random factors, and we included an additive dispersion term (see Table 2) in the Fecundity models. The full models all included ‘Treatment’ and ‘Habitat’ as fixed factors; ‘Sex’ was added as a fixed factor to the body size model. The two kinds of R^{2}_{GLMM} and the PCV for the three variance components were calculated as explained above. The results of modelling the three datasets are summarized in Table 3; all datasets and an R script are provided as online supplements (Data S1–S4).
Table 3. Results of the null and full models fitted to the three simulated datasets

| | Size models (Gaussian), null | Size models (Gaussian), full | Morph models (binary, logit link), null | Morph models (binary, logit link), full | Fecundity models (Poisson, log link), null | Fecundity models (Poisson, log link), full |
| --- | --- | --- | --- | --- | --- | --- |
| Fixed effects, b [95% CI] | | | | | | |
| Intercept | 14·08 [13·41, 14·76] | 15·22 [14·53, 15·91] | −0·38 [−0·96, 0·21] | −1·25 [−1·96, −0·54] | 1·54 [1·22, 1·86] | 1·23 [0·91, 1·56] |
| Treatment (experiment) | – | 0·31 [0·18, 0·45] | – | 1·01 [0·60, 1·43] | – | 0·51 [0·41, 0·26] |
| Habitat (wet) | – | 0·09 [−0·05, 0·23] | – | 0·68 [0·27, 1·09] | – | 0·10 [0·001, 0·20] |
| Sex (male) | – | −2·66 [−2·89, −2·45] | – | – | – | – |
| Random effects, VC | | | | | | |
| Population | 1·181 | 1·379 | 0·946 | 1·110 | 0·303 | 0·304 |
| Container | 2·206 | 0·235 | < 0·0001 | 0·006 | 0·012 | 0·023 |
| Residuals (additive dispersion) | 1·224 | 1·197 | – | – | 0·171 | 0·100 |
| Fixed factors | – | 1·809 | – | 0·371 | – | 0·067 |
| PCV_{[Population]} | – | −16·77% | – | −17·34% | – | −0·54% |
| PCV_{[Container]} | – | 89·37% | – | < −100% | – | −84·32% |
| PCV_{[Residuals]} | – | 2·21% | – | – | – | 41·54% |
| Marginal R^{2}_{GLMM(m)} | – | 39·16% | – | 7·77% | – | 9·76% |
| Conditional R^{2}_{GLMM(c)} | – | 74·09% | – | 31·13% | – | 57·23% |
| AIC | 3275 | 3063 | 602·4 | 573·1 | 902·7 | 811·9 |
| BIC | 3295 | 3097 | 614·9 | 594·0 | 920·4 | 836·9 |

CI, confidence interval; PCV, proportion change in variance; NA, not applicable/available; AIC, Akaike Information Criterion; BIC, Bayesian information criterion; ML, maximum likelihood; REML, restricted maximum likelihood; VC, variance components.

For full models, the intercept represents control, dry and female. The 95% CI was estimated by assuming an infinitely large degree of freedom (i.e. t = 1·96). For Size models, AIC and BIC values were calculated using ML, but other parameters were from REML estimation (see the text for the reason).
In all three model sets, some variance components in the full models were larger than the corresponding variance components in the null models (e.g. the ‘Population’ variance in the Size models: 1·379 > 1·181). In the Morph models, the sum of all random-effect variance components in the full model was greater than the total variance in the null model (1·116 > 0·946; see above; Snijders & Bosker 1994). All these patterns result in negative PCV values (see Table 3), whereas R^{2}_{GLMM} values never become negative. In the Morph and Fecundity models, marginal R^{2}_{GLMM(m)} values are relatively minor (8–10%) compared with conditional R^{2}_{GLMM(c)} values. In the Size models, on the other hand, R^{2}_{GLMM(m)} was nearly 40%. This was due to a very large effect of ‘Sex’ in the body size model; in this model, the ‘Treatment’ and ‘Habitat’ effects together accounted for only c. 1% of the variance (not shown in Table 3). The variance among containers in the null Size model was conflated with the variance caused by differences between the sexes, as ‘Sex’ and ‘Container’ are confounded by the experimental design (a single sex in each container; Fig. 1). A part of the variation assigned to ‘Container’ in the null model was therefore explained by the fixed effect ‘Sex’ in the full model. Finally, it is important to note that the ‘Treatment’ and ‘Habitat’ effects were statistically significant in most cases (five out of six). Much of the variability in the data, however, resided in the random effects along with the residuals (additive dispersion) and in the distribution-specific variance. Note that the difference between corresponding marginal and conditional R^{2}_{GLMM} values reflects how much variability resides in the random effects. Importantly, comparing the different variance components, including that of the fixed factors, within as well as between models could, we believe, help researchers gain extra insight into their datasets (Merlo et al. 2005a,b). We also note that in some cases, calculating a variance component for each fixed factor may prove useful.
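The values in Table 3 can be reproduced directly from the reported variance components. The following sketch (Python, for illustration only; all input numbers are copied from Table 3) recovers the Size-model R^{2}_{GLMM} values exactly and the Fecundity-model values up to rounding of the variance components:

```python
import math

# --- Size models (Gaussian): variance components from Table 3, full model ---
vf, v_pop, v_cont, v_resid = 1.809, 1.379, 0.235, 1.197
total = vf + v_pop + v_cont + v_resid
r2m_size = vf / total                       # marginal: 1.809/4.620 -> 39.16%
r2c_size = (vf + v_pop + v_cont) / total    # conditional: 3.423/4.620 -> 74.09%

# --- Fecundity models (Poisson, log link) ---
# distribution-specific variance uses the null-model intercept (1.54, Table 3)
v_d = math.log(1.0 / math.exp(1.54) + 1.0)
vf, v_pop, v_cont, v_disp = 0.067, 0.304, 0.023, 0.100
total = vf + v_pop + v_cont + v_disp + v_d
r2m_fec = vf / total                        # ~9.7%; Table 3 reports 9.76%
r2c_fec = (vf + v_pop + v_cont) / total     # ~57.2%; Table 3 reports 57.23%

# --- PCV for 'Population' in the Size models ---
pcv_pop = 1.0 - 1.379 / 1.181               # -> -16.77%, as in Table 3
```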
Final remarks
Here, we have provided a general measure of R^{2} that we label R^{2}_{GLMM}. Both marginal and conditional R^{2}_{GLMM} can be easily calculated, regardless of the statistical package used to fit the models. While we do not claim that R^{2}_{GLMM} is a perfect summary statistic, it is less susceptible to the common problems that plague alternative measures of R^{2}. We further believe that R^{2}_{GLMM} can be used as a quantity of biological interest and hence might be thought of as being estimated from the data rather than merely calculated for a particular dataset. The empirical usefulness of R^{2}_{GLMM} as an estimator of explained variance should still be tested in future studies. As with every estimator of biological interest, it is desirable to quantify the uncertainty around the estimate (e.g. a 95% confidence interval, which could be approximated by parametric bootstrapping or MCMC sampling). As far as we are aware, such uncertainty estimates have not been considered for traditional R^{2}. Perhaps future studies can also investigate the usefulness of uncertainty estimates for R^{2}_{GLMM} and other R^{2} measures.
We finish with a cautionary note: R^{2} should not replace model assessments such as diagnostic checks for heteroscedasticity, validation of assumptions about the distribution of random effects and outlier analyses. Above, we presented R^{2} with the motivation of summarizing the amount of variance explained by a model that is suitable for the specific research questions and datasets; it should only be used on models whose quality has been checked by other means. It is also important to realize that R^{2} can be large due to predictors that are not of direct interest in a particular study (Tjur 2009), such as the sex effect on body size in our example. Despite these limitations, when used along with other statistics such as AIC and PCV, R^{2}_{GLMM} will be a useful summary statistic of mixed-effects models for biologists and other scientists alike.
Acknowledgements
We thank S. English, C. Grueber, F. Korner-Nievergelt, E. Santos, A. Senior and T. Uller for comments on earlier versions and M. Lagisz for help in preparing Fig. 1. We are also grateful to the Editor R. O'Hara and two anonymous referees, whose comments improved this paper. T. Snijders provided guidance on how to calculate the variance for fixed effects. H.S. was supported by an Emmy Noether fellowship of the German Research Foundation (SCHI 1188/11).
Appendix 1
Derivation of the distribution-specific variance (σ^{2}_{d}) for Poisson distributions
Simulations (unpublished data, the authors) show that as E(x) approaches 0, this approximation becomes unreliable. Also, exp(β_{0}) should be obtained either from a model with centred or scaled variables (sensu Schielzeth 2010) or from an intercept-only model that includes all random effects. Note that the former approach may be limited when a model includes categorical variables.
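In outline (our sketch of the delta-method step, consistent with the log-link entry in Table 2): for a Poisson variable Y with mean λ = exp(β_{0}) observed through a log link,

```latex
\operatorname{var}\!\left[\ln Y\right] \;\approx\; \frac{\operatorname{var}[Y]}{E[Y]^{2}}
  \;=\; \frac{\lambda}{\lambda^{2}} \;=\; \frac{1}{\lambda} \;=\; \frac{1}{\exp(\beta_{0})},
\qquad
\sigma^{2}_{d} \;=\; \ln\!\left(1 + \frac{1}{\exp(\beta_{0})}\right)
```

The exact lognormal-Poisson form σ^{2}_{d} = ln(1 + 1/λ) converges to the 1/λ approximation as λ grows, which is why the approximation deteriorates as E(x) approaches 0.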