Assessing transferability of ecological models: an underappreciated aspect of statistical validation
Summary
1. Ecologists have long sought to distinguish relationships that are general from those that are idiosyncratic to a narrow range of conditions. Conventional methods of model validation and selection assess in‐ or out‐of‐sample prediction accuracy but do not assess model generality or transferability, which can lead to overestimates of performance when predicting in other locations, time periods or data sets.
2. We propose an intuitive method for evaluating transferability based on techniques currently in use in the area of species distribution modelling. The method involves cross‐validation in which data are assigned non‐randomly to groups that are spatially, temporally or otherwise distinct, thus using heterogeneity in the data set as a surrogate for heterogeneity among data sets.
3. We illustrate the method by applying it to distribution modelling of brook trout (Salvelinus fontinalis Mitchill) and brown trout (Salmo trutta Linnaeus) in the western United States. We show that machine‐learning techniques such as random forests and artificial neural networks can produce models with excellent in‐sample performance but poor transferability unless complexity is constrained. In our example, traditional linear models have greater transferability.
4. We recommend the use of a transferability assessment whenever there is interest in making inferences beyond the data set used for model fitting. Such an assessment can be used both for validation and for model selection and provides important information beyond what can be learned from conventional validation and selection techniques.
Introduction
A fundamental goal of ecology, as in other branches of science, is to identify relationships and patterns that are repeatable or general (Peters 1991). Although a relationship that is idiosyncratic to a narrow set of conditions may be interesting and informative, no ecologist would wish to mistake it for an association that is broadly applicable and constitutes a general rule. Such relationships or models can be said to have generality (Fielding & Haworth 1995; Olden & Jackson 2000), generalizability (Justice, Covinsky & Berlin 1999; Vaughan & Ormerod 2005) or transferability (Thomas & Bovee 1993; Randin et al. 2006) to data sets other than the one for which they were developed. However, conventional approaches to evaluating ecological models do not commonly provide inference into transferability. As a result, the generality of a model is often unknown, and the model selected as ‘best’ for a given data set may have worse transferability than an alternative, rejected one.
The issue of transferability has been the subject of intermittent ecological interest for a number of years, but attention to it increased greatly with the rise of the field of species distribution modelling (Elith & Leathwick 2009) in the 2000s. Researchers have investigated whether a species model developed in one region can successfully predict in a different region (Peterson, Papeş & Kluza 2003; Randin et al. 2006; Peterson, Papeş & Eaton 2007; Barbosa, Real & Vargas 2009; Sundblad et al. 2009; Wenger et al. 2011a) and, to a lesser extent, whether models developed in one time period can predict a different time period with different weather or climatic conditions (Boyce et al. 2002; Araújo et al. 2005; Varela, Rodríguez & Lobo 2009; Buisson et al. 2010; Tuanmu et al. 2011). However, the question of model transferability is a general one that extends beyond species–environment relationships. It is equally important to consider the generality of models of physical phenomena (e.g. models of temperature), of ecological processes (e.g. denitrification rates) or of population parameters (e.g. growth rates). The fundamental problem is that there can be considerable spatial or temporal heterogeneity in ecological relationships, and this heterogeneity can limit model generality.
Lack of model generality is often a result of overfitting (Chatfield 1995; Sarle 1995), which can be defined as accepting a predictor variable (or a form of a predictor variable, such as a squared term or interaction term) that is nominally correlated with the response variable in the data set, but which does not represent a relationship that holds generally. Overfitting may occur for two rather different reasons. First, weak correlations among variables arise as a result of random noise, and these may be incorrectly interpreted as legitimate relationships. Model selection criteria such as the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) are designed to minimize this kind of overfitting by penalizing models for excess complexity, resulting in the rejection of spurious relationships. By and large, these criteria are effective at this goal (Burnham & Anderson 2002, 2004) and have been widely adopted by ecologists. Traditional cross‐validation techniques such as leave‐one‐out can also provide an unbiased assessment of model performance that does not favour such overfitted models (Olden & Jackson 2000; Olden, Jackson & Peres‐Neto 2002). The second cause of overfitting is when there are statistical associations between predictor and response variables that are real in a given data set but do not occur under a wide range of conditions. For example, with the large data sets now commonly used in species distribution modelling, models with over 100 terms may be justifiable under traditional criteria, but such a precise description of a distribution in one location often transfers poorly to other locations (Tuanmu et al. 2011). This is especially true when using indirect predictors without a close, perceived mechanistic link to the response variable (Randin et al. 2006). Of course, underfitting is also possible. For example, a model predicting species occurrence as a function of elevation may be more parsimonious than a more complex one based on temperature and precipitation, but the elevation model will likely transfer poorly to other latitudes.
An estimate of transferability is especially important with the increased use of machine‐learning modelling techniques such as neural networks (Lek & Guegan 1999), genetic algorithms (Stockwell & Noble 1992), maximum entropy (Phillips, Anderson & Schapire 2006), support vector machines (Drake, Randin & Guisan 2006), classification and regression trees (De’ath & Fabricius 2000) and random forests (RF) (Breiman 2001). These methods have the potential to match highly nonlinear, complex relationships, yielding in‐sample and randomly cross‐validated performance superior to that of traditional generalized linear modelling (Elith et al. 2006; Olden, Lawler & Poff 2008), but at the risk of limited transferability if model complexity is not constrained (Sarle 1995; Tuanmu et al. 2011). Assessment of generality of such models is critical if they are to be used in a predictive manner beyond the conditions under which they were trained – for example, if species distribution models are used to make projections under climate change conditions (Araújo et al. 2005).
Our primary objective in this paper is to present a general approach to estimating model transferability by extending and formalizing methods currently in use in the species distribution modelling literature. A secondary objective is to illustrate why transferability assessment can be important. We do this by fitting different kinds of models to an example data set and then comparing model transferability to traditional performance measures, showing how apparently good models can transfer very poorly. We then discuss practical aspects, limitations and appropriate use of the method.
The method: estimating transferability via non‐random cross‐validation
In species distribution modelling, transferability has often been estimated by splitting the data set into geographically distinct subsets, fitting the model with the first subset (called the training data set) and validating with the second (called the test data set). Then, the process is reversed, with the second subset used for fitting and the first for validation. This is nothing more than a form of cross‐validation in which the subset membership is assigned non‐randomly based on a relevant factor such as geography. We propose that this approach can be generalized to serve as a standard method for transferability assessment. We introduce the method by first reviewing conventional validation techniques.
A model’s performance can be validated based on the error in its predictions of observed data. If these predictions involve the same data used to fit the model (i.e. the training and testing data sets are identical), then the errors are the model residuals and are called in‐sample or resubstitution error. However, in‐sample error underestimates true model error, especially for small sample sizes (Efron 1986; Fielding & Bell 1997; Olden & Jackson 2000; Burnham & Anderson 2002; Olden, Jackson & Peres‐Neto 2002). An alternative approach is to use a fully independent validation data set, which provides an independent test of model error and a direct measure of transferability (Fielding & Bell 1997). The downside, of course, is that the test data are not used for model fitting.
A useful compromise is cross‐validation, which uses all of the data but can also provide unbiased error estimates. With cross‐validation, a portion of the data is used for model training and a different portion is withheld as the test data set. In this way, all data are iteratively used for both training and testing. There are various types of cross‐validation, depending on what fraction of the data set is excluded from model training and used for validation. In fivefold cross‐validation, the data are divided into five groups and one‐fifth of the data are withheld at a time; in n‐fold or leave‐one‐out cross‐validation, only one data point is withheld at a time. It has been shown that leave‐one‐out provides an unbiased estimate of model error even at small sample sizes (Olden & Jackson 2000), and furthermore that when used as the basis for model selection it is asymptotically consistent with the widely used AIC (Shao 1993). However, neither leave‐one‐out nor other forms of cross‐validation, as conventionally applied, provide an estimate of model transferability. This is because, unless sample size is very small, subsamples randomly selected from the full range of data provide an unbiased estimate of the overall relationships in the full data set, but do not necessarily reflect the heterogeneity that may exist across space or time. The problem is compounded when there is autocorrelation in the data, such that for any given data point in the training data set, there is likely to be a similar, correlated data point in the validation data set (Araújo et al. 2005).
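The random k‐fold procedure described above can be sketched in a few lines. In this illustrative sketch (not code from the study), the toy "model" is simply the mean of the response in the training data, and mean absolute error is the validation score; both are stand‐ins for any fitting and scoring functions.

```python
import random
import statistics

def kfold_random_cv(data, k, fit, predict, error):
    """Random k-fold cross-validation: shuffle, split into k folds,
    iteratively train on k-1 folds and score the held-out fold."""
    data = list(data)
    random.shuffle(data)
    folds = [data[i::k] for i in range(k)]  # k roughly equal groups
    scores = []
    for i in range(k):
        test = folds[i]
        train = [row for j, fold in enumerate(folds) if j != i for row in fold]
        model = fit(train)
        scores.append(error([predict(model, x) for x, _ in test],
                            [y for _, y in test]))
    return statistics.mean(scores)

# Toy example: the "model" is just the mean of y in the training data.
fit = lambda rows: statistics.mean(y for _, y in rows)
predict = lambda m, x: m
mae = lambda preds, obs: statistics.mean(abs(p - o) for p, o in zip(preds, obs))

random.seed(1)
data = [(x, 2 * x + random.gauss(0, 0.5)) for x in range(50)]
cv_error = kfold_random_cv(data, 5, fit, predict, mae)
```

Setting k equal to the sample size gives leave‐one‐out cross‐validation as a special case.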
An alternative is to divide the data non‐randomly into groups for cross‐validation, such that any group used for validation differs from those used for training the model in the same way that an independent data set would. That is, we use heterogeneity within the data set as a surrogate for heterogeneity among data sets. For a species distribution model, for example, cross‐validation based on dividing data into multiple geographic regions provides inferences into how the model will perform in an unsampled region (e.g. Olden & Jackson 2001; Kennard et al. 2007) or under future climate conditions in the same region (Vaughan & Ormerod 2005). This can readily be extended to other types of data sets. For example, in a data set with 5 years of annual observations, a full year of data could be withheld at a time; this ought to provide a reasonable estimate of the model’s predictive ability in a future, as yet unobserved year. In general terms, we can define the transferability of a model as the accuracy of its predictions for an independent data set; an estimate of transferability (which we refer to as a ‘transferability assessment’) is provided by non‐random cross‐validation.
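The withhold‐one‐year‐at‐a‐time scheme, for instance, amounts to keying the cross‐validation groups on a grouping variable rather than on random assignment. The following minimal sketch illustrates this; the record structure and the `year` field are hypothetical, not from the study's data.

```python
from collections import defaultdict

def grouped_cv_splits(records, group_key):
    """Yield (held_out, train, test) splits in which each distinct value
    of the grouping variable (e.g. year or region) is withheld in turn."""
    groups = defaultdict(list)
    for rec in records:
        groups[group_key(rec)].append(rec)
    for held_out in sorted(groups):
        test = groups[held_out]
        train = [r for g, rows in groups.items() if g != held_out for r in rows]
        yield held_out, train, test

# Toy records: five years of observations; withhold one full year at a time.
records = [{"year": 2000 + i % 5, "value": i} for i in range(25)]
splits = list(grouped_cv_splits(records, lambda r: r["year"]))
```

Swapping the key function for one that returns a region label or a latitudinal band yields the spatial variants discussed above.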
An intuitive extension of assessing transferability is to use the results to select among competing models. Model validation and model selection are two sides of the same coin; it is reasonable to rank models based on their validated predictive performance and select the best performing model or a confidence set of good performing models (Arlot & Celisse 2010). Evaluating models based on transferability should provide a highly robust method of identifying relationships with predictor variables that are truly general, thus greatly reducing the risk of overfitting and increasing model utility.
Example: invasive trout in the western United States
Brook trout (Salvelinus fontinalis Mitchill) and brown trout (Salmo trutta Linnaeus) are introduced species in the western United States, where they are considered invasive and a threat to the persistence of native trout (Thurow, Lee & Rieman 1997; Dunham et al. 2002; McHugh & Budy 2006). Relationships between the species’ distributions and climatic and landscape variables are of interest for predicting future invasions, as well as the species’ potential response to projected climate change. We used a database of 9890 presence/absence fish collection records from the interior west of the United States (Fig. 1) to model brook and brown trout occurrence as a function of predictor variables that were selected a priori as likely influences on the species’ distributions: mean summer air temperature, winter high‐flow frequency, mean flow, slope, presence/absence of a road within a kilometre of the stream and distance to the nearest unconfined valley. Details on the data and variables are given in Wenger et al. (2011b); here we summarize the statistical analysis methods.

Collection sites (black dots) and study area (grey shading) in the western United States used in the example. For the fivefold transferability assessment, sites were assigned non‐randomly to five groups (labelled with numbers) based on latitudinal bands (delineated by heavy black lines). For the 10‐fold cross‐validation, each of these bands was divided by latitude into two equal‐sized groups, producing 10 groups. For the twofold cross‐validation, the entire data set was divided by latitude into two equal‐sized groups.
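The band assignment described in this caption can be approximated by sorting sites on latitude and cutting the sorted order into k equal‐sized blocks. A minimal sketch (the latitude values are illustrative, not the study data):

```python
def latitudinal_bands(latitudes, k):
    """Assign each site to one of k equal-sized groups (bands) by
    sorting on latitude and cutting the sorted order into k blocks."""
    order = sorted(range(len(latitudes)), key=lambda i: latitudes[i])
    assignments = [0] * len(latitudes)
    for rank, i in enumerate(order):
        assignments[i] = min(rank * k // len(latitudes), k - 1)
    return assignments

lats = [40.1, 42.5, 44.9, 41.3, 43.7, 45.2, 40.8, 42.0, 44.1, 46.0]
groups = latitudinal_bands(lats, 5)
```

Cutting on sorted order (rather than on equal latitude intervals) keeps group sizes balanced even when sites cluster at particular latitudes.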
Three modelling approaches were employed: (i) multilevel generalized linear modelling [or generalized linear mixed modelling (GLMM)] with a logit link; (ii) artificial neural networks (ANN); and (iii) the RF classifier. In the GLMM modelling, we used a multilevel analysis because sites were not distributed randomly across the landscape but were often clustered; a multilevel approach (with sites nested within watersheds) reduces the bias caused by such spatial autocorrelation (Raudenbush & Bryk 2002; Gelman & Hill 2007). We used AIC to select the best model for each species from among a candidate set of GLMM models with different combinations of predictor variables. The second method, ANN, is a widely used machine‐learning technique that can account for nonlinearities and complex interactions among variables (Olden, Lawler & Poff 2008). For the ANN modelling, we included all six predictor variables for each species and specified three network architectures of increasing complexity: one with six hidden nodes, one with 12 hidden nodes and one with 18 hidden nodes, all in a single layer. The third method, RF, is a sophisticated extension of classification and regression tree analysis (see De’ath & Fabricius 2000) that has been shown to display excellent in‐sample predictive performance (e.g. Lawler et al. 2006; Holden, Morgan & Evans 2009). Random forest classifiers are a model‐averaging or ensemble‐based approach in which multiple classification or regression tree models are built using random subsets of the data and predictor variables (Cutler et al. 2007). We grew a forest of 1000 classification trees by sampling randomized subsets of the original observations with replacement (using default software settings), including all six predictor variables. Models were fit in the R Statistical Package (http://www.R‐project.org) using the packages lme4, nnet and randomForest.
Model performance was evaluated in three ways. First, we calculated in‐sample model performance by using the models to predict the training data used for fitting. Second, we performed a traditional fivefold cross‐validation in which data points were randomly assigned to groups; we iteratively trained the model using 4/5 of the data and validated it using the remaining 1/5. Third, we assessed transferability by partitioning the data non‐randomly into latitudinal bands. We examined 2‐, 5‐ and 10‐fold cross‐validation (Fig. 1 shows fivefold group assignments as an example), iteratively fitting with one group withheld at a time and evaluating performance in predicting the withheld data. The use of latitudinal bands was designed to roughly separate the data climatically, providing inference into transferability to a future climate (Wenger et al. 2011b). For the GLMMs, only the fixed effects were used to make predictions for validation, as it is not possible to estimate random effects for regions outside the fitting data set. Because ANNs and RF are subject to random variability in model fitting, we ran 100 iterations of each validation. For the traditional cross‐validation, different training and testing subsets were selected randomly at every iteration, but for the transferability assessments, group assignments were constant across iterations. For each validation, we calculated the area under the curve (AUC) of the receiver operating characteristic plot, a widely used and unbiased summary metric of model performance for binary data (Guisan & Zimmermann 2000; Manel, Williams & Ormerod 2001; but see Lobo, Jiménez‐Valverde & Real 2008 for limitations). We explored alternative performance measures, but as all gave results essentially identical to AUC, we report only the latter.
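The AUC statistic has a simple rank interpretation: the probability that a randomly chosen presence receives a higher predicted score than a randomly chosen absence, with ties counting one‐half. The sketch below computes this directly via the O(n²) pairwise form; production packages use a faster rank‐based equivalent, but the result is the same.

```python
def auc(scores, labels):
    """AUC of the ROC curve, computed as the Mann-Whitney statistic:
    the probability that a random presence (label 1) receives a higher
    predicted score than a random absence (label 0), ties counting 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need both presences and absences")
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking gives 1.0; a reversed ranking gives 0.0.
example_auc = auc([0.9, 0.8, 0.4, 0.2], [1, 1, 0, 0])
```

Because AUC depends only on the ranking of predictions, it is insensitive to monotone rescaling of the predicted probabilities, which is one reason it is widely used for comparing binary classifiers.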
Results showed that the RF models had the highest performance based on in‐sample and random cross‐validation, followed by models developed using ANNs and GLMMs (Table 1). Among the three ANN models, the more complex formulations (models with many nodes) tended to have better in‐sample and random cross‐validation performance than the simpler ones (with few nodes). The comparison of model transferability among the methods showed nearly opposite trends. The GLMM models displayed the highest transferability, while the RF and ANN models exhibited substantially lower transferability. The ANN models with simpler formulations (in terms of the number of nodes) had greater transferability than the more complex versions. Note that for brook trout, even the GLMM transferability was not very good, but the performances of the other modelling methods were even worse. The 2‐, 5‐ and 10‐fold transferability assessments all showed the same trends, but the twofold transferability assessment produced the lowest AUC scores, followed by 5‐ and 10‐fold.
| Model | In‐sample | Random CV | Transferability 10‐fold | Transferability fivefold | Transferability twofold |
|---|---|---|---|---|---|
| **Brown trout** | | | | | |
| Random forests | **0·918** | **0·912** | 0·749 | 0·717 | 0·711 |
| ANN – 18 hidden nodes | 0·906 | 0·848 | 0·729 | 0·711 | 0·646 |
| ANN – 12 hidden nodes | 0·892 | 0·838 | 0·738 | 0·719 | 0·665 |
| ANN – 6 hidden nodes | 0·865 | 0·834 | 0·756 | 0·740 | 0·703 |
| Multilevel GLMM | 0·822 | 0·820 | **0·788** | **0·783** | **0·757** |
| **Brook trout** | | | | | |
| Random forests | **0·884** | **0·873** | 0·625 | 0·603 | 0·516 |
| ANN – 18 hidden nodes | 0·783 | 0·745 | 0·618 | 0·596 | 0·551 |
| ANN – 12 hidden nodes | 0·764 | 0·738 | 0·627 | 0·604 | 0·553 |
| ANN – 6 hidden nodes | 0·728 | 0·717 | 0·640 | 0·618 | 0·563 |
| Multilevel GLMM | 0·674 | 0·673 | **0·650** | **0·653** | **0·574** |
- Best models for each species for each performance measure are shown in bold. An AUC score of 0·5 is no better than random; scores >0·7 are good, and scores >0·9 are excellent. Each value is the mean of 100 iterations.
- ANN, artificial neural networks.
Examination of the predictor–response relationships from the different models offers insight into the cause of the poor transferability of the machine‐learning approaches (RF and ANN). As an example, consider the temperature response for the RF and GLMM models. In the GLMM models, this was represented by a quadratic relationship, temperature + temperature² (Fig. 2), selected a priori as a candidate relationship because it represents the classical species niche association with temperature as an ecological resource (Magnuson, Crowder & Medvick 1979; Austin 2002). By contrast, the RF model empirically describes the observed relationships in the data, with no assumptions of form, as shown in the partial dependence plot (Hastie, Tibshirani & Friedman 2001) of occurrence in response to temperature (Fig. 3). The jagged shape of the response curve matches the fitting data set extremely well but likely has no basis in biology, and it is perhaps no wonder that it fails to transfer to other data sets.

Plot of the probability of brook trout occurrence in response to air temperature, with other variables held to their mean values, based on the best‐supported GLMM model.

Partial dependence plot for the probability of brook trout occurrence in response to air temperature, based on the random forests model. The Y‐axis is 0·5 × the logit of the occurrence probability; for practical purposes, this may be viewed as a relative occurrence probability. It is the shape of the plot that should be compared with Fig. 2.
Discussion and practical guidance
Our example demonstrates the importance of considering model transferability, in addition to traditional measures of model accuracy, when assessing model performance. Based on in‐sample validation and conventional cross‐validation, the RF models for both brook trout and brown trout appeared to be excellent, and it would be tempting to use them as a forecasting tool – such as for projections of future invasion potential or distributional responses to projected climate change. However, the transferability assessment indicates this could be a mistake. The RF and many ANN models performed relatively poorly when just a fifth of the data was non‐randomly withheld, suggesting possible overfitting and the need for great caution in making inferences in new locations or new climates. Our case study illustrated that simpler models can, at least in some cases, be more transferable.
It is now common to see machine‐learning methods like RF applied to a range of ecological data analyses, including projections of species distributions under climate change scenarios, without any assessment of transferability (e.g. Ledig et al. 2010). Because RF is, by design, immune to overfitting associated with random noise (Breiman 2001), researchers may make the incorrect assumption that it is also immune to overfitting caused by heterogeneity in predictor–response relationships. Of course, this cannot be true; for example, no reasonable ecologist would interpret the complex relationship shown in Fig. 3 as a general one that can be applied with high accuracy in other locations. It may be surprising that a method like RF that is robust to overfitting in the conventional sense should suffer poor transferability. However, RF, like all analytical methods, is designed to seek the best fit for the data set as a whole. It cannot distinguish predictor–response relationships with high generality from those that have less generality but are nevertheless legitimate relationships in the data set. The only way to gain insight into the degree of model generality or transferability is either to test the model with a new, independent data set or to cross‐validate it using non‐random subsets, as we advocate here.
We suggest that a transferability assessment be conducted whenever there is interest in making projections or inferences beyond the data set used for model fitting. If results suggest that transferability is substantially worse than in‐sample performance, simpler alternative models should be considered. With RF, it is possible to simplify by reducing the number of parameters used in modelling, reducing the maximum number of nodes per tree and specifying a minimum number of cases per node (although our attempts to use these resulted in minimal improvements, as evidenced by partial dependence plots and transferability; S.J. Wenger, unpublished data). Other classification algorithms such as boosted regression trees have alternative settings to manage complexity (Elith, Leathwick & Hastie 2008). Neural networks offer multiple ways to control complexity, including managing the number of nodes (as we did here), stopping the algorithm early and weight decay or weight elimination (Bishop 1995; Sarle 1995; Olden & Jackson 2002). We possibly could have achieved higher transferability in the ANN models in our example had we used these techniques to control complexity more aggressively. For generalized additive models (GAMs), the order and number of knots in the splines may be limited on a variable‐by‐variable basis; for GLMMs and GLMs, complexity may be similarly controlled by limiting higher‐order terms; and for both GAMs and GLMMs, interactions may be specified or not.
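To illustrate one of these complexity controls, weight decay can be demonstrated on the simplest possible "network", a one‐predictor logistic model: the L2 penalty shrinks the fitted weight toward zero, flattening the response curve. This is a minimal sketch, not the nnet implementation; the learning rate, penalty strength and simulated data are arbitrary choices for illustration.

```python
import math
import random

def fit_logistic(data, l2=0.0, lr=0.1, epochs=2000):
    """One-predictor logistic regression fit by gradient descent.
    The l2 term is a weight-decay penalty: at each step the weight is
    shrunk toward zero in proportion to its size, limiting complexity."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += p - y
        w -= lr * (gw / n + l2 * w)   # weight decay applied to the weight only
        b -= lr * gb / n
    return w, b

# Simulated presence/absence data with a noisy positive response to x.
random.seed(0)
data = [(x, 1 if x + random.gauss(0, 1) > 0 else 0)
        for x in (random.uniform(-3, 3) for _ in range(200))]
w_free, _ = fit_logistic(data, l2=0.0)
w_decay, _ = fit_logistic(data, l2=1.0)
```

The penalized fit ends with a smaller weight than the unpenalized one, i.e. a shallower, more conservative response; in a multi‐node network the same mechanism discourages the sharp, jagged responses that transfer poorly.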
Another factor that may influence transferability is the directness of the causal links between the predictor variables and the response variable. Predictor–response relationships that have a sound ecological basis and direct causal linkages are likely to be more transferable than those based on indirect relationships or pure correlation (Austin 2002; Sundblad et al. 2009). For this reason, we suggest selecting predictor variables, and possibly even the form of the expected response (e.g. positive, negative or quadratic), on the basis of reasonable a priori hypotheses, unless the goal of the modelling is purely exploratory. This is perhaps best accomplished using methods such as GLMs, GLMMs and GAMs that offer a high degree of user control. Such models benefit further from well‐developed theory and methodologies for addressing autocorrelation (Cressie 1993; Lichstein et al. 2002; Dormann et al. 2007), a problem that is generally ignored in the application of machine‐learning methods. Of course, GLMs and their variants are not immune to overfitting, and GLMs with excessive parameters, higher‐order terms or interactions can suffer from decreased transferability (Wenger et al. 2011a).
In devising a transferability assessment, the researcher must make several key decisions requiring a degree of professional judgment. The first is how many groups to divide the data set into (i.e. the number of folds of k‐fold cross‐validation), which is essentially a decision about how conservative a test to run. In our example, we found that the fewer the groups, the more conservative the assessment. We expect this to be a general rule that holds regardless of the size of the data set, in contrast to random cross‐validation, in which the number of groups becomes irrelevant as the size of the data set grows arbitrarily large (Olden & Jackson 2000). To date, transferability assessments in the field of species distribution modelling have tended to use twofold cross‐validation (e.g. Randin et al. 2006; Peterson, Papeş & Eaton 2007; Barbosa, Real & Vargas 2009). We suspect this will be overly conservative for many applications. On the other hand, cross‐validation with more than 10 folds may be too liberal, so we recommend between 3 and 10 folds for most applications. The choice depends largely on how projections are to be used. If the data set covers most of the area of potential inference, a more liberal test is reasonable; if the coverage of the data set is small relative to the area of inference, then a more conservative test is appropriate. For example, consider a case where researchers wish to parameterize survival estimates in a population model based on a targeted study of six populations and then apply the results to forecasts for 60 populations across a large region. In such a case, a conservative transferability assessment based on twofold or threefold cross‐validation would be essential to avoid overly optimistic predictions of the generality of the observed relationships. If there is difficulty in deciding how many groups to use, it is perfectly reasonable, and usually quite practical, to run multiple tests with different numbers of groups, as we did.
A second key decision is how to assign data to the groups. Two principles should guide this process. The first is that each of the fitting data sets should cover a large portion of the range of variability of the predictor variables of interest. For example, in building a species–climate model, if all high‐elevation locations are placed in a single group, that group will likely be poorly predicted because it lies outside the range of variability of the other groups. This would be overly conservative, so it is preferable to assign those high‐elevation sites to at least two groups, so that some of them are always available for model training. We used latitudinal bands in our example because the large elevational gradient in this region produces a climatic range of a magnitude at least as great as that produced by the latitudinal gradient, preserving the range of variability in predictor variables when a band is removed (except in the very conservative case of twofold validation). However, such an assignment would probably not be appropriate for a data set from the Great Plains of the United States, which have little elevational gradient. The second principle is that the heterogeneity among the groups (in terms of predictor–response relationships) should be in the range of the expected heterogeneity between the full data set and the other locations or data sets for which inferences are of interest. In our example, we were interested in the temporal climatic variability between current and future conditions and assumed that climatic variability across latitudinal bands provided a reasonable surrogate. These guidelines notwithstanding, concern over optimizing group assignments should not become an obstacle to performing a transferability assessment, as any rational method of group assignment is likely to yield useful information, especially for large data sets. With small data sets, where a particular grouping can significantly affect the outcome, it may be useful to repeat the transferability assessment multiple times with different group assignments in a form of ensemble prediction (Araújo & New 2007). We explored this with our example data set (S.J. Wenger, unpublished data) but found it had little effect on the results, likely owing to the relatively large sample size.
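The first principle above, that each training set should span the predictor range of the group it must predict, can be checked with a simple diagnostic: for each fold, compute the fraction of the withheld group's predictor values that fall inside the range of the training data. The elevation values and group assignments below are hypothetical illustrations.

```python
def coverage(train_values, test_values):
    """Fraction of withheld values lying inside the training range;
    values outside that range force the model to extrapolate."""
    lo, hi = min(train_values), max(train_values)
    return sum(lo <= v <= hi for v in test_values) / len(test_values)

def check_folds(values, assignments, k):
    """Coverage of each withheld group by the remaining (training) data."""
    return {
        g: coverage(
            [v for v, a in zip(values, assignments) if a != g],
            [v for v, a in zip(values, assignments) if a == g],
        )
        for g in range(k)
    }

elev = [100, 200, 300, 400, 500, 2500, 2600, 2700]
# Bad split: all high-elevation sites in one group, so that group (and
# the lowest-elevation group) must be predicted entirely by extrapolation.
bad = check_folds(elev, [0, 0, 0, 1, 1, 2, 2, 2], 3)
# Better split: extremes spread across groups, giving higher coverage.
good = check_folds(elev, [0, 1, 2, 0, 1, 2, 0, 1], 3)
```

Low coverage for a group warns that its validation score reflects extrapolation rather than the expected heterogeneity among data sets, i.e. an overly conservative grouping.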
A transferability assessment is not necessary or appropriate for all data sets and all circumstances. If projections and inferences do not extend beyond the conditions represented by the data used to fit the model (e.g. Evans & Cushman 2009), transferability is less relevant. Most models of data from tightly controlled experiments also would not benefit from a transferability assessment because heterogeneity is either limited or is itself the focus of study. For very small data sets, transferability assessments may be infeasible because models cannot be effectively fit unless all the data are used, or because the data do not cover a sufficient range of conditions. Where it is appropriate, we regard a transferability assessment as a useful tool that complements existing model performance measures and selection methods. It has some clear limitations. Dividing the data into subsets provides some inferences into how a model will perform with a new data set (e.g. a different region or time period), but the actual performance could be substantially better or worse. Using a transferability assessment based on geographic units to provide inferences into performance under a future climate requires assumptions that regional climatic differences are of a similar magnitude to differences between current and future climates in a single area, which cannot really be known. In some cases, climates will shift to novel ones that lack current analogues (Williams, Jackson & Kutzbacht 2007), limiting the value of a spatial transferability assessment. Even under such circumstances, we argue that a transferability assessment provides important additional information beyond what can be learned from traditional performance assessment methods. 
Whenever there is interest in extending inferences beyond the data set used to fit an ecological model, the researcher is better off armed with some insight into potential transferability, however imperfect, than proceeding under the untested assumption that the model will perform with the same error rate in new data sets as in the one used to create it.
Acknowledgements
S.J.W. was supported in part by grant G09AC00050 from the US Geological Survey and a contract from the Forest Service Rocky Mountain Research Station. J.D.O. acknowledges funding support from the U.S. EPA Science To Achieve Results (STAR) Program (Grant No. 833834), USGS Status and Trends Program and the USGS National Gap Analysis Program. The manuscript was improved by helpful comments from Daniel Dauwalter, Mary Freeman, Daniel Isaak and two anonymous reviewers.