A practical method for evaluating spatial biodiversity offset scenarios based on spatial conservation prioritization outputs

Biodiversity offsetting is a tool for balancing ecological damage caused by human activity with new benefits created elsewhere. Offsetting is implemented by protecting, restoring or managing sufficiently large areas of habitat. While there are concerns about the true feasibility of offsetting, offsets are becoming a common policy tool world-wide. Operationally uncomplicated, quantitative approaches to the spatial analysis of offsets are rare, and their use is often restricted by the availability of suitable spatial data. We describe a practical method for offsets that builds upon two layers of relatively easily sourced spatial data: a balanced spatial priority ranking and a weighted range size rarity map. Together with (a) spatial information about impact and offset areas, and (b) extra parameters for the effectiveness of avoided loss and the amount of leakage expected, we can evaluate whether a proposed offset exchange represents a credible no net loss or net positive impact with an upward trade. The priority ranking and range size rarity maps can be produced in various ways, most notably using existing conservation planning tools. Here we used the standard outputs of the Zonation spatial prioritization software. We illustrate the method and associated visualization in the context of offsetting of boreal forests in Finland, where forests experience high and increasing pressures from the forestry and bioenergy sectors. The example is timely, as there is political demand for the uptake of biodiversity offset policies in Finland, and boreal forest is the most common biotope. The methods described here are applicable to biomes around the world. The described tools are made available as R scripts that utilize standard Zonation outputs, thus providing direct linkage to any past or future Zonation applications. As a limitation, the present methods only apply to avoided loss offsets.


| INTRODUCTION
Human activity, such as the clearing of natural habitats for agriculture or development, usually causes ecological loss. In biodiversity offsetting, ecological losses are balanced out by ecological gains generated elsewhere through habitat restoration, establishment of new protected areas or other conservation actions (Gibbons & Lindenmayer, 2007; McKenney & Kiesecker, 2010).
Offsetting is the last stage of the mitigation hierarchy, in which ecological loss is (a) first avoided altogether; (b) secondly, minimized by appropriate project design; (c) thirdly, corrected by local rehabilitation (habitat restoration); and (d) finally, any remaining loss is offset elsewhere.
The often-stated goal of biodiversity offsetting is no net loss (NNL), meaning that all ecological losses are balanced by commensurate gains (Gibbons & Lindenmayer, 2007; Maron et al., 2018). Concerns have been voiced about the credibility of offsets to provide NNL for ecological systems, which are inherently both dimensionally and functionally highly complex (Bekessy et al., 2010; Gibbons et al., 2016; Gordon, Bull, Wilcox, & Maron, 2015; Spash, 2015; Walker, Brower, Stephens, & Lee, 2009). In addition, there is evidence of incomplete or even completely missing implementation and/or monitoring of outcomes (e.g. May, Hobbs, & Valentine, 2017). Yet, offsets are becoming a common policy tool and practice in many countries, with a recent review finding over 13,000 offset cases globally (Bull & Strange, 2018).
Designing and evaluating credible offsets to balance out typically immediate and often permanent ecological losses is a non-trivial task.
One of the challenges is the question of equivalency, that is, how to compare the complex biological values of two sites when only a small fraction of biodiversity can be measured and no two sites are identical. Similarly, offset evaluation is required to account for temporal and spatial dimensions, that is, how offset gains develop through time, their additionality to and interaction with other actions in the wider landscape, and the uncertainty in expected biodiversity gains. Recently, many quantitative ways of handling biodiversity in offsetting have been developed (e.g. Mandle et al., 2016; Maseyk et al., 2016; Pouzols, Burgman, & Moilanen, 2012). Most of these approaches are non-spatial and focus on ecological equivalence. Rarely do they also incorporate other significant factors of offset accounting such as time frames, additionality and the leakage of negative impacts to other areas (Moilanen & Kotiaho, 2018, but see Peterson, Maron, Moilanen, Bekessy, & Gordon, 2018). The availability of data for parameterizing such factors in non-spatial models is often an obstacle. Consequently, and reinforced by the highly diverse offset policies and requirements, no commonly adopted method for offset design and evaluation exists.
Spatial conservation prioritization (SCP) methods provide another promising avenue for designing and analysing offsets. In conservation science, SCP methods are used to integrate data about distributions of species, habitats, ecosystem services, costs and threats, to identify priority areas that support land use and conservation planning (Kujala, Lahoz-Monfort, Elith, & Moilanen, 2018). SCP methods have been adopted world-wide to design reserve networks and their expansions, target habitat restoration and plan for ecological impact avoidance (Kukkala & Moilanen, 2013). They are powerful decision support tools that can account for spatial interactions and distill information on complex problems across thousands of biodiversity features into a single spatial map. Yet, applications of SCP that systematically include factors relevant for offset design are still rare (Moilanen & Kotiaho, 2018). As with other offsetting methods, the use of SCP methods is restricted by the lack of information on the many ecological impacts of offset trades and their spatial interactions, particularly when simultaneously assessing many species.
Here we describe a practical spatial method that builds on existing offset tools by combining typical SCP outputs with information on the time frame, leakage and uncertainty in offset returns to evaluate the parity of losses and gains in offset proposals. The method can be used to evaluate out-of-kind offsets, where one type of biodiversity is traded for another (Habib, Farr, Schneider, & Boutin, 2013), and like-for-like offsets, where only trades within the same habitat type are allowed but where verified like-for-like offsets are not required for all species. The focus is on providing a straightforward spatial analysis for a highly complex problem, accounting for the multiple dimensions of biodiversity across spatial and temporal scales. We describe the theory of the approach and illustrate its use with a case study of boreal forest offsetting in Finland, Europe. In this application, we utilize the standard output of the SCP software Zonation v.4.0 (Moilanen et al., 2005), but these data layers could be produced with other SCP methods as well. The proposed method applies specifically to avoided loss offsets, in which protection of an offset area leads to (passive) improvement in habitat quality and/or prevents further habitat deterioration and loss, and which is one of the most common forms of biodiversity offsets world-wide (Bull & Strange, 2018). Using avoided loss requires that there exist anthropogenic pressures that are alleviated through protection, which applies to many forested regions around the world. The application is highly relevant for Finland, where forest is the most widespread environment (~75% of land area) and faces high and increasing economic utilization pressures.

| Study area and spatial data
For years, spatial prioritizations to assist decisions around forest conservation in Finland have been produced by the government-funded Finnish Environment Institute and Metsähallitus Parks & Wildlife Finland (the Finnish government's forest office) together with university researchers (Mikkonen, Leikola, Lahtinen, Lehtomäki, & Halme, 2018). These analyses utilize several data sources, including (a) ground-truth-calibrated high-resolution satellite imagery and airborne laser scanning of forest characteristics; (b) field survey data of forest stands; (c) habitats of special importance for biodiversity (Finnish Forest Act 10 §); (d) implemented forest management actions; (e) drainage of forests; and (f) IUCN Red Listed forest species observations. Using Zonation v.4.0, the project has produced priority maps that identify ecologically relevant priority areas for forest conservation, while also accounting for loss of local habitat quality due to past forestry actions: in the Finnish context, forest management and drainage of wet and boggy forests translate into reduced natural quality, depending on management intensity.
The forest data, covering the whole of Finland (284,000 km²), include information on soil fertility and the tree species present, meaning that the prioritizations capture forests of different productivity and tree species composition. Independent evaluations have found that the priority rankings correlate well with indicators of the ecological quality of forests (Lehtomäki, Tuominen, Toivonen, & Leinonen, 2015). We used the outputs of the latest prioritization solution (Mikkonen et al., 2018), available at 96 × 96 m resolution, as the inputs to our analysis (described in detail below). For the purpose of the present analysis, and to allow better visualization of data and results, we focus on the Uusimaa province in Southern Finland, which includes the capital district.

| Data components for offset calculations
Our proposed approach requires four spatial input data layers (Table 1), two of which can be produced using an SCP method. This section explains the main data, their role in the offset calculations and the concepts utilized in this work.

| Conservation priority layer
The first input is the conservation priority ranking of spatial units (pixels), which gives the rank priority of each unit, scaled between 0 (lowest priority) and 1 (highest priority). This data layer distils the distribution and local occurrence data of individual biodiversity features (species, habitats, ecosystem services) into a single map, and can be used to compare the conservation value of protecting different sites either across habitat types (out-of-kind trades) or within the same habitat (like-for-like trades). Zonation produces this standard output through an iterative process in which, at each iteration, the conservation value of all units is assessed based on the input data and the least important remaining spatial unit is identified and removed from the analysis. The removal order defines the conservation priority rank of each unit, the most important units being those that are removed last.
An important component of the ranking process is that the amount of value remaining for each feature is tracked throughout the prioritization. This information is used to maintain a balanced representation of all features through the ranking (Moilanen et al., 2005, 2011). Consequently, the dimensionality of biodiversity is preserved through the analysis process and a favourable balance (complementarity) between features is maintained in the solution, so that a set of top priority areas together represents all biodiversity features.
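The iterative, complementarity-aware removal described above can be sketched in code. The following is a deliberately simplified, pure-Python illustration of the general idea (track the remaining fraction of each feature and repeatedly drop the cell whose removal causes the smallest worst-case proportional loss); it is not the actual Zonation algorithm, whose cell-removal rules are documented in the Zonation manual, and the paper's released tools are R scripts rather than Python.

```python
def priority_rank(features):
    """Toy complementarity-aware ranking sketch.

    features: list of feature distributions, each a list of non-negative
    per-cell occurrence levels (all features over the same cells).
    Returns per-cell ranks scaled to (0, 1]; higher rank = removed later
    = more important.
    """
    n_feat = len(features)
    n_cells = len(features[0])
    remaining = [sum(f) for f in features]  # remaining total per feature
    alive = set(range(n_cells))
    rank = [0.0] * n_cells
    step = 0
    while alive:
        step += 1

        # Worst proportional loss across features if cell c were removed now.
        def worst_loss(c):
            return max(features[j][c] / remaining[j] for j in range(n_feat))

        drop = min(alive, key=worst_loss)  # least important remaining cell
        alive.remove(drop)
        for j in range(n_feat):
            # Update remaining totals; small floor guards against division by 0.
            remaining[j] = max(remaining[j] - features[j][drop], 1e-12)
        rank[drop] = step / n_cells  # removed early -> low rank
    return rank
```

In this toy, a cell holding a large share of any single feature's remaining distribution survives long (high rank), which is how complementarity keeps every feature represented among top-ranked cells.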

TABLE 1 Input data and parameters used in the present example. (Table layout not reproduced here; its columns list each input item, whether it is compulsory (yes/no), and an explanation, covering both the spatial data and the parameters introduced below.)
In the prioritization process, Zonation can account for, among other things, weights given to features, costs and types of connectivity (Lehtomäki & Moilanen, 2013). The approach is not described here in detail as this information is extensively covered in earlier publications and many freely available software manuals (Moilanen et al., 2014).
For the purpose of biodiversity offsetting, the weakness of priority rank maps is that spatial units are ranked (ordered) linearly from 0 to 1, and these ranks are not comparable in the additive sense. For example, a cell with rank 0.6 is not necessarily twice as good as a cell with rank 0.3. For this reason, priority-ranking maps always need to be interpreted with additional information, which gives the absolute occurrence levels of features in any chosen top or bottom fraction of the landscape. If, for example, biodiversity is concentrated in a few locations in the landscape, it could be that the top-ranked areas include almost all the remaining biodiversity. This potential concentration of biodiversity is not apparent from the ranking alone, which brings us to the second main input of the proposed method.

| Range size weighted richness
The second main spatial input is a weighted range size rarity layer.
Range size rarity is calculated for each spatial unit (pixel), and is the sum of the fractions of biodiversity features' ranges inside the unit (Williams et al., 1996). For example, if a grid cell includes 1% of a feature's distribution, the feature contributes 0.01 to the range size rarity of that cell. The weighted range size rarity layer can be produced manually, but here we used a layer produced as output from a Zonation analysis. During recent years, range size rarity and its close relative, richness of small-range species, have been proposed as metrics directly useful for conservation decision-making (Ceballos & Ehrlich, 2006; Jenkins, Pimm, & Joppa, 2013). These measures account for both the presence and the rarity (range size) of all features in a spatial unit and have the strength that spatial units are comparable in the absolute sense. Hence, one could imagine using range size rarity directly to evaluate offset trades. However, the weakness of range size rarity is that it flattens the dimensions of biodiversity into a one-dimensional metric. This is relevant especially if the data include biotopes with different levels of richness and rarity: biotopes with high richness and rarity may override other biotopes with intrinsically lower richness or rarity of species. If a set of high-priority areas is chosen based on range size rarity alone, there is no guarantee that all species and biotopes are represented in the chosen areas. Hence, the balance (complementarity) of the spatial solution is not guaranteed. We therefore propose that offset calculations need the combined information of both the weighted range size rarity and the priority rank of sites.

| Avoided loss
The third main component of the present work is avoided loss. In avoided loss offsets, pressures impacting an area are removed (or reduced) via protection. This leads to environmental gains compared to a situation where those pressures are allowed to continue. It is important to understand that gains from avoided loss only develop over time: in the first year following protection, hardly anything has had time to improve. Consequently, the rate by which avoided loss generates gains is an important quantity. In the proposed approach, we define an avoided loss function over time, which is time-discounted over a specific evaluation time frame to provide an estimate of gains (Moilanen & Kotiaho, 2018). As our case example focuses on forests of economic value, we use spatial data on average stand age as the basis of the avoided loss calculation. Implementing avoided loss (protection) also incurs costs. Although relevant, these are not directly considered here, as the primary aim is to understand which of the proposed sites are acceptable offsets. The costs of avoided loss can be accounted for post hoc, for example through a cost-benefit analysis of the acceptable sites to further explore the best offset options (not done here).

| Leakage
The fourth main component is leakage. If an offset area has avoided loss potential, it is by definition subject to utilization pressures, and these pressures may relocate to other areas in the landscape once the offset site is protected. Such relocated losses reduce the true gains of the offset and therefore need to be subtracted from them; the calculation of leakage for the Finnish case is described in a separate section below.

| Algorithm for offset evaluation
As discussed, spatial priority rankings and range size rarity have different weaknesses from the perspective of offsetting. Therefore, we take the element-wise product of these two maps, marked M, to represent the local conservation value of each pixel, and proceed as follows.
1. Computation of the M-layer as the pixel-wise product of the priority rank and weighted range size rarity maps.
2. Construction of the histogram H_L of M-values across the pixels of the impact area, and calculation of the sum of losses, S_L, from it.
3. Construction of the corresponding histogram H_P of M-values across the pixels of the proposed offset area(s).
4. First offset verification step, which is optional. In the Finnish forest case, we require that the offset areas (H_P) are overall better than the areas lost (H_L), H_P > H_L, which implies that the entire offset operation has the characteristic of trading up (Habib et al., 2013). This is done by examining pixels in H_L in order from highest value to lowest and comparing them to likewise ranked pixels in H_P. Here we require that for each pixel in H_L there needs to be a pixel in H_P with a higher M-value (i.e. either higher conservation value or rarity-weighted richness or both). As a complication, we also require that the offset area needs to be larger than the impact area, that is, H_P needs to have more pixels than H_L. Hence, the requirement is changed so that for each pixel in H_L, there have to be N_h better pixels in H_P (N_h being a small positive integer, 1, 2, 3, 4, …; each pixel can only be used once in the evaluation).
If H_P > H_L in the sense described in this step, the exchange satisfies the condition of trading up. For our example case study, we define N_h = 2.
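The first verification step can be expressed compactly in code. In this sketch (our reading of step 4, not the authors' R implementation), sorting both pixel sets from highest to lowest M-value and comparing in blocks of N_h is equivalent to the pixel-by-pixel matching described above, because if the greedy block matching fails, no one-to-N_h assignment of strictly better, unreused offset pixels exists.

```python
def trading_up(losses, gains, n_h=2):
    """First offset verification step (sketch).

    losses: M-values of impact-area pixels (H_L);
    gains: M-values of offset-area pixels (H_P).
    Requires, for each impact pixel, n_h as-yet-unused offset pixels
    with strictly higher M-value.
    """
    L = sorted(losses, reverse=True)
    G = sorted(gains, reverse=True)
    for k, v in enumerate(L):
        # The k-th best loss pixel is matched against its own block of
        # n_h offset pixels; each offset pixel is used only once.
        block = G[n_h * k : n_h * k + n_h]
        if len(block) < n_h or any(g <= v for g in block):
            return False
    return True
```

With n_h = 2, this also enforces that the offset has at least twice as many pixels as the impact area, mirroring the requirement that the offset area be larger than the impact area.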

5. Application of leakage. In avoided loss offsets, avoided loss is paired with leakage. If there is avoided loss potential, offset areas are by definition subject to utilization pressures, which may relocate elsewhere following protection. We describe the handling of leakage for the Finnish case in a separate section (below). In the algorithm, the generic effect of leakage is simply the following: a quantity Q_leakage is subtracted from the M-value of each pixel in H_P, converting it to a new set of numbers marked H_q = H_P − Q_leakage.
6. Application of avoided loss and time discounting. The leakage-corrected offset potential histogram H_q is transformed into an offset gains histogram H_G by applying a combination of avoided loss and time discounting to each pixel. Note that although the benefits of avoided loss and time discounting need to be addressed in all avoided loss offsetting, the construction of the avoided loss gain function and the selection of a suitable discount rate will be highly case-specific.
In the Finnish case, we construct the avoided loss gain function as follows. First, we specify an evaluation time frame, T_E = 30 time steps (years). Until the age of T_C = 60 years, the loss rate of forest is zero, meaning that a forest under that age would never be clear-cut and there hence is no immediate avoided loss gain from protecting it (Figure 2). After the forest reaches T_C, an approximate yearly clear-cut rate is applied (7% annually, Table 1). It is also assumed that a small fraction of forest owners (f_n = 0.1) are primarily interested in maintaining the forest even in the presence of an economic incentive. Therefore, a fraction f_n of the proposed offset site would never be cleared and does not contribute to the avoided loss gain.

FIGURE 1 Schematic histograms of loss and gain when accounting for leakage, avoided loss and time discounting. (a) Pixels lost have biodiversity values measured by M, visualized as a histogram. (b) Likewise, the prospective offset area has an M-histogram. As the first criterion, we require that the quality of the gain area has to be better than the quality of the loss area, meaning that potential gains need to be to the right of losses on the M-axis. (c) True gains become reduced by leakage, avoided loss and time discounting, moving the M-histogram for gains to the left. The second requirement is that the mass of histogram (c) needs to be higher than the mass of (a). Mass can be visualized if the y-axis of the plot is M × (pixel count) instead of pixel count (not shown).

The avoided loss gain function A(t) is then defined as:

A(t) = 0, when t < T_C,
A(t) = (1 − f_n)(1 − (1 − r)^n), when t ≥ T_C,

where r = 0.07 is the yearly clear-cut rate, t is time in 1-year increments and n = t − T_C, the number of years after the forest has reached harvesting maturity (Figure 2b).
We also utilize a time discounting function D(t) = (1 − d)^t (Figure 2a), in which t is years into the future until the end of the evaluation period T_E is reached, and d is the time discount parameter (here 2%, i.e. d = 0.02).
We mark by v the value of a pixel in the leakage-corrected potential gains histogram H_q and by a the age of the forest in the same pixel (Figure 2c). We omit subscripts for pixel identity, as the calculation is simply repeated for every pixel.
Then the avoided loss gain for the pixel is calculated as:
(i) If forest age a < (T_C − T_E), gain g = 0 and the pixel receives value 0 in H_G. This is because the forest is so young that it is not in danger of being lost during the evaluation time frame.
(ii) If a > T_C, the forest has reached harvesting age and is in immediate danger of being cut. The harvesting rate applies for the full duration of the evaluation time frame. The true gain of protecting the pixel is calculated by applying time discounting to the avoided loss function.
(iii) If (T_C − T_E) < a < T_C, the forest is not yet old enough to be harvested, but it will become so during the evaluation time frame. Gains are reduced, because avoided loss gains only start accumulating after the harvesting age is reached.
7. Second verification step. As was done for the losses in step 2, we calculate the sum of gains, S_G, from the gains histogram H_G. We require that S_G > S_L.
8. If both verification steps pass, we conclude that the proposed offset presents a general upward trade and that the sum of gains is equal to or greater than the sum of losses after accounting for avoided loss, leakage and time discounting, meaning that the trade is NNL or better.
If either of the verification steps fails, one can extend or change the offset areas and redo the calculation. One can also accept the trade, with the understanding that the NNL requirements for an offset are not met; instead, the trade represents a partial compensation that may require further sociopolitical decisions on its acceptability.

| Calculation of leakage
Consideration of leakage will involve case-specific calculations.
Here leakage was estimated as follows. When a forest block becomes protected, utilization pressures will relocate to areas where the forest is old enough to be harvested, that is, >60 years of age. With spatial information on forest stand age, we calculated the average M-values of pixels at harvesting maturity (q_U) in the Uusimaa province to represent the ecological value of sites likely to be targeted following relocation of pressures. However, in boreal forest systems, clear-cutting of forest coupes does not automatically lead to the complete loss of ecological quality. In Finland, clear-cut areas are often surrounded by forest, and natural recovery of ground vegetation is relatively rapid. We therefore used the average M-values of pixels that have been clear-cut in the past 20 years (q_L) as a reference for the ecological value following clear-cutting. For the Uusimaa province, q_L is ~5% of the value of q_U, which is of similar magnitude to national estimates of 10% of ecological value retained in clear-cut areas (Mikkonen et al., 2018). The difference q_U − q_L then represents the average leakage Q_leakage that relocates to other areas in Uusimaa following forest protection. This value is subtracted pixel-wise from the potential gains H_P.
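The leakage estimate reduces to two conditional means. The sketch below follows the text (harvest-mature pixels defined as age > 60 years, recently clear-cut pixels as cut within the past 20 years); the function name and the per-pixel list layout, including the use of None for never-cut pixels, are assumptions of this illustration rather than the paper's R code.

```python
def leakage_quantity(M, age, years_since_cut):
    """Q_leakage = q_U - q_L for a study region (sketch).

    M: per-pixel M-values; age: per-pixel stand ages (years);
    years_since_cut: per-pixel years since clear-cutting, or None for
    pixels that have never been cut. All three lists are parallel.
    """
    # q_U: mean M of pixels at harvesting maturity (likely relocation targets).
    mature = [m for m, a in zip(M, age) if a > 60]
    # q_L: mean M of pixels clear-cut within the past 20 years
    # (value retained after cutting).
    recent_cut = [m for m, y in zip(M, years_since_cut)
                  if y is not None and y <= 20]
    q_u = sum(mature) / len(mature)
    q_l = sum(recent_cut) / len(recent_cut)
    return q_u - q_l
```

The returned quantity is then subtracted from the M-value of every offset pixel (H_q = H_P − Q_leakage), as in step 5 of the algorithm.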

| RE SULTS
The hypothetical development site L is a 38-ha area in the eastern part of the Uusimaa province, covered by on average 60-year-old forest, with some trees reaching ages of >80 years (Figure 3).
We assume that the development of this site results in the total loss of the forest and associated biodiversity values. The proposed six candidate sites (Figure 3, A-F) for offsetting the impact are all larger than the impact site (70-160 ha) and vary in their average stand age (45-65 years) and associated M-values (Table 2).
Using the algorithm described above, we find that offset candidates D and E fail to meet the first criterion set for an acceptable offset (H_P > N_h × H_L). That is, they do not have enough pixels with higher values than those lost at site L when the multiplier N_h is set to two. While site D has the most mature forest across the candidate sites (Table 2) and a relatively high average M-value (Figure 4), it has the smallest area of unprotected forest, leading to inadequate offset size. Site E in turn has relatively low M-values. Hence, we exclude these sites from further consideration.
After the benefits of each site have been corrected for leakage (H_q) and for avoided loss and time discounting (H_G), the expected offset gains of sites B, C and F are markedly reduced, and C and F consequently do not meet our second criterion for an acceptable offset (S_G > S_L).

| DISCUSSION
Evaluation of offset proposals is often hindered by the lack of data and methods that allow a landscape-level assessment of the spatial and temporal impacts of offset trades. Here, we have demonstrated a method for evaluating avoided loss biodiversity offset scenarios using relatively easily accessible spatial information and parameters describing the effectiveness of avoided loss and the amount of leakage expected. There are prior studies in which some form of spatial prioritization has been used for designing offsets (Kujala, Whitehead, Morris, & Wintle, 2015; Maseyk et al., 2016; Moilanen, 2013). Where the ecological loss of development for individual species is known and spatial layers representing gains from offset action exist, it is possible to set adequate offset targets and use SCP for solving the optimization problem directly (e.g. Kujala et al., 2015). However, spatial data to run such optimizations are rarely available. Also, options for both impact and offset areas are much more frequently limited by land tenure and ownership, implying that a scenario-based investigation will often suffice. For those common situations, the proposed approach offers a plausible quantitative tool to assess offset proposals.
Consequently, the present work differs from studies using direct spatial optimization for offset design in several ways. First, the proposed approach is post hoc applicable to standard SCP outputs. Second, care needs to be taken to assure that true upward trades can be made based on the SCP outputs, as commonly required in out-of-kind offset programmes (Bull, Hardy, Moilanen, & Gordon, 2015; Habib et al., 2013).
In the described algorithm, the (optional) upward-trade condition states that, even within the same environment, the offset area needs to have higher conservation value or rarity-weighted richness, or both, than the impact area (first criterion, Figures 1 and 4) to achieve NNL. However, the upward-trade condition can also quickly limit the options for offsetting, particularly when the area lost is of high value to start with. This limitation could be relaxed by allowing some pixels (e.g. up to 5%) to be compensated with lower values at the offset site, or by replacing higher quality with increased area for offsets. While doing so is possible, case-specific consideration will be needed. How much quality can be sacrificed and at what ecological cost, whether trades between environments are allowed, and how strictly values are traded up should be active decisions. Effectively, it is a question of the degree of flexibility allowed versus the risk of not achieving policy goals.
Albeit an illustrative example, our analysis provides important insights into the practical challenges of avoided loss offsetting.
Notably, even with just moderate assumptions on the realized returns and time discounting, only offsets that harboured notably higher value, relatively mature stands and/or were much larger in size than the impact site successfully achieved NNL. These results highlight a major difference between offsetting of young or developing managed forest versus offsetting of semi-natural or natural forest. Managed forest tends to retain limited structural features that promote biodiversity, such as mixed tree species composition, mixed age structure, natural hydrology and the presence of mature trees and dead wood (Esseen, Ehnström, Ericson, & Sjöberg, 1997).
Hence, offsetting young production forest with mature forest that has some structural features of natural forest represents a true upward trade. Conversely, offsetting of semi-natural or natural forest will be hard, if not impossible, because all available offset areas will be of similar or lower quality than the impact area. When using avoided loss together with time discounting and leakage, such trades will inevitably require significantly larger areas to offset losses. This observation emphasizes the fact that offsetting of slowly developing late-successional habitats is unwise in general (Maron et al., 2012; McAlpine et al., 2016).
To conclude, SCP methods can provide a fast first-step assessment of proposed avoided loss offsets. As spatial prioritization methods are not specific to environment, region or analysis resolution, all flexibility inherent to them applies to the proposed offset evaluation approach as well (Kujala et al., 2018;Lehtomäki & Moilanen, 2013).
Here we used the SCP software Zonation, but the priority ranking and range size rarity maps utilized could have been produced via any other analysis path, and in this sense the proposed methods are completely general. Being one of the most commonly used SCP software tools, however, Zonation provides a direct linkage between the proposed offset evaluation approach and any past or future Zonation applications.

DATA AVAILABILITY STATEMENT
Data and R scripts to run the illustrative example have been made available at https://doi.org/10.26188/5e496c00b3005 (Kujala, Moilanen, & Mikkonen, 2020).