UvA-DARE (Digital Academic Repository)

Calculating second derivatives of population growth rates for ecology and evolution

Summary

1. Second derivatives of the population growth rate measure the curvature of its response to demographic, physiological or environmental parameters. The second derivatives quantify the response of sensitivity results to perturbations, provide a classification of types of selection and provide one way to calculate sensitivities of the stochastic growth rate.
2. Using matrix calculus, we derive the second derivatives of three population growth rate measures: the discrete-time growth rate λ, the continuous-time growth rate r = log λ and the net reproductive rate R₀, which measures per-generation growth.
3. We present a suite of formulae for the second derivatives of each growth rate and show how to compute these derivatives with respect to projection matrix entries and to lower-level parameters affecting those matrix entries.
4. We also illustrate several ecological and evolutionary applications of these second derivative calculations with a case study for the tropical herb Calathea ovandensis.


Introduction
Using matrix population models, ecological indices can be calculated as functions of vital rates such as survival or fertility. Measures of population growth rate, including the discrete-time growth rate λ, the continuous-time growth rate r = log λ and the net reproductive rate R₀, are of particular interest. The discrete-time population growth rate λ is given by the dominant eigenvalue of the population projection matrix. Sensitivities (first partial derivatives) of λ with respect to relevant parameters quantify how population growth responds to vital rate perturbations. These first derivatives are used to project the effects of vital rate changes due to environmental or management perturbations, uncertainty in parameter estimates and phenotypic evolution (i.e. with λ as a fitness measure, the sensitivity of λ with respect to a parameter is the selection gradient on that parameter) (Caswell 2001).

Applications of second derivatives of growth rates
The second derivatives of growth rates have applications in both ecology (e.g. assessing and improving recommendations from sensitivity analysis, approximating the sensitivities of stochastic growth rates) and evolution (e.g. characterizing nonlinear selection gradients and evolutionary equilibria). Several of these applications are summarized in Table 1 and described in the following sections.

Second-order sensitivity analysis and growth rate estimation
The sensitivity of growth rate provides insight into the population response to parameter perturbations. However, such perturbations also affect the sensitivity itself; that is, sensitivity is 'situational' (Stearns 1992). These second-order effects are quantified by the sensitivity, with respect to a parameter θⱼ, of the sensitivity of λ to another parameter θᵢ, that is, by the second derivatives ∂²λ/∂θⱼ∂θᵢ. The sensitivity of the elasticity of growth rate to parameters similarly depends on second derivatives (Caswell 1996, 2001).
In conservation applications, attention is often focused on the vital rates to which population growth is particularly sensitive or elastic; these first-order results may change depending on parameter perturbations. First derivatives also provide a linear, first-order approximation to the response of the growth rate to changes in parameters. The linear approximation is guaranteed to be accurate for sufficiently small perturbations and is often very accurate even for quite large perturbations (Caswell 2001). If the response of λ to θ is nonlinear, it is tempting to use a second-order approximation for Δλ, adding the term ½ Σᵢ Σⱼ (∂²λ/∂θᵢ∂θⱼ) Δθᵢ Δθⱼ to the linear approximation Σᵢ (∂λ/∂θᵢ) Δθᵢ. We caution that although this may, in some cases, provide a more accurate calculation, this is not guaranteed. As shown in Fig. 1 of Carslake, Townley & Hodgson (2008), for example, adding the second-order terms may actually reduce the accuracy of the approximation.
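To make the comparison concrete, the two approximations can be computed numerically. The sketch below uses Python/NumPy (the paper's own code is MATLAB), and the 2×2 matrix and perturbation are hypothetical; derivatives are taken by finite differences rather than the closed-form formulae derived later in the paper.

```python
import numpy as np

def lam(M):
    """Dominant eigenvalue (spectral radius) of a non-negative matrix."""
    return max(abs(np.linalg.eigvals(M)))

# Hypothetical 2x2 projection matrix (fertility 2.0; survival 0.4, 0.8)
A = np.array([[0.0, 2.0],
              [0.4, 0.8]])

i, j = 1, 0          # perturb the survival entry a21
dx = 0.05            # size of the perturbation
h = 1e-5             # finite-difference step

def lam_at(x):
    B = A.copy()
    B[i, j] = x
    return lam(B)

x0 = A[i, j]
d1 = (lam_at(x0 + h) - lam_at(x0 - h)) / (2 * h)                 # first derivative
d2 = (lam_at(x0 + h) - 2 * lam_at(x0) + lam_at(x0 - h)) / h**2   # second derivative

exact = lam_at(x0 + dx) - lam_at(x0)
linear = d1 * dx                      # first-order approximation of the change in lambda
quad = linear + 0.5 * d2 * dx**2      # second-order approximation
print(exact, linear, quad)
```

For this particular matrix the quadratic term improves the approximation, but, as the Carslake, Townley & Hodgson (2008) example shows, that improvement is not guaranteed in general.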

Characterizing nonlinear selection processes
The second derivatives of fitness with respect to trait values have consequences for selection. The first derivatives of fitness are selection gradients (Lande 1982). When fitness is a linear function of a trait, its second derivatives are zero, and there is selection to shift the trait's mean value. When fitness is a nonlinear function of a trait, its second derivatives are nonzero and provide additional information on how selection affects the trait's higher moments (Lande & Arnold 1983; Phillips & Arnold 1989; Brodie, Moore & Janzen 1995). Such nonlinear selection can be classified as concave or convex depending on whether the second derivatives are negative or positive. One can classify a selection process as linear, concave or convex using quadratic selection gradients, the local second derivatives of fitness with respect to trait value (Phillips & Arnold 1989). If fitness is measured as λ, these quadratic selection gradients are equivalent to ∂²λ/∂θ², the pure second derivatives of λ with respect to trait θ (e.g. the second derivatives with respect to stage-specific survival in C. ovandensis, as shown in Fig. 3a). Concave, linear and convex selection correspond to negative, zero and positive second derivatives, respectively.
Concave selection reduces the variance in the trait, and convex selection increases it; Lande & Arnold (1983, p. 1216) equate this to a more sophisticated version of the concepts of stabilizing and disruptive selection. Brodie, Moore & Janzen (1995) provide further analysis of the curvature of the fitness surface and its effects on selection.
Selection operating on pairs of traits is said to be correlational if the cross second derivatives are nonzero. Thus, if the pure second derivatives of two different traits, θᵢ and θⱼ, are both nonzero, their mixed second derivative ∂²λ/∂θⱼ∂θᵢ is a measure of correlational selection. If ∂²λ/∂θⱼ∂θᵢ < 0, there is selection to decrease the phenotypic correlation between the two traits; if ∂²λ/∂θⱼ∂θᵢ > 0, there is selection to increase their correlation. The concepts of nonlinear selection are powerful, but require the second derivatives of fitness to be applied.
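As a toy illustration (a hypothetical 2×2 life cycle, not a model from the paper), the sign of a finite-difference mixed second derivative of λ classifies the correlational selection acting on a survival trait s and a fertility trait f:

```python
import numpy as np

def lam(M):
    return max(abs(np.linalg.eigvals(M)))

def fitness(s, f):
    # Hypothetical life cycle: juvenile survival s, adult fertility f
    A = np.array([[0.0, f],
                  [s, 0.5]])
    return lam(A)

s0, f0, h = 0.3, 1.5, 1e-4

# Mixed second derivative of lambda with respect to s and f, by central differences
d2_sf = (fitness(s0 + h, f0 + h) - fitness(s0 + h, f0 - h)
         - fitness(s0 - h, f0 + h) + fitness(s0 - h, f0 - h)) / (4 * h**2)

kind = 'positive correlational' if d2_sf > 0 else 'negative correlational'
print(d2_sf, kind)
```

For this life cycle the mixed derivative is positive, so selection favours an increased phenotypic correlation between s and f.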

Stability of evolutionary singular strategies
Second derivatives play a role in adaptive dynamics analyses. Evolutionary singular strategies (SSs) are phenotypes for which the selection gradient is locally zero (e.g. Geritz et al. 1998). SSs are classified as stable, attracting or repelling, and by whether they can invade or coexist with other nearby phenotypes (Geritz et al. 1998; Diekmann 2004; Waxman & Gavrilets 2005; Doebeli 2011).
These classifications depend on the local second derivatives of invasion fitness, the growth rate of a rare mutant in an equilibrium resident environment. For example, the second derivative of the mutant growth rate λ with respect to the mutant trait y determines whether a SS is evolutionarily stable (∂²λ/∂y² < 0) or evolutionarily unstable (∂²λ/∂y² > 0). Evolutionarily stable strategies, once established, are unbeatable phenotypes against which no nearby mutants can increase under selection and are thus long-term evolutionary endpoints. Evolutionarily unstable strategies, on the other hand, are branching points open to phenotypic divergence and may ultimately become sources of sympatric speciation (Geritz et al. 1998).

Sensitivity of the stochastic growth rate
Second derivatives provide a way to calculate the sensitivity of the stochastic growth rate in some cases. The stochastic growth rate is

log λ_s = lim (t→∞) (1/t) log N(t),

where N(t) is the population size at time t. Tuljapurkar (1982) derived a small-noise approximation for log λ_s in the absence of temporal autocorrelation. As shown by Caswell (2001, Section 14.3.6), this approximation can be written in terms of the first derivatives of λ, the dominant eigenvalue of the mean projection matrix A. Thus, the derivatives of this approximation can be written in terms of the second derivatives of that eigenvalue (Caswell 2001, Section 14.3.6). We discuss this application further in the section 'Sensitivity analysis of stochastic growth rates'.
The second derivatives of λ with respect to matrix elements were introduced by Caswell (1996); see also Caswell (2001, Section 9.7). However, these calculations are awkward and error-prone, because they involve all the eigenvalues and eigenvectors of the projection matrix. McCarthy, Townley & Hodgson (2008) introduced an alternative approach for calculating the second derivatives of eigenvalues (they call them 'second-order sensitivities') based on transfer functions, partially to avoid the calculation of all the eigenvectors. However, they consider only rank-one perturbations of a subset of the matrix elements, excluding fertilities, and their calculations are perhaps equally difficult.
Here, we reformulate the second derivative calculations using matrix calculus, providing easily computable results. We extend previous results by including not only second derivatives with respect to matrix elements, but also those with respect to any lower-level parameters that may affect the matrix elements, and by presenting the second derivatives of the continuous-time invasion exponent r and the net reproductive rate R₀.
The key to our approach is that the calculation of first derivatives using matrix calculus yields a particular expression, the differentiation of which leads directly to the second derivatives. Second derivatives are easily computed by this method in any matrix-oriented language, such as MATLAB or R. Although we consider only the second derivatives of population growth rates, our approach extends naturally to other scalar dependent variables.
In the section 'A case study: Calathea ovandensis', we present an example of the calculation of second derivatives in a case study of the tropical herb Calathea ovandensis.

Notation
Matrices are denoted by upper-case boldface letters (e.g. A) and vectors by lower-case boldface letters (e.g. w); unless otherwise indicated, all vectors are column vectors. Transposes of matrices and vectors are indicated by the superscript ⊤. The matrix Iₙ is the n×n identity matrix, the vector e is a vector of ones, and e₁ is a vector with 1 as its first entry and zeros elsewhere. The matrix K_{m,n} is an mn×mn commutation matrix (vec-permutation matrix) (Magnus & Neudecker 1979; Henderson & Searle 1981), which can be calculated using the MATLAB function provided in Appendix S1-D. The expression diag(x) indicates the square matrix with x on the diagonal and zeros elsewhere.
The Kronecker product is denoted by X⊗Y and the Hadamard (element-by-element) product by X∘Y. The vec operator (e.g. vecA) stacks the columns of a matrix into a single vector. For convenience, we will write (vecA)⊤ as vec⊤A. We will make frequent use of Roth's theorem (Roth 1934), which states that for any matrices X, Y and Z for which the product XYZ is defined,

vec(XYZ) = (Z⊤ ⊗ X) vecY.   eqn 3

Matrix calculus is a system for manipulating vectors and matrices in multivariable calculus and simplifies partial derivative calculations by allowing the differentiation of scalar, vector or matrix functions with respect to scalar, vector or matrix arguments. While there are multiple matrix calculus notations, we will use the system of Magnus & Neudecker (1999). For a more detailed introduction to these methods in an ecological context, see Appendix 1 of Caswell (2007).
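Roth's theorem and the commutation matrix are easy to verify numerically. The sketch below is Python/NumPy (the paper's MATLAB function for K_{m,n} is in Appendix S1-D; this stand-alone construction is ours), with vec implemented as column-major flattening:

```python
import numpy as np

def vec(X):
    """Stack the columns of X into a single vector (column-major order)."""
    return X.flatten(order='F')

def commutation(m, n):
    """K_{m,n}: the mn x mn matrix with K @ vec(X) = vec(X.T) for any m x n X."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            K[i * n + j, j * m + i] = 1.0
    return K

rng = np.random.default_rng(0)
X, Y, Z = rng.random((2, 3)), rng.random((3, 4)), rng.random((4, 5))

# Roth's theorem: vec(XYZ) = (Z' kron X) vec(Y)
lhs = vec(X @ Y @ Z)
rhs = np.kron(Z.T, X) @ vec(Y)
print(np.allclose(lhs, rhs))  # True

W = rng.random((2, 3))
print(np.allclose(commutation(2, 3) @ vec(W), vec(W.T)))  # True
```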
The first derivative of an m×1 vector y with respect to an n×1 vector x is defined to be the m×n Jacobian matrix, that is, a matrix whose (i,j) entry is the derivative of yᵢ with respect to xⱼ. We will also write this as an operator D[y; x]; the first argument of D is the vector-valued function y to be differentiated, and the second argument is the vector-valued variable x with respect to which differentiation is carried out. Thus, D[y; x] = dy/dx⊤. As in the scalar case, second derivatives are obtained by differentiating first derivatives. If we consider a scalar-valued function y(x) of a vector-valued argument x, the matrix of second derivatives (the Hessian matrix) is given by the operator H[y; x], whose (i,j) entry is ∂²y/∂xᵢ∂xⱼ. The matrix of second derivatives of a vector-valued function y(x), where y has dimensions m×1, is obtained by stacking the Hessian matrices for each of the elements of y; that is,

H[y; x] = (H[y₁; x]; H[y₂; x]; … ; H[yₘ; x]), stacked vertically.
These first and second derivative definitions are written in terms of vector-valued functions and arguments. When matrices appear, they are transformed into vectors using the vec operator, which stacks the columns of the matrix into a column vector. Thus, the first and second derivatives of λ with respect to the entries of the matrix A are written, respectively, as D[λ; vecA] and H[λ; vecA]. Magnus & Neudecker (1985, 1999) showed how to obtain first and second derivatives from the differentials of functions. Their 'first identification theorem' shows that if an expression of the form

dy = Q dx   eqn 10

can be obtained, then the Jacobian matrix of first derivatives is given by D[y; x] = Q.
The 'second identification theorem' does the same for the Hessian matrix of second derivatives, showing that if

d²y = dx⊤ B dx,   eqn 11

then H[y; x] = ½(B + B⊤). Thus, our goal will be to find expressions of the form d²y = dx⊤ B dx, where y is a measure of population growth rate and x represents either matrix entries or lower-level parameters; the matrix B will then provide the Hessian matrix via (11). The key to our approach is to begin with an expression of the form (10) for the first differential, differentiate it to obtain the second differential and manipulate the result into the form of (11).

Second derivatives of growth rates
We now apply the identification theorems to three measures of population growth rate: the discrete-time growth rate λ, the continuous-time growth rate r = log λ and the net reproductive rate R₀.
Second derivatives of λ with respect to matrix entries: H[λ; vecA]

We assume a population projection matrix A of dimension n×n. The discrete-time growth rate λ is the dominant eigenvalue of A. To derive H[λ; vecA], we begin with an expression of the form (10) for the first differential of λ. As shown in Caswell (2010),

dλ = (w⊤ ⊗ v⊤) dvecA,   eqn 12

where w and v are the right and left eigenvectors of A corresponding to λ, scaled so that

e⊤w = 1,   eqn 13
v⊤w = 1,   eqn 14

where e is an n×1 vector of ones.
Differentiating (12) gives the second differential

d²λ = (dw⊤ ⊗ v⊤ + w⊤ ⊗ dv⊤) dvecA + (w⊤ ⊗ v⊤) d²vecA.   eqn 15

Because we are calculating second derivatives with respect to A, the second term drops out, since d²vecA = 0 (Magnus & Neudecker 1999). Applying the vec operator (Magnus & Neudecker 1999) and substituting (17) into (16) gives (18). By the chain rule, dw = D[w; vecA] dvecA and dv = D[v; vecA] dvecA (eqns 19 and 20). The first derivatives of w and v, subject to the scalings (13) and (14), are given in Caswell (2008) and H. Caswell and Y. Vindenes (unpublished data) as (21) and (22), respectively. Substituting (19) and (20) into (18) gives (23), which is of the form d²λ = dvec⊤A B dvecA (24), and hence, by the second identification theorem (11),

H[λ; vecA] = ½(B + B⊤),   eqn 25

where B is the matrix defined by (23) and (24), and the first derivatives of w and v are given by (21) and (22).
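The closed-form result can be cross-checked numerically: since D[λ; vecA] = w⊤ ⊗ v⊤, finite-differencing this sensitivity vector, one matrix entry at a time, approximates H[λ; vecA]. The Python/NumPy sketch below is such a numerical check (with a hypothetical 2×2 matrix), not an implementation of the paper's formula (25):

```python
import numpy as np

def eig_triple(A):
    """Dominant eigenvalue with right (w) and left (v) eigenvectors,
    scaled so that e'w = 1 and v'w = 1."""
    ev, W = np.linalg.eig(A)
    k = np.argmax(ev.real)
    lam, w = ev[k].real, W[:, k].real
    ev2, V = np.linalg.eig(A.T)
    v = V[:, np.argmax(ev2.real)].real
    w = w / w.sum()
    v = v / (v @ w)
    return lam, w, v

def hess_lambda_fd(A, h=1e-6):
    """Finite-difference H[lambda; vecA] from the sensitivity vec(v w') = w kron v."""
    n = A.shape[0]
    H = np.zeros((n * n, n * n))
    for q in range(n * n):
        i, j = q % n, q // n              # column-major vec index -> (row, col)
        Ap, Am = A.copy(), A.copy()
        Ap[i, j] += h
        Am[i, j] -= h
        _, wp, vp = eig_triple(Ap)
        _, wm, vm = eig_triple(Am)
        H[:, q] = (np.kron(wp, vp) - np.kron(wm, vm)) / (2 * h)
    return H

A = np.array([[0.0, 2.0],
              [0.4, 0.8]])
H = hess_lambda_fd(A)
print(np.allclose(H, H.T, atol=1e-4))   # Hessians are symmetric
```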

Second derivatives of λ with respect to lower-level parameters: H[λ; θ]
Because many life-history traits and environmental factors affect multiple life cycle transitions, the entries of A are usually functions of lower-level parameters. The first derivatives with respect to lower-level parameters are calculated with the chain rule. To calculate the second derivatives of λ with respect to an s×1 vector θ of lower-level parameters, we must develop a chain rule for the Hessian.
To do so, we begin with the first differential of λ in (12) and differentiate to obtain the second differential (15). Because we are calculating second derivatives with respect to θ rather than A, d²vecA is no longer zero. By the chain rule,

dvecA = (dvecA/dθ⊤) dθ.   eqn 26

Differentiating (26) gives an expression (27) for d²vecA; because d²θ = 0, the second term drops out. Substituting (26) and (27) into the expression for the second differential in (15) yields (28). To simplify this expression, we define a matrix (29) in terms of which (28) can be rewritten (30), and then apply the vec operator and Roth's theorem (3) to obtain (31), where the terms can be rearranged as shown by (A-10) and (A-13) in Appendix S1-A; hence, by the second identification theorem (11),

H[λ; θ] = (dvecA/dθ⊤)⊤ H[λ; vecA] (dvecA/dθ⊤) + (D[λ; vecA] ⊗ I_s) H[vecA; θ].   eqn 38

The first and second derivatives of A with respect to θ, which appear in dvecA/dθ⊤ and H[vecA; θ], respectively, can be evaluated by hand or by using a symbolic math program. This result agrees with the Hessian chain rule derived in a different way by Magnus & Neudecker (1999, p. 125).
These results can be used to parameterize constraints or covariation among traits. As a simple example, suppose that survival and fertility are constrained to covary as Fᵢ = cPᵢ, and one wants the total second derivative including this constraint. This is obtained by defining a lower-level parameter θ, setting Pᵢ = θ and Fᵢ = cθ, and calculating H[λ; θ].

Second derivatives of r: H[r; vecA] and H[r; θ]

The population growth rate in continuous time is the invasion exponent r = log λ. By the definition of the Hessian in (7), the Hessian of r with respect to A is obtained by differentiating r twice with respect to vecA. We insert the first derivative of log λ, which by the chain rule is (1/λ) D[λ; vecA], and then apply the product rule, which simplifies to

H[r; vecA] = (1/λ) H[λ; vecA] − (1/λ²) D⊤[λ; vecA] D[λ; vecA],   eqn 42

where H[λ; vecA] is given by (25).
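The relation between the Hessians of λ and r = log λ is easy to confirm numerically. In the Python/NumPy sketch below (a hypothetical 2×2 matrix), both gradients and Hessians are taken by finite differences rather than the paper's formulae, and the identity H[r] = H[λ]/λ − (dλ)(dλ)⊤/λ² is checked directly:

```python
import numpy as np

def lam(M):
    return max(abs(np.linalg.eigvals(M)))

A = np.array([[0.0, 1.5],
              [0.5, 0.6]])
n = A.shape[0]
h = 1e-4

def fd_grad_hess(f, A):
    """Finite-difference gradient and Hessian of a scalar f with respect to vecA."""
    m = n * n
    g, H = np.zeros(m), np.zeros((m, m))
    def at(d):
        return f(A + d.reshape(n, n, order='F'))
    for p in range(m):
        e = np.zeros(m); e[p] = h
        g[p] = (at(e) - at(-e)) / (2 * h)
        for q in range(m):
            u = np.zeros(m); u[q] = h
            H[p, q] = (at(e + u) - at(e - u) - at(-e + u) + at(-e - u)) / (4 * h**2)
    return g, H

gl, Hl = fd_grad_hess(lam, A)
gr, Hr = fd_grad_hess(lambda M: np.log(lam(M)), A)
L = lam(A)

# Chain rule for r = log(lambda): H[r] = H[lambda]/lambda - (grad)(grad)'/lambda^2
Hr_pred = Hl / L - np.outer(gl, gl) / L**2
print(np.allclose(Hr, Hr_pred, atol=1e-3))
```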
Replacing vecA in (42) with a parameter vector θ gives the Hessian H[r; θ]. The derivatives dvecA/dθ⊤ can be calculated by hand or with a symbolic math program, and H[λ; θ] can be obtained from (38).
Second derivatives of R₀

The net reproductive rate R₀ measures the population growth rate per generation and is used as an alternative fitness measure to r under some special conditions (Pásztor, Meszéna & Kisdi 1996; Brommer 2000). If A is decomposed into transition and fertility matrices, A = U + F, then R₀ is the dominant eigenvalue of the next-generation matrix R = FN (Cushing & Zhou 1994), where N is the fundamental matrix

N = (Iₙ − U)⁻¹.

The (i,j) entry of N gives the expected number of visits to stage i for an individual starting in stage j. The (i,j) entry of R gives the expected lifetime production of stage-i offspring by an individual starting in stage j.
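In code, N and R follow directly from this decomposition. The sketch below uses a hypothetical 3-stage life cycle in Python/NumPy (illustrative matrices, not the C. ovandensis model):

```python
import numpy as np

# Hypothetical 3-stage life cycle: U = survival/transition probabilities,
# F = fertilities (offspring enter stage 1 only)
U = np.array([[0.2, 0.0, 0.0],
              [0.3, 0.4, 0.0],
              [0.0, 0.4, 0.8]])
F = np.array([[0.0, 0.0, 1.5],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

N = np.linalg.inv(np.eye(3) - U)     # fundamental matrix: expected stage visits
R = F @ N                            # next-generation matrix
R0 = max(abs(np.linalg.eigvals(R)))  # net reproductive rate
print(R0)                            # 1.875: R0 > 1, per-generation growth
```

Since R₀ > 1 here, the dominant eigenvalue λ of A = U + F also exceeds 1, consistent with R₀ and λ sitting on the same side of 1.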
Because R₀ is an eigenvalue, our results for H[λ; vecA] and H[λ; θ] can be applied to find its second derivatives, but with R taking the place of the matrix A. The resulting expressions are more complicated than the corresponding expressions for λ, because parameters can affect R₀ through U, F or both. In the important special case where only a single type of offspring is produced (suppose it is numbered as stage 1), R is an upper triangular matrix and R₀ is its (1,1) entry; in this case, eigenvalue calculations are not necessary.
We defer the fully general calculation of H[R₀; θ] to Appendix S1-C and show results here for two useful special cases: the second derivatives with respect to the entries of the transition matrix U and with respect to the entries of the fertility matrix F. We consider both single and multiple types of offspring.
If we apply (38) to the case of R₀, replacing vecA with vecR, we obtain

H[R₀; θ] = (dvecR/dθ⊤)⊤ H[R₀; vecR] (dvecR/dθ⊤) + (D[R₀; vecR] ⊗ I_s) H[vecR; θ],   eqn 47

where D[R₀; vecR] = w_R⊤ ⊗ v_R⊤, and w_R and v_R are the right and left eigenvectors of R corresponding to R₀.
To evaluate (47), we must calculate the second derivatives of R₀ with respect to R, and the first and second derivatives of R with respect to θ. For the former, the Hessian H[R₀; vecR] is given by (25), using the dominant eigenvalue and the corresponding eigenvectors of R rather than those of A. For the latter, we will consider the derivatives of R with respect to U and F in turn. The derivatives of R with respect to general parameters θ are shown in Appendix S1-B.
Second derivatives of R₀ with respect to the transition matrix: H[R₀; vecU]

The second derivatives of R₀ with respect to the entries of the transition matrix U require the first and second derivatives of R with respect to U. The first derivatives are obtained by differentiating R = FN, applying the vec operator and noting that dvecN = (N⊤ ⊗ N) dvecU (Caswell 2006, 2009), to obtain

dvecR/dvec⊤U = N⊤ ⊗ FN.   eqn 48

The second derivatives of R are obtained from the definition of the Hessian matrix (9), which requires the derivative of vec(N ⊗ R⊤); this is given by a result of Magnus & Neudecker (1985, Theorem 11; 1999, p. 209): for an m×n matrix X and a p×q matrix Y,

vec(X ⊗ Y) = (Iₙ ⊗ K_{q,m} ⊗ I_p)(vecX ⊗ vecY).

Thus, (50) can be rewritten as (52). As a result, H[R₀; vecU] is obtained from the chain rule (38), where dvecR/dvec⊤U is given by (48) and H[vecR; vecU] is given by (53).
Second derivatives of R₀ with respect to the fertility matrix: H[R₀; vecF]

Now consider the second derivatives of R₀ with respect to the entries of the fertility matrix F. Differentiating R = FN with respect to F yields the first derivatives

dvecR/dvec⊤F = N⊤ ⊗ Iₙ.

The second derivatives of R are given by the Hessian matrix H[vecR; vecF]. However, because N depends only on U, and not on F, R is linear in F and this Hessian is a zero matrix.

Single type of offspring
In the common case where there is only one type of offspring (Appendix S1-B), H[R₀; θ] simplifies to an expression written in terms of e₁, the n×1 vector with 1 as its first entry and zeros elsewhere, so that no eigenvalue calculations are required.
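The single-offspring shortcut (R₀ as the (1,1) entry of R, with no eigenvalue calculation) is easy to confirm numerically: when F has nonzero entries only in its first row, R = FN is rank one, and its dominant eigenvalue equals e₁⊤Re₁. A sketch with random hypothetical matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# Random lower-triangular transitions with small entries (so I - U is invertible)
U = np.tril(rng.random((n, n))) * 0.2
# Fertility confined to row 1: a single type of offspring
F = np.zeros((n, n))
F[0, :] = rng.random(n)

N = np.linalg.inv(np.eye(n) - U)
R = F @ N

R0_eig = max(abs(np.linalg.eigvals(R)))  # R0 as a dominant eigenvalue
R0_entry = R[0, 0]                       # R0 as e1' R e1
print(np.isclose(R0_eig, R0_entry))      # True
```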

A case study: Calathea ovandensis
Calathea ovandensis is a neotropical perennial herb that inhabits forest understories. Horvitz & Schemske (1995) summarized four years of population dynamics from four plots of C. ovandensis with a series of 8×8 projection matrices. The average of these matrices, weighted by the observed stage abundances and transition frequencies, is given in Table 8 of Horvitz & Schemske (1995). The dominant eigenvalue of this matrix is 0.9923, indicating a near-steady state population.
To obtain the second derivatives of λ with respect to the entries of A, we calculated the Hessian H[λ; vecA] using (25). It is a symmetric 64×64 matrix (Fig. 1). In this example, and in others with large projection matrices, H[λ; vecA] contains many entries and may be difficult to interpret, even when entries that are fixed at 0 are omitted. Most of the second derivatives here are small in magnitude (Fig. 1b), with the exception of a few entries, including the highly negative ∂²λ/∂a₃,₁² = −217.53 and ∂²λ/∂a₃,₁∂a₄,₂ = −75.64, where a₃,₁ is the transition probability from seed to juvenile and a₄,₂ is the transition probability from seedling to pre-reproductive.
Using (38), we calculated the Hessian H[λ; θ] for a set of lower-level parameters θ. For example, the stage-specific survival probabilities are lower-level parameters that affect multiple matrix entries. To analyse these using (38), write the survival probabilities in a vector σ, which is given by the column sums of U, so that

U = G diag(σ),

where G describes stage transitions conditional on survival (Caswell 2011). The Hessian of λ with respect to σ is given by (38), with the parameter vector θ given by σ. Calculating this Hessian requires the first and second derivatives of A with respect to σ, where dvecA/dσ⊤ is given by (61) and H[λ; vecA] is given by (25). The resulting Hessian matrix with respect to the lower-level survival probabilities, H[λ; σ], is shown in Fig. 2. These second derivatives are generally of smaller magnitude than those of H[λ; vecA] (Fig. 1). The largest second derivatives in H[λ; σ] appear in rows 1 and 2 (equivalently, columns 1 and 2). Figure 3 highlights the mixed second derivatives ∂²λ/∂σ₁∂σᵢ and ∂²λ/∂σ₂∂σᵢ, along with the pure second derivatives ∂²λ/∂σᵢ². C. ovandensis has several large second derivatives involving σ₁ and σ₂ (the first two rows or columns of Fig. 2, which are shown separately in Fig. 3b,c). As discussed in the section 'Second-order sensitivity analysis and growth rate estimation', this indicates that the sensitivity of λ to stage 1 (seed) and stage 2 (seedling) survival will be especially responsive to changes in later survival. Similarly, the sensitivity of λ to later survival is especially responsive to changes in seed and seedling survival.
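The derivative dvecA/dσ⊤ in (61) can be checked against finite differences. The Python/NumPy sketch below uses a hypothetical 3-stage G, σ and F (not the C. ovandensis matrices) and the decomposition A = G diag(σ) + F:

```python
import numpy as np

n = 3
# Hypothetical stage transitions conditional on survival (columns sum to 1)
G = np.array([[0.5, 0.0, 0.0],
              [0.5, 0.6, 0.0],
              [0.0, 0.4, 1.0]])
sigma = np.array([0.4, 0.7, 0.9])       # stage-specific survival probabilities
F = np.zeros((n, n)); F[0, 2] = 2.0     # fertility, assumed independent of sigma

def A_of(s):
    return G @ np.diag(s) + F           # U = G diag(sigma), A = U + F

# Eqn (61): dvecA/dsigma' = (I kron G) diag(vec I) (e kron I)
I, e = np.eye(n), np.ones((n, 1))
J = np.kron(I, G) @ np.diag(I.flatten(order='F')) @ np.kron(e, I)

# Finite-difference check, one survival probability at a time
h = 1e-6
J_fd = np.zeros((n * n, n))
for k in range(n):
    d = np.zeros(n); d[k] = h
    J_fd[:, k] = (A_of(sigma + d) - A_of(sigma - d)).flatten(order='F') / (2 * h)
print(np.allclose(J, J_fd, atol=1e-8))  # True
```

Because A is linear in σ, the central difference here is exact up to rounding, so the agreement is essentially to machine precision.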
When interpreted in terms of selection gradients, recall from the section 'Characterizing nonlinear selection processes' that selection on a single trait is concave, linear or convex if ∂²λ/∂θ² is negative, zero or positive. Selection on two traits is negatively or positively correlational if ∂²λ/∂θ₁∂θ₂ is negative or positive. C. ovandensis is experiencing nearly linear selection on survival in stage 8 (∂²λ/∂σ₈² ≈ 0), concave selection on survival in stage 2, and convex selection on survival in stages 1, 3, 4 and 5. There is negative correlational selection between survival in stage 1 (seeds) or 2 (seedlings) and survival in stages 4-8 (adults), and positive correlational selection between seed or seedling survival and survival in stages 1-3 (pre-adults). This indicates that seed and seedling survival are being selected to decrease their correlation with adult survival, but to increase their correlation with pre-adult survival.
Because the Hessian matrices include second derivatives with respect to all possible pairs of characters (matrix entries or lower-level parameters), they contain a great deal of information, and there are no established standards for displaying the results.We have shown several possibilities that may be useful: colour plots, plots that remove matrix entries that are of no interest because they are structural zeros, and plots displaying the range of magnitudes of the second derivatives.Others will no doubt be developed.The MATLAB code used to generate the analysis is included in the Supporting Information.

Sensitivity analysis of stochastic growth rates
An application in which second derivatives are not the objective, but in which the Hessian matrix plays a role, is the sensitivity of Tuljapurkar's small-noise approximation to the stochastic growth rate log λ_s (see the section 'Sensitivity of the stochastic growth rate'). The approximation is

log λ_s ≈ log λ − (vec⊤S C vecS)/(2λ²),   eqn 63

where S is the sensitivity matrix of λ and C is the covariance matrix of the entries of the population projection matrix. The sensitivity of the stochastic growth rate can be obtained by differentiating (63) with respect to the entries of A. The sensitivity of the stochastic growth rate to A, leaving the variances and covariances fixed, depends on the second derivatives of λ through the Hessian matrix H = H[λ; vecA] (eqn 65). A derivation of (65) is provided in Appendix S1-C. Much more powerful and general approaches to sensitivity analysis of the stochastic growth rate are available in recent developments of the Monte Carlo method (e.g. Caswell 2005, 2010; Tuljapurkar, Horvitz & Pascarella 2003; Haridas & Tuljapurkar 2005; Horvitz, Tuljapurkar & Pascarella 2005). This approximate result may, however, be useful in situations where the stochastic environment is defined directly in terms of the covariance matrix C of the vital rates.
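As a sketch of how the pieces fit together, the code below evaluates the small-noise approximation in the form log λ_s ≈ log λ̄ − (vec⊤S C vecS)/(2λ̄²), where S is the sensitivity matrix of the mean matrix (our reading of Tuljapurkar's approximation; the mean matrix and covariance structure are hypothetical):

```python
import numpy as np

def lam_wv(A):
    """Dominant eigenvalue and eigenvectors, scaled so that e'w = 1 and v'w = 1."""
    ev, W = np.linalg.eig(A)
    k = np.argmax(ev.real)
    lam, w = ev[k].real, W[:, k].real
    ev2, V = np.linalg.eig(A.T)
    v = V[:, np.argmax(ev2.real)].real
    w = w / w.sum()
    v = v / (v @ w)
    return lam, w, v

# Hypothetical mean projection matrix
Abar = np.array([[0.0, 2.0],
                 [0.5, 0.7]])
lam0, w, v = lam_wv(Abar)
s = np.kron(w, v)           # vec of the sensitivity matrix S (entries v_i w_j)

# Hypothetical covariance matrix of vecA: only the fertility a12 varies
C = np.zeros((4, 4))
C[2, 2] = 0.09              # column-major vec index of a12 is 2

log_lam_s = np.log(lam0) - (s @ C @ s) / (2 * lam0**2)
print(np.log(lam0), log_lam_s)
```

Environmental variance always pushes the approximate stochastic growth rate below log λ̄, and it is the derivative of this expression with respect to vecA that brings in the Hessian H[λ; vecA].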

Discussion
Although the first derivatives of population growth rates are commonly used in ecology and demography, tools for calculating the second derivatives are not nearly as well established, even though second derivatives also have a variety of potential applications. To this end, we have derived new, more easily computable formulae for the second derivatives of three population growth rate measures (the discrete-time growth rate λ, the continuous-time growth rate r, and the per-generation growth rate R₀), both with respect to projection matrix entries and to lower-level parameters. Table 2 provides an overview of the results, with directions to the equations defining the Hessian matrix, containing all second-order partial derivatives, for each type of growth rate and each type of independent variable.
The matrix calculus approach is comprehensive, and even though the formulae may appear complicated, they are easy to apply with any matrix-oriented software. Other methods for finding second derivatives are either more limited or require more difficult and error-prone calculations. Cohen (1978), for instance, derives only the pure second derivatives of λ with respect to the diagonal elements of the projection matrix (∂²λ/∂aᵢᵢ²). The approaches of Deutsch & Neumann (1984) and Kirkland & Neumann (1994) rely on the calculation of group inverses, while those of Caswell (1996) require all the eigenvalues and eigenvectors of the projection matrix. The method of McCarthy, Townley & Hodgson (2008) uses transfer functions rather than eigenvectors and is more complicated when handling lower-level parameters.
Population growth rate, no matter how it is measured, is important in many ecological and evolutionary problems.It is hoped that the methods presented here will contribute to a deeper understanding of the response of growth rates to changes in parameters.
Table 2. An overview of the formulae for the second derivatives of population growth rates (λ, r, R₀) with respect to matrix entries (A, U, F) or to lower-level parameters (θ, σ). The equation number for the corresponding Hessian matrix is given in the third column; auxiliary equations for terms in the Hessian expressions are given in the fourth column. The MATLAB functions used to calculate each Hessian, as provided in the supplemental material, are listed in the last column.

Fig. 1. (a) The Hessian matrix H[λ; vecA], giving the second derivatives of λ with respect to the entries of the projection matrix A, for C. ovandensis. Entries corresponding to fixed zeros (unobserved transitions) in the matrix (59) are omitted. (b) The entries of the Hessian matrix in 1a, sorted in ascending order. The derivatives ∂²λ/∂a₃,₁² = −217.53 and ∂²λ/∂a₃,₁∂a₄,₂ = −75.64 are omitted due to their magnitude.

Table 1. Potential applications for the pure and mixed second derivatives of λ. Analogous interpretations apply to r or R₀ as alternative measures of growth or fitness.

Horvitz & Schemske (1995) developed a stage-structured matrix model for C. ovandensis that contains eight stages distinguished by size and reproductive ability: seeds, nonreproductive stages (seedlings, juveniles, pre-reproductive) and reproductive stages (small, medium, large and extra large). Plants may grow larger, remain in the same size class, or shrink at each time step; larger adults are typically more fecund.
The first derivatives, assuming that F does not depend on σ (i.e. a prebreeding census), are

dvecA/dσ⊤ = (Iₙ ⊗ G) diag(vecIₙ)(e ⊗ Iₙ)   eqn 61

(see Caswell and Salguero-Gómez 2013, Appendix A). The second derivatives of A are given by H[vecA; σ], the derivative of (61) with respect to σ. However, none of the terms in (61) depend on σ, so H[vecA; σ] is a zero matrix. Thus, the Hessian in (38) reduces to its first term, (dvecA/dσ⊤)⊤ H[λ; vecA] (dvecA/dσ⊤).