Abstract


This paper provides estimates of bank efficiency and productivity in the United States over the period from 1998 to 2005, using (for the first time) the globally flexible Fourier cost functional form, as originally proposed by Gallant (1982), estimated subject to global theoretical regularity conditions using procedures suggested by Gallant and Golub (1984). We find that failure to incorporate monotonicity and curvature into the estimation results in mismeasured magnitudes of cost efficiency and misleading rankings of individual banks in terms of cost efficiency. We also find that the largest two subgroups (with assets greater than $ 1 billion in 1998 dollars) are less efficient than the other subgroups, and that the largest four bank subgroups (with assets greater than $ 400 million) experienced significant productivity gains, whereas the smallest eight subgroups experienced insignificant productivity gains or even productivity losses. Copyright © 2008 John Wiley & Sons, Ltd.


1. INTRODUCTION


Over the last 25 years (from 1980 to 2005), the banking industry in the United States has been greatly transformed by numerous regulatory changes—see, for example, Lown et al. (2000), Kroszner and Strahan (2000), and Montgomery (2003) for detailed lists of regulatory changes. These changes, particularly those permitting interstate branching and combinations of banks, securities firms, and insurance companies, stimulated a decade-long consolidation of the industry, characterized by a dramatic rise in merger and acquisition activity, a rapid decline in the number of commercial banks, and an increasing concentration of industry assets among the very largest banks (see Jones and Critchfield, 2005). At the same time, various innovations in technology and applied finance were widely and intensively adopted by the US banking industry. These technological and financial innovations include, but are not limited to, information processing and telecommunication technologies, the securitization and sale of bank loans, and the development of derivatives markets. The widespread and intensive use of information technologies and financial innovation has facilitated the rapid transfer of information at low cost, increased the scope and volume of non-traditional activities, and helped facilitate consolidation of the industry (see Berger et al., 1995; Berger, 2004).

The question of whether this unprecedented transformation has made the US banking industry more efficient has stimulated a substantial body of efficiency studies—see, for example, the surveys in Berger and Humphrey (1997) and Berger et al. (1999). One dimension of banking efficiency that attracted considerable research interest (especially in studies prior to the 1990s) comprises scale efficiency and scope efficiency. The former measures whether a banking firm is producing at optimal output levels, and the latter measures whether it is producing an optimal combination of outputs. The other dimension of banking efficiency, which has received increasing attention since the early 1990s, is X-efficiency. X-efficiency is called ‘frontier efficiency’ in Bauer et al. (1998) and ‘economic efficiency’ in Kumbhakar and Lovell (2003). The interested reader is referred to Kumbhakar and Lovell (2003) for an excellent discussion of the relationship between the different concepts of efficiency.

X-efficiency is a combination of technical efficiency and allocative efficiency, with the former referring to the ability of a firm to produce maximal output from a given set of inputs and the latter referring to the extent to which a firm uses the inputs in the best proportions, given their prices. X-efficiency is most commonly measured by determining an industry's best-practice frontier and computing how far each firm deviates from this frontier. Previous studies revealed that X-inefficiencies outweigh scale and scope inefficiencies by a considerable margin and, as Bauer et al. (1998, p. 86) put it, ‘have a strong empirical association with higher probabilities of financial institution failures.’ According to Berger and Humphrey (1991), cost inefficiency consumes 25% or more of total costs, whereas scale and scope inefficiencies consume only 5% or less. Therefore, in recent years, research on the efficiency of the US banking industry has increasingly focused on X-efficiency.

The literature investigating X-efficiency in the US banking industry has been dominated by two methodologies: the nonparametric Data Envelopment Analysis (DEA for short) and the parametric Stochastic Frontier Analysis (SFA for short). Two other, less commonly used parametric approaches are the Thick Frontier Analysis (TFA for short, see Berger and Humphrey, 1991) and the Distribution Free Approach (DFA for short, see Berger, 1993). First put forward by Charnes et al. (1978), the DEA approach is a linear programming technique in which the efficient frontier is formed as the piecewise linear combination connecting the set of best-practice observations in the dataset under analysis, yielding a convex production possibility set (see Berger and Humphrey, 1997). However, because DEA uses only data on input and output quantities and does not take direct account of input prices, it cannot capture allocative inefficiency.

The SFA approach, based on the ideas of Aigner et al. (1977) and Meeusen and van den Broeck (1977), involves the estimation of a specific parameterized efficiency frontier with a composite error term consisting of non-negative inefficiency and noise components. X-efficiency can thus be measured in terms of cost efficiency, revenue efficiency, or profit efficiency, depending on the type of frontier used. The DEA and SFA approaches generally give very different efficiency estimates. However, Bauer et al. (1998) and Rossi and Ruzzier (2000) argue that it is not necessary to have a consensus on which is the single best frontier approach for measuring efficiency. They also propose a series of criteria to evaluate if the inefficiency estimates obtained from different approaches are mutually consistent in terms of inefficiency scores and ranks.

Cost efficiency has received the most attention in the parametric analysis of the efficiency of the US banking industry. According to Berger and Humphrey (1997), 30 of the 38 studies that employed parametric techniques in the analysis of efficiency in the US banking industry used cost functions, and the rest employed profit functions—among these 38 parametric studies, several employed TFA and DFA. Despite its popularity, the cost frontier used in previous studies suffers from the following two problems. First, the estimated parameters of cost frontiers frequently violate the monotonicity and concavity constraints implied by economic theory, which leads to erroneous conclusions concerning efficiency levels. While permitting a parameterized function to depart from the neoclassical function space is usually fit-improving, it also causes the hypothetical best-practice firm not to be fully efficient at those data points where theoretical regularity is violated.

Second, the cost frontier suffers from a lack of flexibility. Most previous studies employ a translog functional form. Researchers have found, however, that the translog function lacks the flexibility needed to model the US banking industry, which is composed of banks of widely varying sizes (see McAllister and McManus, 1993; Wheelock and Wilson, 2001). In an attempt to increase flexibility, more recent studies employ a so-called ‘Fourier function’, which is actually a translog function augmented with trigonometric Fourier terms. Although this so-called ‘Fourier function’ can improve the goodness of fit, it is not a true Fourier flexible functional form in Gallant's (1982) original sense. In particular, the original Fourier flexible functional form consists of two components, the first being a reparameterized translog function and the second a trigonometric Fourier series. It is important to note that these two components are not independent of each other. In fact, the scaled variables of outputs and input prices are used not only in the Fourier series, but also in the modified translog part. The so-called ‘Fourier function’, however, ignores the parametric relationship between the two components and simply includes the scaled variables of outputs and input prices in the Fourier series. While this practice makes the Fourier function much easier to use, it may be unable to achieve close approximation in the Sobolev norm and may result in inconsistent parameter estimates.

Motivated by the widespread practice of ignoring the theoretical regularity conditions and of not using a globally flexible functional form, as summarized in Table I, the purpose of this paper is to reinvestigate the cost efficiency of the US banking industry with more recent panel data, over the sample period from 1998 to 2005, while addressing the two problems inherent in previous studies. In doing so, we take the SFA approach and minimize the potential problem of using a misspecified functional form by employing a globally flexible functional form—Gallant's (1982) original Fourier flexible cost function. It should be noted that there are two globally flexible functional forms which can provide greater flexibility than locally flexible functional forms: the Fourier flexible functional form and the Asymptotically Ideal Model, introduced by Barnett et al. (1991). The former is based on a Fourier series expansion and the latter on a linearly homogeneous multivariate Müntz–Szász series expansion. Both are globally flexible in the sense that they are capable of approximating the underlying cost function at every point in the function's domain by increasing the order of the expansion, and thus have more flexibility than most locally flexible functional forms, which theoretically attain flexibility only at a single point or in an infinitesimally small region. In this study we employ the Fourier cost functional form, which is both log-linear and globally flexible. In implementing it, we strictly follow Gallant's (1982) original specification of the functional form rather than, as previous studies did, simply including the scaled variables of outputs and input prices in the Fourier series.

Table I. A summary of flexible functional form estimation of cost efficiency of US banks

Study | Model used | True Fourier | Curvature imposed
Ferrier and Lovell (1990) | Translog | — | No
Berger and Humphrey (1991) | Translog | — | No
Berger (1993) | Translog | — | No
Kaparakis et al. (1994) | Translog | — | No
Berger and Mester (1997) | Translog + Fourier trigonometric terms | No | No
Berger et al. (1997) | Translog + Fourier trigonometric terms | No | No
Peristiani (1997) | Translog | — | No
DeYoung (1997) | Translog | — | No
Mester (1997) | Translog | — | No
DeYoung et al. (1998) | Translog + Fourier trigonometric terms | No | No
Stiroh (2000) | Translog | — | No
Clark and Siems (2002) | Translog | — | No
Berger and Mester (2003) | Translog + Fourier trigonometric terms | No | No

Note: Some studies employed both cost and profit frontiers.

We also estimate the Fourier flexible cost function subject to full theoretical regularity. There are three approaches to incorporating curvature and/or monotonicity restrictions into flexible functional forms: the Cholesky factorization approach, the Bayesian approach, and the nonlinear constrained optimization approach. The Cholesky factorization approach can only guarantee negative semidefiniteness of the Hessian matrix of a cost function in a region around the reference point (that is, the data point where curvature is imposed), and satisfaction of curvature at data points far away from the reference point can be obtained only by luck (see Ryan and Wales, 2000). This is not satisfactory, especially when the sample size is large and violations of curvature are widespread. The Bayesian approach involves specifying prior distributions for the parameters and inefficiency terms. However, the specification of prior distributions adds extra uncertainty to the outcome of the modelling exercise, especially when researchers have no idea of how to parameterize the unknown parameters a priori (see Diewert, 2004; Greene, 2005). The nonlinear constrained optimization approach, originally proposed by Gallant and Golub (1984) and recently used by Serletis and Shahmoradi (2005) in the context of consumer demand systems, provides computational methods for imposing curvature restrictions at any arbitrary set of points. Monotonicity can also be incorporated into the estimation of the cost function, although the original Gallant and Golub (1984) paper does not do so. The method applies to any cost function as long as the Hessian matrix (or some transform of it) and the first-order derivatives of the cost function can be explicitly specified. While the nonlinear constrained optimization method has many desirable properties, no attempt has been made in the stochastic frontier literature to use it to impose monotonicity and curvature on parametric (cost or distance) functions.

The rest of the paper is organized as follows. Section 2 provides a brief review of stochastic cost frontiers. In Section 3 we present the Fourier cost function and detail the homogeneity, monotonicity, and curvature constraints implied by neoclassical microeconomic theory. In Section 4 we discuss the constrained nonlinear optimization methodology for imposing these constraints on the parameters of the Fourier cost function. Section 5 deals with the data description. In Section 6, we apply our model to panel data on US banks, and discuss the effect of the incorporation of monotonicity and curvature on cost efficiency, and also report our estimates on cost efficiency for 12 different bank groups. Section 7 summarizes and concludes the paper.

2. STOCHASTIC COST FRONTIER


Within a panel data framework, the cost frontier model can be written as

$C_{it} = f(X_{it}, \rho)\, \tau_{it}\, \zeta_{it}$  (1)

This model decomposes the observed cost of firm i at time t, Cit, into three parts: (i) the cost frontier f(Xit, ρ), which depends on Xit, a vector of exogenous variables (i.e., input prices and output quantities), and ρ, a vector of parameters, and which represents the minimum possible cost of producing a given level of output at given input prices; (ii) a term τit ≥ 1, measuring firm-specific inefficiency; and (iii) a random error, ζit, which captures statistical noise. The deterministic kernel of the cost frontier is f(Xit, ρ), and the stochastic cost frontier is f(Xit, ρ)ζit. As required by microeconomic theory, f(Xit, ρ) is linearly homogeneous and concave in input prices and nondecreasing in both input prices and outputs.

We follow the common practice in this literature and assume that f(Xit, ρ) takes a log-linear functional form. The stochastic cost frontier in (1) is then rewritten as

$c_{it} = \alpha + x_{it}'\beta + u_{it} + v_{it}$  (2)

where cit = ln Cit; α + xit′β = ln f(Xit, ρ); uit = ln τit ≥ 0; and vit = ln ζit. Here xit is the counterpart of Xit with the input prices and output quantities transformed to logarithms, β is a K × 1 vector of parameters, and α is the intercept. Thus the composite error term εit (= uit + vit) consists of two parts, with uit capturing the level of firm inefficiency and vit capturing statistical noise.

In an empirical exercise, assumptions are commonly made about the two error components. Usually the vits are assumed to be i.i.d. N(0, σv2) and independent of the uits, an assumption we maintain throughout this paper. In the specification of the distribution of the uits we assume

$u_{it} = \eta_{it}\, u_i$  (3)

where

$\eta_{it} = \exp\left[\eta_1 (t - T) + \eta_2 (t - T)^2\right]$  (4)

where η1 and η2 are parameters to be estimated and the uis are assumed to be independently and identically distributed non-negative truncations of the N(0, σu2) distribution. Note that the above exponential function of time, ηit, is a generalization of the one proposed by Battese and Coelli (1992), in the sense that its two-parameter specification relaxes the monotonicity of the temporal variation pattern of the efficiency term.

The cost efficiency of firm i at time t can then be defined as the ratio of minimum cost attainable in an environment characterized by exp(vit) to observed expenditure, as follows:

$CE_{it} = \exp(\alpha + x_{it}'\beta + v_{it})\, /\, \exp(c_{it}) = \exp(-u_{it})$  (5)

with CEit ≤ 1. Notice that CEit = 1 if and only if cit = α + xit′β + vit. For example, if a firm is 80% efficient, it could reduce its costs by 20% simply by becoming fully efficient.
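To make (3)–(5) concrete, here is a small MATLAB illustration under the reconstruction of (4) given above; all parameter values (eta1, eta2, u_i) are hypothetical.

    % Time-varying inefficiency and cost efficiency, equations (3)-(5).
    eta1 = 0.05; eta2 = 0.04;             % hypothetical temporal parameters
    T = 8; t = 1:T;                       % eight periods, 1998-2005
    u_i = 0.22;                           % hypothetical base inefficiency draw
    eta_it = exp(eta1*(t - T) + eta2*(t - T).^2);   % eq. (4)
    u_it = eta_it * u_i;                            % eq. (3)
    CE_it = exp(-u_it);                             % eq. (5)
    % A bank with CE_it = 0.80 could reduce observed cost by 20% by
    % moving onto the stochastic frontier.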

3. THE FOURIER COST FUNCTION


We assume that α + xit′β in equation (2) takes the form of an M-output, N-input Fourier cost function, as follows:

$\alpha + x_{it}'\beta = g(z_{it}\,|\,\vartheta) = u_0 + b'z_{it} + \frac{1}{2} z_{it}' A z_{it} + \sum_{\alpha=1}^{E}\left[u_{0\alpha} + 2\sum_{j=1}^{J}\left(u_{j\alpha}\cos(j\lambda k_\alpha' z_{it}) - w_{j\alpha}\sin(j\lambda k_\alpha' z_{it})\right)\right]$  (6)

where u0 = α, b is an (N + M) × 1 vector of coefficients, and ϑ = (u0, b′, {u0α, ujα, wjα})′ is the vector of parameters to be estimated. zit = (lit, qit)′ is an (N + M) vector of rescaled log input prices, lit, and rescaled log outputs, qit. The rescaling follows Gallant (1982):

$l_{itn} = \ln p_{itn} - \ln a_n \;\; (n = 1, \dots, N); \qquad q_{itm} = \ln y_{itm} - \ln b_m \;\; (m = 1, \dots, M)$  (7)

where pn is the price of input n, ym is the quantity of output m, and the location parameters ln an and ln bm are chosen as

$\ln a_n = \min_{i,t}\{\ln p_{itn}\} - \epsilon; \qquad \ln b_m = \min_{i,t}\{\ln y_{itm}\} - \epsilon$  (8)

where ε is a small positive number, so that each rescaled variable is strictly positive.

In equation (6), $A = -\lambda^2 \sum_{\alpha=1}^{E} u_{0\alpha}\, k_\alpha k_\alpha'$; λ is a rescaling factor, and kα is a multi-index—an (N + M) vector with integer components. As Gallant (1982) shows, multi-index notation, with the length of a multi-index denoted $|k_\alpha|^* = \sum_i |k_{i\alpha}|$, reduces the complexity of the notation required for high-order partial differentiation and for the multivariate Fourier trigonometric terms (the sin and cos terms). Following Gallant (1982), these indexes are constructed using the following rules (the construction is complex and is performed using MATLAB; see the sketch below). First, the zero vector and any kα whose first non-zero element is negative are deleted. Second, every index with a common integer divisor is also deleted.
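A minimal MATLAB sketch of this enumeration, assuming only the two deletion rules just quoted (the index set actually used in the paper, shown in Table II, additionally imposes the homogeneity condition of Section 3.1):

    % Enumerate multi-indexes k in Z^(N+M) with |k|* <= maxNorm, dropping
    % the zero vector, any k whose first non-zero element is negative, and
    % any k whose non-zero elements share a common integer divisor.
    dim = 6; maxNorm = 3;
    idx = cell(1, dim);
    [idx{:}] = ndgrid(-maxNorm:maxNorm);            % all vectors in the box
    cand = reshape(cat(dim + 1, idx{:}), [], dim);
    keep = false(size(cand, 1), 1);
    for r = 1:size(cand, 1)
        k = cand(r, :);
        if sum(abs(k)) == 0 || sum(abs(k)) > maxNorm, continue; end
        if k(find(k ~= 0, 1)) < 0, continue; end    % first non-zero negative
        nz = abs(k(k ~= 0));
        g = nz(1);
        for i = 2:numel(nz), g = gcd(g, nz(i)); end
        if g > 1, continue; end                     % common integer divisor
        keep(r) = true;
    end
    K = cand(keep, :);                              % candidate multi-indexes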

Because a Fourier series is a periodic function of its arguments while the cost function is not, the scaling of the data is also important. In empirical applications, to prevent the approximation from diverging from the true cost function, the data should be rescaled by a common scaling factor, λ, so that the rescaled log input prices and output quantities lie in the interval [0, 2π]. The common scaling factor λ is defined as in Gallant (1982). The parameters E (the number of elementary multi-indexes) and J (the order of the trigonometric expansion) determine the degree of the Fourier polynomial. Thus, the Fourier cost function has 1 + (N + M) + E(1 + 2J) parameters to be estimated.
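Under the reconstruction of equations (7)–(8) given above, the rescaling can be sketched in MATLAB as follows; the data matrices and the displacement eps0 are hypothetical:

    % Location-shift the logged data so the smallest value is just above
    % zero, then choose a common factor lambda so that the trigonometric
    % arguments lambda * k' * z stay within [0, 2*pi].
    lnP = log(rand(100, 3) + 0.5);  lnY = log(rand(100, 3) + 0.5);  % toy data
    eps0 = 1e-5;                              % hypothetical displacement
    ln_a = min(lnP, [], 1) - eps0;            % eq. (8): input-price locations
    ln_b = min(lnY, [], 1) - eps0;            % eq. (8): output locations
    l = lnP - ln_a;  q = lnY - ln_b;          % eq. (7): rescaled logs
    z = [l, q];
    lambda = (2*pi - eps0) / max(z(:));       % common scaling factor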

Substituting the cost frontier defined by (6) into (2), we obtain the basic panel data stochastic cost frontier model we are going to use in this paper:

$c_{it} = g(z_{it}\,|\,\vartheta) + u_{it} + v_{it}$  (9)

where all parameters and variables are defined as above.

3.1. Theoretical Regularity

As required by microeconomic theory, the Fourier cost function in (6) has to satisfy certain theoretical regularity conditions, i.e., homogeneity, monotonicity, and concavity. The restriction of linear homogeneity on the Fourier cost frontier can be imposed through reparameterization, as in Gallant (1982) and Gallant and Golub (1984):

$\sum_{n=1}^{N} b_n = 1$  (10)

and

$\sum_{n=1}^{N} k_{n\alpha} = 0, \qquad \alpha = 1, \dots, E$  (11)

Restriction (10) guarantees the linear homogeneity of the first-order terms, and (11) guarantees the linear homogeneity of both the second-order terms and the Fourier trigonometric terms.

We now turn to the monotonicity and curvature constraints. For simplicity, the subscripts i and t on all variables are suppressed in this subsection to avoid notational clutter. Define $\nabla_z g(l, q, \vartheta) = \partial[g(l, q, \vartheta)]/\partial z$ and $\nabla^2_{zz} g(l, q, \vartheta) = \partial^2[g(l, q, \vartheta)]/\partial z\, \partial z'$, where z = (l, q) as above. By the two equations defined in (7), it can easily be shown that

$g(l, q, \vartheta) = \ln f(p_1, \dots, p_N, y_1, \dots, y_M)$  (12)

where f(p1, …, pN, y1, …, yM) = f(Xit, ρ) is the cost frontier corresponding to the Fourier cost function. In what follows, we write f(p, y) instead of f(Xit, ρ). Taking the partial derivative of both sides of (12) with respect to z, we obtain the following equation:

$\nabla_{(p,y)} f(p, y) = f(p, y)\, Z^{-1}\, \nabla_z g(l, q, \vartheta)$  (13)

where $\nabla_{(p,y)} f(p, y) = \partial[f(p, y)]/\partial(p, y)$ and Z is a diagonal matrix with the unscaled input prices (p1, …, pN) and outputs (y1, …, yM) on its main diagonal. With both f(p, y) and Z−1 positive, monotonicity (∂[f(p, y)]/∂p > 0 and ∂[f(p, y)]/∂y > 0) requires

$\nabla_z g(l, q, \vartheta) > 0$  (14)

where $\nabla_l g(l, q, \vartheta)$ must in addition satisfy $\sum_{n=1}^{N} \nabla_{l_n} g(l, q, \vartheta) = 1$, which can be derived from the fact that the cost function is homogeneous of degree one in prices, i.e.

$\sum_{n=1}^{N} \nabla_{l_n} g(l, q, \vartheta) = \left[\sum_{n=1}^{N} p_n\, \nabla_{p_n} f(p, y)\right] \Big/ f(p, y) = 1$  (15)

In equation (15), the first equality can be obtained by using (13).

Concavity in input prices requires that the Hessian matrix, H, of the cost frontier f(p, y) be negative semidefinite. It can easily be shown that the element in the ith row and jth column of the Hessian matrix of the cost function f(p, y) is given by (see the Appendix)

$H_{ij} = \frac{\partial^2 f(p, y)}{\partial p_i\, \partial p_j} = \frac{f(p, y)}{p_i\, p_j}\left(s_i s_j + \nabla_{l_j} s_i - \delta_{ij}\, s_i\right)$  (16)

where $s_i = p_i x_i / f(p, y)$ is the cost share of input i, $\nabla_{l_j} s_i$ is the derivative of si with respect to the log price of input j, $x_i = \partial f(p, y)/\partial p_i$ is the demand for input i, obtained by Shephard's lemma as the first derivative of the cost function with respect to the input price pi, and δij = 1 if i = j and 0 otherwise.

Since $s_i = \nabla_{l_i} g(l, q, \vartheta)$ and $\nabla_{l_j} s_i = \nabla^2_{l_i l_j} g(l, q, \vartheta)$, for i = 1, …, N, equation (16) can be rewritten as

$H_{ij} = \frac{f(p, y)}{p_i\, p_j}\left(\nabla^2_{l_i l_j} g(l, q, \vartheta) + \nabla_{l_i} g(l, q, \vartheta)\, \nabla_{l_j} g(l, q, \vartheta) - \delta_{ij}\, \nabla_{l_i} g(l, q, \vartheta)\right)$  (17)

Since f(p, y) is positive, $\nabla_{l_i} g(l, q, \vartheta)$ is positive by equation (14), and pi and pj are also positive, concavity of the cost frontier in our particular case is equivalent to requiring (in matrix notation) that

$G = \nabla^2_{ll}\, g(l, q, \vartheta) + \nabla_l g(l, q, \vartheta)\, \nabla_l g(l, q, \vartheta)' - \operatorname{diag}\left[\nabla_l g(l, q, \vartheta)\right]$  (18)

be a negative semidefinite matrix. Thus, (14) and (18) are the constraints we need to incorporate into the estimation of the Fourier cost frontier defined in (9)—the monotonicity and curvature conditions are provided in Gallant (1982) without proof.
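To make the resulting regularity check concrete, the following MATLAB fragment evaluates conditions (14) and (18) at a single observation; the gradient and Hessian values stand in for the derivatives of the Fourier approximation and are purely hypothetical:

    % Hypothetical evaluated derivatives of g at one data point (N = 3):
    g1 = [0.5; 0.3; 0.2];                  % grad_l g: fitted cost shares
    g2 = -0.05 * eye(3);                   % hess_ll g
    G  = g2 + g1 * g1' - diag(g1);         % the matrix in eq. (18)
    monoOK = all(g1 > 0);                  % monotonicity condition (14)
    curvOK = all(eig((G + G')/2) <= 1e-8); % concavity: eigenvalues of G <= 0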

4. CONSTRAINED OPTIMIZATION


In this section, we follow Gallant and Golub (1984) and show how the constrained nonlinear optimization approach can be used to impose the monotonicity and curvature constraints given in (14) and (18) on the parameters of the Fourier cost function defined in (9). Using the reparameterization suggested by Battese and Corra (1977), the model is parameterized in terms of $\sigma^2 = \sigma_u^2 + \sigma_v^2$, the overall variance, and $\gamma = \sigma_u^2/(\sigma_u^2 + \sigma_v^2)$, an indicator of the relative importance of the inefficiency and noise variances. Under these assumptions, constrained optimization gives asymptotically efficient estimates of all the parameters.

With the distributional assumptions in Section 2, the log-likelihood function for a sample of I firms for T periods of time is given by

  • equation image(19)

where Φ(·) denotes the distribution function of a standard normal random variable—see Battese and Coelli (1992) for details on the derivation of the log-likelihood function and its derivatives in the production frontier context. Estimates of u0, β, σ², γ, η1, and η2 can be obtained by minimizing −ln L(φ(θ)), that is, by maximizing the log-likelihood function ln L(φ(θ)) with respect to the parameters. In minimizing −ln L(φ(θ)), we use the TOMLAB/NPSOL toolbox with MATLAB (see http://tomlab.biz/products/npsol). NPSOL uses a sequential quadratic programming algorithm and is suitable for both unconstrained and constrained optimization of smooth (that is, at least twice continuously differentiable) nonlinear functions.

We first run an unconstrained optimization using (19) and check the theoretical regularity conditions of monotonicity and curvature. If the monotonicity and curvature conditions are not satisfied at all observations, we use the NPSOL nonlinear programming routine to minimize −ln L(φ(θ)) with monotonicity and concavity imposed. Essentially, this becomes a constrained maximum likelihood problem.
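The two-step procedure can be sketched as follows. The paper itself uses TOMLAB/NPSOL; since that toolbox is commercial, the sketch below substitutes MATLAB's fmincon (with its SQP algorithm) in the same role, and negLogLik and regConstraints are hypothetical user-written functions returning −ln L(φ(θ)) and the stacked regularity constraints of this section:

    % Minimal sketch of the estimation strategy; negLogLik and
    % regConstraints are hypothetical (regConstraints returns c <= 0,
    % stacking the eigenvalue and negated-gradient constraints).
    nParams = 103;                        % free parameters (see Section 5)
    theta0  = 0.1 * ones(nParams, 1);     % hypothetical starting values
    opts = optimoptions('fmincon', 'Algorithm', 'sqp', ...
                        'MaxFunctionEvaluations', 1e6);

    % Step 1: unconstrained estimation, then check regularity.
    thetaU = fmincon(@negLogLik, theta0, [], [], [], [], [], [], [], opts);

    % Step 2: if monotonicity and/or curvature are violated, re-estimate
    % with the nonlinear inequality constraints imposed.
    thetaC = fmincon(@negLogLik, thetaU, [], [], [], [], [], [], ...
                     @regConstraints, opts);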

While we follow Gallant and Golub (1984) in using nonlinear constrained optimization to impose curvature, we do not do so by constructing their submatrix K22 via a Householder transformation and then deriving an indicator function for the smallest eigenvalue of K22 and its derivative. Instead, we work directly with the matrix G defined in (18), restricting its eigenvalues to be nonpositive. This is because a necessary and sufficient condition for negative semidefiniteness of G is that all its eigenvalues be nonpositive (see Morey, 1986). Compared with the Gallant and Golub (1984) approach, where a reduced matrix K22 is sought, the direct restriction of the eigenvalues of G seems more appealing.

It is well known that an N × N real symmetric matrix has N eigenvalues, all of which are real numbers (see Magnus, 1985). Let λ = [λ1, …, λN] denote the N eigenvalues of G, the real symmetric matrix defined in (18). The nonlinear curvature constraints for our constrained optimization problem can then be written as

$\lambda_n \leq 0, \qquad n = 1, \dots, N$

The eigenvalues of G can be obtained by solving

$\det(G - \lambda I_N) = 0$  (20)

where IN is an N × N identity matrix. Clearly, the λn (n = 1, …, N) are functions of the elements of G, denoted Gij, which are in turn functions of $\nabla^2_{ll} g(l, q, \vartheta)$ and $\nabla_l g(l, q, \vartheta)$, as can be seen from (18). In fact, in our case with N = 3, we have

$\lambda_n = \lambda_n(G_{11}, G_{12}, G_{13}, G_{22}, G_{23}, G_{33})$  (21)

for n = 1, 2, 3, where

$G_{ij} = \nabla^2_{l_i l_j} g(l, q, \vartheta) + \nabla_{l_i} g(l, q, \vartheta)\, \nabla_{l_j} g(l, q, \vartheta) - \delta_{ij}\, \nabla_{l_i} g(l, q, \vartheta)$  (22)

for i = 1, 2, 3 and j = i, …, 3. Explicit formulas for the λn(ϑ) in terms of the Gij elements can easily be obtained using the symbolic toolbox in MATLAB. After substituting (22) into λn(ϑ), the eigenvalues are obtained in terms of $\nabla^2_{ll} g(l, q, \vartheta)$ and $\nabla_l g(l, q, \vartheta)$.
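This symbolic step can be reproduced along the following lines (a sketch using the MATLAB Symbolic Math Toolbox, which the paper reports using for exactly this computation):

    % Closed-form eigenvalues of the symmetric 3 x 3 matrix G of (18) in
    % terms of its six distinct elements, and the eighteen derivatives
    % d(lambda_n)/d(G_ij) used in (23).
    syms G11 G12 G13 G22 G23 G33 real
    G    = [G11 G12 G13; G12 G22 G23; G13 G23 G33];
    lam  = eig(G);                      % lambda_n(G11,...,G33), n = 1,2,3
    vars = [G11 G12 G13 G22 G23 G33];
    dlam = jacobian(lam, vars);         % 3 x 6 matrix of d(lambda_n)/d(G_ij)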

As for the derivatives of λn(ϑ), they can be obtained from equation (21) by the chain rule, as follows:

$\frac{\partial \lambda_n}{\partial \vartheta} = \sum_{i=1}^{3} \sum_{j=i}^{3} \frac{\partial \lambda_n}{\partial G_{ij}}\, \frac{\partial G_{ij}}{\partial \vartheta}$  (23)

All of $\partial \lambda_n/\partial G_{ij}$, $\partial[\nabla^2_{ll} g(l, q, \vartheta)]/\partial \vartheta$, and $\partial[\nabla_l g(l, q, \vartheta)]/\partial \vartheta$ can easily be computed. In our case with N = 3, each of the eighteen $\partial \lambda_n/\partial G_{ij}$ (for n = 1, 2, 3, i = 1, 2, 3, and j = i, …, 3) is calculated using the symbolic toolbox in MATLAB.

In addition to the imposition of concavity, the monotonicity constraints in (14) also need to be imposed if monotonicity is violated. The derivatives for the monotonicity constraints, $\partial[\nabla_{l_n} g(l, q, \vartheta)]/\partial \vartheta$ and $\partial[\nabla_{q_m} g(l, q, \vartheta)]/\partial \vartheta$, can also easily be computed. Hence, our constrained maximum likelihood problem can be written as follows:

$\min_{\theta}\; -\ln L(\phi(\theta))$  (24)

subject to

$\lambda_n(\vartheta) \leq 0, \qquad n = 1, \dots, N$  (25)
$W_j(\vartheta) > 0, \qquad j = 1, \dots, N + M$  (26)

where λn is the curvature constraint and Wj the monotonicity constraint, each evaluated at every observation as shown in (14). As already noted, we can impose the regularity constraints locally (at a single data point), regionally (over a region of data points), or fully (at every data point in the sample). After estimates of u0, β, σ², γ, η1, and η2 are obtained, σu² and σv² can be calculated as σu² = γσ² and σv² = (1 − γ)σ², both of which follow from the reparameterization discussed above.

Following Battese and Coelli (1992), the minimum-mean-squared-error predictor of the cost efficiency of the ith bank at time t, CEit = exp(−uit), is

$CE_{it} = E\left[\exp(-u_{it}) \mid \varepsilon_i\right] = \left[\frac{1 - \Phi\left(\eta_{it}\sigma_i^* - \mu_i^*/\sigma_i^*\right)}{1 - \Phi\left(-\mu_i^*/\sigma_i^*\right)}\right] \exp\left(-\eta_{it}\mu_i^* + \frac{1}{2}\eta_{it}^2 \sigma_i^{*2}\right)$  (27)

where

$\mu_i^* = \frac{\sigma_u^2 \sum_{t=1}^{T} \eta_{it}\, \varepsilon_{it}}{\sigma_v^2 + \sigma_u^2 \sum_{t=1}^{T} \eta_{it}^2}$  (28)
$\sigma_i^{*2} = \frac{\sigma_u^2\, \sigma_v^2}{\sigma_v^2 + \sigma_u^2 \sum_{t=1}^{T} \eta_{it}^2}$  (29)

This framework allows us to calculate the efficiency level of each bank relative to the best-practice bank represented by the cost frontier.
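A compact MATLAB sketch of the predictor, valid only under the reconstruction of (27)–(29) given above (zero-mean truncated-normal ui and the cost-frontier sign convention ε = v + u); all inputs are hypothetical:

    % Battese-Coelli (1992)-type predictor of CE_it = E[exp(-u_it)|eps_i].
    T = 8; eta1 = 0.05; eta2 = 0.04;          % hypothetical estimates
    gam = 0.85; s2 = 0.07;                    % gamma and sigma^2 estimates
    eps_i = 0.1 * randn(T, 1);                % composed residuals of bank i
    s2u = gam * s2;  s2v = (1 - gam) * s2;    % Battese-Corra decomposition
    tt = (1:T)';
    eta_i = exp(eta1*(tt - T) + eta2*(tt - T).^2);               % eq. (4)
    den = s2v + s2u * (eta_i' * eta_i);
    mu_star = s2u * (eta_i' * eps_i) / den;                      % eq. (28)
    s2_star = s2u * s2v / den;                                   % eq. (29)
    s_star = sqrt(s2_star);
    CE_it = (1 - normcdf(eta_i * s_star - mu_star / s_star)) ...
            ./ (1 - normcdf(-mu_star / s_star)) ...
            .* exp(-eta_i * mu_star + 0.5 * eta_i.^2 * s2_star); % eq. (27)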

While we follow Gallant and Golub (1984) in imposing the theoretical regularity conditions on the parameters of the Fourier flexible cost function, we extend their method in two ways. First, we extend the constrained nonlinear optimization approach from a traditional factor demand system framework to a stochastic frontier framework. This extension involves the use of a much more complicated log-likelihood function as the objective function, rather than the simple least-squares-based objective function used in Gallant and Golub (1984), because a composed error term is assumed in the stochastic frontier framework, whereas a simple i.i.d. N(0, σ²) error term is assumed in the traditional factor demand system framework. Second, we extend the method from a time series framework to a panel data framework.

5. THE DATA


The data used in this study, obtained from the Reports of Condition and Income (Call Reports), cover the period from 1998 to 2005. We examine only continuously operating banks, to avoid the impact of entry and exit and to focus on the performance of a core of healthy, surviving institutions over the sample period. There were 10,139 banks in the US banking industry in 1998, and the number declined to 8,390 in 2005 owing to industry consolidation. After deleting observations with negative or zero input prices, we obtained a balanced panel of 6010 banks over the 8 years from 1998 to 2005.

In choosing which financial accounts to specify as outputs versus inputs, we use the accounting balance-sheet approach of Sealey and Lindley (1977). All liabilities (core deposits and purchased funds) and financial equity capital provide funds and are treated as inputs; all assets (loans and securities) use bank funds and are treated as outputs. This approach is different from the intermediation approach, which is consistent with the value-added definition of output production by financial firms and with user-cost price evaluation of the services of outputs. An accurate representation of the intermediation approach can be found in Barnett (1987), Barnett and Hahm (1994), Barnett and Zhou (1994), Barnett et al. (1995), and Hancock (1991).

In this paper, three output quantities and three input prices are identified. The three outputs are: consumer loans, y1; non-consumer loans, y2, composed of industrial and commercial loans and real estate loans; and securities, y3, which includes all non-loan financial assets, i.e., all financial and physical assets minus the sum of consumer loans, non-consumer loans, and equity. All outputs are deflated by the Consumer Price Index (CPI) to the base year 1998. The three input prices are: the wage rate for labor, p1; the interest rate on borrowed funds, p2; and the price of physical capital, p3. The wage rate equals total salaries and benefits divided by the number of full-time employees. The price of capital equals expenses on premises and equipment divided by premises and fixed assets. The price of deposits and purchased funds equals total interest expense divided by total deposits and purchased funds. Total cost is the sum of these three input costs. This specification of outputs and input prices is the same as, or similar to, that of most previous studies in this literature (see, for example, Akhigbe and McNulty, 2003; Stiroh, 2000; Berger and Mester, 2003). Thus, M = N = 3 in this paper. The three outputs and three input prices are then scaled, using the formulas specified in equations (7)–(8) of Section 3, for each of the 12 asset size classes, which we discuss in more detail below.

The set of elementary multi-indexes that satisfy $\sum_{n=1}^{3} k_{n\alpha} = 0$ (these three knα, n = 1, 2, 3, are the elements of the kα vector corresponding to the three input prices) and have norm $|k_\alpha|^* \leq 3$ are displayed in Table II. For this set E = 32, and we take J = 1. While Chalfant and Gallant (1985) and Eastwood and Gallant (1991) have suggested setting the number of parameters equal to the number of effective sample observations raised to the power 2/3, in this paper we restrict the index set to $|k_\alpha|^* \leq 3$ in order to keep the number of parameters manageable, given that we also have to deal with hundreds of variables and thousands of highly nonlinear constraints. Thus we have a total of 1 + (N + M) + E(1 + 2J) = 1 + (3 + 3) + 32 × (1 + 2) = 103 free parameters (that is, parameters estimated directly).

Table II. Elementary multi-indexes

α       1    2    3    4    5    6    7    8    9   10   11
l1      1    1    0    0    0    0    1    1    0    1    1
l2     −1    0    1    0    0    0   −1    0    1   −1    0
l3      0   −1   −1    0    0    0    0   −1   −1    0   −1
q1      0    0    0    1    1    0    1    0    0    0    1
q2      0    0    0   −1    0    1    0    1    0    0    0
q3      0    0    0    0   −1   −1    0    0    1    1    0
|kα|*   2    2    2    2    2    2    3    3    3    3    3

α      12   13   14   15   16   17   18   19   20   21   22
l1      0    1    1    0    1    1    1    0    0    0    0
l2      1   −1    0    1   −1   −1   −1    1    1    0    0
l3     −1    0   −1   −1    0    0    0   −1   −1    0    0
q1      1    0    0    0   −1    0    0    0    0    1    1
q2      0    0    0    1    0   −1    0   −1    0   −2    0
q3      0    1    1    0    0    0   −1    0   −1    0    2
|kα|*   3    3    3    3    3    3    3    3    3    3    3

α      23   24   25   26   27   28   29   30   31   32
l1      0    0    0    0    0    0    0    0    0    0
l2      0    0    0    0    0    0    0    0    0    0
l3      0    0    0    0    0    0    0    0    0    0
q1      1    1    2    2    0    0    0    0    0    0
q2      1    1    0    1    1    1    2    2    1    0
q3     −1    0    1    0   −2    1   −1    1    0    1
|kα|*   3    2    3    3    3    2    3    3    1    1

However, the effective number of parameters is 85 owing to the following restrictions. The homogeneity restriction

$\sum_{n=1}^{N} b_n = 1$  (30)

reduces the number of free parameters by one. The remaining restrictions are due to the overparameterization of the A matrix. In particular, A is a 6 × 6 symmetric matrix which satisfies three linearly independent homogeneity restrictions:

$\sum_{n=1}^{3} a_{nj} = 0, \qquad j = 1, 2, 3$  (31)

Moreover, the symmetry of the matrix A also implies

$\sum_{n=1}^{3} a_{nj} = 0, \qquad j = 4, 5, 6$  (32)

Thus A can have at most 15 free parameters, and in the parameterization

$A = -\lambda^2 \sum_{\alpha=1}^{E} u_{0\alpha}\, k_\alpha k_\alpha'$  (33)

15 of the u0α parameters are free parameters and the remaining 17 must be set equal to zero. The kα corresponding to these 17 parameters are listed in the last 17 columns of Table II.

Following Berger and Mester (2003), we add three more variables to the Fourier cost function: financial equity capital (Equity), non-traditional banking activities (Nontrad), and a time trend, t. Financial equity capital is treated as a fixed net input and off-balance-sheet items are treated as a fixed net output. The time trend t is intended to capture the effect of technological change on cost. In the treatment of non-traditional banking activities, we follow Boyd and Gertler (1994) and use an asset-equivalent measure (AEM) of these activities. We assume that all non-interest income is generated from off-balance-sheet assets and that these non-traditional activities yield the same rate of return on assets (ROA) as traditional activities do; we thus transform the off-balance-sheet income into an equivalent asset, as illustrated below. The two fixed netputs are measured in 1998 constant dollars and used in logarithmic form. When adding the Equity, Nontrad, and t variables to the Fourier cost function, these variables enter in linear and quadratic form (i.e., Equity, Equity², Nontrad, Nontrad², t, t²) and do not interact with the outputs and input prices, in order to keep the number of parameters manageable and to lessen the effects of multicollinearity.
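In code, the AEM computation is a one-line capitalization; the figures below are hypothetical (in millions of dollars):

    % Boyd-Gertler (1994) asset-equivalent measure of non-traditional
    % activities: capitalize non-interest income at the traditional ROA.
    netIncome = 12;  totalAssets = 1000;  nonInterestIncome = 3;  % hypothetical
    ROA_trad = netIncome / totalAssets;   % ROA on traditional activities
    AEM = nonInterestIncome / ROA_trad;   % implied off-balance-sheet assets: 250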

Separating banks into asset size classes is a common approach to assessing bank performance by asset size. However, given the unique nature of the distribution of asset sizes among commercial banks in the United States, it is very difficult to categorize banks by asset size, and there is no industry standard for asset ranges. Over our sample period, from 1998 to 2005, around 85% of all commercial banks report less than $ 500 million in total assets. Over the same period, however, there exists a cluster of extremely large banks, with over $ 3 billion in total assets, that accounts for roughly 2.3% of all commercial banks. In this paper, we classify all banks into three groups: banks with over $ 500 million in total assets are classified as large banks; banks with assets between $ 100 million and $ 500 million are classified as medium banks; and banks with under $ 100 million in assets are classified as small banks.

This classification is based mainly on the standard asset size categories used by the Federal Financial Institutions Examination Council (FFIEC), as specified in forms 031, 032, 033, and 034. The only difference is that the FFIEC sets the asset cap for medium banks at $ 300 million. The reason for this change is to maintain consistency with the Financial Modernization Act and the many previous studies that use $ 500 million as the lower limit for large banks. To reduce the computation time for each of the bank subgroups, and to avoid heterogeneity biases associated with asset size, we further divide each of the three bank groups into several subgroups. Specifically, we use cutoffs at $ 20 million, $ 40 million, $ 60 million, and $ 80 million within the small bank group; $ 200 million, $ 300 million, and $ 400 million within the medium bank group; and $ 1 billion and $ 3 billion within the large bank group. Table III presents the 12 bank subgroups, together with their corresponding asset ranges in 2000 dollars and in 2005 dollars, as well as the number of banks in each subgroup.

Table III. Bank asset size classes

Bank group | Asset size (millions of 2000 dollars) | Asset size (millions of 2005 dollars) | Number of banks | Share of banks
Large banks
Group 1 | assets ≥ 3000 | assets ≥ 3402 | 141 | 2.3%
Group 2 | 1000 ≤ assets < 3000 | 1134 ≤ assets < 3402 | 218 | 3.6%
Group 3 | 500 ≤ assets < 1000 | 567 ≤ assets < 1134 | 381 | 6.3%
Medium banks
Group 4 | 400 ≤ assets < 500 | 453.6 ≤ assets < 567 | 201 | 3.3%
Group 5 | 300 ≤ assets < 400 | 340.2 ≤ assets < 453.6 | 321 | 5.3%
Group 6 | 200 ≤ assets < 300 | 226.8 ≤ assets < 340.2 | 602 | 10.0%
Group 7 | 100 ≤ assets < 200 | 113.4 ≤ assets < 226.8 | 1262 | 21.0%
Small banks
Group 8 | 80 ≤ assets < 100 | 90.72 ≤ assets < 113.4 | 477 | 7.9%
Group 9 | 60 ≤ assets < 80 | 68.04 ≤ assets < 90.72 | 597 | 9.9%
Group 10 | 40 ≤ assets < 60 | 45.36 ≤ assets < 68.04 | 669 | 11.1%
Group 11 | 20 ≤ assets < 40 | 22.68 ≤ assets < 45.36 | 813 | 13.5%
Group 12 | assets < 20 | assets < 22.68 | 328 | 5.5%
Total |  |  | 6010 | 100%

It should be noted, however, that this classification by itself would keep the asset ranges fixed from year to year. Fixed asset ranges raise a serious question about the usefulness of the results when a long sample period, such as this study's, is under examination. To deal with this problem, an approach similar to that laid out in the Financial Modernization Act (FMA) is used. In particular, the FMA defines a community bank as an institution with average total deposits over the preceding 3 years of no more than $ 500 million, and each subsequent year the asset cap is adjusted upward by the growth in the CPI (for all urban consumers), unadjusted for seasonal variation, for the previous year (see Federal Register, 2000). The cap for each year is published in the Federal Register early in the year, along with the inflation rate used in the adjustment. For example, the official asset cap for community banks in 2005 was adjusted to $ 567 million (see Federal Register, 2005). Consistent with this approach, all our asset size cutoffs are set in 2000 constant dollars and are adjusted upward by the growth in the CPI.
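The adjustment itself is mechanical. Using CPI-U annual averages (172.2 for 2000 and 195.3 for 2005), the $ 500 million cutoff in 2000 dollars grows to roughly the $ 567 million 2005 cap cited above; a MATLAB sketch:

    % Inflate the 2000-dollar asset cutoffs to 2005 dollars by CPI growth.
    cpi2000 = 172.2;  cpi2005 = 195.3;        % CPI-U annual averages
    cutoffs2000 = [20 40 60 80 100 200 300 400 500 1000 3000];  % $ millions
    cutoffs2005 = cutoffs2000 * (cpi2005 / cpi2000);
    % 500 * (195.3/172.2) = 567.1, matching the published 2005 cap.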

6. EMPIRICAL RESULTS


We use the TOMLAB/NPSOL toolbox with MATLAB to estimate the model using panel data for each of the bank subgroups. For each subgroup, the model is estimated under four different levels of constraints: with no constraints imposed; with only the curvature constraint imposed; with only the monotonicity constraint imposed; and with both the monotonicity and curvature constraints imposed. For each of the latter three cases, we impose curvature and/or monotonicity in a stepwise manner—first locally, and then globally if regularity is not satisfied under local imposition. Tables IV–XV summarize the results for each of the 12 subgroups in terms of parameter estimates, together with the percentages of monotonicity and curvature violations. Owing to space limitations, we report only the intercept, u0; the coefficients on the first-order terms, b; the coefficients on the second-order terms, u0α; and the coefficients on the time trend, Nontrad, and Equity variables.

Table IV. Parameter estimates for group 1

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 7.3739 | 6.6017 | 7.7557 | 7.3661
b1 | 0.6699 | 0.6858 | 0.9193 | 0.6671
b2 | 0.0818 | 0.0949 | −0.0971 | 0.0824
b3 | 0.1557 | 0.1188 | 0.1955 | 0.1353
b4 | −0.4922 | −0.5293 | −0.0912 | −0.2805
b5 | 0.5639 | 0.6936 | 0.5790 | 0.3820
u01 | 0.3094 | 0.2655 | 0.0954 | 0.3084
u02 | −0.2870 | −0.2473 | −0.2998 | −0.2855
u03 | 0.3850 | 0.3496 | 0.2252 | 0.3825
u04 | 0.0021 | −0.0460 | −0.0041 | 0.0089
u05 | −0.0701 | −0.0773 | −0.0645 | −0.0712
u06 | 0.5085 | 0.2820 | 0.5583 | 0.3658
u07 | 0.0112 | 0.0097 | 0.1887 | 0.0163
u08 | 0.0837 | 0.0816 | 0.1284 | 0.0911
u09 | −0.0450 | −0.0436 | −0.0559 | −0.0520
u010 | −0.0577 | −0.0657 | −0.0733 | −0.0640
u011 | −0.1521 | −0.1403 | −0.4515 | −0.1577
u012 | −0.3178 | −0.2892 | −0.2849 | −0.3127
u013 | −0.3056 | −0.2670 | −0.2984 | −0.3022
u014 | 0.4326 | 0.3922 | 0.4842 | 0.4286
u015 | 0.0062 | 0.0068 | 0.1567 | 0.0105
t | −0.1080 | −0.1050 | −0.1021 | −0.1076
t2 | 0.0045 | 0.0043 | 0.0033 | 0.0045
Nontrad | −0.0910 | −0.0622 | −0.1062 | −0.0886
Nontrad2 | 0.0136 | 0.0093 | 0.0171 | 0.0136
Equity | 0.1726 | 0.2953 | 0.1996 | 0.1820
Equity2 | −0.0051 | −0.0089 | −0.0056 | −0.0054
σ2 | 0.1719 | 0.1690 | 0.1862 | 0.1716
γ | 0.9473 | 0.9486 | 0.9543 | 0.9472
η1 | 0.0574 | 0.0438 | 0.0309 | 0.0560
η2 | 0.0449 | 0.0403 | 0.0436 | 0.0446
Log-likelihood | 611.5 | 586.9 | 593.4 | 572.1
Curvature violations | 1.4% | 0 | 37.5% | 0
Monotonicity violations | 0.1% | 5.7% | 0 | 0
Mean efficiency | 0.8171 |  |  | 0.8219

Table V. Parameter estimates for group 2

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 12.8594 | 12.3481 | 13.2671 | 12.9243
b1 | 0.8502 | 0.8710 | 0.8531 | 0.8048
b2 | 0.0493 | 0.0484 | 0.0427 | 0.1095
b3 | 0.1119 | 0.0972 | 0.1144 | 0.0718
b4 | −0.0299 | −0.0214 | 0.0503 | 0.1488
b5 | 0.3419 | 0.3793 | 0.3318 | 0.3613
u01 | −0.0274 | −0.0434 | −0.0449 | −0.0734
u02 | −0.0518 | −0.0229 | −0.0408 | 0.0338
u03 | 0.0174 | 0.0213 | −0.0002 | −0.0169
u04 | −0.0314 | −0.0265 | −0.0345 | −0.0147
u05 | −0.0050 | −0.0042 | 0.0042 | −0.0024
u06 | −0.0540 | −0.0515 | −0.0603 | −0.0633
u07 | 0.0013 | −0.0060 | 0.0207 | 0.0250
u08 | 0.0261 | 0.0318 | 0.0242 | 0.0248
u09 | −0.0178 | −0.0203 | −0.0163 | −0.0204
u010 | −0.0262 | −0.0273 | −0.0240 | −0.0262
u011 | 0.0345 | 0.0492 | 0.0122 | 0.0144
u012 | 0.0316 | 0.0456 | 0.0263 | 0.0473
u013 | 0.0133 | 0.0295 | 0.0082 | 0.0326
u014 | −0.0268 | −0.0489 | −0.0210 | −0.0535
u015 | −0.0450 | −0.0501 | −0.0232 | −0.0162
t | −0.0812 | −0.0849 | −0.0822 | −0.0896
t2 | 0.0043 | 0.0048 | 0.0044 | 0.0052
Nontrad | 0.0371 | 0.0371 | 0.0389 | 0.0370
Nontrad2 | −0.0026 | −0.0026 | −0.0031 | −0.0022
Equity | −0.6221 | −0.6221 | −0.8316 | −0.8418
Equity2 | 0.0299 | 0.0299 | 0.0389 | 0.0389
σ2 | 0.0769 | 0.0769 | 0.0771 | 0.0745
γ | 0.8851 | 0.8851 | 0.8866 | 0.8794
η1 | 0.1523 | 0.1523 | 0.1819 | 0.1282
η2 | 0.0794 | 0.0794 | 0.0923 | 0.0682
Log-likelihood | 934.6 | 926.9 | 933.2 | 919.4
Curvature violations | 12.6% | 0 | 13.2% | 0
Monotonicity violations | 6.0% | 6.2% | 0 | 0
Mean efficiency | 0.8852 |  |  | 0.8820

Table VI. Parameter estimates for group 3

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 11.6880 | 11.9781 | 11.7559 | 11.2504
b1 | 0.6484 | 0.6503 | 0.6456 | 0.6142
b2 | 0.0696 | 0.0798 | 0.0657 | 0.0936
b3 | 0.0552 | 0.0948 | 0.0782 | 0.0956
b4 | 0.2464 | 0.2113 | 0.1904 | 0.7056
b5 | 0.7217 | 0.7357 | 0.5611 | 0.5993
u01 | −0.1411 | −0.1574 | −0.0803 | −0.2350
u02 | 0.1323 | 0.1550 | 0.0685 | 0.2363
u03 | −0.1258 | −0.1401 | −0.0657 | −0.2247
u04 | −0.0275 | −0.0332 | −0.0127 | −0.0135
u05 | −0.0181 | −0.0175 | −0.0104 | −0.0120
u06 | −0.0316 | −0.0207 | −0.0222 | −0.0234
u07 | −0.0395 | −0.0430 | −0.0539 | −0.0909
u08 | 0.0511 | 0.0374 | 0.0379 | 0.0340
u09 | −0.0344 | −0.0225 | −0.0266 | −0.0220
u010 | −0.0428 | −0.0335 | −0.0318 | −0.0289
u011 | 0.0318 | 0.0406 | 0.0486 | −0.1009
u012 | 0.1832 | 0.1929 | 0.1292 | 0.1390
u013 | 0.1852 | 0.1964 | 0.1283 | 0.1394
u014 | −0.1932 | −0.2029 | −0.1406 | −0.1495
u015 | −0.0228 | −0.0303 | −0.0366 | 0.1070
t | −0.0953 | −0.0954 | −0.0878 | −0.0869
t2 | 0.0045 | 0.0045 | 0.0039 | 0.0038
Nontrad | −0.0249 | −0.0288 | −0.0581 | −0.0600
Nontrad2 | 0.0063 | 0.0068 | 0.0104 | 0.0105
Equity | −1.0046 | −1.0320 | −0.9691 | −1.0279
Equity2 | 0.0450 | 0.0462 | 0.0429 | 0.0456
σ2 | 0.0679 | 0.0673 | 0.0670 | 0.0664
γ | 0.8095 | 0.8073 | 0.8022 | 0.7996
η1 | 0.1468 | 0.1687 | 0.1791 | 0.1917
η2 | 0.0834 | 0.0896 | 0.0998 | 0.1047
Log-likelihood | 1230.1 | 1227.5 | 1212.8 | 1211.4
Curvature violations | 3.9% | 0 | 1.9% | 0
Monotonicity violations | 14.2% | 13.0% | 0 | 0
Mean efficiency | 0.8964 |  |  | 0.9038

Table VII. Parameter estimates for group 4

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 16.3202 | 13.4177 | 13.7549 | 13.9571
b1 | 0.8400 | 0.8932 | 0.8297 | 0.8821
b2 | 0.0051 | −0.0047 | 0.0699 | 0.0065
b3 | −0.1996 | −0.1471 | 0.1099 | −0.0925
b4 | 0.6406 | 0.7147 | 0.8458 | 0.7259
b5 | −0.3814 | −0.4059 | 0.0530 | −0.3640
u01 | 0.1010 | 0.0541 | −0.1530 | 0.0817
u02 | −0.1280 | −0.1116 | 0.1309 | −0.1301
u03 | 0.1540 | 0.1168 | −0.0963 | 0.1366
u04 | 0.0198 | 0.0152 | −0.0123 | 0.0235
u05 | −0.0175 | −0.0204 | −0.0096 | 0.0105
u06 | −0.1723 | −0.1521 | −0.0834 | −0.0881
u07 | 0.1640 | 0.1770 | 0.2031 | 0.1396
u08 | 0.1137 | 0.1006 | 0.0421 | 0.0993
u09 | −0.1032 | −0.0920 | −0.0285 | −0.0784
u010 | −0.1216 | −0.1030 | −0.0380 | −0.0845
u011 | −0.1620 | −0.1587 | −0.1988 | −0.1310
u012 | −0.1940 | −0.1791 | −0.0568 | −0.1841
u013 | −0.2006 | −0.1820 | −0.0645 | −0.1827
u014 | 0.2257 | 0.1921 | 0.0778 | 0.1969
u015 | 0.1479 | 0.1590 | 0.1804 | 0.1266
t | −0.0676 | −0.0649 | −0.0678 | −0.0600
t2 | 0.0020 | 0.0020 | 0.0021 | 0.0026
Nontrad | 0.0037 | −0.0001 | −0.0221 | −0.0126
Nontrad2 | 0.0044 | 0.0053 | 0.0071 | 0.0053
Equity | −1.6658 | −1.1140 | −1.4599 | −1.3593
Equity2 | 0.0828 | 0.0568 | 0.0727 | 0.0671
σ2 | 0.0798 | 0.0777 | 0.0792 | 0.0908
γ | 0.9076 | 0.9021 | 0.9048 | 0.8900
η1 | −0.0478 | −0.0074 | −0.0210 | 0.1715
η2 | 0.0449 | 0.0604 | 0.0532 | 0.0678
Log-likelihood | 985.9 | 972.6 | 975.5 | 885.0
Curvature violations | 27.4% | 0 | 26% | 0
Monotonicity violations | 8.9% | 5.3% | 0 | 0
Mean efficiency | 0.8948 |  |  | 0.8856

Table VIII. Parameter estimates for group 5

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 1.1076 | 1.7548 | 2.1774 | 1.1061
b1 | 0.8642 | 0.8134 | 0.8777 | 0.8547
b2 | 0.1917 | 0.1211 | 0.1995 | 0.1784
b3 | 0.3425 | 0.1699 | 0.0957 | 0.3717
b4 | 1.2590 | 1.2956 | 0.7711 | 1.2594
b5 | 0.3305 | 0.2992 | 0.1345 | 0.3497
u01 | −0.5118 | −0.4297 | −0.2394 | −0.5000
u02 | 0.4670 | 0.3949 | 0.1955 | 0.4597
u03 | −0.4237 | −0.3655 | −0.1489 | −0.4161
u04 | −0.0886 | −0.0785 | −0.0432 | −0.0818
u05 | 0.0153 | 0.0222 | 0.0154 | 0.0191
u06 | −0.0727 | −0.0928 | −0.0430 | −0.0337
u07 | 0.3283 | 0.3347 | 0.1879 | 0.3356
u08 | −0.0537 | 0.0054 | 0.0190 | −0.0405
u09 | 0.0478 | −0.0037 | −0.0208 | 0.0431
u010 | 0.0563 | 0.0017 | −0.0118 | 0.0481
u011 | −0.3064 | 0.0017 | −0.1673 | −0.3173
u012 | 0.0579 | 0.0394 | −0.0043 | 0.0578
u013 | 0.0730 | 0.0486 | 0.0080 | 0.0578
u014 | −0.0795 | −0.0509 | −0.0106 | −0.0649
u015 | 0.3066 | 0.3250 | 0.1610 | −0.0649
t | −0.0704 | −0.0708 | −0.0735 | −0.0703
t2 | 0.0041 | −0.0884 | 0.0043 | 0.0038
Nontrad | −0.1196 | −0.0884 | −0.1454 | −0.1237
Nontrad2 | 0.0196 | 0.0158 | 0.0223 | 0.0203
Equity | 0.7371 | 0.6224 | 0.7959 | 0.7308
Equity2 | −0.0402 | −0.0343 | −0.0433 | −0.0401
σ2 | 0.0586 | 0.0609 | 0.0579 | 0.0605
γ | 0.8155 | 0.8179 | 0.8115 | 0.8132
η1 | 0.2249 | 0.2020 | 0.2393 | 0.2228
η2 | 0.0967 | 0.0916 | 0.1004 | 0.0996
Log-likelihood | 1246.2 | 1224.6 | 1238.3 | 1210.8
Curvature violations | 23.4% | 0 | 20.9% | 0
Monotonicity violations | 5.1% | 3.2% | 0 | 0
Mean efficiency | 0.8975 |  |  | 0.8961

Table IX. Parameter estimates for group 6

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 7.7279 | 6.7066 | 7.3981 | 6.3933
b1 | 0.7662 | 0.6810 | 0.8205 | 0.7750
b2 | 0.0372 | 0.1106 | −0.0014 | 0.0679
b3 | 0.1921 | 0.1692 | 0.1683 | −0.0108
b4 | 0.2053 | 0.3839 | 0.3832 | −0.6297
b5 | 0.1310 | 0.1047 | 0.2217 | 0.1336
u01 | −0.0341 | −0.0749 | −0.0900 | 0.2667
u02 | 0.0112 | 0.0898 | 0.0497 | −0.2775
u03 | −0.0282 | −0.0642 | −0.0832 | 0.2919
u04 | −0.0406 | −0.0347 | −0.0183 | 0.0069
u05 | −0.0100 | −0.0098 | −0.0073 | 0.0003
u06 | −0.0758 | −0.0744 | −0.0384 | −0.0234
u07 | 0.0147 | 0.0637 | 0.0455 | −0.2440
u08 | −0.0200 | −0.0091 | 0.0066 | 0.0493
u09 | 0.0222 | 0.0145 | 0.0082 | −0.0413
u010 | 0.0077 | 0.0021 | −0.0023 | −0.0456
u011 | −0.0180 | −0.0705 | −0.0451 | 0.2499
u012 | −0.0311 | −0.0299 | 0.0067 | −0.0138
u013 | −0.0318 | −0.0337 | 0.0049 | −0.0179
u014 | 0.0505 | 0.0419 | 0.0103 | 0.0264
u015 | 0.0161 | 0.0650 | 0.0453 | −0.2500
t | −0.0822 | −0.0840 | −0.0841 | −0.0819
t2 | 0.0047 | 0.0050 | 0.0049 | 0.0049
Nontrad | 0.0484 | 0.0531 | 0.0427 | 0.0413
Nontrad2 | −0.0021 | −0.0028 | −0.0013 | −0.0016
Equity | −0.2235 | −0.0551 | −0.2071 | 0.3698
Equity2 | 0.0123 | 0.0037 | 0.0112 | −0.0187
σ2 | 0.0669 | 0.0665 | 0.0691 | 0.0719
γ | 0.8736 | 0.8697 | 0.8766 | 0.8762
η1 | 0.0585 | 0.0735 | 0.0647 | 0.0938
η2 | 0.0552 | 0.0591 | 0.0561 | 0.0686
Log-likelihood | 1579.3 | 1559.5 | 1567.6 | 1528.5
Curvature violations | 17.4% | 0 | 17.4% | 0
Monotonicity violations | 8.1% | 5.8% | 0 | 0
Mean efficiency | 0.8878 |  |  | 0.8890

Table X. Parameter estimates for group 7

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 6.0068 | 9.2679 | 5.6587 | 9.2110
b1 | 0.6884 | 0.7039 | 0.7227 | 0.7683
b2 | 0.1302 | 0.1014 | 0.1308 | 0.1054
b3 | 0.2685 | 0.1194 | 0.2281 | 0.1085
b4 | 0.8774 | −0.7582 | 1.0423 | −0.6621
b5 | 0.1013 | −0.1570 | 0.1091 | −0.0876
u01 | −0.2710 | 0.2711 | −0.2858 | 0.2395
u02 | 0.2489 | −0.2925 | 0.2555 | −0.2766
u03 | −0.2164 | 0.3174 | −0.2247 | 0.2989
u04 | −0.0634 | −0.0529 | −0.0388 | −0.0269
u05 | −0.0215 | −0.0148 | −0.0091 | −0.0010
u06 | −0.0034 | −0.0145 | 0.0007 | 0.0046
u07 | 0.1988 | −0.2022 | 0.2111 | −0.2064
u08 | −0.0233 | 0.0189 | −0.0190 | 0.0130
u09 | 0.0310 | −0.0126 | 0.0231 | −0.0097
u010 | 0.0091 | −0.0310 | 0.0098 | −0.0194
u011 | −0.2084 | 0.1982 | −0.2181 | 0.2070
u012 | −0.0124 | −0.0966 | −0.0087 | −0.0749
u013 | −0.0075 | −0.0910 | −0.0026 | −0.0662
u014 | 0.0137 | 0.0986 | 0.0097 | 0.0772
u015 | 0.1954 | −0.2077 | 0.2077 | −0.2154
t | −0.0577 | −0.0572 | −0.0593 | −0.0584
t2 | 0.0031 | 0.0030 | 0.0032 | 0.0031
Nontrad | 0.0717 | 0.0756 | 0.0653 | 0.0711
Nontrad2 | −0.0021 | −0.0028 | −0.0017 | −0.0025
Equity | −0.2653 | −0.2342 | −0.2424 | −0.2603
Equity2 | 0.0125 | −0.2342 | 0.0108 | 0.0117
σ2 | 0.0596 | 0.0605 | 0.0606 | 0.0613
γ | 0.7285 | 0.7289 | 0.7309 | 0.7300
η1 | 0.5445 | 0.5318 | 0.5369 | 0.5202
η2 | 0.2418 | 0.2375 | 0.2391 | 0.2347
Log-likelihood | 2165.9 | 2152.7 | 2156.9 | 2143.1
Curvature violations | 10.8% | 0 | 8.0% | 0
Monotonicity violations | 22.1% | 20.3% | 0 | 0
Mean efficiency | 0.9181 |  |  | 0.9178

Table XI. Parameter estimates for group 8

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 5.0832 | 5.5241 | 5.4229 | 5.6218
b1 | 0.7878 | 0.8074 | 0.7354 | 0.7780
b2 | 0.1347 | 0.1106 | 0.1638 | 0.1307
b3 | 0.1606 | 0.1013 | 0.1548 | 0.1355
b4 | 0.5805 | 0.6484 | 0.5695 | 0.5083
b5 | 0.1429 | 0.2829 | 0.2858 | 0.3314
u01 | −0.1841 | −0.2200 | −0.2056 | −0.1941
u02 | 0.1418 | 0.1735 | 0.1757 | 0.1544
u03 | −0.1301 | −0.1600 | −0.1569 | −0.1377
u04 | −0.0495 | −0.0415 | −0.0420 | −0.0389
u05 | 0.0096 | 0.0153 | 0.0038 | 0.0060
u06 | −0.0187 | −0.0157 | −0.0128 | −0.0108
u07 | 0.1069 | 0.1312 | 0.1037 | 0.0922
u08 | −0.0034 | 0.0038 | −0.0035 | −0.0020
u09 | 0.0166 | 0.0032 | 0.0098 | 0.0061
u010 | 0.0062 | −0.0065 | 0.0040 | −0.0006
u011 | −0.1040 | −0.1284 | −0.1049 | −0.0903
u012 | −0.0059 | 0.0235 | 0.0285 | 0.0356
u013 | 0.0173 | 0.0443 | 0.0458 | 0.0530
u014 | −0.0106 | −0.0295 | −0.0446 | −0.0436
u015 | 0.1031 | 0.1244 | 0.1027 | 0.0864
t | −0.0502 | −0.0500 | −0.0487 | −0.0486
t2 | 0.0021 | 0.0022 | 0.0019 | 0.0020
Nontrad | 0.0894 | 0.0879 | 0.0793 | 0.0775
Nontrad2 | −0.0088 | −0.0085 | −0.0076 | −0.0073
Equity | 0.2311 | 0.0942 | 0.0885 | 0.0754
Equity2 | −0.0109 | −0.0034 | −0.0035 | −0.0026
σ2 | 0.0690 | 0.0684 | 0.0692 | 0.0686
γ | 0.8195 | 0.8165 | 0.8188 | 0.8161
η1 | 0.3468 | 0.3527 | 0.3441 | 0.3507
η2 | 0.1807 | 0.1811 | 0.1818 | 0.1829
Log-likelihood | 1307.2 | 1302.2 | 1303.3 | 1298.7
Curvature violations | 19.2% | 0 | 16.6% | 0
Monotonicity violations | 8.4% | 6.9% | 0 | 0
Mean efficiency | 0.9145 |  |  | 0.9129

Table XII. Parameter estimates for group 9

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 3.6871 | 4.9381 | 4.0945 | 5.3413
b1 | 0.6942 | 0.6576 | 0.7199 | 0.7237
b2 | 0.0930 | 0.1085 | 0.0792 | 0.0698
b3 | 0.3937 | 0.2634 | 0.3275 | 0.1084
b4 | 0.4650 | −0.3527 | 0.3046 | −0.6066
b5 | 0.0291 | −0.2014 | 0.0867 | −0.0221
u01 | −0.1841 | 0.1708 | −0.1242 | 0.2453
u02 | 0.1683 | −0.1748 | 0.1023 | −0.2700
u03 | −0.1607 | 0.1874 | −0.0976 | 0.2700
u04 | −0.0352 | −0.0182 | −0.0266 | 0.0001
u05 | −0.0178 | −0.0177 | −0.0123 | −0.0130
u06 | −0.0149 | 0.0108 | −0.0160 | 0.0026
u07 | 0.0902 | −0.1468 | 0.0360 | −0.2222
u08 | −0.0672 | −0.0269 | −0.0442 | 0.0251
u09 | 0.0682 | 0.0274 | 0.0484 | −0.0194
u010 | 0.0466 | 0.0055 | 0.0292 | −0.0387
u011 | −0.0873 | 0.1434 | −0.0344 | 0.2233
u012 | −0.0147 | −0.0841 | −0.0039 | −0.0413
u013 | −0.0055 | −0.0755 | 0.0057 | −0.0312
u014 | 0.0072 | 0.0810 | −0.0016 | 0.0393
u015 | 0.0956 | −0.1413 | 0.0415 | −0.2191
t | −0.0585 | −0.0567 | −0.0597 | −0.0579
t2 | 0.0036 | 0.0036 | 0.0036 | 0.0037
Nontrad | 0.1336 | 0.1374 | 0.1303 | 0.1374
Nontrad2 | −0.0157 | −0.0163 | −0.0156 | −0.0166
Equity | 0.4182 | 0.4907 | 0.3689 | 0.4779
Equity2 | −0.0210 | −0.0257 | −0.0194 | −0.0264
σ2 | 0.0948 | 0.0934 | 0.0933 | 0.0909
γ | 0.8429 | 0.8382 | 0.8392 | 0.8316
η1 | 0.3910 | 0.3988 | 0.3900 | 0.4058
η2 | 0.1875 | 0.1893 | 0.1864 | 0.1908
Log-likelihood | 1305.1 | 1292.3 | 1300.5 | 1280.9
Curvature violations | 17.3% | 0 | 15.6% | 0
Monotonicity violations | 5.6% | 6.2% | 0 | 0
Mean efficiency | 0.8916 |  |  | 0.8936

Table XIII. Parameter estimates for group 10

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 7.7178 | 7.7385 | 7.0927 | 7.4785
b1 | 0.4581 | 0.5502 | 0.5566 | 0.6193
b2 | 0.3386 | 0.2337 | 0.2494 | 0.1889
b3 | 0.0949 | 0.1325 | 0.1248 | 0.0854
b4 | 0.1616 | 0.2084 | 0.2485 | 0.2047
b5 | −0.0204 | 0.0220 | 0.1263 | 0.0414
u01 | −0.0765 | −0.0842 | −0.1249 | −0.0646
u02 | 0.0955 | 0.0868 | 0.1189 | 0.0469
u03 | −0.0226 | −0.0365 | −0.0701 | −0.0126
u04 | −0.0215 | −0.0260 | −0.0184 | −0.0163
u05 | −0.0085 | −0.0118 | −0.0088 | −0.0070
u06 | −0.0089 | −0.0129 | −0.0145 | −0.0104
u07 | −0.0014 | 0.0173 | 0.0180 | 0.0156
u08 | −0.0366 | −0.0386 | −0.0238 | −0.0098
u09 | 0.0180 | 0.0240 | 0.0145 | 0.0018
u010 | 0.0039 | 0.0118 | 0.0058 | −0.0041
u011 | −0.0208 | −0.0305 | −0.0363 | −0.0281
u012 | −0.0202 | −0.0182 | 0.0087 | −0.0204
u013 | −0.0125 | −0.0104 | 0.0150 | −0.0120
u014 | −0.0045 | 0.0073 | −0.0270 | 0.0104
u015 | 0.0107 | 0.0228 | 0.0315 | 0.0215
t | −0.0436 | −0.0422 | −0.0440 | −0.0421
t2 | 0.0039 | 0.0038 | 0.0039 | 0.0038
Nontrad | −0.0343 | −0.0455 | −0.0516 | −0.0562
Nontrad2 | 0.0066 | 0.0083 | 0.0085 | 0.0095
Equity | −0.3892 | −0.3857 | −0.3324 | −0.3070
Equity2 | 0.0211 | 0.0218 | 0.0170 | 0.0166
σ2 | 0.0593 | 0.0600 | 0.0604 | 0.0610
γ | 0.8038 | 0.8029 | 0.8065 | 0.8054
η1 | 0.4957 | 0.4924 | 0.4878 | 0.4878
η2 | 0.1885 | 0.1880 | 0.1865 | 0.1874
Log-likelihood | 1276.0 | 1261.9 | 1270.1 | 1257.2
Curvature violations | 27.5% | 0 | 22.4% | 0
Monotonicity violations | 8.4% | 7.1% | 0 | 0
Mean efficiency | 0.9003 |  |  | 0.9001

Table XIV. Parameter estimates for group 11

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 2.4555 | 2.4245 | 2.2933 | 2.3250
b1 | 0.7153 | 0.7153 | 0.7035 | 0.6845
b2 | 0.0126 | 0.0123 | 0.0365 | 0.0321
b3 | −0.0280 | −0.0280 | 0.0622 | 0.0609
b4 | −0.3451 | −0.3451 | 0.2483 | 0.2173
b5 | 0.7115 | 0.7115 | 0.3907 | 0.3571
u01 | −0.0803 | −0.0803 | −0.1619 | −0.1266
u02 | 0.0098 | 0.0097 | 0.0971 | 0.0768
u03 | −0.0078 | −0.0080 | −0.0875 | −0.0696
u04 | −0.0338 | −0.0345 | 0.0008 | 0.0003
u05 | −0.0045 | −0.0105 | 0.0011 | 0.0016
u06 | −0.0474 | −0.0458 | −0.0358 | −0.0323
u07 | −0.1212 | −0.1211 | 0.0282 | 0.0233
u08 | 0.0268 | 0.0266 | 0.0045 | 0.0060
u09 | −0.0328 | −0.0327 | −0.0082 | −0.0094
u010 | −0.0477 | −0.0446 | −0.0199 | −0.0201
u011 | 0.1241 | 0.1241 | −0.0247 | −0.0194
u012 | 0.1461 | 0.1461 | 0.0574 | 0.0486
u013 | 0.1506 | 0.1506 | 0.0581 | 0.0495
u014 | −0.1382 | −0.1376 | −0.0487 | −0.0389
u015 | −0.1085 | −0.1084 | 0.0348 | 0.0279
t | −0.0315 | −0.0315 | −0.0342 | −0.0339
t2 | 0.0031 | 0.0029 | 0.0033 | 0.0033
Nontrad | −0.1056 | −0.1056 | −0.1309 | −0.1292
Nontrad2 | 0.0143 | 0.0142 | 0.0169 | 0.0167
Equity | 0.8097 | 0.8097 | 0.7047 | 0.7264
Equity2 | −0.0571 | −0.0571 | −0.0512 | −0.0521
σ2 | 0.0706 | 0.0706 | 0.0728 | 0.0722
γ | 0.8438 | 0.8438 | 0.8438 | 0.8421
η1 | 0.4668 | 0.4668 | 0.4526 | 0.4605
η2 | 0.2068 | 0.2068 | 0.2052 | 0.2071
Log-likelihood | 1283.9 | 1266.8 | 1258.9 | 1257.6
Curvature violations | 3.8% | 0 | 2.3% | 0
Monotonicity violations | 21.4% | 25.1% | 0 | 0
Mean efficiency | 0.9034 |  |  | 0.9024

Table XV. Parameter estimates for group 12

Parameter | Unconstrained | Curvature only | Monotonicity only | Both monotonicity and curvature
u0 | 9.9718 | 8.9411 | 8.1803 | 11.0359
b1 | 0.8387 | 0.7629 | 0.7857 | 0.6458
b2 | 0.0200 | 0.0677 | 0.0479 | 0.1558
b3 | 0.0051 | −0.0150 | 0.0378 | 0.2757
b4 | −0.0320 | 0.0840 | 0.1272 | 1.4352
b5 | 0.3304 | 0.1658 | 0.2147 | 0.4760
u01 | −0.0956 | −0.0606 | −0.0784 | −0.5649
u02 | 0.0018 | 0.0027 | 0.0084 | 0.5311
u03 | −0.0036 | 0.0116 | −0.0010 | −0.4995
u04 | −0.0754 | −0.0664 | −0.0203 | −0.0495
u05 | −0.0191 | −0.0112 | −0.0017 | −0.0163
u06 | −0.0472 | −0.0437 | −0.0116 | −0.0158
u07 | 0.0135 | 0.0438 | 0.0171 | 0.3617
u08 | 0.0451 | 0.0469 | 0.0220 | −0.0450
u09 | −0.0385 | −0.0422 | −0.0206 | 0.0439
u010 | −0.0536 | −0.0538 | −0.0294 | 0.0389
u011 | 0.0005 | −0.0307 | −0.0021 | −0.3706
u012 | 0.0554 | 0.0095 | 0.0340 | 0.1043
u013 | 0.0571 | 0.0113 | 0.0331 | 0.1016
u014 | −0.0513 | −0.0087 | −0.0363 | −0.1090
u015 | −0.0055 | 0.0258 | −0.0025 | 0.3537
t | −0.0517 | −0.0512 | −0.0563 | −0.0637
t2 | 0.0044 | 0.0043 | 0.0055 | 0.0062
Nontrad | 0.0007 | 0.0111 | 0.0277 | −0.0078
Nontrad2 | 0.0047 | 0.0036 | 0.0002 | 0.0037
Equity | −1.4230 | −1.1720 | −1.0914 | −2.6916
Equity2 | 0.0871 | 0.0704 | 0.0590 | 0.1637
σ2 | 0.0679 | 0.0706 | 0.0579 | 0.0632
γ | 0.8513 | 0.8537 | 0.8083 | 0.8155
η1 | 0.2559 | 0.2358 | 0.3856 | 0.3318
η2 | 0.1018 | 0.0982 | 0.1357 | 0.1149
Log-likelihood | 660.6 | 651.8 | 627.5 | 603.1
Curvature violations | 34.1% | 0 | 24.9% | 0
Monotonicity violations | 46.5% | 46.6% | 0 | 0
Mean efficiency | 0.8867 |  |  | 0.8856

A parametric bootstrap is usually used in constrained optimization to obtain statistical inference for the estimated parameters or for nonlinear transformations of them (e.g., elasticities or efficiency measures) (see Gallant and Golub, 1984). This involves the use of Monte Carlo methods, generating a sample from the distribution of the inequality-constrained estimator large enough to obtain a reliable estimate of the relevant sampling distributions. However, the feasibility of Monte Carlo methods depends on the complexity of the problem in question. For a simple problem, where the objective function is simple and the number of observations and constraints is small, like the traditional factor demand problem with 24 observations in Gallant and Golub (1984), a few hundred simulations are easily affordable in terms of computing time. Unfortunately, this is not the case with our problem. The complicated objective function and the large number of observations and constraints render the Monte Carlo method almost unaffordable. In particular, it takes at least 1 hour of CPU time on a Pentium 4 PC to run the optimization problem once, so 500 simulations would take at least 500 hours. Coupled with the number of bank subgroups, 12 in our case, it would take over 6000 hours of CPU time to obtain standard errors for all 12 groups. This is certainly unaffordable at present. Therefore, only point estimates are provided for the estimated parameters in Tables IV–XV.
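For completeness, the parametric bootstrap described here would have the following skeleton; thetaHat, simulateSample, and constrainedMLE are hypothetical stand-ins for the constrained estimates, the data-generating process evaluated at those estimates, and the constrained ML problem of Section 4:

    % Parametric bootstrap skeleton for the constrained estimator.
    B = 500;                                   % bootstrap replications
    thetaBoot = zeros(numel(thetaHat), B);
    for b = 1:B
        cStar = simulateSample(thetaHat);      % draw v, u; build bootstrap costs
        thetaBoot(:, b) = constrainedMLE(cStar);  % ~1 hour of CPU time each
    end
    se = std(thetaBoot, 0, 2);                 % bootstrap standard errors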

When neither monotonicity nor curvature is imposed (see the second column of each table), both conditions are violated for each of the 12 subgroups, with the percentage of curvature violations ranging from 1.4% to 34.1% across subgroups and that of monotonicity violations ranging from 0.1% to 46.5%. Since regularity is not achieved for any of the 12 bank subgroups, we first impose curvature alone on the parameters of the cost function. The imposition of curvature alone reduces the percentage of curvature violations to zero for each of the 12 bank subgroups; however, it does not guarantee the satisfaction of monotonicity at every data point for all 12 subgroups (see the third column of each table). In particular, the percentage of monotonicity violations still ranges from 3.2% to 46.6% across bank subgroups when only curvature is imposed. We further notice that, while the imposition of curvature alone reduces the percentage of curvature violations for all of the 12 bank subgroups, it may also induce violations of monotonicity that otherwise would not have occurred. Taking bank subgroup 1 (see Table IV) as an example, the percentage of monotonicity violations is 0.1% when no constraints are imposed, but increases to 5.7% when curvature alone is imposed. This confirms Barnett's (2002, p. 202) argument that ‘imposition of curvature may increase the frequency of monotonicity violations. Hence equating curvature alone with regularity, as has become disturbingly common in this literature, does not seem to … be justified.’

Similarly, the imposition of monotonicity alone reduces the percentage of monotonicity violations to zero for each of the 12 bank subgroups, but it does not guarantee the satisfaction of curvature at every data point (see the fourth column of each table). In particular, the percentage of curvature violations remains as high as 24.9% (for subgroup 12) when only monotonicity is imposed. We also notice that the imposition of monotonicity alone may induce violations of curvature that otherwise would not have occurred (see, for example, bank subgroup 1). This further confirms the argument of Barnett and Pasupathy (2003, p. 135) that ‘regularity requires satisfaction of both curvature and monotonicity conditions. Without both satisfied, the second order conditions for optimizing behavior fail and duality theory fails.’ We thus follow the procedures discussed in Sections 3 and 4 and impose both curvature and monotonicity on the parameters of the Fourier cost function for each of the 12 bank subgroups. As expected, regularity is satisfied at every data point after curvature and monotonicity are globally imposed (see the fifth column in each of Tables IV–XV).
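To make the violation counts reported in Tables IV–XV concrete, the following Python sketch checks monotonicity and curvature observation by observation. It is illustrative only: the arrays of fitted gradients and price Hessians are assumed to have been computed from the estimated cost function (a formula for the Hessian is given in the Appendix).

    import numpy as np

    def regularity_violations(gradients, hessians, tol=1e-8):
        """Percentage of observations violating monotonicity and curvature.

        gradients: (n, k) array of fitted first derivatives of cost with
                   respect to input prices and outputs at each observation;
                   monotonicity requires these to be nonnegative.
        hessians:  (n, m, m) array of Hessians of the cost function in input
                   prices; concavity requires all eigenvalues <= 0.
        """
        mono_bad = (np.asarray(gradients) < -tol).any(axis=1)
        curv_bad = np.array([np.linalg.eigvalsh(H).max() > tol
                             for H in hessians])
        return 100 * mono_bad.mean(), 100 * curv_bad.mean()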

A common practice in this literature is to derive cost efficiency measures from cost functions estimated without theoretical regularity imposed. While permitting a parameterized function to depart from the neoclassical function space is usually fit improving (as can be seen from the decrease in the log-likelihood values as constraints are imposed), it also causes the hypothetical best-practice firm not to be fully efficient at those data points where curvature and/or monotonicity are violated. In particular, a violation of curvature at a data point $(p_{jt}, y_{jt})$ implies that the quantities of some inputs increase as their corresponding prices increase (holding other things constant), and a violation of monotonicity at that data point implies that total cost decreases as the quantities of some outputs increase (holding other things constant). In either case the best-practice firm is not minimizing its cost at $(p_{jt}, y_{jt})$. Therefore, cost efficiency, which is supposed to be measured relative to a cost-minimizing best-practice bank, is not accurately measured for any of the 12 bank subgroups when monotonicity and curvature are not imposed. In fact, we find that the difference in the 8-year mean efficiency between the unconstrained models and their corresponding curvature- and monotonicity-constrained versions ranges from −0.73% to 0.92% (see Table XVI).1 Hence, the failure to impose monotonicity and curvature can produce misleading estimates of cost efficiency.

Table XVI. Differences in average efficiency between unconstrained and regularity-constrained models

Bank group      Difference in average efficiency
Large banks
  Group 1         −0.48%
  Group 2          0.32%
  Group 3         −0.73%
Medium banks
  Group 4          0.92%
  Group 5         −0.05%
  Group 6         −0.12%
  Group 7          0.03%
Small banks
  Group 8          0.16%
  Group 9         −0.20%
  Group 10         0.01%
  Group 11         0.10%
  Group 12         0.11%

Another issue of particular interest is whether failure to impose theoretical regularity affects the ranking of individual banks in terms of cost efficiency. We calculate the Spearman rank correlation coefficient between unconstrained models and their corresponding (curvature- and monotonicity-) constrained versions, using the following formula:

  • $R = 1 - \dfrac{6 \sum_{j=1}^{n_k} \left(\mathrm{Rank}_{j1} - \mathrm{Rank}_{j2}\right)^2}{n_k \left(n_k^2 - 1\right)}$  (34)

where $n_k$ is the number of banks in the subgroup, $\mathrm{Rank}_{j1}$ is the rank of bank $j$ based on the constrained version of the model, and $\mathrm{Rank}_{j2}$ is the rank of the same bank based on the unconstrained version of the model.2

If R = −1, there is perfect negative correlation; if R = 1, there is perfect positive correlation; and if R = 0, there is no correlation. As can be seen in Table XVII, all 12 rank correlation coefficients are below 1, indicating that the ranking of banks in terms of cost efficiency changes with the imposition of theoretical regularity.
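For illustration, a minimal Python implementation of equation (34) follows; it assumes the two efficiency series contain no ties (scipy.stats.spearmanr handles ties and reduces to the same formula otherwise).

    import numpy as np

    def spearman_rank_corr(eff_constrained, eff_unconstrained):
        """Spearman rank correlation, equation (34), between the efficiency
        rankings implied by the constrained and unconstrained models."""
        # rank banks (1 = most efficient); double argsort yields 0-based ranks
        rank1 = np.argsort(np.argsort(-np.asarray(eff_constrained))) + 1
        rank2 = np.argsort(np.argsort(-np.asarray(eff_unconstrained))) + 1
        n = len(rank1)
        d = rank1 - rank2
        return 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))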

Table XVII. Spearman rank correlation coefficients between unconstrained and constrained models

Bank group      Rank correlation coefficient
Large banks
  Group 1         0.9997
  Group 2         0.9861
  Group 3         0.9792
Medium banks
  Group 4         0.9460
  Group 5         0.9809
  Group 6         0.9742
  Group 7         0.9869
Small banks
  Group 8         0.9963
  Group 9         0.9918
  Group 10        0.9911
  Group 11        0.9772
  Group 12        0.8684

Roughly speaking, the rank correlation coefficient between unconstrained and (theoretical regularity) constrained models is negatively related to the percentage of monotonicity and curvature violations. For example, bank subgroup 1, which has the lowest percentage of monotonicity violations (0.1%) and the lowest percentage of curvature violations (1.4%), has the highest rank correlation coefficient (0.9997); bank subgroup 12, which has the highest percentage of monotonicity violations (46.5%) and the highest percentage of curvature violations (34.1%), has the lowest rank correlation coefficient (0.8684). Hence, we alert researchers to the potential problems caused by failure to check for and impose (if necessary) theoretical regularity.

6.1. Cost Efficiency and Productivity of US Banks

We now turn to the discussion of cost efficiencies by asset size class, reported in Table XVIII. The mean efficiencies of the 12 subgroups range from 82.19% to 91.78%, implying that roughly 8–18% of costs incurred over the sample period can be attributed to cost inefficiency relative to the best-practice banks. These results are similar to earlier estimates for commercial banks. Berger and Humphrey (1997), for example, report mean cost efficiency of 84%, with a standard deviation of 6%, across 50 studies of US banks using parametric frontier techniques. Likewise, Berger and Mester (1997) report average cost efficiency of 87% using a large dataset of almost 6000 US commercial banks in continuous operation over the 6-year period from 1990 to 1995.
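For reference, summary statistics of the kind reported in Table XVIII can be computed as in the following sketch, where eff is a hypothetical array of estimated per-bank, per-year efficiency scores for one subgroup:

    import numpy as np

    def efficiency_summary(eff):
        """Summary statistics (in %) for a subgroup's estimated cost
        efficiencies; eff holds scores in [0, 1] pooled over banks and years."""
        e = 100 * np.asarray(eff).ravel()
        return {
            "Mean": e.mean(),
            "Min.": e.min(),
            "Max.": e.max(),
            "5th percentile": np.percentile(e, 5),
            "95th percentile": np.percentile(e, 95),
        }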

Table XVIII. Cost efficiency (in %) per asset group

Bank group      Mean     Min.     Max.     5th percentile   95th percentile
Large banks
  Group 1       82.19    42.50    98.95    66.68    97.42
  Group 2       88.20    72.52    99.21    77.04    98.36
  Group 3       90.38    72.22    99.45    81.71    98.08
Medium banks
  Group 4       88.56    71.86    98.58    77.89    97.86
  Group 5       89.61    74.17    99.48    79.77    97.71
  Group 6       88.90    72.90    99.40    78.97    98.20
  Group 7       91.78    78.52    98.85    83.87    97.90
Small banks
  Group 8       91.29    77.16    98.79    83.11    98.12
  Group 9       89.36    75.28    99.30    75.42    98.07
  Group 10      90.01    75.56    98.96    80.48    97.93
  Group 11      90.24    75.53    98.97    80.99    97.85
  Group 12      88.56    70.32    98.97    78.18    97.58

Several findings emerge from Table XVIII. First, the largest two subgroups are less efficient than the other 10 subgroups. In particular, the very largest subgroup (with assets greater than $ 3000 million) is about 5.6% less efficient than the second largest subgroup and 7.8% less efficient than the third largest subgroup, and it is 6.3–9.6% less efficient than the medium-sized and small banks. The second largest subgroup (with assets between $ 1000 million and $ 3000 million) is 1.2% less efficient than the third largest subgroup, and between 0.9% and 3.9% less efficient than the medium-sized and small bank subgroups. Second, cost efficiency generally falls with bank size for banks with assets above $ 100 million, except for subgroup 3 (with assets between $ 500 million and $ 1 billion) and subgroup 5 (with assets between $ 300 million and $ 400 million); it generally increases with bank size for banks with assets below $ 200 million, except for subgroup 9 (with assets between $ 60 million and $ 80 million) and subgroup 10 (with assets between $ 40 million and $ 60 million). These findings are partially consistent with Kaparakis et al. (1994), who applied a translog cost function to a dataset of 5548 commercial banks in the United States and also found that banks with assets greater than $ 1000 million are less efficient than smaller banks. However, they found that average efficiency increases with bank size for banks with assets less than $ 500 million.

We are also interested in the time patterns of cost efficiency of the different bank subgroups, plotted in Figure 1. Several conclusions emerge. First, all the bank subgroups experienced a drastic decline in cost efficiency over the period from 1998 to 2004, followed by an improvement in 2005. For example, the cost efficiency of the largest bank subgroup (with assets greater than $ 3000 million) declined from 94.79% in 1998 to 73.65% in 2004, and then recovered slightly to 73.90% in 2005. The most efficient subgroup, with assets between $ 100 million and $ 200 million, shows a decline in cost efficiency from almost full efficiency in 1998 to 80.12% in 2004, followed by a rebound to 84.47% in 2005. Second, the largest bank subgroup is consistently less efficient than the other bank subgroups, and the gap between it and the other subgroups has widened: the largest banks were 3.34% less efficient than the second largest bank subgroup in 1998, but 7.24% less efficient in 2005.

Figure 1. Cost efficiency per asset class. This figure is available in color online at www.interscience.wiley.com/journal/jae

The drastic decline in cost efficiency for all asset size classes during the first 7 years of our sample period can be partly explained by the failure of banks to adjust to the rapid technological change of the best-practice cost frontier. Figure 2 plots the technological change of the best-practice cost frontier for all the size classes. All 12 asset size classes exhibit rapid technological change, with large banks favored most. In particular, the largest size subgroup (with assets greater than $ 3000 million) shows the fastest technological change, around 6.71% per year, while even the second smallest subgroup (with assets between $ 20 million and $ 40 million), which shows the slowest technological change, still registers around 1% per year. Rapid technological change, which makes it feasible to produce given levels of outputs with fewer inputs (or, equivalently, more outputs with given levels of inputs), could result in lower average bank efficiency even if banks became increasingly productive over time. This can be clearly seen from equation (5).

Figure 2. Technological change per asset class: 1998–2005

The second reason may lie in unmeasured improvements in service quality and variety. Banks have provided an improved array of services (e.g., mutual funds, derivatives, online services) that increased bank costs but at the same time raised revenues by more than enough to cover those costs; this is consistent with the strong improvement in profitability over the sample period. Another partial explanation for the decline in cost efficiency of the very large banks (those with assets greater than $ 1 billion) is that many of them have engaged in geographical and product diversification. The passage of the Riegle–Neal Interstate Banking and Branching Efficiency Act of 1994 undoubtedly helped spur large banks to spread across state lines and to grow, creating large, geographically diversified branch networks that stretch across entire regions and even coast to coast. The Gramm–Leach–Bliley Financial Services Modernization Act of 1999 allowed the largest banking organizations to engage in a wide variety of financial services, acquiring new sources of noninterest income and further diversifying their earnings. While these geographical and product diversifications increased the large banks' profits, they also greatly increased their costs.

Finally, it should be clarified that lower cost efficiency does not imply lower productivity growth. To illustrate, we calculate the average productivity growth for each bank subgroup over the sample period. Within a cost frontier context, productivity growth can be decomposed into four components: a technological change term, a technical efficiency change term, an input allocative efficiency change term, and a scale effect term (see Kumbhakar and Lovell, 2003, for more details). For simplicity, we ignore the last term and refer to productivity growth composed of only the first three terms as ‘net’ productivity growth (NTFPG). Following Kumbhakar and Lovell (2003), we then express net productivity change as

  • $\mathrm{NTFPG} = -\dfrac{\partial \ln C(p, y, t)}{\partial t} + \Delta \ln \mathrm{CE}$  (35)

where the first term is the technological change of the best-practice cost frontier and the second term is the change in cost efficiency, including both technical and allocative efficiency changes. The average annual net productivity growth for each of the 12 subgroups is plotted in Figure 3. Generally speaking, the net productivity growth rate increases with asset size, with the largest four bank subgroups (with assets greater than $ 400 million) experiencing significant productivity gains (NTFPG > 1%) and the smallest eight subgroups (with assets less than $ 400 million) experiencing insignificant productivity gains (NTFPG < 1%) or productivity losses (NTFPG < 0). In particular, the largest size subgroup, which has the lowest cost efficiency, shows the fastest average annual net productivity growth, at 3.3%, whereas subgroup 7, which has the highest cost efficiency, shows moderate average annual net productivity growth of 0.4%. This finding is also consistent with the view expressed by Berger (2003), Bernanke (2006), and others that technological advances have favored larger banks at the expense of small lenders. These productivity gains by larger banks, however, are mainly due to technological advances rather than cost efficiency gains.
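A minimal sketch of how equation (35) can be evaluated is given below. The yearly series of frontier cost diminution (dlnC_dt) and average cost efficiency are assumed to have been computed from the estimated model, and the year-to-year alignment convention here is an assumption of ours.

    import numpy as np

    def net_productivity_growth(dlnC_dt, cost_efficiency):
        """Average annual net TFP growth, equation (35): technological change
        (the negative of the time derivative of the log cost frontier) plus
        the change in cost efficiency.

        dlnC_dt:         (T,) average d ln C / d t for a subgroup, by year
        cost_efficiency: (T,) average cost efficiency for the subgroup, by year
        """
        tech_change = -np.asarray(dlnC_dt)             # cost-diminution rate
        eff_change = np.diff(np.log(cost_efficiency))  # log change in efficiency
        # efficiency change is defined between consecutive years, so drop year 1
        ntfpg = tech_change[1:] + eff_change
        return ntfpg.mean()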

Figure 3. Net productivity growth per asset class: 1998–2005

7. CONCLUSION

The estimation of stochastic cost frontiers is popular in the analysis of bank efficiency. However, the theoretical regularity conditions required by neoclassical microeconomic theory (especially monotonicity and curvature) have been widely ignored in this literature. In this paper, and for the first time in this literature, we use the globally flexible Fourier functional form, as originally proposed by Gallant (1982), together with the estimation procedures suggested by Gallant and Golub (1984), to impose the theoretical regularity conditions on the Fourier cost function. We thereby provide estimates of bank efficiency in the United States based (for the first time) on parameter estimates that are consistent with global regularity.

We find that failure to incorporate monotonicity and curvature into the estimation results in mismeasured magnitudes of cost efficiency and misleading rankings of banks in terms of cost efficiency. Regarding cost efficiencies from our regularity-constrained models, we find that the largest two subgroups are less efficient than the other subgroups. We also find that all 12 asset size classes show a decline in cost efficiency from 1998 to 2004, followed by a slight improvement in 2005. This decline may be the result of slow adjustment to fast technical progress or of unmeasured improvements in service quality and variety; for the very large banks, it may also reflect their engagement in geographical and product diversification after deregulation. Further, we find that the largest four bank subgroups (with assets greater than $ 400 million) experienced significant productivity gains (NTFPG > 1%) and the smallest eight subgroups (with assets less than $ 400 million) experienced insignificant productivity gains (NTFPG < 1%) or productivity losses.

In estimating bank efficiency and productivity in the United States, we have also highlighted the challenge inherent in achieving economic regularity and the need for economic theory to inform econometric research. Incorporating restrictions from economic theory seems to be gaining popularity, as numerous recent papers estimate stochastic dynamic general equilibrium models subject to economic restrictions (see Aliprantis et al., 2007). With the focus on economic theory, however, we have ignored econometric regularity. In particular, we have ignored unit root and cointegration issues, because the combination of nonstationary data and nonlinear estimation in large models like those in this paper is an extremely difficult problem. Dealing with these issues is an area of potentially productive future research.

Acknowledgements

This paper builds on material from Chapter 3 of Guohua Feng's PhD dissertation at the University of Calgary. We would like to thank two referees and the members of his dissertation committee: Erwin Diewert, Francisco Gonzalez, Daniel Gordon, Lasheng Yuan, and Alexander David. Serletis gratefully acknowledges financial support from the Social Sciences and Humanities Research Council of Canada (SSHRCC).

APPENDIX

Let $f(p, y) = \ln C(p, y)$ denote the total cost function expressed in logarithmic form (as with the translog and Fourier cost functions). By Shephard's lemma, $x_i(p, y) = \partial C(p, y)/\partial p_i$, where $x_i$ is the demand for input $i$. Also, let $s_i(p, y) = p_i x_i(p, y)/C(p, y)$ denote the cost share of input $i$.

The element in the $i$th row and $j$th column of the Hessian matrix of $C(p, y)$ can then be derived as follows:

  • $\dfrac{\partial^2 C(p, y)}{\partial p_i \, \partial p_j} = \dfrac{C(p, y)}{p_i p_j} \left( \dfrac{\partial^2 f(p, y)}{\partial \ln p_i \, \partial \ln p_j} + s_i s_j - \delta_{ij} s_i \right)$

where $\delta_{ij} = 1$ if $i = j$ and zero otherwise.
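As an illustration, this formula translates directly into code. The sketch below assumes the second derivatives of log cost with respect to log prices and the fitted cost shares have already been evaluated at a data point.

    import numpy as np

    def cost_hessian(C, p, shares, d2lnC_dlnp2):
        """Hessian of C(p, y) in input prices from log-cost derivatives:
        H[i, j] = (C / (p_i p_j)) * (d2 lnC / dlnp_i dlnp_j + s_i s_j - delta_ij s_i)
        """
        s = np.asarray(shares)
        inner = np.asarray(d2lnC_dlnp2) + np.outer(s, s) - np.diag(s)
        scale = C / np.outer(p, p)  # C / (p_i * p_j), elementwise
        return scale * inner

    # concavity check at this point: all eigenvalues of the (symmetric)
    # Hessian must be non-positive, e.g. np.linalg.eigvalsh(H).max() <= 1e-8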

REFERENCES

  • Aigner DJ, Lovell CAK, Schmidt P. 1977. Formulation and estimation of stochastic frontier production function models. Journal of Econometrics 6(1): 21–37.
  • Akhigbe A, McNulty JE. 2003. The profit efficiency of small US commercial banks. Journal of Banking and Finance 27(2): 307–325.
  • Aliprantis CD, Barnett WA, Cornet B, Durlauf S. 2007. Special issue editors' introduction: the interface between econometrics and economic theory. Journal of Econometrics 136: 325–329.
  • Barnett WA. 1987. The microeconomic theory of monetary aggregation. In New Approaches to Monetary Economics, Barnett WA, Singleton K (eds). Cambridge University Press: Cambridge, UK; 115–168.
  • Barnett WA. 2002. Tastes and technology: curvature is not sufficient for regularity. Journal of Econometrics 108(1): 199–202.
  • Barnett WA, Hahm JH. 1994. Financial-firm production of monetary services: a generalized symmetric Barnett variable-profit-function approach. Journal of Business and Economic Statistics 12(1): 33–46.
  • Barnett WA, Pasupathy M. 2003. Regularity of the generalized quadratic production model: a counterexample. Econometric Reviews 22(2): 135–154.
  • Barnett WA, Zhou G. 1994. Financial-firms' production and supply-side monetary aggregation under dynamic uncertainty. Federal Reserve Bank of St Louis Review 76(2): 133–165.
  • Barnett WA, Geweke J, Wolfe M. 1991. Semi-nonparametric Bayesian estimation of the asymptotically ideal production model. Journal of Econometrics 49(1/2): 5–50.
  • Barnett WA, Kirova M, Pasupathy M. 1995. Estimating policy-invariant deep parameters in the financial sector when risk and growth matter. Journal of Money, Credit, and Banking 27(4): 1402–1430.
  • Battese GE, Coelli TJ. 1992. Frontier production functions, technical efficiency and panel data: with application to paddy farmers in India. Journal of Productivity Analysis 3(1/2): 153–169.
  • Battese GE, Corra GS. 1977. Estimation of a production frontier model: with application to the pastoral zone of eastern Australia. Australian Journal of Agricultural Economics 21(3): 169–179.
  • Bauer PW, Berger AN, Ferrier GD, Humphrey DB. 1998. Consistency conditions for regulatory analysis of financial institutions: a comparison of frontier efficiency methods. Journal of Economics and Business 50(2): 85–114.
  • Berger AN. 1993. Distribution-free estimates of efficiency in the US banking industry and tests of the standard distributional assumptions. Journal of Productivity Analysis 4(3): 261–292.
  • Berger AN. 2004. The economic effects of technological progress: evidence from the banking industry. Journal of Money, Credit, and Banking 35(2): 141–176.
  • Berger AN, Humphrey DB. 1991. The dominance of inefficiencies over scale and product mix economies in banking. Journal of Monetary Economics 28(1): 117–148.
  • Berger AN, Humphrey DB. 1997. Efficiency of financial institutions: international survey and directions for future research. European Journal of Operational Research 98(2): 175–212.
  • Berger AN, Mester LJ. 1997. Inside the black box: what explains differences in the efficiencies of financial institutions? Journal of Banking and Finance 21(7): 895–947.
  • Berger AN, Mester LJ. 2003. Explaining the dramatic changes in the performance of US banks: technological change, deregulation, and dynamic changes in competition. Journal of Financial Intermediation 12(1): 57–95.
  • Berger AN, Kashyap AK, Scalise JM. 1995. The transformation of the US banking industry: what a long, strange trip it's been. Brookings Papers on Economic Activity 1995(2): 55–218.
  • Berger AN, Leusner JH, Mingo JJ. 1997. The efficiency of bank branches. Journal of Monetary Economics 40(1): 141–162.
  • Berger AN, Demsetz RS, Strahan PE. 1999. The consolidation of the financial services industry: causes, consequences, and the implications for the future. Journal of Banking and Finance 23(2–4): 135–194.
  • Bernanke BS. 2006. Community banking and community bank supervision in the twenty-first century. Remarks at the Independent Community Bankers of America National Convention and Techworld, Las Vegas, Nevada, 8 March.
  • Boyd J, Gertler M. 1994. Are banks dead? Or are the reports greatly exaggerated? Federal Reserve Bank of Minneapolis Quarterly Review (Summer): 2–23.
  • Chalfant JA, Gallant AR. 1985. Estimating substitution elasticities with the Fourier cost function: some Monte Carlo results. Journal of Econometrics 28(2): 205–222.
  • Charnes A, Cooper WW, Rhodes E. 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2: 429–444.
  • Clark JA, Siems TF. 2002. X-efficiency in banking: looking beyond the balance sheet. Journal of Money, Credit, and Banking 34(4): 987–1013.
  • DeYoung R. 1997. A diagnostic test for the distribution-free efficiency estimator: an example using US commercial bank data. European Journal of Operational Research 98(2): 243–249.
  • DeYoung R, Hasan I, Kirchhoff B. 1998. The impact of out-of-state entry on the cost efficiency of local commercial banks. Journal of Economics and Business 50(2): 191–203.
  • Diewert WE. 2004. Preface. In Functional Structure and Approximation in Econometrics, Barnett WA, Binner J (eds). Elsevier: Amsterdam.
  • Eastwood BJ, Gallant AR. 1991. Adaptive rules for semi-nonparametric estimators that achieve asymptotic normality. Econometric Theory 7(3): 307–340.
  • Federal Register. 2000. 65(51): 13867.
  • Federal Register. 2005. 70(5): 1444.
  • Ferrier GD, Lovell CAK. 1990. Measuring cost efficiency in banking: econometric and linear programming evidence. Journal of Econometrics 46(1/2): 229–245.
  • Gallant AR. 1982. Unbiased determination of production technology. Journal of Econometrics 20(2): 285–323.
  • Gallant AR, Golub G. 1984. Imposing curvature restrictions on flexible functional forms. Journal of Econometrics 26(3): 295–321.
  • Greene W. 2005. Reconsidering heterogeneity in panel data estimators of the stochastic frontier model. Journal of Econometrics 126(2): 269–303.
  • Hancock D. 1991. The Theory of Production for the Financial Firm. Kluwer Academic: Boston, MA.
  • Jones KD, Critchfield T. 2005. Consolidation in the US banking industry: is the long, strange trip about to end? FDIC Banking Review 17(4): 31–61.
  • Kaparakis E, Miller S, Noulas A. 1994. Short-run cost inefficiency of commercial banks: a flexible stochastic frontier approach. Journal of Money, Credit, and Banking 26(4): 875–893.
  • Kroszner R, Strahan PE. 2000. Obstacles to optimal policy: the interplay of politics and economics in shaping bank supervision and regulation reforms. Working paper 7582, National Bureau of Economic Research.
  • Kumbhakar SC, Lovell CAK. 2003. Stochastic Frontier Analysis. Cambridge University Press: Cambridge, UK.
  • Lown CS, Osler CL, Strahan PE, Sufi A. 2000. The changing landscape of the financial services industry: what lies ahead? Federal Reserve Bank of New York Economic Policy Review 6(4): 39–54.
  • Magnus JR. 1985. On differentiating eigenvalues and eigenvectors. Econometric Theory 1(2): 179–191.
  • McAllister PH, McManus D. 1993. Resolving the scale efficiency puzzle in banking. Journal of Banking and Finance 17(2/3): 389–406.
  • Meeusen W, van den Broeck J. 1977. Efficiency estimation from Cobb–Douglas production functions with composed error. International Economic Review 18(2): 435–444.
  • Mester LJ. 1997. Measuring efficiency at US banks: accounting for heterogeneity is important. European Journal of Operational Research 98(2): 230–242.
  • Montgomery L. 2003. Recent developments affecting depository institutions. FDIC Banking Review 15(2): 54–60.
  • Morey ER. 1986. An introduction to checking, testing, and imposing curvature properties: the true function and the estimated function. Canadian Journal of Economics 19(2): 207–235.
  • Peristiani S. 1997. Do mergers improve the X-efficiency and scale efficiency of US banks? Evidence from the 1980s. Journal of Money, Credit, and Banking 29(3): 326–337.
  • Rossi MA, Ruzzier CA. 2000. On the regulatory application of efficiency measures. Utilities Policy 9(2): 81–92.
  • Ryan DL, Wales TJ. 2000. Imposing local concavity in the translog and generalized Leontief cost functions. Economics Letters 67(3): 253–260.
  • Sealey C, Lindley J. 1977. Inputs, outputs, and a theory of production and cost at depository financial institutions. Journal of Finance 32(4): 1251–1266.
  • Serletis A, Shahmoradi A. 2005. Semi-nonparametric estimates of the demand for money in the United States. Macroeconomic Dynamics 9(4): 542–559.
  • Stiroh KJ. 2000. How did bank holding companies prosper in the 1990s? Journal of Banking and Finance 24(11): 1703–1745.
  • Wheelock DC, Wilson PW. 2001. New evidence on returns to scale and product mix among US commercial banks. Journal of Monetary Economics 47(3): 653–674.
  • 1

The 8-year mean efficiency for a subgroup is obtained by first averaging each bank's cost efficiency over the 8 years (from 1998 to 2005), and then averaging over all banks in the subgroup, as follows:

    • $\overline{CE} = \dfrac{1}{n_k} \sum_{j=1}^{n_k} \left( \dfrac{1}{T} \sum_{t=1}^{T} CE_{jt} \right)$

    where $T = 8$, $n_k$ is the number of banks in the subgroup, and $CE_{jt}$ is the cost efficiency of bank $j$ in year $t$.

  • 2

Here, we use the time-invariant cost efficiency of each bank, that is, the bank's cost efficiency averaged over the sample period:

    • $CE_j = \dfrac{1}{T} \sum_{t=1}^{T} CE_{jt}$

Supporting Information

The JAE Data Archive directory is available at http://qed.econ.queensu.ca/jae/datasets/feng001/.

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.