Operations research in the natural resource industry

Abstract

Operations research is becoming increasingly prevalent in the natural resource sector, specifically in agriculture, fisheries, forestry and mining. While these areas share similar research questions, e.g., how to harvest and/or extract the resources and how to account for environmental impacts, they also differ, e.g., in the length of a growth-and-harvesting or extraction cycle, and in whether or not the resource is renewable. The four areas are at different levels of advancement in terms of the methodology currently developed and the acceptance of implementable plans and policies. In this paper, we review the most recent and seminal work in all four areas, considering modeling, algorithmic developments and applications.

1. Introduction

Operations research has played an important role in the analysis of and decision making for natural resources, specifically in agriculture, fisheries, forestry and mining, over the last 40 years (Weintraub et al., 2007). At some level, these four application areas are quite distinct. The time horizons of growth and extraction (or harvesting) vary from months to a year for fisheries and agriculture, to almost a century for some tree species. Mining is non-renewable and, as such, is associated with a different type of natural resource. Mine lives can run from a few years to centuries. Correspondingly, there are natural differences in how production is managed in each application. In agriculture, farmers are primarily concerned with how to plant crops and raise animals more efficiently, e.g., how to design livestock rations, while governments are interested in understanding farmers' behavior in order to implement policies. Fishermen are interested in predicting fish populations, allocating fleet effort and avoiding fish depletion. Behavioral models are also relevant in this context. Decisions in forestry are centered around the strategic, tactical and operational levels of managing plantations and public lands to meet demands while adhering to supply restrictions, which are coupled with events such as forest fires and policies, e.g., environmental regulations and concerns. Mining companies are also interested in strategic, tactical and operational decisions, specifically, in this context, how to design mines and extract the ore most profitably.

We see some similarities across natural resource areas as well. Specifically, multi-criteria decision making plays a role in all arenas, although it is not very prevalent in mining. Similarly, many models consider the environment, although these again are not common in mining. Agricultural and fisheries models often account for the behavior of farmers and fishermen, while forest and mining models consider, to some extent, stand-alone enterprises without regard for interactions between different owners. We discuss how operations research has been applied to handle problems in each area, elaborating on mathematical techniques, presenting challenges and describing successful applications.

This paper is organized as follows: in Section 'Agriculture', we discuss agriculture, specifically, planning, related environmental concerns, decision-making strategies and determining livestock rations. In Section 'Fisheries', we address fisheries, specifically, bioeconomic modeling, fishermen's behavior and decision-making strategies. In Section 'Forestry', we discuss forestry, specifically, strategic, tactical and operational models, and point out significant topics such as supply chain management and forest fire management. In Section 'Mining', we discuss mining, specifically, strategic, tactical and operational models, as well as the supply chain and topics just being introduced into the literature. Section 'Conclusions' concludes the paper.

2. Agriculture

Operations research (OR) models began to be applied in agriculture in the early 1950s. It was Waugh (1951) who first proposed the use of linear programming to establish least-cost combinations of feeding stuffs and livestock rations. The linear program minimizes the cost of the blend, while some specified level of nutritional requirements represents the model's constraints. Note that the founder of linear programming, George B. Dantzig, published his first related work in 1947, i.e., just 4 years before Waugh's publication. Heady (1954) proposed the use of linear programming for determining optimum crop rotations on a farm. In this case, the objective function represents the gross margin associated with the cropping pattern, while constraints relate to the availability of resources such as land, labor, machinery and working capital. Even though linear programs were the first OR models in agriculture, many other OR approaches have been used widely in farming over the last 60 years. We chronicle some of this research.
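To illustrate the structure of Waugh's least-cost ration problem, the following sketch sets up a small diet-style linear program in Python; the ingredients, costs and nutrient contents are hypothetical and serve only to show how the blend cost is minimized subject to nutritional constraints.

```python
# Minimal sketch of a least-cost feed-blend linear program in the spirit of
# Waugh (1951). All ingredient names, costs and nutrient contents are
# hypothetical, chosen only to illustrate the model structure.
from scipy.optimize import linprog

# Cost per kg of each candidate ingredient (corn, soybean meal, alfalfa).
cost = [0.12, 0.30, 0.08]

# Nutrient content per kg of ingredient.
protein = [0.09, 0.44, 0.17]   # kg protein per kg ingredient
energy = [3.4, 3.0, 2.2]       # Mcal per kg ingredient

# Requirements: at least 160 g protein and 3.0 Mcal per kg of mix, and the
# ingredient fractions must sum to one. linprog uses A_ub x <= b_ub, so the
# ">=" nutritional constraints are negated.
A_ub = [[-p for p in protein], [-e for e in energy]]
b_ub = [-0.16, -3.0]
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [1.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * 3)
print(res.x, res.fun)  # optimal ingredient fractions and cost per kg of mix
```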

2.1. Agricultural planning

Heady's ideas have been extended considerably in planning and managing agricultural resources on a farm, as well as at the regional level. Extensions of linear programs include integer and binary programs for applications in which it is not realistic to assign continuous values to the decision variables (e.g., number of tractors, number of combine-harvesters). The inter-temporality underlying many agricultural decisions, especially those involving perennial crops, has required the use of multi-period models. Risk and uncertainty necessitate methods like game theory, Markowitz modeling, Monte Carlo simulation, dynamic programming and Markov chains (see Hazell and Norton, 1986; Kristensen, 1994; Yates and Rehman, 1996; Romero, 2000; Rehman and Romero, 2006; Weintraub and Romero, 2006).

There are sizable differences between farm-level and regional- and national-level models. Perhaps the most significant difference lies in the structure of the objective function. At a regional and national level, the idea of maximizing the gross margin is not applicable. Within a macroeconomic context, social preferences are, in fact, much better represented by the joint maximization of consumer and producer surpluses subject to compliance with the usual technical and resource constraints, and additional conditions related to market clearance. This type of approach requires the use of non-linear models, often quadratic programs. Samuelson (1952) pioneered this approach; Schneider and McCarl (2003) combine a regional planning model and a greenhouse gas mitigation model; Heckelei and Britz (2001) formulate a regional model for agricultural policy.
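As a stylized illustration of why such sector models become quadratic, assume a single product with linear inverse demand p(q) = a - bq and constant unit production costs c_i in each region; maximizing the sum of consumer and producer surplus then yields a quadratic objective, as sketched below (this is a generic formulation, not that of any specific author cited above).

```latex
% Stylized price-endogenous sector model (illustrative assumptions: linear
% inverse demand p(q) = a - bq, constant unit costs c_i, linear resources).
\begin{align*}
\max_{q,\, x_i \ge 0}\;\; & \int_0^{q} (a - b\,\tau)\, d\tau - \sum_i c_i x_i
  \;=\; a q - \tfrac{b}{2} q^2 - \sum_i c_i x_i \\
\text{s.t.}\;\; & q \le \sum_i x_i && \text{(market clearance)}\\
 & A x \le r && \text{(regional resource constraints)}
\end{align*}
```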

2.2. Agriculture and the environment

Farming has, for the last 20 years, been considered a joint production process. Through this process, agriculture produces outputs of great importance for the welfare of society, but, environmentally speaking, agriculture also produces “public bads.” For example, maize produces not only grain but also nitrate leaching and salt contamination in soil. The interaction between agriculture and the environment in terms of sustainable management practices is currently of paramount importance. Crop simulation models help to quantify the environmental effects (such as soil erosion or pesticide use) associated with different management practices. This information is incorporated into various types of optimization models (e.g., linear programs, dynamic programs, quadratic programs) in order to determine feasible solutions from an economic as well as an environmental perspective (i.e., sustainable solutions), e.g., Pacini et al. (2004).

A more recent attempt to link agriculture and the environment incorporates geographical information systems that recognize and quantify the spatial dimension underlying many agricultural planning models. Merging geographical information systems with OR methods, chiefly mathematical programming, leads to spatial decision support systems that are proving to be extremely promising (Zekri and Boughanmi, 2007).

Different multiple-criteria decision-making methods have been used to quantify the trade-offs among different indicators of agricultural sustainability. Applying multi-objective programming techniques, it is possible to determine or to approximate the Pareto frontier between economic, as well as environmental, indicators. This type of information is essential, among other things, as a powerful aid to design efficient agri-environment policies (see Agrell et al., 2004).

Finally, the connection between agriculture and the environment implies not only the recognition of different decision-making criteria but also the consideration of several stake-holders with different perceptions towards these criteria. Accordingly, a combination of multiple-criteria decision making and group decision making with public participation has arisen. In this, albeit rather new, line of work, the use of OR is appropriate (see Marchamalo and Romero, 2007; Zekri and Boughanmi, 2007).

2.3. Assessment of agricultural policies

Mathematical programming models assess the effects of different agricultural policies at the farm and national levels. This type of modeling was initially addressed within a purely normative context under the assumption that farmers are “profit maximizers”; consequently, a mathematical programming model maximizes profits subject to a realistic set of constraints. This model can be used to assess the reaction of farmers to different policy scenarios. However, in many cases, the actual behavior of farmers is not realistically explained by a “profit maximization hypothesis”; thus, the policy predictions are not very accurate.

Two lines of research that move from normative to positive economics, and hence yield more accurate policy predictions, have appeared recently. One is the approach known as Positive Mathematical Programming (PMP) proposed by Howitt (1995a, 1995b). PMP is a linear programming-based method that calibrates the model to farmers' actual behavior using information derived from the dual variables of calibration constraints. PMP has been used extensively to evaluate the effects of changes in several agricultural policies on farmers' behavior, see, e.g., Júdez et al. (2001).
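A schematic of the PMP calibration logic, in simplified notation, is given below; the quadratic-cost variant shown is only one of several used in practice, and the details differ across applications.

```latex
% Schematic PMP calibration (simplified quadratic-cost variant).
% Step 1: solve an LP with calibration constraints bounded by observed
% activity levels x^0, and record the duals lambda of those constraints.
\begin{align*}
\max_{x \ge 0}\;\; & g^\top x \\
\text{s.t.}\;\; & A x \le b, \qquad x \le x^0 (1 + \varepsilon) \;\;[\lambda]
\end{align*}
% Step 2: use lambda to build a nonlinear cost term, e.g.
% Q_{ii} = \lambda_i / x_i^0, and solve the calibrated model
\begin{align*}
\max_{x \ge 0}\;\; & g^\top x - \tfrac{1}{2}\, x^\top Q\, x
\qquad \text{s.t.}\;\; A x \le b,
\end{align*}
% which is calibrated to reproduce x^0 at its optimum and can then be used
% to simulate policy changes.
```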

The second line of research in a positive economics context elicits, in a non-interactive way, a utility function capable of reproducing the behavior observed for a particular farmer or a particular group of farmers. Sumpsi et al. (1997) and Amador et al. (1998) propose a method based on goal programming to obtain such a utility function within a multiple-criteria context. This construct is then used to simulate the farmers' reactions to different policy scenarios. This approach has been used widely to obtain much more accurate policy predictions than when normative profit maximization behavior is assumed, see, e.g., López-Baldovin et al. (2006).

2.4. Multiple criteria for agricultural decisions

It is self-evident nowadays that the effective functioning of an agricultural system involves considering biological, technical, private, social and environmental criteria, as well as resolving the conflicts inherent therein. Therefore, multiple criteria are the rule rather than the exception in agricultural decision making, whether the decision maker is a farmer or a policy maker.

Multiple-criteria decision-making methods in agriculture assume both continuous and discrete forms. In the continuous case, we have a feasible set with an infinite number of points defined by several linear and non-linear constraints. This type of continuous setting is very common in agricultural planning problems at any level of aggregation. Within this scenario, the most widely used multiple-criteria decision-making methods have been goal programming, multi-objective programming and compromise programming. Hayashi (2000) presents an extensive survey of continuous applications, and Romero and Rehman (2003) provide a comprehensive reference on multiple-criteria decision-making analysis for continuous problems in agriculture. In the discrete case, the feasible set is characterized by a finite and usually relatively small number of alternatives that represents potential solutions to the decision-making problem. This type of discrete setting is quite common in selection problems in agricultural systems, where a finite number of systems have to be ranked according to several criteria. The most widely used multiple-criteria decision-making methods for a discrete setting have been the analytic hierarchy process, multi-attribute utility theory and approaches such as ELECTRE (Roy, 1991) based on outranking relations. An updated and extensive survey of discrete applications using these approaches is provided in Hayashi (2007).

2.5. Efficiency analysis in the agricultural industry

For many years, the efficiency of agricultural units like farms, cooperatives and districts was analyzed using econometric methods based on parametric stochastic frontier techniques. Since the appearance of the non-parametric approach known as data envelopment analysis at the end of the 1970s, a sizable number of applications have investigated the relative efficiency of agricultural units at different levels of aggregation. Data envelopment analysis is a linear programming-based method that initially operated under the assumption of constant returns to scale (Charnes et al., 1978) but was later extended to variable returns to scale (Banker et al., 1984). The key drawback of parametric methods based on stochastic frontiers is the need to assume a functional form for the production frontier. The data envelopment analysis approach does not require such restrictive assumptions, which makes it more flexible, especially for problems involving multiple inputs and outputs, as usually applies in agriculture. For these reasons, data envelopment analysis has been applied extensively to efficiency analysis in agriculture over the last 10 years, see, e.g., Iraizoz et al. (2003), which addresses horticultural farms in Navarra, Spain.
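The following sketch illustrates the linear programming core of data envelopment analysis: an input-oriented, constant-returns-to-scale model solved once for each decision-making unit. The farm data are hypothetical and serve only to show the structure of the efficiency calculation.

```python
# Minimal sketch of an input-oriented CCR data envelopment analysis model
# (constant returns to scale) solved with linear programming. Each row of
# X (inputs) and Y (outputs) is one hypothetical decision-making unit (DMU).
import numpy as np
from scipy.optimize import linprog

# Two inputs (land in ha, labor in hours) and one output (revenue) per farm.
X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 500.0], [25.0, 250.0]])
Y = np.array([[100.0], [120.0], [150.0], [90.0]])

def ccr_efficiency(o):
    """Efficiency of DMU o; decision variables are (theta, lambda_1..lambda_n)."""
    n, m = X.shape          # number of DMUs, number of inputs
    s = Y.shape[1]          # number of outputs
    c = np.zeros(1 + n)
    c[0] = 1.0              # minimize theta
    # Inputs: sum_j lambda_j x_ij - theta x_io <= 0 for each input i.
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Outputs: -sum_j lambda_j y_rj <= -y_ro for each output r.
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    bounds = [(None, None)] + [(0, None)] * n
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```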

2.6. Determining feeding stuffs and livestock rations in agriculture

The least-cost determination of feeding stuffs and livestock rations is perhaps one of the problems where linear programming has been applied most widely. Nowadays, most feeding stuffs companies use linear programming to calculate different commercial mixes. This approach has been extended to parametric linear programming to study the effect of ingredient price changes on the optimum mix and to chance constraint programming to incorporate the uncertainty surrounding the actual content of some of the ingredients. Literature has emerged since the mid-1980s espousing new methods that represent a promising line of research, although their level of application in the industry is still very scant. When a livestock ration is calculated at the farm level, the farmer is interested in a mix that achieves a trade-off between several criteria such as cost, the bulkiness of the mix and nutritional imbalances. These realistic considerations transform linear programming problems into multiple-criteria decision-making problems (e.g., Rehman and Romero, 1984; Czyzak and Slowinski, 1991; Tozer and Stokes, 2001).

An important problem for the feeding stuffs case is the over-rigid specification of nutritional requirements. Some relaxation of the nutritional constraints can lead to important reductions in the cost of the mix without significantly affecting its nutritional quality. This type of relaxation has been addressed with operations research tools like goal programming with penalty functions (Lara and Romero, 1992), fuzzy mathematical programming (Czyzak, 1989) and interactive multi-objective programming (Lara and Romero, 1994).
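A generic weighted goal program of this type can be sketched as follows, where a_k collects the ingredient contents of nutrient k, t_k is its target, n_k and p_k are under- and over-achievement variables penalized by weights alpha_k and beta_k, c is the cost vector and C_max a cost ceiling; the notation is illustrative rather than the specific formulation of any of the papers cited above.

```latex
% Illustrative weighted goal program relaxing rigid nutrient requirements.
\begin{align*}
\min_{x,\, n_k,\, p_k \ge 0}\;\; & \sum_k \big(\alpha_k\, n_k + \beta_k\, p_k\big) \\
\text{s.t.}\;\; & a_k^\top x + n_k - p_k = t_k && \text{for each nutrient } k,\\
 & c^\top x \le C_{\max}, \qquad \mathbf{1}^\top x = 1.
\end{align*}
```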

Finally, very recently, the calculation of livestock rations has incorporated environmental concerns. Within this context, the aim of the mix is to obtain highly productive results like animal weight gain at a low cost and with minimum environmental impact (e.g., minimize the nitrogen discharge on the farm). Castrodeza et al. (2005) and Peña et al. (2007) successfully address this type of problem by using multi-objective fractional programming techniques.

3. Fisheries

Many coastal areas are highly dependent on the fishing sector. This fact, together with the over-exploitation of many fisheries (Bodiguel et al., 2009), has prompted numerous governments to subsidize the fishing industry. It would therefore be desirable to increase efforts in collecting and collating more accurate data on the landings and discarding of fish, the state of fish stocks and the economics of fishing fleets as a basis for improved bioeconomic analyses to support management decisions.

Since seminal papers on fisheries management (Gordon, 1954; Scott, 1955), operations research in fisheries has played a prominent role. While the initial contributions developed mathematical models to analyze fish stocks, research from the 1970s extended the applications to estimations of the technical efficiency of vessels, capacity utilization, fishermen's behavior and compliance, as well as the effect that social networks exert on vessel performance. Bioeconomic models are becoming increasingly complex. In this section, we provide an overview of operations research in fisheries considering the following four areas: (i) bioeconomic modeling, i.e., mathematical modeling that includes an economic model and the dynamics of living resources, (ii) technical efficiency, (iii) fishermen's behavior and (iv) multi-criteria decision making; this contribution is an update with a focus different from that of previous work (Bjørndal et al., 2005).

3.1. Bioeconomic modeling

The classical application of operations research in fisheries is that of biological and bioeconomic models. These models combine population dynamics with the economics of fishing fleets. Biological models can be classified into two main types: those designed to utilize only aggregate data and those designed to utilize disaggregated and more detailed data, e.g., year-class models. Schaefer (1954) develops a classical aggregated biological model. This model assumes a continuous-time logistic growth function, the most traditional and commonly used growth function for many fish species, in which the rate of change of the fish population is a parabolic function of the current population size. Extensions to the model afford more flexibility (Fox, 1970). Analogous models use discrete time periods and consider, e.g., the age structure of the biomass, estimating the potential yield by balancing growth against mortality (Beverton and Holt, 1957); the growth parameters in this model are not density-dependent on the biomass.

Other authors extend this work to assign quotas by including an economic model of the fishing fleet (Gordon, 1954), resulting in a static bio-economic model, known as the Gordon–Schaefer model, which has been widely applied to analyze the open access fishery and to derive the maximum economic yield, i.e., the yield given by the value of the largest positive difference between the total revenue and the total cost of fishing. Scott (1955) introduces dynamic modeling aspects; Clark and Munro (1975) apply optimal control theory to undertake dynamic optimization of the Gordon–Schaefer model to derive conditions for an optimal stock level and associated harvest and effort levels of fishing; the authors make various assumptions regarding prices, costs and the discount rate. Clark (1976), an influential textbook on bioeconomics, has inspired economists, quantitative biologists and applied mathematicians.
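In standard notation (B biomass, r intrinsic growth rate, K carrying capacity, q catchability, E fishing effort, p price and c unit cost of effort), the Gordon–Schaefer model and its maximum-economic-yield condition can be summarized as follows.

```latex
% Gordon-Schaefer surplus production model (standard textbook form).
\begin{align*}
\frac{dB}{dt} &= rB\Big(1 - \frac{B}{K}\Big) - qEB
  && \text{(logistic growth less harvest)}\\
B^{*}(E) &= K\Big(1 - \frac{qE}{r}\Big), \qquad
Y(E) = qE\,B^{*}(E) = qKE\Big(1 - \frac{qE}{r}\Big)
  && \text{(equilibrium stock and sustainable yield)}\\
\pi(E) &= p\,Y(E) - cE
  && \text{(static resource rent)}\\
E_{\mathrm{MEY}} &= \frac{r}{2q}\Big(1 - \frac{c}{pqK}\Big), \qquad
E_{\infty} = \frac{r}{q}\Big(1 - \frac{c}{pqK}\Big)
  && \text{(maximum economic yield vs. open access)}
\end{align*}
```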

Surplus production models in our context are fish population dynamic models that consider only the changes in the biomass due to its growth and the effect of fishing. To estimate the parameters in the model, data requirements are limited to a time series of catch and a relative abundance index (such as captures per unit of effort). While surplus production models have been criticized for their simplicity, their simplicity has also been a subject of praise. The models' applicability is still a matter of debate (Prager, 2003); however, they seem to represent the best, and possibly only, option for use when data availability is poor. Bjørndal and Munro (1998) provide an overview of the economics of fisheries management including bioeconomic modeling.

3.2. Technical efficiency

Technical efficiency measures the deviation between the output actually obtained and the maximum output attainable from the resources used in a production process. Parametric and non-parametric (deterministic) techniques can be used to measure technical efficiency. Among the former, the most common is the stochastic production frontier, whereas among the latter, data envelopment analysis is most widely used.

The stochastic production frontier allows the inclusion of a random error term, which can be relevant in noisy environments. In contrast to the non-parametric techniques, several tests can be used to check the significance of the parameters and the main hypotheses. The main disadvantage of the stochastic production frontier is that it can account for only a single output. Other disadvantages relative to data envelopment analysis are that a functional form for the production process must be imposed and that distributional assumptions must be made for the error and inefficiency terms.

In both cases, the output from an optimization model allows for the construction of a frontier, which represents the maximum output attainable from a given level of efficiently used inputs. In data envelopment analysis, observations cannot lie above the frontier because the technique does not consider the effect of random noise; observations below the frontier are considered inefficient. In contrast, under the stochastic production frontier, observations can lie either above or below the frontier due to the effect of random noise.

The stochastic production frontier is an extension of a production function. Production functions are estimated based on how well the different observations perform, on average, given the observed data. As a consequence, production functions assume that all observations are equally efficient and that any difference in their performance is purely random. In contrast, stochastic frontiers distinguish differences in performance due to inefficiency from differences due to random (stochastic) effects. Consequently, two random terms appear in stochastic frontiers: the traditional error term, which accounts for the random variability inherent in any production activity, and an inefficiency term, which accounts for persistent deviations of a given unit over time. Data envelopment analysis, in contrast, is a non-parametric method to estimate technical efficiency based on linear programming techniques. Its frontiers are based on the optimal observations, and the frontier represents the maximum level of output that could be obtained, given a certain level of inputs, if the observations considered were efficient. Data envelopment analysis has the advantage of allowing the inclusion not only of multiple inputs but also of multiple outputs.
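A standard stochastic production frontier specification, shown here in a Cobb–Douglas form with a half-normal inefficiency term (one common distributional choice among several), makes this two-error structure explicit.

```latex
% Standard stochastic production frontier specification (Cobb-Douglas form;
% the half-normal assumption on the inefficiency term is one common choice).
\begin{align*}
\ln y_i &= \beta_0 + \sum_k \beta_k \ln x_{ik} + v_i - u_i,\\
v_i &\sim N(0, \sigma_v^2) \;\; \text{(random noise)}, \qquad
u_i \ge 0,\; u_i \sim \big|N(0, \sigma_u^2)\big| \;\; \text{(inefficiency)},\\
\mathrm{TE}_i &= \exp(-u_i) \in (0, 1].
\end{align*}
```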

The more recent multi-output stochastic approach, known as stochastic distance functions, has the advantage of including random error while, at the same time, accommodating multiple outputs in the analysis. However, it requires restrictive assumptions (such as homogeneity of degree one in the outputs). Moreover, if there is evidence that a certain distribution can be assumed for the inefficiency term, then the stochastic production frontier provides better estimates.

Kirkley et al. (1995, 1998) take the first steps towards the formal measurement of technical efficiency by assessing it in the Mid-Atlantic sea scallop fishery using the stochastic frontier approach. Campbell and Hand (1999) and Sharma and Leung (1999) analyze the technical efficiency of other fisheries, also using the stochastic production frontier approach. The Food and Agriculture Organization (FAO) promotes efficiency measurement as an important tool of fisheries analysis and suggests that the best methodology to assess capacity utilization in fisheries is data envelopment analysis (FAO, 1998, 2000). Similarly, the European Commission (ECOM) financed several research projects to measure fishery efficiency based on the necessity for reducing fishing capacity (ECOM, 2001).

At the start of the new millennium, many papers appeared measuring technical efficiency in different fisheries: e.g., Pascoe et al. (2001) compare physical and harvest measures, whereas Herrero and Pascoe (2003) compare value-based and catch-based output measures. These techniques are also used to estimate fish stock indices (Pascoe and Herrero, 2004) and to measure capacity utilization (Kirkley et al., 2001, 2002; Pascoe et al., 2001). Pascoe et al. (2007) use distance functions to analyze two North Sea fleet segments, specifically, the catch composition achieved through targeting, using elasticities of substitution; the authors conclude that larger vessels are more selective than smaller ones. Two review articles on efficiency and productivity, Felthoven and Morrison (2004) and Herrero (2004), respectively suggest directions for productivity measurement in fisheries and compare different efficiency techniques applied to two Spanish fisheries.

3.3. Fishermen's behavior

Imposing a new management regime on a fishery changes the incentives facing fishermen, and management systems can fail because of unexpected reactions (Gillis et al., 1995). Discrete choice models (McFadden, 1974) have been used to estimate how fishermen react to regulations; other authors subsequently develop these models, e.g., Sampson (1994). Behavioral models maximize a log-likelihood function, which can assume several forms, e.g., binomial or multinomial logit. The binomial form is most commonly used when there are only two possible behaviors, while the multinomial logit form is used when there is a finite number of, but more than two, possibilities. The dependent variable in these cases is the proportion of the actions taken by a fisherman or a fleet. Behavior is assumed to depend on fishery-specific characteristics (such as profitability or landing taxes) and on vessel-specific characteristics (such as skipper experience, vessel age and technology). While most applications of behavioral models in fisheries are related to entry, exit or status quo situations in a given fishery (Bjørndal and Conrad, 1987), some studies investigate information flow among vessels (Mangel and Clark, 1983; Little et al., 2004) or responses to stock collapse (Mackinson et al., 1997). Eggert and Ellegard (2003) examine behavior associated with regulation compliance; Babcock and Pikitch (2000) investigate different strategies in multi-species fisheries.
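In generic form, the multinomial logit model underlying many of these behavioral studies specifies choice probabilities and a log-likelihood as follows (one coefficient vector is normalized to zero for identification).

```latex
% Multinomial logit choice probabilities and log-likelihood (generic form);
% z_{ij} are fishery- and vessel-specific attributes of alternative j for
% fisherman i, and one beta_j is normalized to zero for identification.
\begin{align*}
P_{ij} &= \frac{\exp(\beta_j^\top z_{ij})}{\sum_{k=1}^{J} \exp(\beta_k^\top z_{ik})}
  && \text{probability that fisherman } i \text{ chooses alternative } j,\\
\ln L(\beta) &= \sum_{i} \sum_{j=1}^{J} y_{ij}\, \ln P_{ij}
  && y_{ij} = 1 \text{ if alternative } j \text{ is observed for } i.
\end{align*}
```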

3.4. Multi-criteria decision-making analysis

Overexploitation of many fish stocks combined with excess fishing capacity has led governments to impose regulations that control fishing. Often, regulations imply reductions of the fishing effort or the fishing capacity to rebuild stock size and to achieve a higher efficiency in the long term. However, these regulations bring about undesirable effects in the short term, e.g., unemployment and/or reductions in the fishermen's income. Thus, the long-term environmental and economic objectives conflict with short-term social and political goals. As a consequence, politicians often prefer to maintain the status quo rather than resorting to draconian measures to ensure long-run sustainability. Mardle and Pascoe (2002) use a multi-objective model of the North Sea to conclude that this is essentially a principal–agent problem; the objectives of the policy makers do not necessarily reflect the objectives of society as a whole. Sandiford (2008) is among the first to apply multi-criteria decision-making techniques to fisheries; the author determines optimal resource allocation for a Scottish fishery. Stewart (1988) analyzes a South African fishery. Using a European database with case studies from Denmark, France, Spain and the United Kingdom, Mardle et al. (2002) compare several types of fisheries and fishery management systems. A related problem is one of aggregating decision-makers' preferences, e.g., Mardle et al. (2004), who review preference procedures for different stakeholders in North Sea fisheries.

4. Forestry

Operations research has been used in forest management since the 1960s. The United States Forest Service implemented the first widely used linear program, Timber RAM (Navon, 1971). Early applications focused on efficient management and harvesting; however, since the 1980s, environmental issues have become increasingly relevant, in particular for native forests. Problems in forestry can be categorized as strategic, tactical or operational.

4.1. Strategic forest management

Government agencies and private firms typically use linear programming for long-range planning, e.g., over 200-year horizons. Over such a horizon, models address two to three tree rotations of 80 years each, within which silvicultural policies, aggregate harvesting, sustainability of timber production and environmental concerns are considered (Richards and Gunn, 2003). Well-known linear programming-based systems (Navon, 1971; Johnson and Scheurman, 1977; Weintraub and Romero, 2006) increasingly incorporate environmental concerns. Successful applications for plantations, mostly pine and eucalyptus, have also been developed by private firms in Canada, Chile, Sweden and the United States (García, 1984; Epstein et al., 1999a,1999b; Ouhimmou et al., 2009). Models aggregate time, space and tree species to maintain a moderate size, and their use is considered standard industry practice.
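A skeleton of such a harvest scheduling linear program, in the spirit of the Model I formulation of Johnson and Scheurman (1977), is sketched below; real systems add many more detailed constraints.

```latex
% Skeleton Model I harvest scheduling LP; x_{ij} is the area of analysis
% unit i assigned to management prescription j, A_i the area of unit i,
% v_{ijt} the per-hectare harvest volume of prescription j in period t.
\begin{align*}
\max_{x \ge 0}\;\; & \sum_{i} \sum_{j} c_{ij}\, x_{ij}
  && \text{(discounted net revenue)}\\
\text{s.t.}\;\; & \sum_{j} x_{ij} = A_i && \forall i
  \;\; \text{(area of each unit fully assigned)}\\
 & \sum_{i} \sum_{j} v_{ijt}\, x_{ij} = H_t && \forall t
  \;\; \text{(timber harvested in period } t)\\
 & (1 - \delta)\, H_t \le H_{t+1} \le (1 + \delta)\, H_t && \forall t
  \;\; \text{(even-flow constraints)}
\end{align*}
```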

4.2. Tactical forest planning

Tactical planning serves as an interface between the more conceptual strategic planning and detailed operational decisions (Church, 2007). At the tactical level, tree species and timber products are aggregated, but spatial definitions are considered in detail. Typical horizons encompass decisions related to harvesting already planted trees, which can range from several years for pine or eucalyptus plantations to several decades for slower growing native species, such as hardwoods. Related decisions involve where to harvest, how to address environmental concerns and what infrastructure to build; in native forests, road building comprises about 40% of operational costs. Mixed integer programming models have been used to consider decisions that integrate road building and harvesting. The United States Forest Service used a mixed integer program solved with a heuristic (Kirby et al., 1986; Weintraub et al., 1994b), and a Chilean forest firm applied Lagrangian relaxation to a strengthened linear programming formulation (Andalaft et al., 2003). Church (2007) presents a series of applications that the United States Forest Service uses to interface strategic plans with specific operational actions on the ground.

4.3. Operational decisions

Operational, or “on the ground,” decisions are made with horizons of 1 day to several months, and include harvesting, machine location and transportation scheduling. The most relevant decisions at the operational level include: (i) which areas should be harvested within the planning horizon and how to cut (or buck) trees into logs so as to efficiently satisfy the required demand in length, diameter and quality; (ii) how to allocate harvesting equipment; and (iii) how to haul timber. To make these decisions, both supply and demand need to be known. Demand characteristics for each product, generally length and diameter, are typically known for the next 6–16 weeks. In terms of supply, inventory models estimate the characteristics of the standing timber available in each harvesting area.

4.3.1. Harvesting

Linear programming models have been proposed and, in part, implemented to support decisions involving both the selection of areas to harvest and the bucking instructions given to foresters. Eng et al. (1986) propose a Dantzig–Wolfe decomposition approach, where subproblems generate the tree bucking patterns. García (1990) develops a linear program used in New Zealand. Epstein et al. (1999a) develop a branch-and-bound column generation approach to produce bucking patterns successfully used by Chilean forest firms. In some cases, bucking is carried out at plants, which receive whole logs. While more expensive logistically, this approach takes better advantage of the timber, as each log is bucked individually, considering the log and the demand for products. Marshall (2007) demonstrates that software based on dynamic programming, network models, simulation and/or heuristics can significantly increase the value obtained from each log. This process can be based on the notion of “buck-to-value,” where each final product has a given value, or “buck-to-order,” where specific quantities of each product need to be produced.
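The following sketch shows the dynamic programming idea behind buck-to-value optimization for a single stem; log lengths and prices are hypothetical, and taper, diameter and quality classes, which real bucking systems track, are ignored for simplicity.

```python
# Minimal "buck-to-value" sketch: a dynamic program that cuts a single stem
# into logs so as to maximize total value. Log lengths, prices and the stem
# length are hypothetical; diameter and quality classes are ignored.

def buck_to_value(stem_length_dm, products):
    """products: list of (log_length_dm, value); returns (best value, cut list)."""
    best = [0.0] * (stem_length_dm + 1)    # best[l] = max value of a stem of length l
    choice = [None] * (stem_length_dm + 1)
    for l in range(1, stem_length_dm + 1):
        for length, value in products:
            if length <= l and best[l - length] + value > best[l]:
                best[l] = best[l - length] + value
                choice[l] = length
    # Recover the sequence of log lengths that achieves the best value.
    cuts, l = [], stem_length_dm
    while l > 0 and choice[l] is not None:
        cuts.append(choice[l])
        l -= choice[l]
    return best[stem_length_dm], cuts

# Example: a 122 dm stem and three log products (length in dm, value in $).
products = [(49, 40.0), (37, 28.0), (25, 15.0)]
print(buck_to_value(122, products))
```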

4.3.2. Location of machinery and roads

In mountainous forests, locating harvesting machinery is non-trivial. Skidders are used for flat terrain, while cable logging systems or towers are located on hilltops to transport logs via cables. Secondary roads provide access to tower locations and flat areas for skidders. Geographical information systems have been developed to support decisions regarding where to locate machinery and roads. Firms in Chile have used one such system, PLANEX, which contains a heuristic (Epstein et al., 2006). This is a difficult problem that combines a plant-location structure (timber cells to be harvested from different machine locations) with a fixed-cost multi-commodity flow structure (timber flow and road building).

4.3.3. Transportation

Hauling timber in plantation harvesting constitutes at least 40% of total operational costs. Transportation takes several forms. In countries like Brazil, Chile and New Zealand, daily transport is based on trucks hauling logs of various dimensions from different forest harvesting locations to destinations such as plants or ports. Operations research models support these decisions. Firms in Brazil, Chile and South Africa have successfully used the ASICAM system (Weintraub et al., 1996), which contains a simulation-based heuristic that quickly develops daily schedules for fleets of up to several hundred trucks. Its use has significantly reduced vehicle queueing and costs. A real-time truck dispatch system based on queueing and heuristic column generation was developed in New Zealand (Rönnqvist and Ryan, 1995). Taking backhauling into consideration can result in significant savings (Forsberg et al., 2005). In Sweden, trucks collect loads at several forest sites before delivering them to a plant. Column generation has successfully solved this routing problem (Andersson et al., 2008). Epstein et al. (2007) provide a more complete description of different transportation systems.

4.4. The forest supply chain

The forest production chain extends from trees in forests to primary and secondary transformation plants and then to markets as processed products like paper or panels. Strategic supply chain decisions include how to coordinate a long-range sustained harvesting plan with plant capacities, market demands and transportation capabilities. At the operational level, there is an obvious need to coordinate contracted sales with production capabilities, harvesting and transportation. In practice, coordination at the operational level is often not done well, although some models propose to integrate the forest supply chain (D'Amours et al., 2008).

4.5. Environmental concerns

Environmental concerns assume several interrelated forms: preservation of wildlife and scenic beauty, and prevention of erosion and water contamination. Spatial characterization of harvesting areas plays an important role in handling these concerns. One important legal restriction applied in countries such as Canada, New Zealand and the United States is the so-called maximum allowed opening size, which limits the amount of contiguous area harvested, usually to between 25 and 50 hectares. This restriction protects wildlife (for example, elk do not feed in a clearing unless they are close to the protection yielded by mature trees), scenic beauty and soil quality. Typically, forests are divided into cutting units (smaller than the allowed maximum opening size). Commonly applied rules dictate that if one unit k is harvested in period t, none of its neighboring units can be harvested until the trees in unit k reach a minimum height. Given the complexity of implementing these constraints, heuristics are often used to solve problems containing them. Exact approaches have also proved successful: Weintraub et al. (1994a) propose a column generation approach in which a stable set problem is solved in the subproblem; Murray and Church (1996) propose strengthening the formulation by replacing the pairwise adjacency constraints with cliques.
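In a simplified unit-restriction form, the pairwise adjacency constraints and their clique strengthening can be written as follows, where x_{kt} = 1 if unit k is harvested in period t and G is the green-up window in periods.

```latex
% Unit-restriction adjacency constraints with a green-up window of G periods;
% x_{kt} = 1 if cutting unit k is harvested in period t.
\begin{align*}
& x_{kt} + x_{k't'} \le 1
  && \forall\, (k, k') \text{ adjacent},\;\; |t - t'| \le G
  && \text{(pairwise form)}\\
& \sum_{k \in C} x_{kt} \le 1
  && \forall\, \text{cliques } C \text{ of mutually adjacent units},\;\; \forall\, t
  && \text{(strengthened clique form, single-period version)}
\end{align*}
```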

Basic units in a forest are smaller than the harvesting blocks, and typically range from 1 to 10 hectares. Traditionally, experienced foresters form harvesting blocks by grouping basic units based on information from geographical information systems. Explicitly including the formation of blocks in a model provides better solutions but yields a more complex problem, which currently presents challenges to forest researchers. Initially, only heuristic approaches were considered suitable (Richards and Gunn, 2003); more recently, exact solution techniques have solved moderately sized problems. One approach is based on enumerating all possible harvesting blocks in a strengthened clique formulation (Goycoolea et al., 2005); McDill et al. (2002) add constraints that preclude minimal infeasible groups of contiguous units. An even more challenging problem, tackled to date only with heuristics, involves incorporating large continuous areas of old tree growth in order to preserve some species. See Murray et al. (2007) for an extended discussion.

Another important issue is the analysis of how wildlife can be protected and preserved. Static and dynamic models, models of spatial autocorrelation and of sustainability are developed to analyze the behavior of wildlife population growth and dispersal patterns. Decisions involve the treatment of forest areas, including defining protected areas. A typical problem concerns how to best use a limited budget to protect the habitat of a species (Hof and Haight, 2007).

4.6. Forest fires

Forest fires play an important role in native forests. Once fires ignite, if they spread they can severely damage large forests and even threaten nearby urban areas. Operations research has played a role in fire prevention, fire detection and the containment of fires once they have started. Models have been developed to create fire breaks, which help to contain fires, by harvesting or eliminating flammable vegetation in certain areas. Models have also had an impact in determining the best fleet to mount initial attacks on fires and in deriving real-time dispatching rules for aircraft and crews (Martell, 2007).

4.7. Multiple-criteria decision making and uncertainty

Multi-criteria decision making and uncertainty are important aspects of models that determine timber production, mitigate effects on the environment and wildlife and spur the local economy. Diaz-Balteiro and Romero (2007) present a state-of-the-art analysis of multi-criteria decision making in forestry, including goal programming, the analytic hierarchy process and compromise programming, and discuss specific cases with multiple objectives including the volume of timber harvested, the economic return and timber production and inventory policies.

Market prices, timber growth rates, the occurrence of fires, the abundance of pests and wildlife growth and migration patterns are all associated with uncertainty. Lohmander (2007) uses chance-constrained programming and stochastic dynamic programming. Because of implementation difficulties, such as assessing decision makers' preferences and determining robust probabilities of events, incorporating multi-criteria decision making and uncertainty explicitly into decision-making processes has been confined principally to case studies, with few reported applications.

5. Mining

Mining is the undertaking of exploiting naturally occurring, nonrenewable resources within the earth for a profit. Mining differentiates itself from other common natural resource areas, e.g., fisheries, agriculture and forestry, primarily in that minerals are nonrenewable. Once a deposit has been fully exploited, the site is permanently closed. Furthermore, mankind has no control over where these resources occur, e.g., one cannot plant a gold deposit. And once an ore body has been identified and analyzed via geological sampling, the reserves are proven and do not diminish (or flourish) based on external environmental factors such as floods or benevolent weather. In fact, human intervention is primarily responsible for the grade and type of ore recovered relative to the projections of the nature of an ore body. Nonetheless, as with any natural resource, it must be exploited subject to certain geometrical (precedence) restrictions, limits on the rate at which the resource can be processed and the quality of resource extracted. Goals in extracting the resource are similar to those regarding the extraction of other natural resources: (i) maximize net present value, (ii) minimize the deviation between the amount of extracted resource and a contractually specified amount, (iii) minimize impact to the environment, (iv) maximize production or throughput and/or (v) maximize operational flexibility with a view to minimizing risk. The first of these objectives appears most commonly in the academic literature.

The stages of mine development consist of prospecting and exploring an ore body of interest to determine whether ore extraction might be economically viable. Economic, geological and statistical tools are often used at this stage to predict the quality of the ore body using sampling techniques and to estimate its value using economic techniques. If an ore body is deemed economically viable, it can be profitably developed and exploited. It is in these stages that operations research tools are most commonly used, to design the mine and subsequently to plan ore extraction. Finally, the exploited area must be returned to its original, or at least to an environmentally acceptable, state.

Ore bodies are developed and exploited either on the surface or underground. Surface mining is the more common and straightforward method. Typically, open pit models maximize net present value subject to restrictions on the way in which ore can be extracted, and to resource, e.g., production and capacity, constraints. Underground extraction is more complicated, not only because there are a variety of underground methods that are used depending on the nature of the ore body and surrounding waste rock but also because more operational restrictions are usually involved, and because these operations tend to be very specific to a particular mine. Operations research modeling applied to mining applications dates back to the 1960s. Currently, operations research models for strategic, tactical and operational levels of planning within the development and exploitation phases have been constructed and implemented. In this section, we briefly review some of the seminal works in these categories.

5.1. Strategic mine planning

Very early applications of operations research focused on a strategic question in open pit design, namely the design of the ultimate pit limits, i.e., the boundary separating the blocks worth extracting from those left in the ground. The extracted material consists of three-dimensional blocks that either yield a profit in and of themselves or must be removed in order to access profitable blocks beneath them. Ignoring the time aspect and any associated resource constraints (e.g., production capacities), and for a given cutoff grade, i.e., the grade that separates an ore block from a waste block, this problem is a network model that can be solved easily using a maximum flow algorithm. Seminal work carried out in this area by Lerchs and Grossmann (1965) has been improved upon, e.g., by Underwood and Tolwinski (1998) and Hochbaum and Chen (2000). Because this problem is so easily solved, mine managers still apply the Lerchs–Grossmann algorithm (or some variant thereof) to aid in determining production schedules. Unfortunately, an optimal production schedule, or sequence of blocks to be mined throughout the horizon subject to geometric and resource constraints, may not necessarily fall within the ultimate pit limits. Therefore, although the ultimate pit limits were an important concept in guiding a production schedule decades ago, with today's computing power and algorithmic advancements, the ultimate pit limit problem is becoming anachronistic.
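The following sketch illustrates the underlying maximum closure/minimum cut computation on a tiny hypothetical block model: profitable blocks are connected to a source, waste blocks to a sink, and uncapacitated precedence arcs ensure that a block can be mined only if the blocks above it are also mined. This is the network idea behind Lerchs–Grossmann-type approaches, not a reimplementation of their algorithm.

```python
# Sketch of the ultimate pit limit problem as a maximum closure / minimum cut
# computation. Block values and the tiny two-level block layout are hypothetical.
import networkx as nx

# Block economic values: positive = profitable ore, negative = waste.
values = {"a1": -1.0, "a2": -1.0, "a3": -1.0,   # upper level (waste)
          "b1": 5.0}                            # lower level ore block
# Precedence: mining b1 requires first removing the three blocks above it.
precedence = {"b1": ["a1", "a2", "a3"]}

G = nx.DiGraph()
for block, v in values.items():
    if v > 0:
        G.add_edge("s", block, capacity=v)       # source -> profitable block
    elif v < 0:
        G.add_edge(block, "t", capacity=-v)      # waste block -> sink
for block, above in precedence.items():
    for a in above:
        G.add_edge(block, a)                     # precedence arc, uncapacitated

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
pit = sorted(source_side - {"s"})                # blocks on the source side
profit = sum(values[b] for b in pit)
print("blocks in pit:", pit, "pit value:", profit)
```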

Underground mine design is more complicated, not only because there are more operational constraints to consider but also because there is no single, generic design model that is applicable to all underground mines. Furthermore, the mathematical structure of such a model is usually an integer program whose solution necessitates the use of well-designed algorithms, many of which are still being developed. Efforts to optimize the design of underground mines have been made only in the past decade or two. One early attempt in particular, Alford (1995), borrows ideas from the open pit design. Brazil and Thomas (2007), together with a group of researchers, have made great inroads into the design of haulage roads for a sublevel stoping mine. The primary considerations in their model are to minimize road construction costs subject to constraints on the turn radius of the roads and on access to the ore to be extracted from various predetermined stopes.

5.2. Tactical mine planning

At a lower level of planning, there is work addressing the block sequencing problem. This work can be categorized either as strategic or as tactical depending on the size and number of blocks under consideration, the length of the time horizon, cutoff grade assumptions and the types of operational constraints included in the model. For example, models concerned with a fixed cutoff grade, many hundreds of thousands of blocks, 10 or 20 years, and no blending or inventory may be thought of as strategic, while those containing fewer blocks and a shorter horizon, while making decisions as to whether a block should be sent to a mill, a stockpile or a waste dump (i.e., assuming a variable cutoff grade) might be considered tactical. Early attempts at making decisions at either the strategic or the tactical level result in linear programs, which incorporate decisions regarding how much to extract from a block (and, for example, to send to inventory) in a time period, but do not handle sequencing constraints. As such, many early models assume a fixed mining sequence. More recent work addresses the discrete nature of the problem, and work in the academic literature reveals a number of ways in which researchers have attempted to tackle the problem. For example, Onur and Dowd (1993) use dynamic programming, Caccetta and Hill (2003) use branch-and-bound-and-cut techniques, Dagdelen and Johnson (1986) use a decomposition technique, specifically, Lagrangian Relaxation, and Denby and Schofield (1994) use genetic algorithms. While the first three of these are exact techniques, the latter authors face difficulty in bounding the quality of their solutions. On the other hand, dynamic programming becomes unwieldy for large problems. While both branch-and-bound(-and-cut) techniques, and Lagrangian Relaxation have shown promise, more research is needed to expedite the solution time and/or to obtain a feasible solution to the original problem. Aggregation procedures are also showing promise at reducing problem size and, correspondingly, increasing tractability, e.g., Ramazan (2007) and Boland et al. (2009), although disaggregating the solution into a usable result can still be problematic.
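A generic open pit block sequencing formulation, using cumulative ("by") variables and a single resource constraint for brevity, is sketched below; the models cited above add processing, blending, stockpile and cutoff grade decisions.

```latex
% Generic open pit block sequencing formulation with cumulative variables:
% x_{bt} = 1 if block b is mined by period t (with x_{b,0} = 0 by convention),
% v_b the block value, m_b its tonnage, d the discount rate, M_t capacity.
\begin{align*}
\max\;\; & \sum_{b} \sum_{t} \frac{v_b}{(1 + d)^t}\, (x_{bt} - x_{b,t-1})
  && \text{(discounted block values)}\\
\text{s.t.}\;\; & x_{bt} \le x_{b't} && \forall\, t,\; b' \in \mathcal{P}(b)
  \;\; \text{(predecessors mined first)}\\
 & x_{b,t-1} \le x_{bt} && \forall\, b, t
  \;\; \text{(once mined, stays mined)}\\
 & \sum_{b} m_b\, (x_{bt} - x_{b,t-1}) \le M_t && \forall\, t
  \;\; \text{(mining capacity)}\\
 & x_{bt} \in \{0, 1\}
\end{align*}
```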

The block sequencing problem also exists in underground mines in much the same way as it exists in surface mining problems; however, in this situation the sequencing constraints between blocks can be more complicated and mine- and/or mining-method specific. Again, depending on the number of blocks, the horizon under consideration and the level of detail, these models can be thought of as strategic or tactical. Whether the model is strategic or tactical, the basic sequencing question exists. Carlyle and Eaves (2001) plan block extraction for a sublevel stoping mine. Stopes are like pipes from which the ore is drawn, and the question arises as to when to open a stope and how much to draw from a particular stope. Additional considerations include the timing of various development and drilling activities. The authors consider a time horizon spanning ten 3-month periods. Epstein et al. (2003) present a mixed-integer program to determine the levels of extracted ore from several different underground copper mines; their long-term model yields profit improvements of 5% at El Teniente, the largest underground copper mine in the world. Sarin and West-Hansen (2005) schedule operations at a given mine that uses three different mining methods: longwall, room-and-pillar and retreat mining. They determine through the use of binary variables when various sections of ore should be mined and via which types of equipment. Continuous variables track the amount of ore extracted, which is subject to quality constraints. Because of the presence of both binary and continuous-valued decision variables, inter alia, they use a tailored Benders decomposition approach to solve their model; they present a case study containing 100 weekly time periods. Newman and Kuchta (2007) present a model in which machine placements, or areas of material, are scheduled for extraction in a sublevel caving mine. Their model minimizes deviation from contractual agreements while primarily adhering to machine placement sequencing rules. While this model, which plans production years in advance with monthly fidelity, can be thought of as strategic, in a subsequent paper, Newman et al. (2007) present a tactical scheduling model for the same mine. At this level of detail, current (or active) machine placements are subdivided into production blocks. The ore from the production blocks is subject to more detailed constraints such as production capacity.

5.3. Operational mine planning including transportation

Operational models are most commonly used to dispatch trucks in either an open pit or an underground mine. Weintraub et al. (1988) develop a network-based truck routing model whose implementation results in about an 8% increase in productivity at Chuquicamata, a large open-pit mine in northern Chile. Soumis et al. (1989) model shovel operations and truck transportation by determining where to locate shovels and subsequently how to dispatch trucks. White and Olson (1992) present software for open pit mines based on optimization models: shortest paths are first generated between all locations in the mine; a linear program then determines material flows along these paths; finally, a dynamic program assigns trucks to operate between shovels and dumps. Equi et al. (1997) describe a model for truck routing in an open pit mine to transport minerals and carry waste to different dumps outside the mine. Alarie and Gamache (2002) provide an overview of this work.

Vagenas (1991) models truck dispatch in underground mines, routing vehicles along shortest paths between origins and destinations and correcting these paths to resolve vehicle conflicts; the goal is to minimize loader delay. Researchers use simulation models to assess the productivity of underground systems, particularly in underground coal mines, where a variety of transportation systems, e.g., trucks, trains and conveyor belts, as well as the mining equipment itself, must operate in conjunction with each other. For example, McNearny and Nie (2000) study a conveyor belt system used in an underground (longwall and continuous miner) mine to transport coal from the mine face to the surface, with the goal of balancing the cost of the conveyor belt system against overall performance. Of particular interest is the identification of bottlenecks in the system and the impact of adding a surge bin to diminish their negative effect. Simsir and Ozfirat (2008) use a simulation model to assess the efficiency of loaders, crushers and conveyor belts, inter alia, and the number of cuts in a coal seam.

Little published work has been carried out at the operational level, in comparison with the strategic and tactical levels. In particular, the literature appears to primarily contain papers on using the Lerchs–Grossmann algorithm to determine ultimate pit limits and scheduling around these limits. This leads to several realizations: (i) there is a body of literature still focused around an algorithm that answers a somewhat outdated question, (ii) many researchers are not willing to investigate new questions, perhaps due to a lack of implementation of already-published work and (iii) there is a lack of trust and/or ability regarding the use of optimization models for short-term, real-time production planning. In all fairness, there are tactical and operational block sequencing models that have been implemented, e.g., Kuchta et al. (2004) and Carlyle and Eaves (2001). But, in many cases, even the state-of-the-art hardware and software cannot incorporate the size and complexity of today's scheduling models; hence, many of the more traditional questions remain to be correctly posed and solved.

5.4. Mining supply chain

Some researchers have endeavored to integrate the mine–mill–market supply chain. An early piece of work on this topic, Barbaro and Ramani (1986), formulates a mixed-integer programming model to determine whether or not a mine produces in a given time period, whether or not market demand is satisfied in a given period, where to locate a given processing facility and the amount of ore to ship from a mine to a processing facility and then to a market. Elbrond and Soumis (1987) discuss an integrated system for production planning and truck dispatching in open pit mines. Pendharkar and Rodger (2000) present a nonlinear programming model (with continuous-valued variables) to determine a production, transportation and blending schedule for coal, and then market destinations to which to ship the final product. Caro et al. (2007) describe a model that determines long-term production schedules both for open pit and for underground mines, considering not just block extraction but also the downstream processing of the extracted ore. This model is being used at the Chuquicamata mine in Chile.

5.5. Emerging areas

Researchers are just beginning to incorporate aspects of uncertainty, principally ore grade and price uncertainty, into the aforementioned design and block sequencing optimization models. For example, Ramazan and Dimitrakopoulos (2004) present an integer program that incorporates stochastic ore grade for a long-term open pit mine scheduling model; the model contains elastic constraints on production capacity and arguably produces more easily implementable solutions than more classical models without elastic constraints. Grieco and Dimitrakopoulos (2007) present an integer program to schedule extraction from an underground sublevel stoping mine subject to ore grade uncertainty. The objective maximizes a probability-weighted metal content across all extracted areas and constraints incorporate uncertainty in terms of a minimum acceptable level of risk (as defined by the probability of an extracted area meeting a specified cutoff grade), inter alia. Lemelin et al. (2007) model price uncertainty according to well-regarded stochastic models; they update operational plans as information on prices, determined through a real options approach, becomes available. One inhibitor to extending research to the stochastic realm is that models incorporating uncertainty tend to be far more complex than their deterministic counterparts. As solving deterministic models remains a fruitful research area, so too does formulating and solving tractable stochastic programming models. Another, related obstacle is that as no stochastic models have been applied successfully, it is difficult to assess the value of the solutions they provide. Therefore, it is difficult to determine whether the existing models are formulated correctly or whether alternative formulations would provide solutions that would be useful to mine managers.

At present, operations research models in the mining industry are limited in their scope, tractability and use. While there are examples of successful applications, there are many models in the literature that are not being used. Models are becoming better developed and more easily solved, and mining engineers are taking note of the successful applications at some mines, e.g., in the United States, Chile and Sweden. We therefore expect research in mining applications to extend into areas relevant to all natural resource fields, e.g., models that address environmental concerns, safety and disaster recovery. We refer the interested reader to Newman et al. (2010) for a comprehensive literature review of operations research in mine planning.

6. Conclusions

We have discussed the modeling, algorithmic and applications contributions of agriculture, fisheries, forestry and mining to the operations research literature. While the amount of literature is significant and growing, the areas differ in their impact on practice. Specifically, because the behavior of fish is difficult to predict and no one has ownership of the sea, research results in fisheries are not easy to implement; on the other hand, agricultural and forestry models have had a significant impact on how farms, plantations and government-owned land are used. Rigorous mining work is more recent, although the industry is very mature. As a result, there are many emerging areas in mining, e.g., environmental concerns, that are already well developed in the other natural resource sectors. In each area, we have summarized the principal problems tackled with operations research; these problems arise in similar and, at times, different ways across the various fields. We supply a thorough list of references for the interested reader. In all areas, models continue to be developed, refined and implemented, and we predict that the use of operations research will become increasingly significant as mathematical modeling and technology, e.g., computer hardware and software, improve, while the demand for natural resources increases and the resources themselves diminish.

Acknowledgements

The authors thank Kelly Eurek, Division of Economics and Business, Colorado School of Mines, for his technical work on the paper and gratefully acknowledge the support of the Technical University of Madrid and the Autonomous Government of Madrid under project #Q090705-12 for Carlos Romero's contribution.
