Jürg Utzinger Office of Population Research, Princeton University, Princeton, NJ 08544, USA. Tel.: +1 609 258 6965; Fax: +1 609 258 1039; E-mail: email@example.com
It has long been suggested that malaria is delaying the economic development of countries that are most severely affected by the disease. Several studies have documented the economic consequences of malaria at the household level, primarily in communities engaged in subsistence farming. A missing element is the appraisal of the economic impact of malaria on the industrial and service sectors that will probably become the backbone of many developing economies. We estimate the economic effects of integrated malaria control implemented during the colonial period and sustained for 20 years in four copper mining communities of the former Northern Rhodesia (now Zambia). Integrated malaria control was characterized by strong emphasis on environmental management, while part of the mining communities also benefited from rapid diagnosis and treatment and the use of bednets. The programmes were highly successful as an estimated 14 122 deaths, 517 284 malaria attacks and 942 347 work shift losses were averted. Overall, 127 226 disability adjusted life years (DALYs) were averted per 3-year incremental period. The cumulative costs of malaria control interventions were US$ 11 169 472 (in 1995 US$). Because the control programmes were so effective, the mining companies attracted a large reservoir of migrant labourers and sustained healthy work forces. The programmes averted an estimated US$ 796 622 in direct treatment costs and US$ 5 678 745 in indirect costs as a result of reduced work absenteeism. Within a few years of programme initiation, Northern Rhodesia became the leading copper producer in Africa, and mining generated the dominant share of national income. Copper production and revenues, which increased dramatically during malaria control interventions, amounted to the equivalent of US$ 7.1 billion (in 1995 US$). Integrated malaria control in copper mining communities was a sound investment. 
It paid off for public and occupational health in general, and without it copper extraction, and the social and economic development that followed, would have been impossible.
The global annual incidence of clinical malaria is estimated at 300–500 million (WHO 2000). At least 750 000 children under the age of 5 years die of malaria every year in sub-Saharan Africa (Snow et al. 1999), and the disease causes an estimated burden of 35.7 million disability adjusted life years (DALYs) lost (WHO 2001). The situation is grave and the disease is on the rise because of a myriad of factors: emergence and rapid spread of drug-resistant parasite strains and insecticide-resistant vectors; population movements caused by poverty, resource scarcity, civil wars and social unrest; natural disasters and meteorological changes; and development activities and associated ecological transformations that create new mosquito breeding sites (WHO 2000).
The public health importance and the economic impact of malaria have been recognized for centuries, and causal links between the disease and poverty were suggested shortly after the discovery of the transmission cycle more than 100 years ago (Ross 1911; Watson 1921; MacDonald 1950; Winslow 1951). However, Paul Russell stated as early as 1959: '…questions about economic and social impact of malaria are frequently asked but accurate and authoritative answers are difficult, indeed for the most part impossible to formulate' (cited in Packard 1997). The complexity of this causal link and the lack of good-quality epidemiological data precluded detailed appraisals of the macroeconomic impact of malaria. In this paper, we attempt to provide an in-depth response to Russell's challenge by drawing upon a substantial body of experimental and observational evidence accumulated across multiple settings and by bringing together the epidemiology and control of malaria with the economic and social sciences.
Recent efforts have been undertaken to estimate the extent of a malaria-related negative effect on economic growth. Using aggregated national health statistics and controlling for potential confounding factors, it was found that highly endemic countries (more than 50% of the population living at risk of becoming infected with Plasmodium falciparum) had average income levels that were one-third of those in non-malarious countries. Cross-country regressions over 25 years commencing in 1965 confirmed these findings; the annual growth rate of gross domestic product (GDP) in countries with intense malaria was 1.3% lower than in those countries with less malaria (Gallup & Sachs 2001; Sachs & Malaney 2002). Using a comparable approach but a different data source and regression over a shorter time period, the estimated annual growth rate reduction attributable to malaria was considerably lower, namely 0.25% (McCarthy et al. 2000). With a different approach – using four case studies in different epidemiological settings and extrapolating to all of sub-Saharan Africa – the annual economic burden of malaria was estimated to be equivalent to 0.6–1.0% of the GDP (Shepard et al. 1991). Although the extent to which malaria hinders economic development will be further debated and refined, the most recent estimates by Gallup and Sachs (2001) surprised the development community and called for significant up-scaling of malaria control efforts (Sachs 2001; Vogel 2001; Sachs & Malaney 2002). It is important to note, however, that these associational analyses fail to assess the causal relationships and underlying mechanisms of how malaria inhibits economic development.
A notable missing element is appraisal and discussion of the economic impact of malaria on the industrial and service sectors of sub-Saharan Africa. These sectors are increasing in importance and are likely to become the backbone of many developing economies. Here, we present a comprehensive analysis of integrated malaria control programmes implemented in four copper mining communities of former Northern Rhodesia that were sustained for two decades during the British colonial period. We estimate direct and indirect costs of malaria control and assess the consequences for deaths, malaria attacks, DALYs and work shift losses averted. We argue that consistently dramatic decreases of these outcome and impact measures across multiple settings, together with documentation of event sequencing (i.e. establishing temporal plausibility), build a compelling body of evidence in support of a chain of steps linking malaria control to copper output. Our findings are, in fact, used to show the explicit chain of relationships that connect malaria control per se with copper extraction productivity, revenue received, and impact on national income. Finally, we discuss analogous situations in contemporary settings where implementing and sustaining integrated malaria control will be a crucial component of further social and economic development. The central point is that the historical experience in Northern Rhodesia has many features that are relevant for both policy makers and the research community today.
Materials and methods
The work reported here focuses on four copper mining communities of former Northern Rhodesia (now Zambia) during the British colonial period between 1929 and 1949. Copper had been discovered in Zambia more than 100 years ago, but it was only in 1909 that the economic exploitation of this natural resource began, following the completion of a railway into the copperbelt. Until the mid 1920s, outcrop copper deposits at Bwana Mkubwa and Roan Antelope were the most important mining sites (Mitchell 1961). However, at the time, Zambia's copper production was insignificant compared with neighbouring Democratic Republic of Congo (formerly Belgian Congo), which extracted rich surface deposits and was by far the leading producer in Africa.
After 1923, there was a sharp increase in the world demand for copper, accompanied by rising copper prices on the world market. Many companies were more willing to invest capital in extensive scientifically based prospecting methods and systematic boring technology (Mitchell 1961; Parpart 1983). This led to the discovery of rich copper beds in Zambia (Mendelsohn 1961; Fleischer et al. 1976). Subsequently, four copper mining companies were inaugurated, and they significantly expanded their production between 1929 and 1936 (Watson 1953; Parpart 1983). The mines are situated on the southern slope of the watershed between the Zambezi and Zaire drainage systems, approximately 1300 m above sea level. In late 1929, the Roan Antelope mine, located near the town of Luanshya in the Ndola Rural district, was the first to begin major operations. It became the second most important site in terms of the number of employees. Mufulira, situated 61 km north of the Roan Antelope mine and in close proximity to the border of Democratic Republic of Congo, began operating in early 1930. Nkana-Kitwe, located 27 km south of Mufulira in the town of Kitwe, was established in February 1923. However, the mine only became fully operational 8 years later. It rapidly grew and became the largest of the four companies. Nchanga, the smallest of the four mining sites, is situated 45 km west of Mufulira and was opened in 1936 (Figure 1). Details of the geology and morphology of the mining areas, as well as the estimated ore reserves of the Zambian copperbelt in general and these four mining sites in particular, are provided elsewhere (Mendelsohn 1961; Fleischer et al. 1976).
Copper mining population
For the Roan Antelope mine, detailed population census data are available when copper production was launched in 1929/1930, as well as 10 and 20 years later (Watson 1953). Based on these data, annual population estimates were extrapolated, assuming constant growth rates for each of the two 10-year periods (see Utzinger et al. 2001). For Mufulira, census data are also available for 1940 and a decade later; however, no such data were collected at the onset of copper extraction (Watson 1953). As total population growth rates for the period 1940–50 were remarkably similar for Roan Antelope and Mufulira, we assumed that the annual growth rate estimated for Roan Antelope between 1930 and 1940 also occurred in Mufulira.
In the absence of detailed census statistics for Nkana-Kitwe and Nchanga, we estimated annual population sizes utilizing aggregated hospital in-patient records, which are available for all four mining communities for the period 1940–46 (Watson 1953). Specifically, we estimated the population size at Nkana-Kitwe by multiplying the known population size at Roan Antelope by the ratio of the number of in-patients over 7 years at Nkana-Kitwe to the number of in-patients over 7 years at Roan Antelope. We estimated the population size at Nchanga by the same method, using the number of in-patients over 7 years at Nchanga in the numerator of the above in-patient ratio. The validity of this method is supported by applying it to Mufulira, where the population size in 1940 and 1949 is known: the estimated and actual sizes approximate each other (e.g. known population size at Mufulira in 1940: 18 229; estimated population size based on hospital in-patient data: 19 898).
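The estimation rule described above amounts to scaling a reference mine's known population by a ratio of cumulative in-patient counts. A minimal sketch in Python; the in-patient figures used in the usage comment are hypothetical, as the per-mine counts are not reproduced here:

```python
def estimate_population(known_pop, inpatients_target, inpatients_ref):
    """Estimate a mine's population by scaling a reference mine's known
    population by the ratio of cumulative hospital in-patients (1940-46)
    at the target mine to those at the reference mine."""
    return known_pop * inpatients_target / inpatients_ref

# Hypothetical illustration: a reference mine of 14 000 people with
# 2 100 in-patients, and a target mine with 3 000 in-patients,
# yields a population estimate of 20 000 at the target mine.
estimate = estimate_population(14_000, 3_000, 2_100)
```

The Mufulira check reported in the text (known 18 229 vs. estimated 19 898 for 1940) corresponds to applying exactly this ratio with the published in-patient counts.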
The total annual number of employees at all four mining sites, stratified by Africans and Europeans, is available from 1932 onwards (Parpart 1983). In 1930 and 1931, we estimated the total work force and the proportion of Africans to Europeans by assuming the same employment rate and the same ratio of Africans to Europeans as in 1932.
Copper production and revenue
Data on annual global copper production and annual copper extraction in Zambia and neighbouring Democratic Republic of Congo between 1926 and 1950 were obtained from statistical yearbooks initially prepared by the League of Nations and subsequently by the United Nations (League of Nations 1936; United Nations 1951). The annual revenue of the total copper extracted in Zambia during the period 1930–49 was calculated based on the annual production and the average annual copper prices paid on the world market in New York. Average annual copper prices were obtained from commodity yearbooks published by the Commodity Research Bureau (Commodity Research Bureau 1940, 1951). We converted these annual revenues into 1995 US$, based on the purchasing power of the dollar, derived from the US consumer price index (US Census Bureau 1966, 1999). Finally, we calculated the cumulative economic return, also in 1995 US$, for the total amount of copper extracted in Zambia over the 20-year period, starting in 1930.
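The revenue calculation combines annual tonnage, the nominal world-market price, and a consumer-price-index adjustment to 1995 dollars. A sketch under stated assumptions (the CPI values in the usage comment are placeholders, not the published index figures):

```python
def revenue_in_1995_usd(tons, price_per_ton_usd, cpi_year, cpi_1995):
    """Annual copper revenue at world-market prices, inflated to
    1995 dollars via the ratio of consumer price indices."""
    return tons * price_per_ton_usd * (cpi_1995 / cpi_year)

def cumulative_revenue_1995_usd(annual_records, cpi_1995):
    """Cumulative revenue over the period, each year converted to 1995 US$.
    annual_records: iterable of (tons, nominal price per ton, CPI that year)."""
    return sum(revenue_in_1995_usd(tons, price, cpi, cpi_1995)
               for tons, price, cpi in annual_records)

# Placeholder inputs: 100 tons at US$ 200/ton in a year whose CPI was
# one-tenth of the 1995 level gives US$ 200 000 in 1995 dollars.
annual = revenue_in_1995_usd(100, 200, 15.2, 152.0)
```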
Detailed annual income and expenditure statistics for Zambia before and during the colonial era are virtually non-existent, which is probably also the case for most other African countries south of the Sahara. However, we identified one study that estimated the national income and expenditures for Zambia in 1938. The same concepts and techniques were applied for Zambia (which was then British colonial territory) as used in Britain (Deane 1948). The national income was defined as the aggregated net value of all goods and services produced within the country boundaries for the year 1938. Subsistence activities were included, employing a system of hypothetical prices, based on common market prices adjusted for transportation costs to the market. The total taxable national income was estimated as the sum of subgroups of agricultural, manufacturing and industrial activities and services, which were all recorded in British pounds. We converted these taxable national incomes into 1995 US$, using British historical statistics (Mitchell 1988) and the US consumer price index (US Census Bureau 1966, 1999).
Integrated malaria control
Early on in the decision process to extract Zambian copper resources on a large scale, members of the Roan Antelope and Mufulira mining boards realized that effective tropical disease control measures would be crucial for sustaining healthy labour forces. This would be the foundation for sound economic development (Watson 1953). Malaria was known to be of particular importance, placing a heavy burden on the native population as well as immigrants and expatriates. The high endemicity of the disease was confirmed when the first Europeans arrived at Roan Antelope. Early health records revealed that in a single month there were 105 malaria attacks per 1000 people, counting only those cases which were treated at the medical department of the mine (Watson 1953). When mining activities began, malaria parasite rates and spleen indices among children from neighbouring villages ranged between 50 and 60% (Rodger 1944). Today, more than 70 years later, a considerable body of clinical, epidemiological and entomological data has been accumulated, confirming that malaria was, and continues to be, highly endemic in this part of Zambia (Friis-Hansen & McCullough 1961; Wenlock 1978; Snow et al. 1999). In the 1920s and 1930s, advice was sought from the Ross Institute in London, which sent a delegation of leading malariologists and tropical sanitary engineers to the designated mining sites. Members of the delegation had previously achieved outstanding success at malaria control in the Malay States (now Malaysia). They had primarily used environmental management interventions targeting the larval stages of malaria vectors (Watson 1921).
Following the delegation's recommendations, water supply, sanitary facilities and housing conditions were greatly improved. Basic hospital amenities were put in place and run by trained personnel. Initial antimalarial measures consisted of house screening and, for part of the mining communities, the use of mosquito nets and administration of quinine for prophylaxis and treatment. Ongoing surveillance and monitoring, however, showed that these interventions alone were insufficient to substantially reduce the sickness figures, and the incidence of malaria remained high. Nonetheless, rapid diagnosis and treatment of infected individuals was maintained throughout programme implementation, alongside health education (Watson 1953).
An important feature of malaria control therefore consisted of a package of environmental management interventions that were designed and readily adapted to the local ecological settings. Interventions were primarily aimed at modification or destruction of larval habitats of Anopheles gambiae and A. funestus, the chief malaria vectors identified at the start of the programmes through a series of systematic entomological surveys. Briefly, they consisted of vegetation clearance, swamp drainage and river boundary modification (details in Utzinger et al. 2001). An additional component, although less environmentally sound, was regular application of oil to all open water bodies. Environmental management interventions were launched in late 1929 and adaptively tuned and maintained for two decades. It is remarkable that these measures proved particularly successful against the larval stages of A. gambiae, while modifications of larval habitats of A. funestus were more challenging. This is confirmed by the consistently higher adult catches of the latter vector species at several monitoring stations within the control area throughout the programmes (Watson 1953; Utzinger et al. 2001). Most importantly, malaria incidence rates decreased sharply. During the last 4 years of the programmes, residual house spraying with dichlorodiphenyltrichloroethane (DDT) became the final, and widely applied, intervention tool. It is important to observe that DDT was an addition to the previous package of interventions, never a substitute for them.
Similar interventions built around environmental management were implemented at Mufulira, Nkana-Kitwe and Nchanga between 1930 and 1936. At Nchanga, the initial progress of control measures was somewhat slower than in the other mining sites, and interventions stagnated during the war years in the early 1940s.
Cost of integrated malaria control
Annual implementation and maintenance costs of malaria control measures at the Roan Antelope mine are available for the entire 20-year period (Watson 1953). For the Mufulira mine, annual maintenance costs from 1933 onwards are also available from the published programme budget. As the extent of control areas, the package of interventions and the total maintenance costs for 1933–49 were virtually identical for the Mufulira and Roan Antelope companies, we assumed that the capital investment in 1930 and the maintenance costs for the first 2 years were equal at the two sites. In both settings, physical resources and unit prices were separately recorded for each intervention, and the accounting system remained fixed throughout the programme. Costs of integrated malaria control at the Nkana-Kitwe and Nchanga mining companies are more fragmentary than for the Roan Antelope and Mufulira mines. Hence, several assumptions had to be made. First, we assumed that capital investment – mainly employed for the initial drainage, river boundary modification and vegetation clearance – was equal in all four mining sites. Secondly, the cumulative maintenance costs at Nkana-Kitwe and Nchanga for the years 1945 and 1949 were comparable with those at Roan Antelope and Mufulira for the same 2 years. Therefore, annual maintenance costs at Nkana-Kitwe and Nchanga were estimated on the basis of the annual maintenance costs at Roan Antelope and Mufulira. Thirdly, in 1949, the costs for maintaining malaria control interventions at Nkana-Kitwe were 8% higher than at Nchanga. We assumed that the annual maintenance costs in the previous years were also 8% higher at Nkana-Kitwe than at Nchanga.
Direct treatment costs for clinical malaria episodes were also estimated. As the detailed control programme budgets never specified any costs for malarial treatment (Watson 1953), we assumed that the costs were directly borne by the mining employees and their families. The total annual costs were calculated by multiplying the number of malaria attacks at each mining site within 1 year with the mean treatment costs per malaria attack. The number of malaria attacks per year was estimated by multiplying the mean annual incidence rate with the estimated annual population at risk of malaria. Detailed clinical records from the hospital at the Roan Antelope mine suggested that an effective system of rapid diagnosis and treatment with quinine, following 9–10-day regimens, was in place (Rodger 1944). In the absence of the actual treatment costs for a single malaria episode at the time of programme implementation, we used an estimated mean cost of US$ 2.22 (in 1995 US$). This value is the translation of US$ 1.87 (in 1987 US$), provided by Shepard et al. (1991), and is based on cost analyses carried out in four different epidemiological settings across sub-Saharan Africa in the 1980s.
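The direct-cost calculation described above is a simple product chain: incidence rate times population at risk gives the annual number of attacks, and each attack is costed at the US$ 2.22 (1995 US$) mean taken from Shepard et al. (1991). A minimal sketch, with the incidence rate and population in the usage comment chosen for illustration only:

```python
def direct_treatment_cost(incidence_rate, population_at_risk,
                          cost_per_attack=2.22):
    """Annual direct treatment cost borne by employees and families:
    attacks = incidence rate x population at risk, each attack costed
    at a mean of US$ 2.22 (1995 US$)."""
    attacks = incidence_rate * population_at_risk
    return attacks, attacks * cost_per_attack

# Illustration: an annual incidence of 0.5 attacks per person in a
# population of 10 000 gives 5 000 attacks and US$ 11 100 in costs.
attacks, cost = direct_treatment_cost(0.5, 10_000)
```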
Finally, we estimated total indirect costs due to work shift losses because of malaria. We estimated the number of work shifts lost per clinical malarial attack based on the total annual work shifts lost for a representative sample of the total labour force at the Roan Antelope mine for 1944–49. The total income lost because of malaria-related work absenteeism was estimated for the active labour force at each mine. Labour wages were stratified for Europeans and Africans as they differed by more than an order of magnitude. Although data on wages for mining employees during the colonial era are scarce, we found historical accounts indicating that Europeans earned between £1.375 (profession: miner) and £1.625 (platelayer) per day in 1930 (Anonymous 1931). This corresponds to a daily wage of US$ 55–65 (in 1995 US$), and we used an average of US$ 60 for further analyses. We also assumed that Europeans, mainly engaged in skilled labour, earned a monthly salary and that they were also paid during illness episodes.
The average wage of an African employee for 30 working shifts at Roan Antelope in 1930 was £1.05 for surface and £1.65 for underground work (Parpart 1983). The average annual earning of an African employee in the year 1946 was £30 (Arrighi 1973). These wages convert to US$ 511–802 per year (in 1995 US$). For further analyses we used an average annual wage of US$ 700. There are clear indications that African mine workers were not paid when they suffered clinical malarial attacks and could not work in the mines. However, the wages African employees were paid were considerably higher than what they would have gained with subsistence farming.
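For African employees, who were not paid during illness, the indirect cost reduces to attacks times shifts lost per attack times the wage per shift. A sketch under stated assumptions; the per-shift wage in the usage comment is a rough derivation from the US$ 700 annual figure, not a published rate:

```python
def indirect_cost_african(attacks, shifts_lost_per_attack, wage_per_shift):
    """Income lost through malaria-related absenteeism for African
    employees, who received no pay during clinical malaria episodes."""
    return attacks * shifts_lost_per_attack * wage_per_shift

# Rough illustration: 1 000 attacks, 3 shifts lost per attack, at an
# assumed wage of US$ 2 per shift, gives US$ 6 000 in indirect costs.
lost_income = indirect_cost_african(1_000, 3, 2.0)
```

For European employees, who drew a monthly salary even when ill, the corresponding loss falls on the company as paid-but-unworked shifts rather than on the worker.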
These various cost estimates were all discounted to allow for time preferences. We used a discount rate of 3%, which is currently considered to be the standard rate in contemporary cost-effectiveness analyses (Gold et al. 1996). Our estimates facilitated calculation of annual and cumulative direct and indirect costs for malaria control by the mining companies and the employees and their families for the entire 20-year period.
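Discounting at the standard 3% rate can be sketched as a present-value sum over the programme years (a minimal reconstruction; the paper does not publish its discounting code):

```python
def discounted_total(annual_costs, rate=0.03):
    """Present value of a stream of annual costs discounted at the
    standard 3% rate; year 0 is the first programme year."""
    return sum(cost / (1 + rate) ** year
               for year, cost in enumerate(annual_costs))

# A cost of 103 incurred one year out is worth 100 today at 3%,
# so [100, 103] discounts to a present value of 200.
pv = discounted_total([100.0, 103.0])
```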
Consequences of integrated malaria control
We assessed the consequences of the integrated malaria control programmes by calculating the number of averted deaths, malaria attacks, DALYs and work shift losses and respective costs. For Roan Antelope, malaria-specific mortality rates are available for Europeans at baseline (1929/30) and for the periods 1932–38 and 1938–43 (Rodger 1944; Watson 1953). We assumed that the latter rates remained constant until the end of interventions in 1949. In the absence of detailed malaria-specific mortality rates for the other mining communities, we assumed death rates equal to those at Roan Antelope, both at baseline and over the course of programme implementation. Deaths averted were estimated by the reduction in the malaria-specific mortality rates (comparing the rates before the programmes started with those measured during the implementation and maintenance phase) multiplied by the total person life years at risk.
The mean annual malaria incidence rates at the four mining companies facilitated estimation of the reduction in incidence rates during the course of programme implementation. The data indicate that the initial package of integrated malaria control interventions reduced the baseline malaria incidence rate by 50–75% in the first 3 years. This reduced rate was sustained until the mid-1940s. Indoor residual spraying with DDT resulted in another sharp decline in the annual malaria incidence rates (Watson 1953; Utzinger et al. 2001). We estimated the number of malarial attacks averted by comparing incidence rates before and after implementation of integrated malaria control measures, multiplied by the total population life years at risk. Thus, without interventions, it was assumed that the baseline incidence rate would have remained the same for the next 20 years. This was compared with what actually happened based on the mean annual incidence rates during the course of programme implementation.
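Both deaths averted and attacks averted follow the same rate-difference formula: the reduction in the annual rate multiplied by person-years at risk. A minimal sketch, with illustrative rates in the usage comment:

```python
def events_averted(baseline_rate, observed_rate, person_years):
    """Deaths or malaria attacks averted: the reduction in the annual
    rate (counterfactual baseline minus observed) multiplied by
    person-years at risk over the period."""
    return (baseline_rate - observed_rate) * person_years

# Illustration: reducing an annual rate from 0.10 to 0.04 across
# 50 000 person-years averts 3 000 events.
averted = events_averted(0.10, 0.04, 50_000)
```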
Estimation of DALYs averted followed the methodology presented in the Global Burden of Disease study (Murray & Lopez 1996). We adapted this technique for the mining communities studied here, as described in detail elsewhere (Utzinger et al. 2001). In brief, we stratified the mine populations into three age groups (0–4, 5–15, > 15 years), according to population percentages given by Snow et al. (1999) for communities living in areas of stable malaria transmission. We assumed equal mortality and malaria incidence rates for Africans and Europeans, which is justified by careful analyses of the original data records collected at Roan Antelope (Watson 1953; Utzinger et al. 2001). Furthermore, we assumed a life expectancy at birth of 50 years, employing a West African model life table (United Nations 1982). We used a discount rate of 3%, with no age-weights (Murray & Lopez 1996). Malaria-specific mortality and malaria incidence rates before and after programme implementation were the bases for DALY calculations. We used age-specific proportions according to Snow et al. (1999). In the absence of data on neurological sequelae and anaemia, we only considered clinical cases of malaria for disability calculations. This is justified, as the mining companies paid great attention to rapid malaria diagnosis and treatment with quinine. At the time of programme implementation, this drug was highly effective. Thus, the occurrence of neurological sequelae and anaemia was likely very rare. We used setting-specific disability durations for a single malaria episode. Finally, DALYs were expressed in 3-year incremental periods as this corresponds to the short duration of cost-effectiveness analyses of contemporary malaria control programmes and allows comparison with other studies (Utzinger et al. 2001).
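The DALY arithmetic with 3% discounting and no age-weighting can be sketched as years of life lost (YLL) plus years lived with disability (YLD), each discounted with the standard continuous-time formula (1 − e^(−rL))/r. This is a generic reconstruction of the Murray & Lopez method, not the authors' code, and the disability weight and durations in the example are placeholders:

```python
import math

def discounted_years(duration, rate=0.03):
    """Healthy years over `duration`, continuously discounted at `rate`
    (standard Global Burden of Disease formula, no age-weighting)."""
    return (1 - math.exp(-rate * duration)) / rate

def dalys_averted(deaths_averted, years_lost_per_death,
                  attacks_averted, disability_weight, episode_duration,
                  rate=0.03):
    """DALYs averted = YLL averted (deaths x discounted remaining life
    years) + YLD averted (attacks x disability weight x discounted
    episode duration)."""
    yll = deaths_averted * discounted_years(years_lost_per_death, rate)
    yld = (attacks_averted * disability_weight
           * discounted_years(episode_duration, rate))
    return yll + yld
```

With a life expectancy of 50 years at birth, a death in early childhood contributes close to 26 discounted years; undiscounted, it would contribute the full remaining life expectancy.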
We performed a series of one-way sensitivity analyses for those parameters where inherent uncertainties were attached. We followed a similar analytical approach to those used in recent evaluations of costs, consequences and net cost-effectiveness of malaria control programmes (Aikins et al. 1998; Goodman et al. 2001). The parameters taken into account in our sensitivity analyses were direct treatment costs, the indirect costs because of work shift losses, the discount rate and the package of interventions. Based on the existing literature examining the direct treatment costs for a single malaria attack across multiple settings in sub-Saharan Africa, we used a range of US$ 1.00–5.00 (Shepard et al. 1991; Asenso-Okyere & Dyator 1997). For estimation of indirect costs because of malaria-related work absenteeism, we considered minimum and maximum labour wages, stratified for African and European employees. Wages were derived from the historical literature pertaining to the Northern Rhodesian copperbelt (Anonymous 1931; Arrighi 1973; Parpart 1983). We assessed the impact of different discount rates by either decreasing or augmenting the standard rate of 3% by 1 percentage point. Finally, we estimated costs, consequences and net cost-effectiveness ratios of the malaria control programmes between 1930 and 1945, prior to the use of DDT as an additional control intervention.
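One-way sensitivity analysis of the kind described above varies a single parameter across its plausible range while holding the others at base-case values. A generic sketch (parameter names and the toy model are illustrative, not the authors'):

```python
def one_way_sensitivity(base_params, ranges, model):
    """For each named parameter, evaluate the model at its low and high
    values while all other parameters stay at base-case values."""
    results = {}
    for name, (low, high) in ranges.items():
        for value in (low, high):
            params = dict(base_params, **{name: value})
            results[(name, value)] = model(**params)
    return results

# Toy model: net cost per DALY averted, varying only total cost.
cost_per_daly = lambda cost, dalys: cost / dalys
res = one_way_sensitivity({'cost': 100.0, 'dalys': 50.0},
                          {'cost': (80.0, 120.0)},
                          cost_per_daly)
```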
Connection between malaria control, labour force size and macroeconomy
We developed the following approach to facilitate appraisal of the macroeconomic impact of integrated malaria control in copper mining communities of the Northern Rhodesian copperbelt. First, in order to exploit large copper deposits and increase annual productivity and revenues, access to a large reservoir of cheap labour was necessary. Labour migration in the 1920s was nothing new. In fact, Northern Rhodesia served as a reserve for mines in neighbouring Belgian Congo and southern Rhodesia (Parpart 1983; Ferguson 1999). Secondly, when companies decided to seriously invest in Northern Rhodesia to significantly increase copper production, the promotion of rapid and sustained in-migration became of central importance. Indeed, there is clear documentation that an unsuccessful attempt at malaria control at the Roan Antelope mine prior to 1929 resulted in migrant workers abandoning the site (Watson 1953). Thirdly, it was necessary to keep the number of work shifts lost at a low level. Fourthly, experience with the implementation and adaptive tuning of the malaria control measures revealed that the programmes displayed the desired outcomes within about 3 years. In delineating a plausible counterfactual situation for copper production in the absence of a sustained effective control programme, we used the actual control programme results and associated in-migration and copper production data only through 1933. We then assumed that without sustained and effective malaria control in place, there would have been only marginal in-migration thereafter; hence, the total work force would remain relatively constant. Consequently, copper production over the next several years would stay at the 1933 level. In reality, effective malaria control led to the diffusion of information among potential migrant workers about safe employment opportunities at the mines.
Mine workers had effective networks for information exchange about living and working conditions and news circulated rapidly along the main labour routes (Parpart 1983). The subsequent large in-migration facilitated enhanced copper productivity, and revenues continued to increase. We then compared the percentage of national income attributable to mining (essentially derived from the four mines in this study) for the year 1938 – when malaria was successfully under control – with the estimated counterfactual revenues and their percentage contribution to the total national income under the scenario of no effective malaria control.
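The counterfactual comparison above can be expressed as a simple output gap: actual production minus production frozen at its 1933 level for every subsequent year. A minimal sketch with placeholder tonnages (the real annual series is in the United Nations yearbooks cited earlier):

```python
def counterfactual_output_gap(output_by_year, freeze_year):
    """Cumulative output attributable to sustained malaria control:
    actual annual output minus output held at the freeze-year level
    for all subsequent years."""
    frozen = output_by_year[freeze_year]
    return sum(output - frozen
               for year, output in output_by_year.items()
               if year > freeze_year)

# Placeholder series: frozen at the 1933 level of 100, the gap is
# (150 - 100) + (200 - 100) = 150 units of output.
gap = counterfactual_output_gap({1933: 100, 1934: 150, 1935: 200}, 1933)
```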
Copper mining communities
Annual population estimates for the four mining communities for 1930–49 are presented in Table 1. When extensive copper extraction started at Roan Antelope, the estimated population was 6067, consisting of 5000 Africans (82.4%) and 1067 Europeans (Watson 1953). At Mufulira, the initial population was estimated at 4897. One year later, Nkana-Kitwe started operation with an estimated population size of 8559. Finally, copper extraction commenced at Nchanga in 1936 with a population estimated at 5598. In 1940, the estimated total population living on the four mining sites was 77 872. Census data from Roan Antelope and Mufulira revealed that 91.8% of them were Africans, indicating that the proportion of Europeans was halved over the first 10-year period of copper extraction. One decade later, the total population residing in the four mining sites had further increased by 80%, reaching an estimated size of 140 368. This dramatic increase was mainly due to sustained in-migration (Parpart 1983; Ferguson 1999). Census data from the Roan Antelope and Mufulira mines at the end of December 1949 revealed that the proportion of Europeans had increased from 8.2 to 9.9%.
Table 1. Annual population estimates for Roan Antelope, Mufulira, Nkana-Kitwe and Nchanga mines in the Zambian copperbelt between 1930 and 1949
The number of African and European employees and the total annual work force at all four mines from 1932 onwards were derived from Parpart (1983) and are shown in Table 2. The initial estimated work force in 1932 was 6465. Within 20 years it grew to 37 354, an increase by a factor of 5.8. The mean proportion of employees to the total estimated population was 32.3%. The percentage of European employees varied between 9.8 and 13.8%, with an estimated mean of 10.8%. As expected in mining communities, the majority of the population were adult males. Census data from the Roan Antelope mine in 1949 revealed a male-to-female sex ratio of 1.9. The proportion of children under the age of 15 years was considerably smaller than would be expected at that time for other communities in sub-Saharan Africa.
Table 2. Number (percentage) of African and European mining employees and total work force engaged in copper extraction in Zambia between 1932 and 1949 (source: Parpart 1983)
Copper extraction in Northern Rhodesia during the colonial period
In the second half of the 1920s, Northern Rhodesia contributed less than 5% of African and about 0.3% of annual global copper extraction. At that time, Belgian Congo was by far the leading African producer, accounting for approximately 90% of the extraction on the continent and controlling more than 7% of the world copper ore market. After the discovery of large copper deposits in Northern Rhodesia, and the decision by the British government to exploit them on a large scale, three copper companies started operation between late 1929 and 1931. Consequently, annual copper extraction increased exponentially. While Northern Rhodesia produced 9100 metric tons in 1931, there was a 7.5-fold increase to 69 000 metric tons in the following year. Virtually overnight, the country became the leading copper producer in Africa (Figure 2). Within the next 8 years, and complemented by the inauguration of the Nchanga mining company, there was a further 3.9-fold increase in annual copper extraction, to 266 600 metric tons in 1940.
Northern Rhodesia was then the third most important copper ore producer worldwide, with a global share of more than 11% (Table 3). The United States and Chile ranked first and second, and Northern Rhodesia moved ahead of Canada and Japan. During World War II and particularly the early postwar years, the annual copper production in Northern Rhodesia dropped considerably. Nevertheless, the country continued to supply approximately 9–11% of the global annual copper demand.
Table 3. Annual global and Zambian copper production from 1930 to 1949 and economic revenues over this 20-year period expressed in 1995 US$
Average annual copper prices fluctuated significantly between 1930 and 1949, as shown in Table 3. Shortly after copper production was intensified in Northern Rhodesia, and in conjunction with a major global recession, the average price fell sharply from 28.90 US cents/kg in 1930 to 12.50 US cents/kg 2 years later – a reduction of more than 50%. However, as a result of increasing annual extraction, the revenues rose steadily between 1931 and 1937. Converted into 1995 US$, the annual monetary value of the Northern Rhodesian copper extracted in 1937 was more than half a billion US$. It rose to US$ 613 million in 1940. Copper prices remained constant throughout the war years and increased considerably in the first postwar years. The cumulative revenue between 1930 and 1949 from copper was US$ 7.1 billion (in 1995 US$).
Analysis of the Northern Rhodesian income and expenditure statistics for the year 1938 clearly revealed that copper production represented the dominant share of the country's economy. Converted into 1995 US$, the taxable national income from mining was US$ 329 million, or 55% of the total taxable national income. This figure is somewhat smaller than the estimated annual monetary value of copper extraction in Northern Rhodesia in the same year – US$ 440.8 million (Table 3). The discrepancy can probably be explained by the somewhat higher average copper prices paid on the New York market compared with those paid in England. About two-thirds of the taxable national income from mining activities was claimed in England, the registered home country of the copper companies; the remaining one-third was claimed by Northern Rhodesia. Subsistence agriculture, which engaged the majority of the population, ranked second on the taxable national income list with a total of US$ 89.5 million (15%). The distribution and transport sector ranked third with a total of US$ 81 million (14%).
Direct costs of integrated malaria control in the Northern Rhodesia copperbelt
The annual implementation and maintenance expenditures of integrated malaria control measures borne by the Roan Antelope, Mufulira, Nkana-Kitwe and Nchanga copper mines for the period 1930–49 are given in Table 4. The most detailed cost data were available for the Roan Antelope mine. There was a high initial capital investment of £25 000 for environmental management interventions, equivalent to US$ 1 013 119 (in 1995 US$). Annual maintenance costs, discounted at an annual rate of 3%, were considerably lower and ranged between US$ 38 153 and US$ 174 154. The total cost of malaria control borne by the Roan Antelope mining company over the course of 20 years was US$ 2 952 284. Assuming the same initial capital investment and annual maintenance costs for the first 2 years at Mufulira as at Roan Antelope, and taking into account the detailed annual maintenance costs from the original programme budget at Mufulira, that company spent US$ 2 793 946 on malaria control measures. The estimated cumulative costs for malaria control borne by the Nkana-Kitwe and Nchanga companies were US$ 2 894 699 and US$ 2 528 542, respectively. Therefore, the overall cumulative costs of integrated malaria control borne by the four copper mining companies were US$ 11 169 472.
Table 4. Annual and cumulative direct costs of integrated malaria control measures implemented and maintained between 1930 and 1949. Costs in British pounds (£) were converted into 1995 US$ and were discounted by 3%
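All tabulated costs were converted into 1995 US$ and discounted at 3% per year. The convention can be illustrated with a minimal sketch (the cost figure and year offset below are hypothetical illustrations; only the 3% rate comes from the text, and the £-to-US$ conversion step is omitted):

```python
# Present-value discounting at the 3% annual rate used for the cost tables.
# The cost figure and year offset are hypothetical illustrations.
DISCOUNT_RATE = 0.03

def present_value(cost: float, years_from_base: int) -> float:
    """Discount a cost incurred `years_from_base` years after the base year."""
    return cost / (1 + DISCOUNT_RATE) ** years_from_base

# A hypothetical US$ 100,000 maintenance cost incurred 10 years on:
print(round(present_value(100_000, 10)))  # 74409
```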
Direct and indirect costs borne by the mining employees and their families
Assuming treatment costs of US$ 2.22 (Shepard et al. 1991) for a single clinical malaria attack, we estimated the total direct costs borne by the mining employees and their families with and without malaria control programmes in place. They are presented in Table 5, alongside estimates of costs averted. Because the integrated control measures reduced incidence rates within the first 3–5 years of programme implementation and maintained them at these lower levels, the costs averted gradually increased. Wide-scale application of DDT in the last 4 years of the malaria control programmes had a dramatic effect on malaria incidence rates, leading to a considerable increase in the total costs averted. Overall, the cumulative costs borne by mine employees and their families over the course of the malaria control programme were US$ 337 525 (in 1995 US$). In the absence of integrated control, these costs would have been considerably higher, namely US$ 1 134 147. The essential point is that by simultaneously implementing a multiplicity of interventions that interfere with one or more components of the transmission system, the demand for any one tool (e.g. antimalarial drugs) is reduced.
Table 5. Direct treatment costs borne by the mining employees and their families with and without malaria control measures. All costs were discounted by 3% and are expressed in 1995 US$
Based on a representative sample of the work force at the Roan Antelope mine, and the total number of work shifts lost by these employees during 1944–49, the mean duration of work shifts lost could be estimated. On average, an employee suffering from a clinical malaria attack missed 5.8 working days (Table 6), during which there was no contribution to copper production. The indirect costs for mining employees are presented in Table 7. We assumed that the year-specific proportion of African to European employees in the Northern Rhodesian copperbelt (Table 2) was the same at all four mining sites. Although there were approximately nine times more African employees, the total indirect costs borne by European mine workers for lost productivity were almost three-fold higher. This is explained by the approximately 26-fold higher wages paid to Europeans. Analogous to what was observed for direct treatment costs, implementation and maintenance of environmental management interventions promptly decreased the number of work shifts lost; hence, the indirect costs averted gradually increased.
Table 6. Number of work shifts lost in relation to the mean annual malaria incidence rate for a representative sample of employees at the Roan Antelope mine between 1944 and 1949
Table 7. Indirect costs borne by the workers and their families (for African employees) and the mining companies (for European employees) for work time lost as a result of clinical malaria attacks. All costs were discounted by 3% and are expressed in 1995 US$
A direct consequence was an increase in the overall productivity of the mining companies. Use of DDT toward the end of the malaria control programmes had further profound effects on the number of working days gained as a result of lower malaria incidence. The overall indirect costs attributable to clinical malaria attacks and work shifts lost at the mines were US$ 2 607 427 (in 1995 US$). In comparison, we estimated that the total indirect costs because of work shifts lost would have been US$ 8 286 173, if no malaria control measures had been implemented and sustained. Therefore, successful integrated malaria control averted over US$ 5.6 million (Table 7).
Consequences of integrated malaria control
The baseline annual malaria incidence rate at the Roan Antelope mine was 514 per 1000. During the first 3 years of programme implementation the annual incidence rate was reduced to 135 per 1000. It remained at this lower level until the mid-1940s. Expansion of the control area and regular application of DDT from 1946 onward resulted in another sharp decline in the annual incidence rate to an average of 21 per 1000 (Table 8). Although there is no baseline malaria incidence rate available for the Mufulira mine, the annual incidence rates measured between 1936 and 1949 showed a similar trend, suggesting that the initial incidence rate at Mufulira was as high as the Roan Antelope rate. The baseline incidence rate in Nkana-Kitwe was slightly lower than at the Roan Antelope mine. However, the initial package of malaria control interventions was less successful than at the Roan Antelope and Mufulira mines. This is probably attributable to the greater difficulty of controlling malaria in an urban environment with even more human population movement (Ferguson 1999). The average annual incidence rates during the period 1936–43 were considerably higher. Implementation of additional drainage work in 1944 halved the incidence rate within 1 year. Regular application of DDT to the inside walls of the resident houses further decreased the incidence rates to 23 per 1000. Similar reductions in annual malaria incidence rates were also observed at the Nchanga mine (Table 8).
Table 8. Effect of integrated malaria control on mean annual malaria incidence rates in four mining communities of the Zambian copperbelt during the colonial period
The integrated malaria control programmes therefore averted more than half a million clinical malaria attacks during the 20 years of intervention (Table 9). Furthermore, the programmes averted the loss of an estimated 942 347 work shifts.
Table 9. Summary of total person life years at risk, costs (direct and indirect), consequences, as well as gross and net cost-effectiveness of integrated malaria control in four copper mining communities of Zambia between 1930 and 1949 (all costs expressed in 1995 US$)
At the Roan Antelope mine, the baseline annual mortality rate because of malaria was 10.3 per 1000. The rate fell sharply to 0.5 per 1000 during the period 1932–38 (Watson 1953) and was further reduced to 0.37 per 1000 between 1938 and 1943 (Rodger 1944). We assumed that the malaria-specific mortality rate did not exceed 0.5 per 1000 until the late 1940s and that malaria death rates in the other three mining sites were similarly low. Hospital in-patient records for Africans at Mufulira support this assumption, as only one patient out of a total of 3421 in-patients died of malaria in the year 1949 (malaria-specific death rate among in-patients: 0.3 per 1000). Over the entire 20 years that control programmes were in place, 14 121 deaths were averted (Table 9).
We estimated that the malaria control programmes averted a total of 127 226 DALYs, as expressed in 3-year incremental periods (Table 9). Finally, these estimates were used to calculate the net costs per death and per malaria attack averted, which were US$ 332.42 and US$ 9.07, respectively. The average net costs per DALY averted in 3-year incremental periods were US$ 36.90 (Table 9).
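These ratios can be reproduced directly from the cumulative totals in Table 9; the sketch below assumes, as the text implies, that net costs equal the total programme costs minus the direct and indirect costs averted (all figures in 1995 US$):

```python
# Net cost-effectiveness ratios from the cumulative totals (1995 US$).
total_costs = 11_169_472        # programme costs, all four mines, 1930-49
direct_averted = 796_622        # direct treatment costs averted
indirect_averted = 5_678_745    # indirect costs (work shifts) averted

net_costs = total_costs - direct_averted - indirect_averted  # 4,694,105

deaths_averted = 14_121         # as reported in Table 9
attacks_averted = 517_284
dalys_averted = 127_226         # per 3-year incremental periods

print(round(net_costs / deaths_averted, 2))    # 332.42 net per death averted
print(round(net_costs / attacks_averted, 2))   # 9.07 net per attack averted
print(round(net_costs / dalys_averted, 2))     # 36.9 net per DALY averted
print(round(total_costs / deaths_averted, 2))  # 790.98 gross per death averted
```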
The results of the one-way sensitivity analyses are summarized in Table 10. Assuming direct treatment costs for a single clinical malaria attack of US$ 1.00 rather than US$ 2.22 (expressed in 1995 US$) had only a small effect on the net cost-effectiveness ratios, as they changed by less than 10%. On the other hand, introducing higher direct treatment costs of US$ 5.00 changed the net cost-effectiveness ratios by slightly more than 20%. When we incorporated minimal or maximal daily wages for European employees of US$ 55 or US$ 65, respectively, rather than the mean daily wage of US$ 60, the net cost-effectiveness ratios were affected by less than 8%. Similarly, allowing for annual wages of Africans ranging between US$ 511 and US$ 802 rather than US$ 700 changed the net cost-effectiveness ratios by less than 8%.
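Since the direct treatment costs averted scale linearly with the assumed unit cost per attack, the first of these one-way sensitivity analyses can be sketched as follows (a simplification under that linearity assumption; baseline totals from Table 9, in 1995 US$):

```python
# One-way sensitivity of net costs to the unit treatment cost per attack.
total_costs = 11_169_472
indirect_averted = 5_678_745
direct_averted_at_baseline = 796_622   # computed at US$ 2.22 per attack

def net_costs(unit_cost: float) -> float:
    # Direct costs averted scale linearly with the assumed unit cost.
    direct_averted = direct_averted_at_baseline * unit_cost / 2.22
    return total_costs - direct_averted - indirect_averted

baseline = net_costs(2.22)
for c in (1.00, 5.00):
    change = (net_costs(c) - baseline) / baseline
    print(f"US$ {c:.2f} per attack: net costs change by {change:+.1%}")
# US$ 1.00 per attack: net costs change by +9.3%
# US$ 5.00 per attack: net costs change by -21.3%
```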
Table 10. Summary of results from one-way sensitivity analyses (all costs expressed in US$ 1995)
If the integrated malaria control programmes had run only from 1930 to 1945, halting before the additional application of DDT, the implications for costs, consequences and net cost-effectiveness ratios would have been considerable. The total costs of control interventions for these first 16 years of implementation were US$ 9 915 185. The direct treatment costs averted and the indirect costs averted because of work shift losses were US$ 471 148 and US$ 3 666 134, respectively. Therefore, the net costs increased by 23%, to US$ 5 777 903. Under this scenario, an estimated 9132 deaths and 266 272 malaria attacks were averted. Consequently, the net costs per death averted were US$ 632.70 and the corresponding net costs per malaria attack averted were US$ 21.70 (Table 10).
Impact of integrated malaria control on the macroeconomy
Major mining activities at three of the four sites were launched in 1930 and 1931, coinciding with a global economic recession. Large numbers of mining employees lost their jobs in all parts of south-central Africa. In 1932, the estimated work force in the Northern Rhodesian copperbelt was 6465; it increased by 27% to an estimated 8216 the following year. Integrated control measures had by then been in place for some 3 years and had dramatically reduced malaria-specific mortality and annual incidence rates. By this time, annual copper production was 105 900 metric tons and annual revenues, expressed in 1995 US$, were US$ 182 million (Table 3). Toward the end of the recession, with copper prices rising sharply, there was a need for large numbers of workers. As effective malaria control measures were sustained, there was significant improvement in overall living and working conditions, a key incentive in the employment-seeking behaviour of potential workers. As a result, in-migration continued on a very large scale. The total work force almost doubled from 1933 to 1934, and by 1938 it was estimated at 22 654 (Table 2). These dramatic increases in the number of employees were in stark contrast to neighbouring Belgian Congo and Southern Rhodesia. Under the counterfactual assumption that no effective malaria control would have been maintained, inhibiting further in-migration of cheap labour into the copperbelt, we assume that production would have remained at the 1933 level thereafter. In 1938, for example, the annual revenues would then have been US$ 216 million (in 1995 US$), the increase relative to 1933 being attributable simply to higher copper prices. In actuality, annual copper production in 1938 reached 216 400 metric tons and revenues were as high as US$ 441 million (in 1995 US$). The national income statistics for the same year revealed that 55% of the revenue was from mining.
Thus, effective malaria control was a principal driving force behind Northern Rhodesian economic development. Without it, the taxable national income would have been 28% lower and the percentage of national income from mining activities would have dropped to 37%.
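The 28% and 37% figures follow from the 1938 national income statistics under a simple scaling assumption (ours, not stated explicitly in the text): that taxable mining income would have fallen in proportion to copper revenues, while non-mining income stayed unchanged. In Python:

```python
# Counterfactual 1938 national income without sustained malaria control
# (figures in millions of 1995 US$; the proportional-scaling assumption
# linking mining income to copper revenues is ours).
actual_mining_income = 329.0      # taxable national income from mining
actual_mining_share = 0.55
actual_total = actual_mining_income / actual_mining_share   # ~598

actual_revenue = 441.0            # actual 1938 copper revenues
counterfactual_revenue = 216.0    # revenues at 1933-level production

cf_mining = actual_mining_income * counterfactual_revenue / actual_revenue
non_mining = actual_total - actual_mining_income
cf_total = cf_mining + non_mining

print(f"national income lower by {1 - cf_total / actual_total:.0%}")  # 28%
print(f"mining share drops to {cf_mining / cf_total:.0%}")            # 37%
```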
We have presented a comprehensive cost-effectiveness analysis of copper extraction in four mining communities of Zambia during the British colonial era by performing an empirical assessment of the impact of integrated malaria control. Complex as it was to connect malaria control to the macroeconomy in the present case, we believe that making this connection elsewhere is an order of magnitude more difficult. Although the programmes were launched more than 70 years ago, and we had to make a series of assumptions, including plausible counterfactual scenarios, we have clearly demonstrated the tremendous impact that integrated malaria control programmes in copper mining communities of Northern Rhodesia had on the national economy. Therefore, our appraisal is a contribution to the challenge that Paul Russell formulated more than 40 years ago (Russell 1959, cited in Packard 1997). Drawing causal inference is a complex subject and in our view demands a flexible, innovative and multifaceted approach (Marini & Singer 1988). Here, a large body of experimental and observational evidence has been accumulated across multiple copper production sites over a period of 20 years. The sequence of events – starting with the design and implementation of integrated malaria control, which fostered large-scale in-migration and successively led to exponentially growing copper extraction – is suggestive of strong temporal plausibility. The consistent observation across the four settings that all outcome and impact variables of the malaria control programmes exhibited the desired end points strengthens the plausibility of our causal hypothesis. In addition, these observations were in contrast to neighbouring copper producing countries. Consequently, the association between successful malaria control and economic development cannot readily be attached to an alternative causal hypothesis.
It should be noted that disease control in the Northern Rhodesian copperbelt was not the overarching objective, but instead a necessity to facilitate economic development. From the companies' perspectives, profit from copper was the primary driving force. In the first instance there was serious concern about the health of Europeans. Then there was fear among potential mine workers about death from malaria. Night shifts were a particular worry. Rumours persisted among new employees who sought work and often came from neighbouring Belgian Congo, Southern Rhodesia, and as far away as South Africa that they should only buy one-way train tickets, as there was suspicion that they would never return (Watson 1953). However, both rumours and fear disappeared shortly after integrated malaria control was in place and proved successful (Watson 1953; Holleman & Biesheuvel 1973; Utzinger et al. 2001). It stimulated unprecedented in-migration (Watson 1953; Mitchell 1961; Parpart 1983), which, in turn, provided for labour substitution at a level that would have ensured high revenues even with less successful malaria control programmes.
Our economic evaluation was carried out within the methodological framework proposed by Drummond and Stoddart (1985), slightly modified by Mills (1993b) and adapted for the setting of copper mining communities in Northern Rhodesia. Accordingly, costs and consequences of malaria control were assessed to facilitate the cost-effectiveness analysis. As we had to rely on historical records, it is important to note that record keeping during the British colonial period was excellent (Parpart 1983); hence, the present economic appraisal of a tropical disease control programme is comparatively thorough and unbiased. This kind of analysis, assessing the interrelationship between disease control and economic performance and development by clarifying the underlying mechanisms, is virtually non-existent in the extant literature, because it has been viewed as almost impossible to conduct (Russell 1959, cited in Weisbrod et al. 1973; Packard 1997).
On the cost side, the detailed programme budget for the Roan Antelope mine allowed appraisal of the total direct costs borne by the company, including the high initial capital expenditures and annual maintenance costs of the malaria control interventions. Assuming equal capital investments and estimating annual operational costs at the other three mining sites, and applying a standard discount rate of 3%, facilitated estimation of the total costs spent on control measures over the entire 20 years of programme implementation. These costs allowed calculation of gross cost-effectiveness ratios. Costs borne by the patients and their families for curative therapies of clinical malaria attacks, and the indirect costs of work shifts lost, were also estimated, and the costs averted as a result of integrated malaria control were included in the final analyses of net cost-effectiveness. Again, a series of assumptions had to be made. For the direct treatment costs of a clinical malaria attack, we used a mean value of US$ 2.22 (in 1995 US$). This value stems from four case studies in different countries of sub-Saharan Africa (Shepard et al. 1991), conducted approximately 50 years after the implementation of the control programmes presented here. Clearly, there are important differences in the range of treatment-seeking options and behaviour, methods of diagnosis and malaria drugs used between mining communities in the first half of the last century and the participants in those four case studies. However, our analyses revealed that the overall contribution of direct treatment costs was relatively small compared with other direct and indirect costs. This was confirmed in the one-way sensitivity analysis, because more than doubling the costs of a single malaria attack changed the net cost-effectiveness ratios by only 21%.
The primary features of the control measures discussed here were their consequences for malaria and work force productivity. In this regard, it is also important to note that our estimates of the costs averted and the consequences of the integrated malaria control programmes were probably at the upper limit of the possible. With less successful control measures, far fewer employees would have sought work on these mines; hence, fewer person life years would have been at risk of malaria. We found highly significant reductions in mortality, incidence rates, DALYs and days of work lost. Furthermore, the programmes were more cost-effective than other, currently widely applied, malaria control tools (Goodman et al. 1999, 2001). For example, we estimated gross and net costs per death averted of US$ 790.99 and US$ 332.42, respectively. These values were at the lower end when compared with other studies, which typically evaluated a single intervention with cost-effectiveness analyses performed over short durations.
A central feature of the programme design employed in the Northern Rhodesian copperbelt was that several interventions were tuned to the local ecology and implemented simultaneously. Our analyses were based on historical controls, comparing the pre-programme stage with that during programme implementation. At present, the most widely accepted method for assessing the effectiveness of interventions in the health care field is evidence derived from randomized controlled trials (Drummond & Stoddart 1985; Goodman et al. 1999). However, we have argued recently that historical controls in the present settings are more appropriate particularly because of great ecological variations among the four mining communities (Utzinger et al. 2001). This inhibits comparison between the different settings and makes the notion of a control community meaningless. Matching communities on ecosystem structure is not practicable. Our present analyses clearly confirmed that there was great variability in the local ecology and epidemiology across the four mining communities, as can be seen, for example, by the different mean annual malaria incidence rates.
Our study shows that the overall costs of implementing and sustaining integrated malaria control programmes were minuscule when compared with the total revenues from the mining sector in the Northern Rhodesian copperbelt. Over the entire 20 years of programme implementation, a little more than US$ 11 million was spent on malaria control measures. As the programmes were highly successful, almost US$ 6.5 million in direct and indirect costs was averted. The net costs of integrated malaria control therefore amounted to approximately 0.07% of the total revenues of US$ 7.1 billion generated by the copper mining companies (all figures in 1995 US$). We argue that the companies' initial capital investment, mainly in environmental management strategies, the maintenance and tuning of these interventions to the local settings, and their integration with additional malaria control measures were the key determinants of social and economic development in Northern Rhodesia. The underlying mechanism was that successful malaria control promoted unprecedented and essential in-migration and sustained healthy work forces. It is important to note that wage labour was nothing new when copper exploitation commenced on a large scale in the Northern Rhodesian copperbelt; it had existed for at least two decades. Thousands of migrant workers from the country had previously earned wages for mine production work in neighbouring copper producing nations (Parpart 1983; Ferguson 1999). Initially, large numbers of unskilled workers were required for mine construction. Early estimates suggest that by 1930 there were nearly 30 000 workers; however, they stayed only for very short periods of time. The work force declined rapidly in parallel with the global economic recession at that time. From 1932 onward, there was a sharp increase in wage labour in the Northern Rhodesian copperbelt. This was distinctly different from neighbouring Belgian Congo.
Two additional features were characteristic of this period. First, there was a sharp increase in the length of employment. Secondly, the proportion of married couples grew rapidly, suggesting that mine workers were encouraged to bring their wives and children to the mining sites – labour migration was gradually replaced by permanent urbanization (Parpart 1983; Ferguson 1999). But above all, the distinctively higher mining productivity compared with Belgian Congo, could not have been accomplished without successful malaria control (Watson 1953; Holleman & Biesheuvel 1973).
It is important to emphasize that successful control was achieved by a host of interventions. The key idea is that this strategy impacts the malaria transmission system at multiple vulnerable points. Using several interventions exerted less pressure on any one control measure (e.g. antimalarial drugs) to do the complete control task. Thus, the demand for drugs, for example, was reduced substantially beyond what it would have been if this had been the only control measure. Applications of oil to open water bodies, and during the last 4 years also residual spraying with DDT, were part of the interventions. Today, these tools are considered to be environmentally unsound. We did not adjust our costs of deaths, malarial attacks, DALYs and days of work lost averted for the potential environmental damage caused by these control measures. However, we did perform some sensitivity analyses under the scenario that control programmes were only operated until 1945, prior to the additional application of DDT. Although net cost-effectiveness ratios decreased considerably, the programmes were still highly cost-effective. According to our estimates, the control programme averted more than half a million clinical malaria attacks and prevented almost one million work shifts lost at the mines. Reduction in the demand for antimalarial drugs is especially pertinent for contemporary Zambia and many other parts of the humid tropics, as the smaller the fraction of the population in need of curative treatment, the greater the length of time that antimalarials will remain effective (Cross & Singer 1991; White 1999).
Integrated malaria control with a strong emphasis on environmental management was not only carried out to enhance copper production in Zambia, but was also of central importance for the successful construction of the Panama canal in the early 1900s (Chamberlain 1929). In the first two decades of the last century, great success with environmental management was also achieved at rubber and tea plantations, as well as port cities, of peninsular Malaysia (Watson 1921). A host of antilarval interventions proved to be the key measures for successful malaria control in Italy in the 1920s (Hackett 1929). Between 1920 and 1935, modification or destruction of larval habitats alongside general sanitary engineering proved an effective means of malaria control in various environments along the coastal plains throughout the Indonesian archipelago (Takken et al. 1990). In South-east Asia, different water management strategies have been developed and successfully applied to control rice field-breeding mosquitoes (for a review see Lacey & Lacey 1990). Importantly, sustaining these successful malaria control campaigns triggered in-migration, ensured healthy labour forces and, in turn, brought about social and economic development. While most environmental management interventions to control malaria were carried out in areas of low transmission, we have now presented a fresh body of evidence that these control measures also succeeded in high transmission areas. Environmental management interventions targeting the larval stages of the malaria vectors were particularly successful against A. gambiae and less so against A. funestus. This stands in contrast to the widely expressed belief that the control of A. gambiae in Central and Southern Africa is more difficult than that of A. funestus, mainly because A. gambiae is a `rain water breeder' that uses all kinds of small water collections exposed to direct sunlight, which are particularly abundant after rainfall (Wilson 1949).
The successful control of A. gambiae was achieved by filling soil depressions with mining tailings and the scrupulous, regular application of oil to all remaining open water bodies. In the early 1930s, Sir Malcolm Watson stated on his visit to the Roan Antelope mine: `this success was a remarkable achievement and the first of its kind over A. gambiae in any part of the world' (Watson 1953). Of course, detailed knowledge of the ecology of the chief malaria vectors and adaptation of control interventions to the local settings were of pivotal importance.
With the advent of DDT as an insecticide in the mid-1940s, it was believed that malaria could be eradicated by the widespread application of this powerful compound alone. As a consequence, attention to integrated control approaches diminished and the potential of environmental management was almost forgotten worldwide for about four decades (Ault 1994). There is currently renewed interest in these control strategies. Support for this position derives from experience in an industrial township of India, where implementation and maintenance of environmental management for malaria control during 1987–95 proved operationally feasible, sustainable and cost-effective (Dua et al. 1997). Careful analyses of recent large-scale malaria control programmes with environmental management as a central feature – implemented for nearly one decade in the towns of Dar es Salaam and Tanga in Tanzania (Yamagata 1996) – are urgently needed, as they might reveal outcomes similar to those in the Zambian copperbelt more than 50 years ago.
In contrast to the successful programmes, it has been documented that failure to control malaria in present-day Jakarta during the 18th century claimed more than 85 000 craftsmen, sailors and soldiers among the personnel of the Dutch East India Company and financially ruined the enterprise (van der Brug 1997). An important study aiming at malaria control by means of environmental management, conducted in the army cantonment of Mian Mir in present-day Pakistan between 1902 and 1909, could not demonstrate the desired outcome of significantly reduced malaria-related mortality and incidence rates. It has been argued that the results of this study negatively influenced subsequent attempts to control malaria in India and elsewhere (Bynum 1994). However, there are reasons to believe that the control programme was inappropriately planned, inadequately adapted to the local ecology and underfunded (Watson 1931; Bynum 1994). The critical evaluation by leading malariologists at that time – Ronald Ross and Malcolm Watson – deserves further scrutiny, as there was a substantial record of successful malaria control dating back to 1901 (Watson 1921; Chamberlain 1929).
Our historical example, derived from the industrial sector of the Northern Rhodesian copperbelt, implies that implementing and sustaining an effective malaria control programme was essential for recruiting and maintaining healthy work forces. The cash-generating activities of copper production were the necessary incentives for the miners to remain at the mines. This, in turn, was essential for the companies to operate smoothly and maximize copper profits. To ensure steady revenue flows, the management boards of Northern Rhodesian mining companies carefully monitored the number of shifts lost because of malaria and implemented measures to keep as few workers absent from the mines as possible. Clearly, there was a maximum tolerable level of absenteeism, beyond which copper extraction would not have been profitable. Furthermore, a sound control strategy reduced the indirect costs borne by mining employees by cutting the number of work shifts lost. High malaria incidence rates at the Bwana Mkubwa mine prior to 1930 were the principal reason that copper production never really took off in that period (Mitchell 1961). However, in the present case study, integrated malaria control was so effective – reducing incidence rates by 50–75% in the first 3 years – that there was never any shortage of labour to carry out copper extraction. Besides keeping malaria incidence rates as low as possible, the companies had to contend with important seasonal variations in transmission and, hence, incidence. Monitoring monthly rates showed a distinct peak toward the end of the rainy season in March (Watson 1953; Utzinger et al. 2001). The Northern Rhodesian mining companies were most vulnerable during this time of the year. Going beyond the context of mining companies, seasonal variation in malaria transmission also applies to agricultural settings (Audibert 1986).
Unfortunately, this feature of the malaria problem has received insufficient attention in health policy discussions, as well as in government planning and administration (Chambers et al. 1979).
Malaria control was a critical ingredient for profitable copper extraction. It thereby became a leading factor in the economic development of Northern Rhodesia. In the 1950s, 1960s and early 1970s, copper production further increased to reach a peak of 778 900 metric tons in 1972 – a 9.9% share of total annual global copper production (Commodity Research Bureau 1984). Thereafter, copper production decreased gradually, and in 1998, Zambia produced a total of 315 000 metric tons of copper. Its global share had shrunk to 2.6% (Bridge Commodity Research Bureau 2001). This level of copper production was virtually the same as half a century earlier. Recently, the copper mines of post-colonial Zambia have been privatized and there is great interest in, once again, increasing annual copper production. Alongside these developments, there is renewed interest in the question of how to control malaria at these copper mining sites. Importantly, copper extraction in contemporary Zambia continues to be a major potential source of economic advancement for the country. The most recent economic indicators available, for 1998, revealed that mining activities still contributed 10.7% of taxable income (Central Statistical Office 1999). Copper companies are currently aiming at a significant increase in copper production. Consequently, this natural resource's share of Zambian GDP is expected to increase considerably over the next several years. Integrated malaria control will be of pivotal importance in achieving the desired social and economic benefits.
Besides the present example, there is a host of analogous situations in the industrial and service sectors of sub-Saharan Africa suggesting that effective malaria control programmes will be a crucial element for much needed economic development. Implementation and maintenance of integrated malaria control measures in agricultural plantations and in mining and oil drilling operations are of particular interest because of the high returns on capital investment. Integrated malaria control is likely to succeed in urban centres and port cities of sub-Saharan Africa, where large populations are concentrated in relatively small areas. The essential point is that densely populated areas are characterized by large-scale drainage works and freshwater pollution of the remaining surface waters. Both features are negatively correlated with densities of anopheline mosquitoes. Effective vector control in these settings might have a substantial positive impact on tourism (Sachs & Malaney 2002).
Finally, integrated malaria control approaches are likely to have methodological and infrastructure spillovers for fighting other parasitic diseases, as well as HIV/AIDS and tuberculosis, so that the overall disease burden of poor countries can be substantially reduced. Implementation of these programmes requires effective public health infrastructures. This is where investment is critically needed. Education of local public health engineers and agricultural personnel who can carry out intersectoral and multicomponent intervention programmes is a pressing necessity. Substantial increases in global donor support – leading economists have put forward figures of US$ 10–20 billion per annum – will be mandatory. It is encouraging that this issue is finally entering the contemporary discussions of the development community.
We are grateful to Jacqueline V. Druery, Lara J. Moore, Susan B. White and Bobray J. Bordelon from Princeton University for invaluable help in screening the historical literature. We also thank Patrick Gerland for mapping support and two anonymous referees for a series of excellent comments and suggestions. This work was financially supported by research grants from the Centre for Health and Wellbeing at Princeton University (J.U. and Y.T.) and the Swiss National Science Foundation (J.U.).