I address a range of topics that provide the sociopolitical-technological setting for my professional life. I discuss some influential features of post–World War II world geopolitics, landmark technological developments of that era, and the resulting follow-up technologies that have made it possible to approach various problems in hydrology and water resources. I next address societal needs that have driven developments in hydrology and water resources engineering and follow with a discussion of the modern foundations of our science and what I think are the principal issues in hydrology. I pose three community challenges that when accomplished should advance hydrologic science: data network needs for improving the water budgets at all scales, characterizing subsurface water flow paths, and the information archiving and mining needs from instruments that will generate substantially richer data detail than have been used for most hydrologic work to the present. I then discuss several hydrologic and water resource risk-based decision issues that matter to society to illustrate how such risks have been addressed successfully in the past. I conclude with a long-term community “grand challenge,” the coupled modeling of the ocean-atmosphere-landform hydrologic cycle for the purpose of long–lead time hydrologic prediction.
1. Introduction

The work that we can do is influenced heavily by the social and political setting and the technical capabilities of the times. I was born in August 1944 in Newcastle, Australia, so I have chosen to address issues that place my living and work environment into perspective starting from the mid-1940s. I start with some details of the world geopolitical setting and follow that with a discussion of landmark technological developments that have occurred during my lifetime and the resulting follow-up technology that made it possible to approach various problems. I next list what I consider to be the major issues in hydrology and water resources engineering and follow with a short discussion of the foundations of much of our science. With this background I address issues associated with improving the catchment water budget and pose three community challenges: the need for improved measurement of hydrologic fluxes and states, how to characterize “hillslope” heterogeneity, and data storage and use from increasingly data rich instruments.
 Hydrologists and engineers a century ago faced the same risk-based decision problems (e.g., floods and droughts) we face today, albeit usually with shorter records than are now available. One modern concern is how to account for apparent or possible “nonstationarity” in our data time series. I address this by considering two examples and also provide two inspiring examples for how societally important risk-based (under considerable uncertainty) designs were made with only short records. I close with a community “grand challenge” to work toward hydrologic prediction at time scales on the order of a decade and longer.
2. Post–World War II World Geopolitical Setting
Professional historians would identify many more defining events and activities than I address here, but my selection captures the times of my formative years and my professional career. By the time of my birth in 1944, the end of World War II was in sight. For many economies this also brought to an end the worldwide “Great Depression.” It also meant that there would be major redeployments of capital for activities that had been neglected during the depression. The transition from war-based to peace-based economies was wrenching for all countries; considerable rebuilding was necessary in war-ravaged countries.
 A new focus was provided for the world by former British Prime Minister Winston Churchill in his “Sinews of Peace” address on 5 March 1946 at Westminster College, Fulton, Missouri (http://www.hpol.org/churchill/). He introduced the term the “iron curtain” that was the defining geopolitical domain for the following “cold war.”
From Stettin in the Baltic to Trieste in the Adriatic an “iron curtain” has descended across the Continent. Behind that line lie all the capitals of the ancient states of Central and Eastern Europe. Warsaw, Berlin, Prague, Vienna, Budapest, Belgrade, Bucharest and Sofia; all these famous cities and the populations around them lie in what I must call the Soviet sphere, and all are subject, in one form or another, not only to Soviet influence but to a very high and in some cases increasing measure of control from Moscow.
This quote emphasizes the geographic extent of the iron curtain. The cold war lasted from 1947 to 1991 and was the backdrop for much of what happened politically, militarily, economically, and scientifically in the world. The major conflict was between the then USSR and the Western world. Associated with this were the nuclear arms race and the space race (principally between the United States and the USSR). I was aware of this from about age 11, particularly the potential for nuclear annihilation.
The United States was the dominant world economy at the end of World War II, and there was recognition that western Europe had to be rebuilt. The Marshall Plan, 1948–1951, was the primary U.S. program for rebuilding the countries of western Europe. The first major “cold war” test of western world resolve came with the Berlin Blockade (24 June 1948 to 12 May 1949) by the USSR. The response was to deliver food, fuel, and other essential materials to the citizens of Berlin by air in what became known as the “Berlin Airlift.” Some 13,000 t of supplies per day were delivered by the U.S. Air Force and Britain's Royal Air Force. This was both a heroic effort and a logistic marvel by any measure; it changed the balance of power in Germany and demonstrated a way forward for modern air cargo transportation.
The Korean War (1950–1953) was the first major international conflict that I knew anything about. I watched a parade of returning Australian soldiers as a 9-year-old and asked my parents: Where have they been? From about this age I had some awareness of world events, principally by listening to the comprehensive radio news broadcasts of the Australian Broadcasting Commission. One world event that seemed to have the greatest potential for escalating was the 1956 “Suez Crisis.” Egypt nationalized the Suez Canal on 26 July 1956. Britain, France, and Israel initiated military action to retake the canal on 29 October 1956. The United States, for multiple cold war reasons, did not join in. We now know that the Suez Crisis marked the end of colonialism. To a then 12-year-old boy, this was a troubling time.
We had constant reminders of the cold war and the possibility of nuclear annihilation. The British aeronautical engineer and author Nevil Shute (Norway) published his book On the Beach in 1957. This was an “end of the world” novel: World War III resulted in nuclear holocaust, with the last survivors, awaiting the spread of radiation poisoning, residing in Melbourne, Australia, in the year 1964. This was all quite plausible: the world came to the brink of nuclear war in October 1962 with the “Cuban Missile Crisis” near the end of my first year as an undergraduate; I and my peers were very much aware of world tensions.
 One event that shattered us in Australia was the assassination of President John F. Kennedy in Dallas on 22 November 1963. My parents, brother, and I had attended a rare family night out at the movies the night before and watched the biographical film PT 109 about President Kennedy and his World War II U.S. Navy leadership. We were far removed from the American political scene, but we knew that the world could not be the same after the loss of President Kennedy.
The Vietnam War (26 September 1959 to 30 April 1975) was a major influence on my generation, and many have been affected by it. Associated with these times was an overwhelming sense by some people, particularly intellectual leaders, of doom and gloom. There was doom and gloom about world population; the immediate post–World War II optimism, particularly in the United States, was being replaced with marked pessimism. Paul Ehrlich published his book The Population Bomb in 1968. The subtitle was “Population Control or Race to Oblivion?” Ehrlich warned of mass human starvation through the 1980s. (The world population was 3.69 billion in 1970. It was estimated to be 6.81 billion in mid-2010.) There was doom and gloom about agricultural production. William and Paul Paddock published in 1967 Famine 1975! with the subtitle “America's Decision: Who Will Survive?” The Paddocks predicted worldwide famine and were extremely pessimistic about food supplies. They advocated “triage” and writing off whole countries. I read both books when I was a graduate student. I thought the authors to be alarmists, but they influenced indirectly some of my early work in which I investigated water supply reliability as functions of increasing population and agricultural-driven water demand, storage reservoir size, and the form of river flow variability. Pessimists that they were, they made no allowance for human adaptability, nor did they anticipate the “green revolution” that yielded substantially increased crop production.
 Oil was first used as a geopolitical tool starting with the first OPEC oil embargo in October 1973. There have been numerous disruptions throughout the world associated with the withholding of supplies of oil and natural gas for political power purposes since then. The threat of disruption to liquid fuel supplies remains, and the need for carbon-based geopolitical energy security has placed, and will continue to place, heavy demands on water and land and will affect food supplies as agricultural-based fuel sources are developed.
Memorable and more recent events that shaped the world include the fall of the “Berlin Wall” in 1989 and the reunification of Germany in 1990. The terrorist attacks of 11 September 2001 remain etched in our minds. The world economic collapse of 2008–2010 will likely create many long-lasting influences that we cannot predict but that will influence all that we do. Economic disruptions are of many kinds. One of the most significant during my professional lifetime and one of the greatest economic and societal threats has been the debasing of world currencies. For example, the benchmark purchasing power of $1US in 1970 required $5.50US in 2009.
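The currency comparison implies an average compound inflation rate that is easy to back out. The short check below uses only the $1-to-$5.50 ratio and the 39 year span quoted above; the script itself is illustrative, not part of the original discussion:

```python
# Implied compound annual inflation rate if $1 (1970) buys what $5.50 buys (2009).
ratio = 5.50          # purchasing-power ratio quoted in the text
years = 2009 - 1970   # 39 years

annual_rate = ratio ** (1.0 / years) - 1.0
print(f"implied average inflation: {annual_rate * 100:.1f}% per year")  # → 4.5% per year
```

A sustained rate of roughly 4.5% per year halves purchasing power about every 16 years.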
 There are many world events that I have not mentioned that have influenced directly or indirectly what has been possible for me and my generation to do professionally, principally through the reallocation of capital and corresponding research priorities. There have been numerous atrocities between 1944 and the present. I have concluded that “man's inhumanity to man knows no bounds.” Despite the turbulent times through which I have lived, I remain an optimist because the many positive developments appear to me to far outweigh the negatives.
3. Landmark Technological Developments
Numerous inventions and scientific and engineering developments that have occurred during my lifetime have made it possible for me to address an expanding range of topics during my professional life. The first disruptive invention, the “point contact transistor” developed by (1956 physics Nobel laureates) Walter Brattain, William Shockley, and John Bardeen, was first demonstrated at Bell Laboratories, New Jersey, on 23 December 1947. This invention led to the subsequent miniaturization of electronic “gates” (vacuum tube diodes and triodes at the time) that are the essential elements of modern electronic circuits. The exceptional scientific “detective work” that led to publication of the “double helix” paper by (1962 Nobel laureates in physiology or medicine) Watson and Crick profoundly changed work in molecular biology. Vacuum tube–based technology was used in the first (black and white) television sets available commercially in Australia in 1956. None of us realized when we first viewed this primitive television what role it would have in science. The rapid developments in related display technology at the personal use level made possible the ubiquitous use of “moving film” display of information so important to science and science communication.
One of the most memorable technological developments was the launching (and viewing in the night sky) of the first artificial satellite, Sputnik 1, by the USSR on 4 October 1957 (in the 40th anniversary year of the Bolshevik Revolution). This 83.6 kg satellite had an orbital period of 96.2 min. We viewed it as it tracked across the night sky in Australia with wonderment and had no idea that its launch had started the era of space exploration and observations of and from space and a major expansion of science and engineering, particularly in the United States. The Sputnik launch provided a much-needed incentive for redeployment of considerable capital in the United States to support science and engineering education and research. We remain the beneficiaries, to this day, of this transformative investment in science and technology.
The transistor was enormously important, but it was the development of the “integrated circuit,” first built by Jack St. Clair Kilby in 1958, that ushered in the era of modern electronic measuring instruments and computers. Kilby received the Nobel Prize in physics belatedly in 2000 for this path-breaking work. It was Richard Feynman (1965 physics Nobel laureate) who posed for the world the extraordinary possibilities of miniaturization in his famous 29 December 1959 talk at the annual meeting of the American Physical Society at the California Institute of Technology. It was entitled “There's plenty of room at the bottom: An invitation to enter a new field of physics.” (The full text is available at http://www.zyvex.com/nanotech/feynman.html.) Consider his insights and challenges to scientists and engineers for making future computers, posed so soon after development of the first, relatively simple, integrated circuit.
Miniaturizing the computer

I don't know how to do this on a small scale in a practical way, but I do know that computing machines are very large; they fill rooms. Why can't we make them very small, make them of little wires, little elements—and by little, I mean little. For instance, the wires should be 10 or 100 atoms in diameter, and the circuits should be a few thousand angstroms across. Everybody who has analyzed the logical theory of computers has come to the conclusion that the possibilities of computers are very interesting—if they could be made to be more complicated by several orders of magnitude. If they had millions of times as many elements, they could make judgments. They would have time to calculate what is the best way to make the calculation that they are about to make. They could select the method of analysis which, from their experience, is better than the one that we would give to them. And in many other ways, they would have new qualitative features.
This was radical and different thinking and reflected both optimism and a “can do” attitude. Feynman anticipated modern computers, but the key components are still larger today than he had in mind.
 The invention of the laser (light amplification by stimulated emission of radiation) has had a huge influence on instrumentation and observation. Theodore Maiman demonstrated the first functional laser on 16 May 1960. The laser and its precursor, the maser, were products that resulted from military need that was related to the cold war. Few, if any, at the time the laser was invented anticipated its modern ubiquity.
Perhaps the mission that captured the imagination and yielded the most technological innovation during my lifetime was the Apollo program, culminating in the first moon landing by humans (Apollo 11) on 20 July 1969. The Apollo program had its genesis in the scientific and engineering awakening of the United States following the Sputnik challenge and the visionary leadership of President Kennedy. The accomplishments of the Apollo 11 crew, Neil Armstrong, Michael Collins, and Edwin E. Aldrin Jr., and the entire NASA organization energized and cheered many throughout the world. After this, what could not be possible? This wonderful achievement was made in the thick of the Vietnam War and deep pessimism about world population growth, corresponding demands on resources, and world food insecurity.
4. Follow-up Technology
One of the most useful tools for engineers and scientists was the invention of the Hewlett-Packard HP 35 electronic pocket calculator. It had 35 keys and was sold between 1972 and 1975. More than 100,000 were sold in the first year at more than $300 (about $1650 today). This marvelous device (and others that followed soon afterward) was welcomed immediately; the ubiquitous slide rule, until then the principal tool used for scientific and engineering calculations, was rapidly made obsolete. It was not long before a “pocket calculator” was connected to a theodolite and, more importantly, laser technology was incorporated into the theodolite. The modern theodolite, known as a “total station,” which uses a laser and rod-mounted reflecting prism for distance measurement, provides accurate angular measurements and automatic data reduction to yield x, y, z for any location. This innovative instrument opened avenues of research that had been too time consuming using conventional topographic surveying equipment and the highly skilled personnel it required. Much greater and more accurate detailed measurement of landforms and channel systems was now possible. The extension of laser distance measuring technology to aircraft platforms that permits spatially extensive lidar-derived x, y, z has been a boon to scientific development. The same caveats that hold for the accuracy of photogrammetry-derived x, y, z hold for lidar data: the accuracy of the values, particularly the vertical, z, is constrained by the quality and quantity of the ground level reference surveying.
A third technological advance was the development of practicable fiber optic cables (since about 1981), which has revolutionized data transmission. The numerous NASA and other Earth observatories that are now available for monitoring planet Earth contain many elements that Feynman envisioned. The data transmission rates and data storage and retrieval available today were unimaginable as little as 20 years ago when the Earth-observing satellite system was planned. We are, however, approaching data transfer limits in fiber optic cables that are of considerable concern (see, e.g., Richardson).
The advances during my lifetime in computers, all forms of “surveying” the states of the Earth's “critical zone,” information transfer, data storage and retrieval, and accessibility of the “libraries” of the world (principally through the Internet and related infrastructure) were unimaginable 40 years ago. I have always been constrained by affordable, available technology. My principal calculating device from 1960 (fourth year of high school) to 1975 (when I purchased an affordable electronic calculator) was a “10 inch long” slide rule. My first computer code was written in 1966 (FORTRAN) to run on the only computer at the University of Newcastle, an IBM 1130, which first became available commercially in 1965. That splendid computer had 64 kb of core memory! It seemed at the time an extraordinary machine. The desktop personal computer on which I wrote this paper has eight central processors and 8 Gb of high-speed memory. A 1 September 2010 press release from IBM Corporation announcing its newest commercial computer (http://www-03.ibm.com/press/us/en/pressrelease/32414.wss) puts computational gains into perspective:
The core server in the zEnterprise System–called zEnterprise 196–contains 96 of the world's fastest, most powerful microprocessors, capable of executing more than 50 billion instructions per second. That's roughly 17,000 times more instructions than the Model 91, the high-end of IBM's popular System/360 family, could execute in 1970.
You can now compute in 1 s what took about 4.7 h in 1970, provided the machine would run that long without needing a reboot.
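The 4.7 h figure follows directly from the quoted 17,000-fold instruction-rate ratio; as a quick arithmetic check:

```python
# A machine 17,000 times faster does in 1 second what once took 17,000 seconds.
speedup = 17_000
hours_in_1970 = speedup / 3600.0   # 17,000 s expressed in hours

print(f"{hours_in_1970:.1f} h")    # → 4.7 h
```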
5. Societal Needs and Hydrology and Water Resources Engineering
 Developments in hydrologic science and hydrologic and water resource engineering have been driven, and will likely continue to be driven, by societal needs. Use of the best practicable science in professional practice necessitates constant assessment of available technologies and data measurement and recording systems and associated information and decision support systems. The principal issues in hydrology and hydrologic and water resources engineering are as follows (in no particular order).
5.1. Water Storage
 Liquid water is stored on vegetation and on the land surface, as “infiltration fingers” in soil or more uniformly in unsaturated soil, in larger surface depressions and lakes, in saturated hillslope domains, and in major aquifers. Solid water is stored above ground as ice on vegetation and on the land and as frozen streams or rivers, below ground as temporary or semipermanent ice, in snowfields and glaciers, and in major “ice sheets.”
5.2. Flooding

“The river owns the valley” should be the primary consideration of the hydrologist and water resources engineer when considering flood ecology, flood risk, or flood mitigation measures. Stream and river valley flooding is caused by excessive precipitation or snowmelt (or a combination of both) that cannot be retained on or within a hillslope, by debris flows (non-Newtonian fluids) that change channel conveyances, and by breaches of “ice jams” in cold region rivers. The scale ranges from zero- to first-order basins to major river valleys. Floods have major ecological significance, aspects of which have been known for a long time [see, e.g., Matthes, 1934].
5.3. Water for Food and Fiber
 Water for food and fiber includes rainfall- and snowmelt-supported vegetation in natural and agricultural settings as well as transport of water from one region to another for supplemental irrigation.
5.4. Water as a Change Agent
 Water as a change agent involves the unique properties of the water molecule as it acts biogeochemically at small scale, the mechanical (impact) properties of raindrops, and surface and subsurface flow as an eroding and transport agent of minerals and organic matter.
5.5. Water Shortages and Drought
 Water shortages and drought occur at all scales from the zero-order basin to the continental scale and have a huge influence on ecology, human and animal populations, food and fiber supply, and human economies.
5.6. Irrigation and Industrial and Drinking Water Supply and Transport
 The principal issues with irrigation and industrial and drinking water supply and transport are concerned with human-engineered surface (dams, canals, and pipelines), subsurface (well field), and water treatment infrastructure in river basins for the purpose of supplying agricultural water, potable water for municipalities, and water of stable chemical properties for industrial processes. Multiple-purpose management of the basin-wide water resource for humans and ecology is an essential component.
5.7. Water for Ecology
In human-influenced settings, water for ecology is principally concerned with maintaining a balance for ecosystem services, most often in the form of water in sufficient amounts and of sufficient quality for aquatic populations.
5.8. Water and Land Use
The major issues with water and land use concern changes to the hydrological cycle through land clearing and tilling for agriculture, livestock grazing, silvicultural practice, land use change from natural or agricultural to urban, and the influence of fire and ecological imbalances that cause vegetation change (e.g., insect damage to grass and shrub lands and forests). The related issue of heterogeneity in land surface–atmosphere interactions resulting from changes to landscapes that alter the balance between latent (evaporative) and sensible heat fluxes is of considerable importance to spatial and temporal water and thermal exchanges between the land surface and the atmosphere.
6. The Foundations of Hydrologic Science
I have benefited considerably from reading, and rereading frequently, early published work on all the above topics. Progress over the last 150 years in understanding resulted mainly from addressing water-related problems that mattered to society and to a lesser extent from “scientific curiosity.” The principal publication outlets were the journals and books of the professional and scientific societies associated with civil engineering, agriculture, and forestry. The writings of George Perkins Marsh [Marsh, 1874] are as fresh today as when they were published. J. C. I. Dooge wrote extensively on the foundations of the science and professional practice of hydrology and hydrologic engineering. Much of what is relevant today (and recommended reading for all) was covered by Dooge, who discussed developments of hydrologic concepts in Britain and Ireland in the time window 1674–1874. Among the many significant contributions in that time period are the works of Robert Manning and Thomas Mulvany. Both were investigating aspects of flooding in Ireland that resulted from river channel straightening projects, starting in the 1840s, that were done largely to provide employment to help mitigate the misery that followed famine in Ireland. Many modern hydrologists and hydrologic engineers would benefit from reading the work of Mulvany for the deep insights he offers. He was particularly aware of the importance of detailed and accurate data for hydrologic investigations.
 George Perkins Marsh had a deep appreciation of plant and animal ecology and land use disturbances, and their connections with hydrometeorology. Consider his observations under the heading “Uncertainty of Modern Meteorology” [Marsh, 1874, chapter 1]:
There is one branch of research which is of the utmost importance in reference to these questions, but which, from the great difficulty of direct observation upon it, has been less successfully studied than almost any other problem of physical science. I refer to the proportions between precipitation, superficial drainage, absorption, and evaporation. Precise actual measurement of these quantities upon even a single acre of ground is impossible; and in all cabinet experiments on the subject, the conditions of the surface observed are so different from those which occur in nature, that we cannot safely reason from one case to the other.
Papers by Fuller and Jarvis that discuss flooding in the United States summarize the early 20th century state of understanding of floods and of flood risk analysis. Both papers are long and are notable for their richness of content and for the valuable multicontributor comprehensive discussions published with them. Allocation of scarce capital resources for flood damage mitigation provided the basis for these works. Emphasis was placed on estimation of risk-based design flood flow rates (from relatively short records), particularly for bridges (where prevention of flood-related catastrophic failure was the dominant issue) that were so vital to all forms of the expanding transportation system.
 Numerous post–World War I (1914–1918) activities influenced the development of the science and practice of hydrology. In 1919 the International Union of Geodesy and Geophysics (IUGG) was organized in Brussels. At the 1922 IUGG assembly in Rome, the section of “Scientific Hydrology” was established, and it became the International Association of Scientific Hydrology (IASH). IASH (or IAHS) and the American Geophysical Union (AGU) provided new publication outlets for hydrologic science. The first paper published by the new hydrology section in Transactions of the American Geophysical Union was by Robert Horton [Horton, 1931]. The title was “The Field, Scope, and Status of the Science of Hydrology.” Horton articulated what was to become known as “hydrologic science:”
In one sense the field of hydrology is the Earth and so is co-terminous with other geo-sciences. More specifically the field of hydrology, treated as a pure science, is to trace out and account for the phenomena of the hydrologic cycle [Horton, 1931, p. 190]. I have used field to describe the domain of hydrology as a pure science. The word scope may be used to describe the manner in which and the extent to which it covers this domain. …Both the scope and problems of hydrology are closely related to the various branches of applied hydrology. This is natural since it is mainly in the application that new problems arise and the scope of the science is extended [Horton, 1931, p. 191].
Significant developments in hydrologic science occurred in the 1930s during worldwide economic depression. Most of these exceptional contributions to science and applications in hydrology through the 1930s are covered in detail in the definitive work edited by Meinzer, which should be required reading for all hydrologists.
Activities in the 1960s that influenced the support and development of hydrologic science included the report of the Ad Hoc Panel on Hydrology, U.S. Federal Council for Science and Technology, chaired by Walter Langbein. Cold war era “goodwill” international scientific “East-West” collaborations were a major part of “The International Hydrologic Decade” (1965–1974). Major new journals that became flagships for publication of developments in hydrology and water resources were established following the panel's report, including Water Resources Research (AGU), Water Resources Bulletin (American Water Resources Association), and Journal of Hydrology.
 During the 1960s era of post-Sputnik technologic development, the U.S. space program blossomed, the third generation of digital computers (1967) ushered in a new era of computationally based science, and space-based observation platforms offered new ways to view planet Earth. Addition of transistorized electronics to field equipment started in the late 1960s and early 1970s.
Developments in hydrologic science in the late 1980s were substantially influenced by plans to launch the Earth Observing System (EOS) satellites. The opportunities created by the EOS instruments posed many technological challenges, including an anticipated data stream of 15 Pb per year that was almost impossible to imagine, let alone store and access. Doppler radar precipitation measurement became a reality in the United States with the initial implementation of the network of WSR-88D NEXRAD radars. A special issue of Water Resources Research (see Burges for a summary of the 15 papers) covered many of the likely future directions of our science.
The Committee on Opportunities in the Hydrologic Sciences' priority categories of scientific opportunity (unranked, p. 298ff.) remain relevant today and continue to define the research agenda. The five priority categories were as follows: chemical and biological components of the hydrologic cycle; scaling of dynamic behavior; land surface–atmosphere interactions; coordinated global-scale observation of water reservoirs and the fluxes of water and energy; and hydrologic effects of human activity. Four data requirements were identified: maintenance of long-term data sets, improved information management, interpretation of remote sensing data, and dissemination of data from coordinated experiments. Three educational requirements were emphasized: develop multidisciplinary graduate education programs, include experience with observation and experimentation, and increase visibility of hydrology to undergraduate students.
 Considerable progress has been made in each category. The development of distributed measurement systems (all kinds) and coordination within the research community are notable. The formation of the Consortium of Universities for the Advancement of Hydrologic Science Incorporated (CUAHSI) in the United States in the past decade has aided development in, and community availability of, hydrologic process instrumentation. David Maidment (University of Texas at Austin) and colleagues have developed, and are continuing to develop, procedures and facilities for data archiving that will provide numerous benefits for individuals, groups, and agencies worldwide. Details of the “Hydrologic Measurement Facility” and the “Hydrologic Information System” are available at http://www.cuahsi.org/.
 An integrative framework for much of hydrologic science has been provided by focusing on the “critical zone,” first introduced in the U.S. National Research Council (NRC) report “Basic Research Opportunities in Earth Science” [Committee on Basic Research Opportunities in the Earth Sciences, 2001]. I am responsible, together with Gail Ashley (Rutgers University) and Larry Wilding (Texas A&M University), for introducing this framework to the NRC committee. The critical zone is the “heterogeneous, near-surface environment in which complex interactions involving rock, soil, water, air, and living organisms regulate the natural habitat and determine the availability of life-sustaining resources” [Committee on Basic Research Opportunities in the Earth Sciences, 2001, p. 2].
 The critical zone and research opportunities within it are covered in detail in pages 35–45 of the NRC report. The aboveground domain extends beyond the highest mountain (Mount Everest); the belowground domain extends to the limiting depth at which life is found. Consider these domains relative to planet Earth. On a 1 m diameter scale model of Earth, Mount Everest would rise 0.7 mm above the sea level datum, and the Mariana Trench would lie 0.9 mm below the datum. The critical zone spans from about 1 mm below to about 0.9 mm above the datum. We know little about this thin domain, and considerably less about its subsurface portion. There are enormous opportunities for integrated scientific enquiry in the critical zone.
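The scale-model figures quoted above follow from simple proportionality; a small sketch (using round published values of ~12,742 km for Earth's mean diameter, ~8.85 km for Everest, and ~11.0 km for the Mariana Trench) reproduces them:

```python
# Scale-model arithmetic for the critical zone analogy in the text.
# Assumed round values: Earth mean diameter ~12,742 km, Everest ~8.85 km
# above sea level, Mariana Trench ~11.0 km below sea level.
EARTH_DIAMETER_M = 12_742 * 1000.0
MODEL_DIAMETER_M = 1.0
scale = MODEL_DIAMETER_M / EARTH_DIAMETER_M  # model metres per real metre

def model_mm(height_km):
    """Relief on the 1 m model, expressed in millimetres."""
    return height_km * 1000.0 * scale * 1000.0

everest = model_mm(8.85)   # height above the sea level datum on the model
mariana = model_mm(11.0)   # depth below the datum on the model
print(f"Everest: {everest:.2f} mm above datum; "
      f"Mariana Trench: {mariana:.2f} mm below datum")
```

The results, roughly 0.7 mm and 0.9 mm, match the figures in the text and make the thinness of the critical zone vivid.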
7. Principal Issues in Hydrology and Three Community Challenges
 Hydrology is and has always been concerned with the water balance. The major issues involve delineating relevant surface and subsurface hydrodynamic and hydrothermal boundaries, characterizing the heterogeneous domains through and over which water moves, and measuring and modeling fluxes and states. The difficulties are all in the details; the domains are highly heterogeneous. Heterogeneity is not a new topic. Marsh [1874, chapter 1] presented the following discussion of heterogeneity:
In nature, the inclination and exposure of the ground, the degree of freedom or obstruction of the flow of water over the surface, the composition and density of the soil, the presence or absence of perforations by worms and small burrowing quadrupeds—upon which the permeability of the ground by water and its power of absorbing and retaining or transmitting moisture depend—its temperature, the dryness or saturation of the subsoil, vary at comparatively short distances; and though the precipitation upon very small geographical basins and the superficial flow from them may be estimated with an approach to precision, yet even here we have no present means of knowing how much of the water absorbed by the earth is restored to the atmosphere by evaporation, and how much carried off by infiltration or other modes of underground discharge.
 Progress in hydrology is made when improved and more complete representations of the surface and subsurface hydrogeochemical properties of the hillslope, together with measurements of hydrologic fluxes (e.g., precipitation, vapor transport, and water movement) and states (e.g., saturated water levels, soil moisture, soil and water temperature, and water chemistry), are combined at appropriate scales with continuous-in-time hydrologic models that use “moisture and energy” accounting schemes. The least progress has been made on how to characterize the hydrogeochemical features of the hillslope. There have been numerous improvements in devices and sensors for measuring fluxes and states and in their sampling rates during my professional career. The great bulk of data from data networks (e.g., point precipitation, radar-estimated spatial precipitation, river flow rates, groundwater well water levels, and river and groundwater chemistry and biology) are stored by agencies for supporting the mission of the agency. Data collected by nonagency research groups can and should be archived in community data information systems (e.g., CUAHSI), but few incentives are in place to encourage community data archiving of comprehensive multiflux and multistate data sets collected by research teams from universities, research institutes, and agencies. This is both a present-day and future “community challenge.”
 In looking to the future, consider three challenges: data network needs for improving the water budgets at all scales, characterizing the subsurface domain and water flow paths, and the information archiving (and data mining) needs from instruments that will generate substantially richer data detail than have been used for most hydrologic work to the present. To illustrate the first two issues, consider Figure 1, modified from Figure 3.1 of Atkinson, which shows a hillslope domain and flow paths, with the groundwater flow path terminating at the stream channel bed. We need also to be concerned with much longer groundwater flow paths that are associated with water transport beyond the surface water catchment boundary. Tóth [1962, Figure 6] showed a theoretical groundwater flow path that extended laterally for about 11 km; Tóth [1963, Figure 2f] showed a theoretical flow path that extended vertically about 3 km and laterally about 6 km. Schaller and Fan have demonstrated the importance of large-domain geology and long groundwater flow paths when considering the water budget over large domains that contain nested catchments. They show one case where the documented lateral groundwater flow length was about 400 km.
 I have highlighted five parts of Figure 1: 1, precipitation, which is usually measured or estimated; 2, flow from pipe outlet, which is rarely measured; 3, transpiration, which is usually estimated; 4, the vertical domain (soil profile, zone of percolation, and groundwater), which is usually unknown or poorly known and poorly characterized; and 5, hillslope flow paths, which are extremely difficult to map.
7.1. Community Challenge 1
 The community must work to implement greatly improved networks that provide accurate measures of precipitation, flow fluxes (surface and subsurface), and evaporation and transpiration to sharpen water budget accounting, which remains the most basic and fundamental problem in hydrology. Too few measurements have been made in the past to quantify catchment groundwater inflow and outflow.
 The hydrologic fluxes most commonly measured (but usually not at the scale of the hillslope) are precipitation rates and channel flow rates. Catchment water budget measurements are needed at a nested hierarchy of scales. Burges [2003, Figure 6] shows an example of the measurement of nested catchment outflow at 0.37, 37, and 123 km2 scales. If measurements were made only at the smallest scale, the larger-scale catchment behavior could not be derived from them. If measurements were made only for the intermediate- or larger-scale catchments, the bulk catchment behavior could be approximated but would be hopelessly wrong at the smallest scale.
 Existing measurements of precipitation and flow rates leave much to be desired for accurate accounting of the water budget at the hillslope and larger scales. Liquid precipitation measured by wind-affected rain gauges has a variably biased, and uncorrectable, undercatch of 3%–15% [Duchon and Essenberg, 2001; Sieck et al., 2007]; a practicable and reliable way to measure “point” precipitation that reaches the ground is by use of a cluster of “pit” gauges [see, e.g., Burges, 2003; Sieck et al., 2007; Kampf and Burges, 2007, 2010]. The undercatch for solid precipitation is much worse. River flow rates estimated using “stage-discharge” relationships are at best made to within ±5% and more usually ±10% or worse. If discharge is determined routinely using acoustic Doppler profilers (ADPs), discharge measurements can be made to within ±3%. Instrument tower measurements used to derive evaporation, transpiration, and sensible heat fluxes are few. Details of the world network of 532 FLUXNET towers (as of February 2011), the best available network of measurements, are provided at http://www.fluxnet.ornl.gov/fluxnet/viewstatus.cfm. All the measurement networks need considerable enhancement to advance hydrologic science and hydrologic decision making.
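The practical consequence of these measurement errors is easiest to see in a residual water budget, where errors in precipitation and discharge propagate directly into the inferred evapotranspiration. The sketch below is illustrative only: the budget values and the treatment of undercatch as a symmetric ±8% error are assumptions, not values from the text.

```python
import math

# Hypothetical annual water budget for a small catchment (all values in mm).
# ET is taken as the budget residual (storage change assumed ~0), so
# independent 1-sigma relative errors in P and Q propagate in quadrature.
P, Q = 1000.0, 600.0
ET = P - Q  # residual evapotranspiration estimate

def residual_error(p_rel_err, q_rel_err):
    """1-sigma absolute error (mm) in the ET residual, assuming
    independent relative errors in precipitation and discharge."""
    return math.sqrt((P * p_rel_err) ** 2 + (Q * q_rel_err) ** 2)

# Conventional instruments: gauge undercatch treated as ~8%, rating ~10%.
e_conventional = residual_error(0.08, 0.10)
# Improved instruments: pit gauges (~2%) and ADP discharge (~3%).
e_improved = residual_error(0.02, 0.03)
print(f"ET = {ET:.0f} mm; residual error {e_conventional:.0f} mm "
      f"(conventional) vs {e_improved:.0f} mm (improved)")
```

Under these assumed numbers the residual error drops from about 25% of the ET estimate to under 10%, which is why better flux measurement networks matter so much for water budget closure.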
7.2. Community Challenge 2
 The second challenge is to develop tools to enhance quantitative spatial and temporal description of the hydrogeochemical properties of the subsurface domain and related flow paths. Imagine designing novel and exceptionally small “devices” in the spirit of Richard Feynman and deploying them at fine spatial and temporal scales into the heterogeneous subsurface domain. The devices would need the following properties: they can be transported by water in the vadose and saturated zones, they must be able to “survey” their surroundings, they must remember where they have been, and they must be interrogated.
 One relatively simple way to determine some features of the subsurface is to “dig a hole.” This might be by trenching or by using a handheld auger. I have used the latter on many occasions to obtain an estimate of the distance below the surface to a “hard soil layer.” An example of this is given by Wigmosta and Burges, who determined spatial soil and forest litter depths at 144 locations to delineate the hydrologically active soil layers in a 37 ha catchment.
 We must do better than we have done in the past. Visualize any landscape, be it desertic with cracked or cryptobiotic soils and sparse shrubs, a relatively extensive grassland with notable topographical features and perhaps with serpentine or other cracked or swelling soils, a wooded domain with notable relief that includes some small lakes or wetlands, or a forested domain that has experienced extensive fire or insect kill. The scales could vary from Atkinson's scale (Figure 1), with a hillslope length ranging from 50 m to 1 km or so; the plan area could range from a few hundred square meters to several square kilometers.
 Much work has been done on flow and transport in the subsurface domain with major efforts associated with public health considerations (removal of pathogens from water by soil), pollution remediation, and determining travel pathways of organic and inorganic contaminants. There have been many efforts to characterize the domain with respect to water movement. A summary assessment of the capabilities of ground-penetrating radar has been provided by Lambot et al. Robinson et al. have summarized the “state of the art” of soil moisture measurement for hydrological and ecological observatories and identify opportunities for improvements. Developments in representing preferential flows are summarized by Horst et al.; Nieber and Sidle suggest a range of experiments that would help elucidate preferential flow path development. I anticipate that all of these approaches could make use of information that would be yielded by the future novel “devices.”
 Numerous efforts have been made to use “tracking” or “staining” systems, including colloids and biocolloids, to elucidate flow path features of the subsurface. The largest dimension of a water molecule is less than 0.2 nm. A rotavirus is less than about 100 nm. An example of nanoscale fabrication (100 nm scale), indicative of some possibilities for the future small devices, is provided by Rothemund. The major activity in “nanomedicine” to design novel drug delivery systems to the cell is indicative of ways forward to build “intelligent” structures that have desired properties [see, e.g., Euliss et al., 2006]. The work on colloid and biocolloid movement summarized by Saiers and Ryan, particularly at the smallest scales (see, e.g., Zvikelsky and Weisbrod, who examined movement of colloids as small as 20 nm), provides helpful background for identifying some limits and possibilities. Information from the small devices would be used with postulated subsurface structures, or structures indicated by other sampling approaches, to solve what I anticipate will be massively computationally demanding inverse problems to yield improved and updated hydrogeochemical subsurface structure and associated flow paths.
7.3. Community Challenge 3
 Modern measurement, and particularly modeling, systems can generate enormous amounts of “data.” The critical question is: What information must be stored? There are two broad data categories: massive amounts of throwaway data associated with development of the science and archival data for resource assessment and water management. There will be a major overlap of both sets of data as we advance. If we choose to discard data, they are lost forever.
 Extensive use has been made of data-dense products from space-based platforms and “radar-rainfall” systems (e.g., NexRad in the United States). The huge advance in computer systems during the past 40 years has made it possible to store and access an almost infinite amount of information from such platforms.
 Professionals of every era have faced these issues of data storage and use. Before the invention and wide deployment of the third generation of digital computers, data storage and access were limited by “book storage” technology: How much data could be shown usefully on a printed page? This gave rise to records of daily average river flow rates, daily rainfall accumulation, and, as rain recording systems were improved (largely after World War II), the reporting and printing of hourly rainfall. Modern, effectively unlimited data storage has changed this: there is little if anything to be gained in hydrologic science by using data at such coarse scales as a “daily average” except for the slowest changing hydrologic states and fluxes. The typical data increment now is 5–15 min for both precipitation depth and river flow rates, estimated from stage-discharge relationships, but that should not be considered a constraint.
 What information needs to be stored? Consider two examples: high-resolution radar for precipitation “measurement” and acoustic Doppler river scans. Figure 2 shows information from the vertical scan using an experimental mobile Doppler on wheels (DOW) radar deployed at Goodwin Creek, Mississippi. The (X band) DOW, which operates at a 3 cm wavelength, provided radial velocity and reflectivity data at 50 m by 1° resolution in space with updates within tens of seconds in time. Full details of the deployment are given by Sieck et al.; design details of the DOW are provided by Wurman et al. Figure 2 (left) shows estimated radial velocity (the color bar is in m/s), and Figure 2 (right) shows radar reflectivity (dBZ). This measurement platform generates a huge amount of information that can be used to determine space-time patterns of storm precipitation.
Figure 3 shows the summary information from an ADP traverse across a major river of width 453 m. Figure 3 (top) shows the plan view track of the boat (white track). The green lines show the plan view local depth-averaged flow vector. The corresponding flow rate for this traverse was 1390 m3/s. The usual information that would be recorded for a stage-discharge relationship would be a single value: the measured flow rate and the corresponding water depth relative to datum (stage). Secondary field information typically would include velocity estimates at two depths (0.2 and 0.8 of the local depth) at 20–30 locations across the channel. Figure 3 (bottom) shows the velocity distribution determined from the ADP signals that was used to determine the integrated flow rate. The highest velocity shown on the color bar is 200 cm/s. The ADP provided 75 vertical profiles with varying vertical discretization. The derived flow field contains rich information that has value for a range of scientific questions. How much of it should be archived?
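The contrast between the sparse conventional record and the rich ADP flow field can be made concrete with the classical velocity-area (midsection) method, in which discharge is the sum of velocity times depth times width over verticals across the channel. The station spacing, depths, and velocities below are illustrative values, not data from the traverse in Figure 3.

```python
# Midsection (velocity-area) integration of discharge from velocity
# verticals: the conventional, sparse counterpart of an ADP traverse.
# All numbers are illustrative, not from the river described in the text.
stations   = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]   # distance from bank (m)
depths     = [0.0, 1.2, 1.8, 2.0, 1.5, 0.0]    # local depth (m)
velocities = [0.0, 0.6, 0.9, 1.0, 0.7, 0.0]    # depth-mean velocity (m/s)

def midsection_discharge(x, d, v):
    """Sum q_i = v_i * d_i * w_i over interior verticals, where the
    subsection width w_i spans halfway to each neighbouring vertical."""
    q = 0.0
    for i in range(1, len(x) - 1):
        width = (x[i + 1] - x[i - 1]) / 2.0
        q += v[i] * d[i] * width
    return q

print(f"Q = {midsection_discharge(stations, depths, velocities):.2f} m^3/s")
```

An ADP replaces these few verticals with dozens of finely discretized profiles (75 in the traverse described above), which is precisely why the archiving question arises: the single integrated discharge is a tiny summary of a much richer measured field.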
Figures 2 and 3 highlight the issues of what might need to be archived. Future instruments and, increasingly, models will likely produce many orders of magnitude greater data density. In experimental physics, for example, data from one major physics instrument, the Large Hadron Collider, will be about 15 PB per year (http://public.web.cern.ch/public/en/LHC/Computing-en.html). Distributing this huge amount of data to users necessitated developing an expensive and extensive cyberinfrastructure global network, called the “LHC Computing Grid.” Details of this comprehensive system are given by Brumfield.
8. Hydrologic Issues that Matter to Society: Data Needed for Risk-Based Decisions
 How valuable are existing hydrologic records for decision making and risk assessment? The value of hydrologic data for decision making has always been questioned because of the usually relatively short record. The fundamental question has always been: Are these data representative of the future?
 Consider the suitability of past records for providing guidance to the future for policy making, risk decision making, and resource allocation. Most statistical analyses used for hydrologic risk assessment (e.g., flood or drought severity) make use of weak stationarity assumptions, largely by requiring that the mean and variance of the data used are approximately constant in time. The human eye is one of the best statistical tools in existence: for any time series of data, if there does not appear to be a change in the central tendency, the mean is likely to be approximately stationary. Similarly, if the degree of scatter is relatively constant, then the variance is approximately stationary. There are numerous formal tools to make these assessments.
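A minimal numerical version of the "eyeball" screen described above is to compare the mean and variance of the two halves of a record; a formal test would then replace the raw comparison. The synthetic series below, with an assumed step change in the mean, is purely illustrative.

```python
import statistics

def split_summary(series):
    """Crude stationarity screen: summarize the mean and variance of the
    first and second halves of a record."""
    half = len(series) // 2
    a, b = series[:half], series[half:]
    return ((statistics.mean(a), statistics.variance(a)),
            (statistics.mean(b), statistics.variance(b)))

# Synthetic annual flow series with a step change in the mean (illustrative):
flows = [100, 95, 110, 105, 98, 102, 70, 68, 75, 72, 66, 71]
(mean1, var1), (mean2, var2) = split_summary(flows)
print(f"first-half mean {mean1:.1f}, second-half mean {mean2:.1f}")
```

A large difference between the half-record means, as here, flags a possible change in central tendency, but, as the tree ring example below in the text shows, such a "trend" in a short record may be only a fluctuation in a much longer stationary series.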
 I have been concerned with hydrologic variability for 42 years in the context of risk-based engineering design and water resources management. Is there cause for concern about possible lack of “statistical stationarity,” and are we sufficiently skilled to recognize any nonstationary features in the relatively short (typically fewer than 100 observations) time series available? I illustrate this difficulty with two examples. The first is part of a 655 year long time series of annual tree ring growth index for the time period 1740–1880 for limber pine at Dell, Montana. The second is the time series of annual inflow (1911–1999) to the water supply dams for Perth, Western Australia.
 Dendrochronologists and hydrologists often fit regressions of measured annual river flow volume to the index of annual tree ring growth as a way to extend short measured river flow records using much longer tree index series. Figure 4 shows the Dell, Montana, tree ring time series. The long-term (655 year) average for the index is 100. Most colleagues, when shown only the gray-shaded central portion of Figure 4, think there is an upward trend. When shown the full record, they realize that the central part of this record is a fluctuation in the longer record. Figure 4 emphasizes the importance of seeking an explanation of causality for any apparent trend that is contained in a relatively short, but typical, record. Long and reliably measured records are needed to make non-Bayesian-based risk assessments. Lettenmaier and Burges [1977, 1978] have shown the importance of examining time series for ubiquitous long-term “Hurst” type persistence [Hurst et al., 1965] and of calculating the correspondingly greatly reduced “equivalent independent sample size” required for estimating “standard errors” when testing for any apparent change in the mean in high Hurst coefficient data series.
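The idea of an "equivalent independent sample size" can be sketched with the classical formula for a lag-1 autocorrelated (AR(1)) series, n_eff = n(1 - ρ)/(1 + ρ); this is a simpler short-memory stand-in for the Hurst-persistence analysis cited above, and long-memory persistence reduces the effective sample size further than this formula suggests.

```python
def effective_sample_size(n, rho):
    """Equivalent independent sample size for estimating the mean of an
    AR(1) series with lag-1 autocorrelation rho. This classical formula
    understates the reduction for long-memory (Hurst-type) persistence."""
    return n * (1.0 - rho) / (1.0 + rho)

# Even modest persistence sharply shrinks the information content of a
# 100 year record for detecting a change in the mean:
for rho in (0.0, 0.3, 0.6):
    print(f"rho = {rho:.1f}: n_eff = {effective_sample_size(100, rho):.0f}")
```

With ρ = 0.6, a 100 year record carries roughly the information of 25 independent observations for the mean, which is why apparent trends in short persistent records must be treated with great caution.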
Figure 5 shows the combined annual inflow volume to the Perth, Western Australia (WA), water supply reservoirs. Most hydrologists and water resource engineers and planners would break the record into two parts: the period before about 1975 and the period from 1975 onward. The annual flow volume arithmetic averages for the periods 1911–1999 and 1975–1999 are 291 and 172 GL, respectively. Long-range plans for municipal water to be supplied from these reservoirs were for 200 GL/yr starting in the year 2000. That was obviously not feasible by 1995 but was far from obvious when the reservoirs were designed and built. The annual flow volumes are markedly different, with high skew up to about 1975, when there were many years of substantially above average inflow that would fill the reservoirs. After 1975 there were no large inflow years. I had available to me the record through 1994 when I was asked by the WA Environment Minister at a meeting in Perth in January 1996 whether there had been a climate change in the Perth region. I showed examples of river flow time series from the United States where long-term fluctuations occur but could not provide a definitive answer. A more complete question is: What caused this apparent change in annual river flow volumes? This can be addressed in a systematic hydroclimatology context; the work by Bates et al. provides a framework for assessing situations of this kind.
 I used these two illustrations of apparent nonstationarity because both are representative of data time series that are available to water resource decision makers who have to make economic risk-based decisions that matter ecologically and to the well-being of people. These time series are indicative of what has always faced decision makers involved in the allocation of capital for water-related infrastructure solutions to societal problems. We should be careful about making any pronouncements about apparent trends in our ubiquitous short records. In the Perth situation, the decision makers elected to construct a desalting plant, which has been operating since November 2006, to provide the increment of needed water and to ensure system reliability.
 How have past water resource decision makers designed for unknown future conditions? We have numerous successful designs that have been implemented from which we all benefit today. All successful designs share much in common in that cost-effective solutions have been put in place that have left open options for future adaptability. In the case of robust surface water reservoir supply systems, the amount to be extracted from rivers is substantially less than the estimated mean annual flow. Lettenmaier and Burges, Hoshi et al., and Hoshi and Burges, among others, have shown that when the volume of water to be extracted for use exceeds about 50% of the mean annual flow volume, the unknowable future forms of variability and fluctuations in apparent mean level reduce supply reliability considerably and can put society at great risk. The spread of agriculture to the drought-prone western United States provides another example of decision making under considerable uncertainty as to the nature and time and spatial extent of drought [see, e.g., Burges, 1979].
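The sensitivity of supply reliability to the fraction of mean annual flow extracted can be illustrated with a simple storage behavior simulation; this sketch is not the analysis of the cited papers, and the synthetic skewed inflows and the capacity chosen are assumptions for illustration only.

```python
import random

def supply_reliability(inflows, demand, capacity):
    """Fraction of years in which demand is fully met, using a simple
    behavior simulation: start full, add inflow, spill above capacity,
    then draw as much of the demand as storage allows."""
    storage, met = capacity, 0
    for q in inflows:
        storage = min(storage + q, capacity)
        draw = min(demand, storage)
        storage -= draw
        if draw >= demand:
            met += 1
    return met / len(inflows)

random.seed(1)
# Synthetic positively skewed annual inflows, mean ~100 units (illustrative):
inflows = [random.lognormvariate(4.5, 0.5) for _ in range(1000)]
for frac in (0.3, 0.5, 0.7):
    r = supply_reliability(inflows, demand=frac * 100, capacity=150)
    print(f"demand = {frac:.0%} of mean inflow: reliability = {r:.3f}")
```

Reliability degrades as the demanded fraction of mean flow rises, and a real inflow series with Hurst-type persistence and shifting mean levels, like the Perth record, degrades it further than this independent-year sketch suggests.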
 Significant flood protection works were designed years before there were sufficiently long records of flood hydrographs that could be analyzed by means of modern statistical methods. The designers did not know what the future would hold, but societal well-being depended on the robustness of the designs to accommodate extreme floods. Of the many exceptional examples of wise design decisions for accommodating riverine floods, one of the most imaginative and effective has been the flood protection infrastructure for the Great Miami River Basin in Ohio. These works were designed by the Morgan Engineering Company following the disastrous floods of 1913 and the large loss of life in the Dayton-Hamilton region. The protective works, which included levees, five major dams (retarding basins), and channel hydraulic improvements, were completed by 1922. I recommend that all hydrologists and water resources engineers study this work; extensive details of the design and construction of these works are provided by Morgan.
 A second example of adaptation and decision making under extreme uncertainty was the design of the dike defense system against the sea in the Netherlands. This is one of the most complex, multiobjective water- and land-related problems that has been tackled and with remarkable success. It is inspiring to read of the contributions of the 20th AGU Bowie medalist, Johannes Thijsse, to this most challenging and societally crucial situation [Thijsse, 1958].
 There are numerous examples of unwise decision making largely due to ignorance of the past as well as the future hydroclimatology, particularly concerning water and agriculture in semiarid areas [see, e.g., Burges, 1979], but we have many examples of success that should inspire us. I am optimistic that there are many skilled future policy makers, scientists, and engineers who will be as resourceful as those who have already shown us how to make wise use of limited water and land resource information to make good decisions.
9. Community Grand Challenge
 My “grand challenge,” which I have modified slightly from Burges, is to work toward connecting ocean, atmosphere, and hydrosphere interactions into a coherent approach that will yield hydrologically useful information at the hillslope, catchment, and continental river basin scales, for time scales up to on the order of a decade.
 The accomplishments of the giants of the profession upon whose work I have been able to build and the contemporaneous contributions of those whose work I have studied make me optimistic that the major hydroclimatological challenges that matter to society can be addressed. We need not be constrained by perceived current limits to “predictability”; Kumar provides a thoughtful assessment of all relevant issues. Farmers must make decisions on when or when not to plant their crops. In domains that are prone to long-term large-scale meteorological drought (multiple years to decades), such decisions have to be made multiple years ahead. Knowledge of potential spatial-temporal patterns of major floods, years ahead of occurrence, would be of enormous value in planning for capital intensive infrastructure development; infrastructure construction would be scheduled for predicted periods of lesser flood risk. We already know that flood frequency assessment can be conditioned in some parts of the world on the basis of sea surface temperature anomalies. What is needed is multiple year ahead forecasting of those anomalous states for such information to be useful in risk-based decision making.
Figure 6 shows a GOES infrared satellite image for 23 April 2001 at 2345 UT. Figure 7 shows the corresponding (6 h) mosaic of radar reflectivity images for 24 April 2001 at 0000 UT. The strong band of precipitation evident in the radar mosaic in Figure 7 stretches from the Gulf of Mexico into Canada. Imagine what would be possible if we could model the cloud pattern that is shown in Figure 6 and then obtain the resulting precipitation that is associated with the band of convection shown in Figure 7. Imagine if we could do this long in advance and use the precipitation patterns for hydrologic prediction. The same needs to be done for long-duration periods of low or little precipitation. Eric Wood challenged us in his presentation at the symposium to make extensive use of the multiple measurements that are available from space-based remote sensing platforms and from those that will soon be available, as well as anticipated massive computer power, for hydrologic prediction. This effort will require much closer collaboration with colleagues in atmospheric sciences and with those who develop and use global circulation models.
 I became aware of the long history of making errors in technological forecasts after reading work by Ayres almost 40 years ago. Despite this, I am optimistic that significant progress can be made on the long-term prediction problem, particularly in a probabilistic rather than a strict deterministic sense. For a sense of what is possible currently, and in the not too far future, I recommend viewing the “grand challenge” Webcast of the 2010 American Geophysical Union Bjerknes lecture [Palmer, 2010], available at http://www.agu.org/meetings/fm10/lectures/videos.php.
Palmer provides comprehensive coverage of current limitations of atmospheric and oceanic modeling and challenges the broad scientific community to advance the state of the science. I also think that developments in computer systems that led to the recent production of the IBM “Watson” machine show a promising way forward. I anticipate that such massively parallel machines would be used as “huge lookup tables” to provide input to global climate models to provide much more realistic atmospheric dynamics than are currently available.
10. Concluding Remarks
 It has been my privilege to know and work with many exceptional colleagues over a 40 year period; ours is a remarkable and inspiring community. World events that governed the deployment of capital and focused attention on major issues, such as societal concerns about flooding, droughts, and pollution abatement and cleanup, have influenced what research could be done and when. The developments in instruments, space-based remote sensing of planet Earth, and computing and information transfer have defined the timing of much of our work. One of the largest challenges all of us face is deciding what evolving technologies to embrace. Whenever possible, I tried to work on topics for which I thought we would need improved tools (including observations) about 10–20 years before they were needed. This permitted the time-consuming multiple iterations associated with “inventing” and providing guidance for best professional practice in hydrologic and water resource engineering. I have written about this previously [see, e.g., Burges, 1994].
 The community has accomplished much during a time period of considerable political and economic uncertainty, and I am sure there will be many great accomplishments in the future. I could not have anticipated the world of 2010 40 years ago; the socioeconomic, scientific, and technical developments have been remarkable and overwhelmingly positive. How could I be anything other than an optimist?
 I chose to cover time and place to try to put into perspective some of the work that led up to, and occurred during, my professional life. My community challenges are those of an optimist and ought to be viewed in the context of the first and third laws of science fiction writer, inventor, and futurist, the late Arthur C. Clarke. The first law [Clarke, 1962] states that when a distinguished but elderly scientist says that something is possible, he is almost certainly right; when he states that something is impossible, he is very probably wrong. The third law [Clarke, 1973] states that any sufficiently advanced technology is indistinguishable from magic.
 I have argued for the “possible” in all cases, Clarke's first law; community challenge 2 and the grand challenge are in accord with Clarke's third law.
 My wife, Sylvia, and I are enormously grateful to Dennis Lettenmaier for organizing the retirement symposium and for arranging for this special section of Water Resources Research. We thank the members of the organizing committee, all presenters, all who attended the retirement symposium, and those who have contributed to this special issue. Financial support was provided by the U.S. National Science Foundation Hydrologic Sciences Program through grant EAR1033769. Review comments on an earlier version of this invited perspective from four colleagues and Editor in Chief Praveen Kumar are deeply appreciated. University of Washington colleague Tim Larson provided helpful comments on the penultimate version of the revised invited perspective.