Transitioning Space Weather Models Into Operations: The Basic Building Blocks


  • Eduardo A. Araujo-Pradere

New and improved space weather models that provide real-time or near–real time operational awareness to the long list of customers that the NOAA Space Weather Prediction Center (SWPC) serves are critically needed. Recognizing this, SWPC recently established a Developmental Testbed Center (DTC [see Kumar, 2009]) at which models will be vetted for operational use. What characteristics should models have if they are to survive this transition?

The difficulties surrounding the implementation of real-time models are many. From the stability of the data inputs (which frequently come from third parties) to the heightened information technology (IT) security environment now present everywhere, scientists and developers confront a series of challenges in implementing their models. Quinn et al. [2009] noted that “the transition challenges are numerous and require ongoing interaction between model developers and users.” However, the 2006 Report of the Assessment Committee for the National Space Weather Program (NSWP) found that “there is an absence of suitable connection[s] for ‘academia-to-operations’ knowledge transfer and for the transition of research to operations in general.”

To facilitate the process, it is useful to establish a clear distinction between “academic” and “operational” real-time models, and to set specific requirements for transitioning academic models to operations. Academic models are assumed to be intended for the understanding of physical processes, cause-and-effect relationships, and the like. They are not designed for the production of customer-oriented products, even if they have been implemented as real-time models. By contrast, operational models offer products on which third parties rely to conduct their business. An operational product must be validated, fully documented, and reliable, and it must adhere to basic standards (among other requirements). Academic models need not fulfill these conditions—they serve the purposes of the scientists and can be adjusted at will to fit the researchers’ needs. However, by meeting some basic requirements, academic models could be better positioned to transition into operational models.

Identifying the Customer “Wish List”

Several models have already been or are in the process of being transitioned to operations by SWPC. These include the Storm Time Ionospheric Correction Model (STORM [see Araujo-Pradere et al., 2002]); the United States-Total Electron Content (US-TEC [see Fuller-Rowell et al., 2006]) model, a near–real time, data assimilation ionospheric representation; the ionospheric correction for the U.S. National Geodetic Survey's Online Positioning User Service (OPUS), based on the US-TEC output; and the combined Wang-Sheeley-Arge empirical coronal model plus a theoretical three-dimensional magnetohydrodynamic heliospheric model (WSA+ENLIL [M. Gehmeyr et al., manuscript in preparation, 2009]). Using the experiences gained through these transitions as a guide, a basic set of requirements for other transitions can be pieced together.

First and foremost, the operational model must fit customer needs. Because of the increasing impact of space weather on technology, the need for a particular product is often clearly expressed by the interested customers and, if published, could become the metric against which products can be designed and tested. Two examples of known customer needs, according to the NSWP assessment report, are the desire of the U.S. Air Force Weather Agency (AFWA) for a model that can forecast Dst out to 72 hours, and the wish of NASA's Space Radiation Analysis Group (SRAG) to be able to forecast “all clear” several days in advance of spacewalks with less than 5% probability of solar particle activity.

Once the prerequisites of operational models are known, groups with models close to those requirements can make the effort to adapt their models to the users’ needs, beginning the process of transitioning their models to operations.

Verification and Validation: The Well-Known “V&V”

Verification and validation (V&V), defined by the American Institute of Aeronautics and Astronautics (AIAA) [1998], represent key steps in ensuring that a model can be used reliably by a more general audience.

Verification is the process of determining whether the model's implementation and results accurately represent the developer's intended concept and purpose. An easy way to think of this is to ask, Are the equations solved correctly? Verification includes benchmarking, running a suite of tests, and comparing model outputs with first-principle simulations.
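As an illustration of how such a benchmarking test might look in practice, the minimal Python sketch below checks a deliberately simple stand-in for a model kernel (a forward-Euler integrator for exponential decay, chosen purely for illustration) against its analytic solution and confirms the expected first-order convergence; a real verification suite would apply the same pattern to the model's actual solvers.

```python
import numpy as np

def euler_decay(u0, k, dt, nsteps):
    """Forward-Euler integration of du/dt = -k*u (a stand-in for a model kernel)."""
    u = u0
    for _ in range(nsteps):
        u = u - k * u * dt
    return u

def test_against_analytic():
    """Verification check: the numerical result must converge to u0*exp(-k*t)."""
    u0, k, t = 1.0, 0.5, 2.0
    exact = u0 * np.exp(-k * t)
    errors = []
    for nsteps in (100, 200, 400):
        dt = t / nsteps
        errors.append(abs(euler_decay(u0, k, dt, nsteps) - exact))
    # A first-order scheme: halving dt should roughly halve the error.
    assert errors[1] < errors[0] and errors[2] < errors[1]
    print("convergence errors:", errors)

test_against_analytic()
```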

Validation is the process of determining the degree to which a model is an accurate representation of the real world. An easy way to think of this is to ask, Are the right equations solved? [Roache, 1998]. This process includes defining the concept being measured and comparing studies of modeled and observed time series to gain perspective on prediction efficiencies, distribution functions through time and space, model biases, outlying events, and model responses to outlying conditions. The general purpose of validation is to arrive at a complete characterization of the model's predictive behavior.
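One concrete way to quantify such model-observation comparisons is a skill score. The minimal sketch below computes prediction efficiency (1 minus the mean squared error normalized by the observed variance) and the mean bias for a modeled versus an observed time series; the choice of metric and the toy data are assumptions for illustration, not a prescribed validation standard.

```python
import numpy as np

def prediction_efficiency(observed, modeled):
    """PE = 1 - MSE/variance; 1 is perfect, 0 matches the observed mean, <0 is worse."""
    observed = np.asarray(observed, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    mse = np.mean((modeled - observed) ** 2)
    return 1.0 - mse / np.var(observed)

def bias(observed, modeled):
    """Mean systematic offset of the model from the observations."""
    return float(np.mean(np.asarray(modeled) - np.asarray(observed)))

# Toy example: a model that tracks the observations with a small constant offset.
obs = np.sin(np.linspace(0, 4 * np.pi, 200))
mod = obs + 0.1
print("PE   =", prediction_efficiency(obs, mod))
print("bias =", bias(obs, mod))
```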

The verification of the model is an initial and basic step likely already followed by those creating academic models. But for models aspiring to operational status, verification should be conducted several times during the transition process. The validation of the model should assure users that the model is able to capture, with some measured degree of success, the behavior of the system; thus, validation is preferably an ongoing process and could even be included as an additional product.

In general, the behavior and quality of model predictions change dramatically between quiet, steady conditions and fast-changing, disturbed conditions. Thus, the initial validation should cover all conditions. Further, if a model has been built on a database, the validation should be based on data not used for the model development, in accordance with the correct criteria for model validations [Pittock, 1978; Araujo-Pradere et al., 2003].
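To make this concrete, the sketch below holds out an independent segment of a hypothetical data archive (synthetic values standing in for real observations) and reports model error separately for quiet and disturbed conditions, here stratified by an assumed Kp threshold of 4; both the data and the threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical archive: a Kp value and one observed/modeled pair per record.
kp = rng.uniform(0, 9, size=1000)
obs = rng.normal(size=1000)
mod = obs + rng.normal(scale=0.2 + 0.1 * kp, size=1000)  # errors grow with activity

# Hold out an independent segment never used for model development.
holdout = slice(800, 1000)
kp_v, obs_v, mod_v = kp[holdout], obs[holdout], mod[holdout]

# Report skill separately for quiet and disturbed conditions.
for label, mask in (("quiet (Kp < 4)", kp_v < 4), ("disturbed (Kp >= 4)", kp_v >= 4)):
    rmse = np.sqrt(np.mean((mod_v[mask] - obs_v[mask]) ** 2))
    print(f"{label}: n = {mask.sum()}, RMSE = {rmse:.2f}")
```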

The V&V process guarantees the quality of the model predictions and offers the final users a level of confidence when using the model.

Failures and Events (Yes, They Happen)

Specification of failures and events is an outcome of V&V. This specification should become part of the documentation associated with the model that is being transitioned to operations.

Documenting failures includes recording basic information on the reason for the failure, the possibility of it happening again, any particular events that may have caused the failure, and how the problem was fixed. For instance, if the model's failure was due to a memory leak in the code and the source of the leak was found, it is unlikely that this problem would happen again. By contrast, if a TEC model crashes because of the presence of sharp gradients over certain areas, it is likely to happen again under similar conditions.
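A lightweight, structured failure log makes this documentation easy to keep current. The sketch below defines one possible record layout in Python; the field names and the sample entry are illustrative assumptions, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FailureRecord:
    """One entry in the failure log shipped with the model documentation."""
    timestamp: datetime
    description: str      # what happened
    root_cause: str       # why it happened, if known
    recurrence_risk: str  # "unlikely" (bug fixed) vs. "likely under similar conditions"
    triggering_event: str # geophysical or data conditions at the time
    resolution: str       # how the problem was fixed or mitigated

# Hypothetical entry mirroring the TEC-gradient case described above.
log = [
    FailureRecord(
        timestamp=datetime(2009, 3, 1, 12, 0, tzinfo=timezone.utc),
        description="Model crash over low-latitude grid cells",
        root_cause="Sharp TEC gradients exceeded solver step limits",
        recurrence_risk="likely under similar conditions",
        triggering_event="Strong equatorial ionization gradients",
        resolution="Added gradient limiter; affected output flagged",
    ),
]
print(log[0].recurrence_risk)
```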

Likewise, users of products from operational models would like to know how the model behaves when large or significant events occur. Would a super geomagnetic storm, for example, shut down the model? Would the output be saturated? Would the uncertainty of the model increase? Information on how a model responds to such events would provide users an idea of how to use the model, interpret outputs, and extrapolate from the output under different conditions.

Errors and Uncertainties

Error (a recognizable deficiency of the modeling process) and uncertainty (a potential deficiency of the modeling process due to a lack of knowledge) are critical terms to define within models [AIAA, 1998]. When errors are found in a model, all events and the conditions related to them should be documented and kept as part of the model documentation. Uncertainty, however, needs to be included directly as part of the model output. The representation of the uncertainty is not unique; for some models, error bars would be sufficient, while for others an uncertainty map is required. Information about a model's uncertainties and errors helps operators gain a better understanding of the model's behavior.

As an example, the calculation of the uncertainty in US-TEC includes an estimate of the TEC error from the data assimilation filter [Spencer et al., 2004] and information gained from validation studies [Araujo-Pradere et al., 2007; Minter et al., 2007], plus a percentage of the trend at each point of the grid. The uncertainty map is also available as a product and helps users to determine the applicability of the model under different conditions (Figure 1).
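As a rough illustration of how such uncertainty components might be combined on a grid, consider the sketch below; the quadrature combination, the trend fraction, and all numerical values are assumptions chosen for illustration and do not reproduce the published US-TEC algorithm.

```python
import numpy as np

def uncertainty_map(filter_error, validation_rms, trend, trend_fraction=0.1):
    """Combine per-cell uncertainty sources into one map (illustrative only):
    an assimilation-filter error estimate, an RMS term from validation
    studies, and a fraction of the recent trend at each grid point."""
    return np.sqrt(filter_error**2 + validation_rms**2) + trend_fraction * np.abs(trend)

# Toy 5x5 lat/lon grid with one fast-changing cell.
filter_err = np.full((5, 5), 2.0)   # TEC units (TECU)
trend = np.zeros((5, 5))
trend[2, 2] = 8.0
print(uncertainty_map(filter_err, 1.5, trend))
```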

Figure 1.

Products from the United States-Total Electron Content (US-TEC) model include vertical TEC (VTEC) and TEC uncertainty maps.


Flagging

Flagging refers to additional information incorporated in the product when its quality has been affected by particular conditions. To make decisions based on operational models, users should know the real status of the models and the implications for the quality of the output. Users and operators have to be able to evaluate the applicability of the output, and flagging is the most immediate tool they can use.

For instance, a flag should be used if a model had to interpolate output values because of missing values in inputs. Likewise, users should be warned if the output of a data assimilation model slowly drifts to the values of the background model because of input data gaps (see Figure 2). Under these conditions, the model is not nowcasting as it was designed to do; although the developers would expect this behavior in the absence of input data, the final output of the model is extrapolation rather than interpolation and thus may not be useful. Other cases could also require flagging, for example, if the cadence or density of the input data is lower than some predefined threshold, if the model is running near the limit conditions, or if the output has been saturated due to extreme conditions.
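A minimal sketch of what such flagging logic might look like appears below; the flag names, the data gap threshold, and the function signature are all assumptions chosen for illustration.

```python
from datetime import datetime, timedelta, timezone

def quality_flags(last_input_time, now, inputs_interpolated=False,
                  near_limits=False, saturated=False,
                  data_gap_limit=timedelta(minutes=30)):
    """Collect quality flags to attach to a product before delivery."""
    flags = []
    if inputs_interpolated:
        flags.append("INTERPOLATED_INPUTS")
    if now - last_input_time > data_gap_limit:
        # With no fresh data, an assimilation model drifts toward its background.
        flags.append("DRIFTING_TO_BACKGROUND")
    if near_limits:
        flags.append("NEAR_LIMIT_CONDITIONS")
    if saturated:
        flags.append("OUTPUT_SATURATED")
    return flags

now = datetime.now(timezone.utc)
print(quality_flags(now - timedelta(hours=1), now, inputs_interpolated=True))
```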

Figure 2.

The red “caution” at the top of the figure is an example of flagging in the products of the US-TEC model.

The Importance of Thorough and Understandable Documentation

The documentation of a model is the most direct communication tool between the scientists and the operators. Thus, all listed requirements could become part of the documentation of a product. But at least three descriptions should accompany a product to fulfill operational requirements.

First, documentation should include a description of inputs and outputs, as well as a primer on how to use the model. These are the most basic components of the documentation. Descriptions of the input and output data, of the data or graphical output formats, and of the data sources and other fundamental concepts should be supplemented by clear rules for the use of the data.
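One convenient complement to the written description is a machine-readable summary of inputs, outputs, and usage rules. The sketch below shows one possible layout as a Python dictionary; the field names, values, and the model itself are hypothetical, not an established standard.

```python
# Hypothetical machine-readable companion to the written documentation.
MODEL_DOC = {
    "name": "example_tec_model",   # illustrative model name
    "inputs": [
        {"name": "gps_slant_tec", "source": "ground GPS network (third party)",
         "cadence": "15 min", "format": "ASCII tables"},
    ],
    "outputs": [
        {"name": "vertical_tec_map", "units": "TECU",
         "grid": "1 deg lat x 1 deg lon", "format": "netCDF"},
        {"name": "uncertainty_map", "units": "TECU", "format": "netCDF"},
    ],
    "usage_rules": "Treat outputs older than 30 minutes as degraded.",
}
print(MODEL_DOC["outputs"][0]["name"])
```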

The documentation should also include a description of the physics and methods underlying the code. A model could be used by operators as a black box without a clear understanding of the physics, but to better understand model outputs, the operators should have at least a basic understanding of the physics principles and assumptions involved. A model that is purely theoretical should be described in detail, and links to published papers should be offered. The description of methods should clearly explain the numerical and algorithmic implementation on which the model is based. Sometimes this description is short and clear; an empirical model using polynomial fits to data binned by latitude/longitude should be easy to describe. But when the model uses more complex approaches (data assimilation, ensemble predictions, or physics models), the developers should provide a precise and lucid description of the method.
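For the simple empirical case mentioned above, the method description can be nearly as short as the code itself. The sketch below fits a polynomial in local time within each latitude bin of a synthetic data set; the binning, polynomial degree, and data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations: a diurnal variation plus a latitude-bin offset.
local_time = rng.uniform(0, 24, 500)
lat_bin = rng.integers(0, 3, 500)   # three latitude bins
value = (10 + 5 * np.sin(2 * np.pi * local_time / 24)
         + lat_bin + rng.normal(0, 0.5, 500))

# The whole "model": one polynomial fit per latitude bin.
coeffs = {b: np.polyfit(local_time[lat_bin == b], value[lat_bin == b], deg=4)
          for b in range(3)}

def predict(bin_index, lt):
    """Evaluate the fitted polynomial for one latitude bin at local time lt."""
    return np.polyval(coeffs[bin_index], lt)

print(predict(1, 12.0))
```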

Documentation must also contain the results of V&V, as previously discussed. Without this, the model has no credibility.

Other points could also be useful for operators and should be included in the documentation: IT requirements (software and hardware, security, etc.), setup methods, the history of different versions, the basic data flow scheme, how to operate in a fail-safe mode (including data backup and any redundant processing), licensing issues, and other factors operators might need to know to understand model outputs better.

Communication Between Developers and Operators

The efficacy of transitioning models to operations will be considerably increased when the operators are trained by the developers. Thus, efficient communication paths must be established between developers and users during the transition and, later, during the operation of the model.

To streamline the process and to maximize the use of the available resources, operational centers should embrace standardization of nomenclature, common displays, and even operational frameworks.

Successful Transitions Through Community Discussions

Identifying users’ needs, verifying and validating models, identifying the range of potential failures, characterizing associated uncertainties and errors, flagging peculiar behaviors of models, carefully documenting model physics in addition to describing the nuances of the points listed above, and effectively communicating all information to users are the basic building blocks for model developers to transition their products into operational use.

This list is by no means exhaustive; as discussions continue, the space weather operations community will refine and add to these requirements as it builds a more standardized rubric for transitioning models to operational use. Until then, this list can guide scientists seeking to shift their academic models into products that help protect commercial assets.


The author thanks M. Gehmeyr, R. Viereck, and H. Singer for comments and useful discussions.


  • Eduardo A. Araujo-Pradere is a space weather modeler at the University of Colorado's Cooperative Institute for Research in Environmental Sciences and NOAA's Space Weather Prediction Center.