A near-term iterative forecasting system successfully predicts reservoir hydrodynamics and partitions uncertainty in real time

Freshwater ecosystems are experiencing greater variability due to human activities, necessitating new tools to anticipate future water quality. In response, we developed and deployed a real-time iterative water temperature forecasting system (FLARE – Forecasting Lake And Reservoir Ecosystems). FLARE is composed of: water quality and meteorology sensors that wirelessly stream data, a data assimilation algorithm that uses sensor observations to update predictions from a hydrodynamic model and calibrate model parameters, and an ensemble-based forecasting algorithm to generate forecasts that include uncertainty. Importantly, FLARE quantifies the contribution of different sources of uncertainty (driver data, initial conditions, model process, and parameters) to each daily forecast of water temperature at multiple depths. We applied FLARE to Falling Creek Reservoir (Vinton, Virginia, USA), a drinking water supply, during a 475-day period encompassing stratified and mixed thermal conditions. Aggregated across this period, the root mean squared error (RMSE) of daily forecasted water temperatures was 1.13°C at the reservoir's near-surface (1.0 m) for 7-day-ahead forecasts and 1.62°C for 16-day-ahead forecasts. The RMSE of forecasted water temperatures at the near-sediments (8.0 m) was 0.87°C for 7-day forecasts and 1.20°C for 16-day forecasts. FLARE successfully predicted the onset of fall turnover 4-14 days in advance in two sequential years. Uncertainty partitioning identified meteorology driver data as the dominant source of uncertainty in forecasts for most depths and thermal conditions, except for the near-sediments in summer, when model process uncertainty dominated. Overall, FLARE provides an open-source system for lake and reservoir water quality forecasting to improve real-time management.
Key Points:
- We created a real-time iterative lake water temperature forecasting system that uses sensors, data assimilation, and hydrodynamic modeling
- Our water quality forecasting system quantifies uncertainty in each daily forecast and is open-source
- 16-day future forecasted temperatures were within 1.4°C of observations over 16 months in a reservoir case study


Introduction
As a result of human activities, ecosystems around the globe are increasingly changing [Stocker et al., 2013; Ummenhofer and Meehl, 2017], making it challenging for resource managers to consistently provide vital ecosystem services [West et al., 2009]. In particular, managers of freshwater ecosystems, which have been more degraded than any other ecosystem on the planet [Millennium Ecosystem Assessment, 2005], are seeking new tools to anticipate future change and ensure clean water for drinking, fisheries, irrigation, industry, and recreation [Brookes et al., 2014].

In response to this need, near-term, real-time iterative ecological forecasting has emerged as a solution to provide stakeholders, managers, and policy-makers crucial information about future ecosystem conditions [Clark et al., 2001; Dietze et al., 2018; Luo et al., 2011]. Here, we define a real-time iterative forecast as a prediction of future ecosystem states with quantified uncertainty, generated from models that can be constantly updated with new data as they become available. Multiple statistical approaches, such as Bayesian state-space modeling, particle filters, and ensemble filters, are used to estimate and propagate the different sources of uncertainty that contribute to the total uncertainty in a forecast (e.g., driver data, initial conditions, model process, and parameters) [e.g., Clark et al., 2008; Dietze, 2017a; Ouellet-Proulx et al., 2017a]. Fully specifying all of these uncertainty sources provides both an assessment of confidence in a forecast for managers as they interpret the forecasts for decision-making, and valuable information for researchers about how to improve forecasts [Berthet et al., 2016; Dietze, 2017a; Morss et al., 2008].

Figure 1.
A simplified conceptual diagram describing the major data assimilation and forecasting steps in the daily iterative forecasting cycle for the FLARE (Forecasting Lake And Reservoir Ecosystems) forecasting system. The forecasting cycle has two key stages in which data are assimilated each day (left box) to produce the initial conditions for a 16-day forecast (right box). EnKF = Ensemble Kalman Filter; NOAA = National Oceanic and Atmospheric Administration.

The GLM requires daily meteorological driver data of downwelling shortwave radiation (W m⁻²), downwelling longwave radiation (W m⁻²), air temperature (℃), wind speed (m s⁻¹), relative humidity (%), and precipitation (m day⁻¹), as well as daily rates of water inflow (m³ day⁻¹) and inflow temperature (℃) and daily rates of outflow (m³ day⁻¹). The full GLM configuration with model driver files is provided in the Supporting Information.

Forecasted water temperatures at the modeled depths are generated by applying the process model with added process noise at each daily time step:

x_{t,f} = GLM(x_{t,f−1}, θ_t, D_{t,f}) + ε_{t,f} (eqn. 1)

where x_{t,f} is the vector of water temperatures at the modeled depths, θ_t are the model parameters, D_{t,f} are the driver data, and f indexes the day in the 16-day forecast horizon. The index t is the number of days since the initiation of forecasting and data assimilation (i.e., t = 0 is the day the automated forecasting system is deployed). The index t is different from f because t increases each day throughout a multi-year forecasting period, while f is reset to zero each day before initiating a new 16-day forecast. ε_{t,f} represents the contribution of process uncertainty [i.e., errors due to an imperfect representation of physical processes in a model; Dietze, 2017a] to total forecast uncertainty and is randomly drawn from a multivariate normal distribution with a mean of zero and a covariance matrix of Σ_t. The covariance matrix (Σ_t) represents the uncertainty at each depth (the diagonals of Σ_t) and the covariance of uncertainty between depths (the off-diagonals of Σ_t). Σ_t is calculated from the residuals between the ensemble mean and the observed temperatures before data are assimilated.
The subscript t on Σ_t signifies that the covariance matrix is determined by data assimilation of historical observations and is constant over each day f in the 16-day forecast horizon. Eqn. 1 is applied at the daily time step, and the process uncertainty is added each day of the forecast.

Each 16-day forecast requires initial conditions of the water temperature at each modeled depth (x_{t,f=0} = x_t, where t is the number of days since the automated forecasting system was deployed) and model parameters (θ_t) on day f = 0 of the forecast. The variance in x_t and θ_t across ensemble members when the forecast is initialized represents the contribution of initial conditions and parameters, respectively, to the total forecast uncertainty. To generate these initial conditions while also calibrating the three focal parameters, FLARE assimilates temperature sensor observations from the previous day into the GLM using an Ensemble Kalman Filter [EnKF; Evensen, 2003; Evensen, 2009]. We used data assimilation rather than simply specifying the initial conditions from the observations because the EnKF: 1) enables the generation of initial conditions when sensor data are not available (e.g., during maintenance), 2) mechanistically interpolates water temperature between sensor depths, 3) enables the automated calibration of the three highly sensitive model parameters, and 4) generates a historical data product of water temperature with spatial and temporal gap-filling. The EnKF method of data assimilation is well-suited for non-linear mechanistic models like the GLM and enables ensemble-based forecasts of conditions at a 16-day time horizon.

Each day that a forecast is generated, the meteorological driver data required by the GLM are downloaded from the 12:00:00 UTC NOAA GEFS forecast using the rNOMADS package (version 2.4.2) in R [Bowman, 2019].
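To make eqn. 1 concrete, the daily forecast step can be sketched as follows. FLARE itself is implemented in R with the GLM as the process model; this Python sketch substitutes a toy one-day model, uses illustrative sizes, and hypothetical helper names (`estimate_process_cov`, `forecast_step`), illustrating only how multivariate normal process noise with covariance Σ_t is added to every ensemble member each day:

```python
import numpy as np

rng = np.random.default_rng(42)

def estimate_process_cov(residuals):
    """Sigma_t: covariance of the residuals between the pre-assimilation
    ensemble mean and the observed temperatures (one row per day,
    one column per modeled depth)."""
    return np.cov(residuals, rowvar=False)

def forecast_step(states, advance_one_day, sigma_t):
    """One application of eqn. 1: advance every ensemble member one day
    with the process model, then add multivariate normal process noise."""
    advanced = np.array([advance_one_day(s) for s in states])
    noise = rng.multivariate_normal(np.zeros(states.shape[1]), sigma_t,
                                    size=len(states))
    return advanced + noise

# Toy stand-in for the GLM: uniform slight cooling at every depth.
toy_model = lambda temps: temps - 0.05

residuals = rng.normal(0.0, 0.5, size=(100, 10))  # 100 days x 10 depths
sigma_t = estimate_process_cov(residuals)
ensemble = np.full((210, 10), 20.0)               # illustrative ensemble
for _ in range(16):                               # 16-day forecast horizon
    ensemble = forecast_step(ensemble, toy_model, sigma_t)
```

Because noise is drawn anew each day, the ensemble spread grows with lead time, which is the mechanism by which process uncertainty accumulates through the 16-day horizon.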
Additionally, our forecasts include the uncertainty from statistically downscaling the gridded NOAA GEFS meteorological forecasts to local meteorological conditions. First, we downscaled the NOAA GEFS forecasts to a local site using a linear relationship between the NOAA GEFS forecast and observed meteorology. Next, we added random noise to each meteorological variable for each day of each NOAA GEFS ensemble member. This random noise was generated by a multivariate normal distribution that described the covariance in the relationship between the observed meteorology and NOAA GEFS forecasted meteorology for the meteorological variables required by the GLM. By adding random error from this multivariate normal distribution to each member of the 21-member NOAA GEFS ensemble, we generated an ensemble of meteorological drivers that represented both NOAA GEFS forecast and downscaling uncertainty (see Supporting Information C for more detail about the downscaling methods).

In addition to meteorological driver uncertainty, we include driver uncertainty from future inflow discharge and temperature of the primary tributary to the lake or reservoir. For simplicity, we assume here that there is only one primary inflow, but additional inflows could be easily added in applications to other waterbodies. The inflow is forecasted using a simple first-order autoregressive empirical model that predicts tomorrow's inflow discharge (q_{t+1}) as a function of today's flow (q_t), today's daily precipitation (p_t), and random error (ε_q) that is normally distributed with a mean of 0 and a standard deviation of σ_q:

q_{t+1} = β0 + β1 q_t + β2 p_t + ε_q (eqn. 2)

in which β0, β1, and β2 are regression coefficients.
σ_q is estimated using the residuals from the regression fit. Eqn. 2 is iteratively applied through the 16-day forecast horizon for each downscaled meteorological driver ensemble member.

Similar to inflow discharge, we forecast future inflow water temperature as a function of today's inflow water temperature (T_t), today's air temperature (A_t), and random error (ε_T) that is normally distributed with a mean of 0 and a standard deviation of σ_T:

T_{t+1} = β3 + β4 T_t + β5 A_t + ε_T (eqn. 3)

where β3, β4, and β5 are regression coefficients.
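Both empirical driver models (eqns. 2 and 3) follow the same fit-then-simulate pattern. A minimal Python sketch, with synthetic data and hypothetical names (`fit_ar_driver`, `step`; FLARE's actual implementation is in R):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_ar_driver(y_next, *predictors):
    """Fit a first-order autoregressive driver model by ordinary least
    squares and estimate the error s.d. from the residuals."""
    X = np.column_stack([np.ones_like(y_next)] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y_next, rcond=None)
    sigma = np.std(y_next - X @ beta)
    return beta, sigma

def step(beta, sigma, *today):
    """Stochastic one-day-ahead prediction, e.g. eqn. 2:
    q_{t+1} = b0 + b1*q_t + b2*p_t + eps, eps ~ N(0, sigma)."""
    return beta[0] + sum(b * x for b, x in zip(beta[1:], today)) + rng.normal(0, sigma)

# Synthetic training data for inflow discharge (q) and precipitation (p).
q = np.abs(rng.normal(0.05, 0.01, 200))
p = np.abs(rng.normal(0.002, 0.001, 200))
q_next = 0.9 * q + 5.0 * p + rng.normal(0, 0.001, 200)

beta, sigma = fit_ar_driver(q_next, q, p)
q_forecast = step(beta, sigma, q[-1], p[-1])  # one ensemble member's tomorrow
```

Drawing a fresh error term per ensemble member per day is what propagates inflow driver uncertainty through the 16-day horizon.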
σ_T is estimated using the residuals from the regression fit. After the inflow rate is calculated for each ensemble member, the outflow for that ensemble member is set equal to the inflow to maintain a constant water level.

2.3 Application of forecasting system

Site description
We applied and evaluated FLARE at Falling Creek Reservoir (FCR), a dimictic, eutrophic reservoir located in Vinton, Virginia, USA (37.30°N, 79.84°W). FCR is a shallow (maximum depth = 9.3 m, mean depth = 4 m), small (surface area = 0.119 km²) reservoir [Gerling et al., 2016]. The reservoir generally exhibits summer thermal stratification from May to October and is ice-covered for short durations during December to March [Carey, 2019]. FCR is primarily fed by one upstream tributary and was maintained at full pond throughout this study by the Western Virginia Water Authority (WVWA), who own and manage the reservoir as a drinking water supply [Gerling et al., 2014; Gerling et al., 2016].

We used high-frequency sensors to monitor FCR water temperature [Carey et al., 2020b] and meteorology [Carey et al., 2020a], and to measure the inflow discharge rate and water temperature of the primary tributary entering FCR through a weir [Carey et al., 2020c].

Each day, the iterative forecasting cycle consisted of: 1) retrieving the previous 24 hours of meteorological, inflow, and water temperature data from the reservoir sensor network, 2) advancing the model states (i.e., the water temperatures at the modeled depths) one day using the observed meteorology and inflows as drivers to the GLM model, 3) using the EnKF to assimilate observed water temperature and update states and parameters, and 4) initiating a 16-day forecast using the updated states and parameters as initial conditions and parameters, which started at 12:00:00 UTC of the current day. As a result of the daily data assimilation step in the forecasting cycle, parameter distributions and process uncertainty (Σ_t) were updated through each iteration of the forecasting cycle. Once a 16-day forecast was launched from the initial conditions set by data assimilation, the parameters did not change over its 16-day horizon.
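The four steps of the daily cycle can be summarized in a short orchestration sketch. This is a hypothetical Python outline (FLARE's actual implementation is in R); every function passed in is a placeholder for the corresponding component described above:

```python
# Hypothetical sketch of the daily FLARE cycle; all function names are
# placeholders, not the actual (R-based) FLARE implementation.
def run_daily_cycle(ensemble, params, get_observations, advance_glm,
                    enkf_update, run_16day_forecast):
    # 1) retrieve the previous 24 h of sensor data
    #    (meteorology, inflow, water temperature)
    met, inflow, temps = get_observations()
    # 2) advance each ensemble member one day using observed drivers
    ensemble = [advance_glm(member, met, inflow) for member in ensemble]
    # 3) assimilate observed temperatures: update states and parameters (EnKF)
    ensemble, params = enkf_update(ensemble, params, temps)
    # 4) launch a 16-day forecast from the updated states and parameters;
    #    parameters stay fixed over the forecast horizon
    forecast = run_16day_forecast(ensemble, params)
    return ensemble, params, forecast
```

Keeping the assimilation stage (steps 1-3) separate from forecast generation (step 4) mirrors the two boxes of Figure 1: each day's forecast inherits its initial conditions from that day's assimilation, but nothing inside the 16-day forecast feeds back into the assimilated states.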
We also used a persistence null model to forecast water temperatures from 28 August 2018 to 15 December 2019 as a baseline for evaluating forecast skill. We used a persistence null model rather than a climatology null model because we lacked the multiple years of daily water temperature observations that would be necessary to generate historical averages. Our null model assumed that water temperature did not change over the 16-day horizon:

x_{t,f} = x_{t,f−1} + ε_{t,f} (eqn. 4)

where ε_{t,f} represents the contribution of process uncertainty to total forecast uncertainty and is randomly drawn from a multivariate normal distribution with a mean of zero and a covariance matrix of Σ_t. We applied the same data assimilation methods used to determine the initial conditions (x_{t,f=0} = x_t) and the process uncertainty covariance matrix (Σ_t) in the forecast (Supporting Information A) to the persistence null forecast.
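A minimal sketch of this persistence null model (eqn. 4), with an illustrative ensemble size and a diagonal Σ_t chosen for the example: the ensemble mean stays flat while the spread grows with lead time.

```python
import numpy as np

rng = np.random.default_rng(7)

def persistence_forecast(x0, sigma_t, n_ensemble=210, horizon=16):
    """Persistence null forecast (eqn. 4): each day's prediction equals the
    previous day's state plus multivariate normal process noise.
    x0: initial temperatures at each depth (from data assimilation)."""
    n_depths = len(x0)
    ens = np.tile(x0, (n_ensemble, 1))
    out = []
    for _ in range(horizon):
        ens = ens + rng.multivariate_normal(np.zeros(n_depths), sigma_t,
                                            size=n_ensemble)
        out.append(ens.copy())
    return np.array(out)  # shape: (horizon, n_ensemble, n_depths)

# Two illustrative depths (e.g., 1.0 m and 8.0 m) with different noise levels.
null = persistence_forecast(np.array([25.0, 12.0]), np.diag([0.25, 0.04]))
```

Because the null shares its initial conditions and Σ_t with the real forecast, any skill difference is attributable to the GLM process model rather than to the noise model.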

Evaluation of forecasts
We used multiple metrics to evaluate forecast performance. First, we calculated the root mean squared error (RMSE) and the mean Continuous Ranked Probability Score (CRPS) for the forecasted water temperature (F) and the observed water temperature (y) at each day in the 16-day forecast horizon. CRPS is analogous to mean absolute error and is used with probability distributions produced by ensemble forecasts [Gneiting et al., 2005]. CRPS is defined as:

CRPS(F, y) = ∫ (F(u) − 1{u ≥ y})² du

where F(u) is the cumulative distribution function of the forecast and 1{u ≥ y} is a step function that takes the value 1 if u ≥ y and the value 0 otherwise [Gneiting et al., 2005]. We used the "verification" package in R [Gilleland, 2015] to calculate CRPS. Forecast skill was assessed by comparing the CRPS of the forecast (CRPS_forecast) and the null model (CRPS_null) using a skill score:

CRPS forecast skill = (CRPS_null − CRPS_forecast) / CRPS_null

Thus, we report three CRPS values: CRPS_forecast, CRPS_null, and CRPS forecast skill, a normalized value in which zero indicated a forecast that performed the same as the null, one indicated a perfect forecast that predicted observations with no error, and values less than zero indicated a forecast that performed worse than the null.

Second, bias was assessed using the absolute difference between the mean forecasted water temperature and the observed water temperature at each day in the 16-day forecast.

Third, the reliability of the confidence intervals was assessed by calculating the total number of observations contained in a specific confidence interval [i.e., a reliability diagram; Bröcker and Smith, 2007]. We considered a forecast to be well-calibrated if its 90% confidence intervals contained 90% of the observations over the 100 days of forecasts. The confidence interval would be over-confident if fewer than 90% of observations were in this interval and under-confident if more than 90% of observations were in this interval.
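We computed CRPS with the R "verification" package; as an illustration, the same quantity can be estimated directly from an ensemble using the standard estimator CRPS ≈ E|X − y| − ½·E|X − X′|, with the skill score following the normalization above. A Python sketch with illustrative ensembles (`crps_ensemble` and `crps_skill` are hypothetical names):

```python
import numpy as np

def crps_ensemble(ens, y):
    """Ensemble estimator of the CRPS: E|X - y| - 0.5 * E|X - X'|.
    Lower is better; 0 means a point forecast exactly at the observation."""
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - y))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

def crps_skill(crps_forecast, crps_null):
    """Skill relative to the null: 1 = perfect, 0 = same as the null,
    negative = worse than the null."""
    return (crps_null - crps_forecast) / crps_null

obs = 20.0
sharp = np.random.default_rng(0).normal(20.0, 0.5, 210)  # forecast ensemble
wide = np.random.default_rng(1).normal(21.5, 2.0, 210)   # null ensemble
skill = crps_skill(crps_ensemble(sharp, obs), crps_ensemble(wide, obs))
```

The sharp, unbiased ensemble earns a lower CRPS than the wide, biased one, so the skill score is positive but below one, the same pattern reported for the FLARE forecasts relative to the persistence null.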
We calculated the proportion of observations in the 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, and 90% confidence intervals for 1-day, 7-day, and 16-day time horizons.

In our evaluation, we averaged RMSE, CRPS, bias, and reliability of confidence intervals for each day within the 16-day forecast horizon over the 475 daily forecasts to calculate aggregated forecast performance. In addition, we classified each of the 475 days as either thermally-stratified or mixed to compare relative forecast performance between the two thermal regimes. We applied the turnover criterion of McClure et al. [2018]: days with water temperatures at the near-surface (1.0 m) and near-sediments (8.0 m) at 12:00:00 UTC (the beginning of each forecast's daily time step) that were <1℃ different were categorized as mixed, and days with water temperatures at 1.0 m and 8.0 m that were ≥1℃ different were categorized as thermally-stratified. Given that there was only intermittent ice cover on the reservoir (N = 11 days total during the forecasting period) and stable inverse thermal stratification never occurred [Carey, 2019], days with ice were classified as mixed.

Finally, we evaluated the ability of the forecasting system to predict the day that fall turnover was first observed at the reservoir in both years. In the forecasts prior to turnover, we calculated the proportion of ensemble members that predicted the 1.0 m and 8.0 m water temperatures to be <1℃ different on each day in the forecast. We expected this probability to increase for the day of turnover relative to other days as the onset of turnover approached.
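The turnover probability described above is simply the ensemble fraction meeting the mixed criterion. A sketch with illustrative numbers (not FCR data):

```python
import numpy as np

def turnover_probability(t_surface, t_deep, threshold=1.0):
    """Proportion of ensemble members predicting mixed conditions, i.e. a
    1.0 m vs. 8.0 m temperature difference below the threshold (1 deg C).
    t_surface, t_deep: forecasted temperatures for one day, one value per
    ensemble member."""
    diff = np.abs(np.asarray(t_surface) - np.asarray(t_deep))
    return float(np.mean(diff < threshold))

rng = np.random.default_rng(3)
surf = rng.normal(14.0, 1.0, 210)  # hypothetical 1.0 m ensemble
deep = rng.normal(13.6, 0.5, 210)  # hypothetical 8.0 m ensemble
p_mix = turnover_probability(surf, deep)
```

Applying this function to each day of each pre-turnover forecast yields the probability-of-turnover trajectories evaluated in the Results.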

Partitioning of uncertainty
We isolated the contribution of each uncertainty source by removing the contributions of all other uncertainty sources in a set of scenarios run for each day over the forecasting period.

Results

Forecasts had a lower CRPS forecast skill score of 0.14 at 8.0 m when averaged across all forecasts (Table 1). Forecast confidence intervals at 1.0 m depth were well-calibrated (exactly 90% of the observations were in the 90% confidence intervals; Table 1), whereas at 8.0 m fewer observations than expected were within the 90% interval, underestimating the uncertainty across all time horizons (Table 1; Figure 5D). Averaged across all depths, the forecast confidence intervals were smaller than expected (79-85% of observations fell within the 90% confidence intervals; Table 1).

Forecasting performance was generally similar between the thermally-stratified and mixed periods, with a few key exceptions. At 1.0 m, the CRPS forecast skill was consistently greater (up to 0.3 skill score) and bias was lower in stratified than mixed conditions, regardless of forecast time horizon (Table 1). In contrast, at 8.0 m, CRPS forecast skill was slightly greater at 1- and 7-day horizons, but there was no difference in forecast skill between mixed and stratified conditions at a 16-day horizon (Table 1). Finally, the confidence intervals were better calibrated at 8.0 m for the mixed period (85% of observations in the 90% confidence interval at the 16-day forecast horizon) than the stratified period (36% of observations in the 90% confidence interval; Table 1).

The forecasts successfully predicted the onset of fall turnover that occurred on 22 October 2018 up to 14 days in advance (Figure 6a). At 14 days prior to 22 October 2018, the predicted chance of turnover occurring on that day emerged above 50% for the first time (Figure 6a). Confidence that turnover would occur on 22 October increased eight days prior to turnover, when its predicted chance exceeded 75%.
By 19 October (three days prior to turnover), the predicted chance of turnover on 22 October had increased to 81%, 31% higher than any of the other potential days.

The forecasts successfully predicted the onset of fall turnover that occurred on 24 October 2019 up to four days in advance (Figure 6b). The predicted chance of turnover occurring on that day increased from 43% five days prior to turnover to 63% four days prior to turnover. The predicted chance of turnover occurring on 24 October remained above 50% as the day of turnover approached. However, confidence in the turnover date was lower in 2019 than in 2018, potentially because the reservoir temporarily re-stratified immediately after the initial onset of turnover.

Uncertainty partitioning
Forecast uncertainty varied over time and depth. In the thermally-stratified period, the total forecast uncertainty was approximately four times higher for forecasts at 1.0 m than 8.0 m, with striking differences in the relative importance of different sources to total forecast uncertainty (Figure 7). For both depths, model process uncertainty was the most important contributor to total uncertainty over the first three days in a 16-day forecast horizon during stratified conditions. After three days, the contribution of meteorological driver uncertainty dominated at 1.0 m depth (>50% of uncertainty), while process uncertainty remained the dominant source of uncertainty at 8.0 m depth throughout the 16-day forecast horizon (>95% of uncertainty). At the near-surface, the statistical downscaling of the meteorological driver data was a more important uncertainty source than the uncertainty due to the NOAA meteorological forecast itself throughout the 16-day horizon. Parameters, inflow driver data, and initial conditions were not important uncertainty sources at either depth. In contrast to the thermally-stratified period, the total uncertainty and the relative contributions of the uncertainty sources were similar across depths during the mixed period.

Discussion

Overall, FLARE was able to forecast water temperature on average within 1.4°C (RMSE) at all depths of the reservoir over a 16-day horizon during a 475-day period that encompassed both stratified and mixed thermal conditions (Table 1). Importantly, the forecasting system was able to both predict observed temperatures and identify the date of fall turnover at least four, and up to 14, days in advance.
In general, forecasting system performance was similar between stratified and mixed periods (Table 1).

We found that process uncertainty was the most important source of uncertainty early in the 16-day forecast, but that meteorological driver data uncertainty dominated by the end of the forecasting period for all but near-sediment depths during the thermally-stratified period (Figure 7). This finding is intuitive, as the epilimnion is more sensitive to meteorological forcing than the hypolimnion, and during mixed conditions, both surface and near-sediment depths exhibit similar dynamics. We note that process uncertainty did not decline through the forecast horizon; rather, the total uncertainty grew while the relative importance of process uncertainty decreased. By definition, process uncertainty is not attributed to an individual component of a model but to a model's overall inability to represent complex interactions occurring in the natural environment [Dietze, 2017a]. Improvements to the GLM model structure, such as improved sediment heating formulations, could decrease this uncertainty source; however, there is likely an irreducible component of process uncertainty inherent to using a 1-D model to simulate a 3-D waterbody.

The contribution of the NOAA ensemble forecast uncertainty to total forecast uncertainty was overall less than the contribution of uncertainty from downscaling the coarse-scale NOAA forecast to the local site. This downscaling was based on relationships between the NOAA GEFS forecasted variables and the meteorological data measured at the reservoir (see Supporting Information C, D). This highlights that future work should focus on applying more advanced meteorological downscaling methods, such as neural networks [Kumar et al., 2012], to build better relationships between the NOAA forecasts and the local meteorological station.
Our uncertainty results are comparable to those of Dietze [2017b], who partitioned the uncertainty in forest carbon flux forecasts over 16-day horizons and similarly found that NOAA ensemble driver data uncertainty dominated total forecast uncertainty at the end of the 16-day horizon, though that study did not include NOAA forecast downscaling uncertainty. Dietze [2017b] also found that process uncertainty was more important than the meteorological driver data uncertainty early in the 16-day forecast horizon. Overall, while that study and ours are just two examples of forecast uncertainty partitioning, they provide insight from very different ecosystems (reservoir water temperatures vs. forest carbon fluxes) and together suggest that meteorological driver data are an important contributor of uncertainty in near-term iterative forecasts. Moreover, our study indicates that future forecasting applications should investigate the relative roles of meteorological forecasts vs. their downscaling when quantifying total forecast uncertainty.

Throughout the forecasting period, inflow driver data, parameters, and initial conditions were minimal sources of uncertainty, regardless of forecast horizon. During the forecasting period, the reservoir exhibited a median water residence time of 145 days [Carey et al., 2020c; Gerling et al., 2014], so it was surprising that the inflow driver data did not contribute more to total uncertainty. However, the low residual error in our inflow forecast model (eqn. 2; R² = 0.98) suggests that our simple first-order autoregressive model was able to successfully capture variability in inflow dynamics throughout the forecasting period. Inflow driver data may contribute more uncertainty for waterbodies with multiple inflows and higher discharge than we observed in our case study.
Parameter uncertainty and initial conditions uncertainty were similarly minor, indicating that the data assimilation with state updating and automated parameter tuning built into the iterative forecasting cycle was able to constrain these sources of forecast uncertainty in our application.

FLARE was able to identify the onset of fall turnover ~14 days in advance in 2018 and approximately four days in advance in 2019. This determination was made by identifying the first day when the probability of turnover exceeded 50% (Figure 6). This difference in performance between years may be due to differing underlying hydrodynamic and meteorological drivers of fall turnover. In 2018, the surface waters of the reservoir had rapidly cooled (by ~8°C) in the 14 days prior to the onset of turnover in response to cold night air temperatures (down to 3°C) that continued after turnover. The day prior to turnover onset in 2018 coincided with high wind speeds, up to 9.8 m s⁻¹ [Carey et al., 2020a], which persisted after turnover and promoted continuous mixing once turnover started. In contrast, in 2019, the reservoir re-stratified later on the same day that the turnover criterion was first met due to warm conditions, making it extremely challenging for the forecasting system to correctly predict turnover on that particular day. Consequently, it is likely that using a more conservative fall turnover criterion (e.g., requiring the temperature difference between 1 and 8 m depth to be <1°C and maintained for 48 consecutive hours, following Woolway et al.) would improve turnover predictions.

Machine learning-based methods must be able to fully quantify forecast uncertainty to be comparable to our process model-based approach [see Daw et al., 2020].
Second, our parameter uncertainty may be underestimated because we only included three parameters in the EnKF data assimilation, though we note that our sensitivity analysis determined those three parameters to be the most important for water temperature predictions within the GLM (Supporting Information A). While the ability to estimate parameters using the EnKF is well-established, the EnKF method is not specifically designed to estimate parameter distributions like Bayesian Markov Chain Monte Carlo methods [Dietze, 2017a]. A current limitation to implementing a Bayesian Markov Chain Monte Carlo approach is the computation time needed to execute the GLM simulation within a daily iterative forecasting cycle. Future work that uses emulators of the GLM may be able to speed computation and allow for more robust estimation of a joint parameter distribution that represents both prior knowledge and observed data [e.g., Fer et al., 2018].

We note four limitations of our application of FLARE at FCR. First, our implementation included only one inflow, for which we used a simple first-order autoregressive model to forecast future inflow discharge and water temperature. Further work is needed to evaluate FLARE in waterbodies with more inflows. For those waterbodies, FLARE could be linked to a watershed hydrology model to mechanistically link the precipitation and temperature forecasts to inflow driver data. Second, we set outflow equal to inflow to maintain a constant water level based on observations, which does not occur in many lakes and reservoirs. Future development of FLARE could explicitly assimilate observations of water level to constrain forecasts in more dynamic systems.
Third, our data assimilation in FLARE used high-frequency sensors measuring water temperature at 1-meter depth intervals at 10-minute resolution, an observational capacity at FCR that may not be available in all waterbodies. Future evaluation of FLARE should focus on determining how forecast quality degrades as the vertical and temporal resolution of observations is reduced to reflect other common temperature monitoring approaches (e.g., weekly temperature depth profile measurements with a hand-held sensor, minute- to hourly-resolution observations at only one depth, remote sensing measurements of surface temperature, etc.). Finally, our application was limited to a period with minimal ice cover; further work is needed to determine FLARE performance for FCR during longer ice-covered periods and for waterbodies that experience ice for many consecutive months.

Overall, our study demonstrates the utility of the daily iterative forecast workflow (Figure 1) for forecasting lake and reservoir water temperatures with fully-specified uncertainties that can be applied to other waterbodies. FLARE builds the foundation for future water quality forecasting.