Making Ecosystem Modeling Operational–A Novel Distributed Execution Framework to Systematically Explore Ecological Responses to Divergent Climate Trajectories

Marine Ecosystem Models (MEMs) are increasingly driven by Earth System Models (ESMs) to better understand marine ecosystem dynamics, and to analyze the effects of alternative management efforts for marine ecosystems under potential scenarios of climate change. However, policy and commercial activities typically occur on seasonal-to-decadal time scales, a time span widely studied in the global climate modeling community but for which skill assessments of MEMs are in their infancy. This is mostly due to technical hurdles that prevent the global MEM community from performing the large ensemble simulations needed for systematic skill assessments. Here, we developed a novel distributed execution framework, constructed of low-tech and freely available technologies, to enable the systematic execution and analysis of linked ESM/MEM prediction ensembles. We apply this framework on the seasonal-to-decadal time scale, and assess how retrospective forecast uncertainty in an ensemble of initialized decadal ESM predictions affects a mechanistic and spatiotemporally explicit global trophodynamic MEM. Our results indicate that ESM internal variability has a relatively low impact on MEM variability in comparison to the broad assumptions related to reconstructed fisheries. We also observe that the results are sensitive to ESM specificities. Our case study warrants further systematic explorations to disentangle the impacts of climate change, fisheries scenarios, MEM internal ecological hypotheses, and ESM variability. Most importantly, our case study demonstrates that a simple and free distributed execution framework has the potential to empower any modeling group with the fundamental capabilities to operationalize marine ecosystem modeling.


Introduction
Climate change and anthropogenic activities such as fishing are having far-reaching consequences for the functioning and stability of marine food webs and the ecosystem services that humanity relies on (e.g., Halpern et al., 2019; Pörtner et al., 2014). To better understand such impacts and their consequences for ocean life and ecosystem services, the global ocean science community increasingly deploys modeling systems that incorporate climate, ocean circulation, biochemistry and marine life under multiple stressors (e.g., Stock et al., 2023). Marine Ecosystem Models (MEMs) forced with Earth System Models (ESMs) are such modeling systems, where ESMs represent the fundamental physical, chemical and biological processes governing the evolution of the Earth system and the interactions within its major components (i.e., atmosphere, ocean, cryosphere and land), while MEMs represent mechanistically the non-linear dynamics between marine species and within marine food webs (Steenbeek et al., 2021; Tittensor et al., 2018).
At present, the scientific agenda on future climate change largely focuses on the decadal-to-century time scales (Coll et al., 2020; Lotze et al., 2019; Pörtner et al., 2022). Although this long-term perspective is valuable for strategic planning, the majority of immediate political and commercial decisions are made on shorter, seasonal-to-decadal time scales (Figure 1; Meehl et al., 2009; Payne et al., 2022).
At short time scales, from days up to a month, the predictive capacity of ocean and atmosphere models is firmly limited by the chaotic nature of the Earth system. Infinitesimal perturbations applied to a given set of initial conditions (the "initial value" problem; Collins, 2002; Meehl et al., 2009) lead to diverging trajectories within rather short temporal windows. On the other hand, at long time scales from decades to centuries, slow changes in external radiative forcings such as solar irradiance, aerosols and greenhouse gases (the "boundary condition" problem; Meehl et al., 2009, 2021) induce long-term trends that emerge over the chaotic variability. Since the pioneering studies of Smith et al. (2007), Keenlyside et al. (2008), and Pohlmann et al. (2009), the climate modeling community has been investing heavily in improving predictability on intermediate time scales, from months up to a decade, where climate models are sensitive to both initial value constraints and boundary conditions (Figure 1). This exercise has been underpinned by multi-model coordinated initiatives like the Decadal Climate Prediction Project (DCPP; Boer et al., 2016) and has recently been replicated with more complex ESMs (e.g., Ilyina et al., 2021; Li et al., 2016; Sospedra-Alfonso et al., 2021) capable of simulating, among other things, atmospheric chemistry and ocean biogeochemistry. These predictions rely on the initialization of the models with conditions that describe the best knowledge of a given observed state, a process that allows leveraging the predictability that arises from slow-paced internal variability processes, and are additionally driven with the historical and projected evolution of the main radiative forcing factors (e.g., solar irradiance, volcanic aerosols, concentrations of greenhouse gases) to capture the externally forced variability. The performance of these ESM-based predictions is evaluated by performing large sets of retrospective ensemble forecasts that are assessed in terms of their ability to reproduce the observed variability. These predictions are typically initialized every year, and contain several ensemble members that are run forward for up to 10 years (Figure 2; Boer et al., 2016).
A next logical step is to assess whether and how the predictive capacity of key ecosystem drivers within ESMs can significantly enhance the predictive skill of ecological models, a core scientific objective of the EU Horizon 2020 project TRIATLAS (Tropical and South Atlantic Climate-Based Marine Ecosystem Prediction for Sustainable Management). To date, the impacts of uncertainty related to the internal variability of ESMs on decadal time scales have been investigated for a handful of ecological hypotheses, with encouraging results. For example, Årthun et al. (2018), Thorson (2019), and Payne et al. (2022) demonstrated improved confidence in predicting habitat suitability and species distribution shifts related to changes in ocean temperatures. Park et al. (2019) demonstrated that inter-annual variations in fish catches can be anticipated from skillful ESM-based predictions of phytoplankton and SSTs.
However, to the best of our knowledge, a systematic quantification of how ESM variability on decadal scales could cascade through complete marine food webs, and an evaluation of whether this variability has the potential to significantly change MEM trajectories so as to improve the predictability of a MEM, have not yet been performed. Such an exercise would require systematically executing a MEM for potentially hundreds of retrospective forecasts, and analyzing large volumes of spatial-temporal model output. This would require computing power far beyond a single workstation, and although the concept of using the combined power of a network of computers to solve demanding computational tasks dates back at least to the 1970s (e.g., Farber, 1970; Jones & Schwans, 1979; Vouk, 2008), the MEM community is mostly unable to utilize distributed computing power due to compounding challenges. Inherent limitations related to their computational complexity and structure, with long run times needed to represent non-linear processes at different temporal and spatial scales that cascade through food webs, make MEMs incompatible with common high-performance computing technologies and scientific software execution infrastructures (Steenbeek et al., 2021). Scientific workflow management systems (Curcin et al., 2010; Wang et al., 2008), code execution frameworks (e.g., Ludescher et al., 2013), and commercial cloud computing solutions tend to require that hosted applications execute cleanly, safely and efficiently, abiding by strict guidelines regarding programming languages and code architecture, execution efficiency, resource use, and scalability (e.g., Rimal et al., 2011). As MEMs are mostly developed on limited academic budgets with little involvement of IT staff, re-coding a MEM to match such requirements is too costly, and perhaps even undesirable in order not to get locked into proprietary technological execution frameworks (Steenbeek et al., 2021). On the other hand, distributed computing via networked computers, virtual machines and virtualization technologies such as Kubernetes (Jeffery et al., 2021), and workload managers such as SLURM (Yoo et al., 2003), could certainly support the systematic execution of ESM/MEM complexes in their original form, but these require dedicated funding and technical support to operate and maintain. Whereas a few fortunate modellers may have access to institutional distributed computing environments and the dedicated staff to assist in their operation, the majority of the MEM community is left without practical solutions to systematically and comprehensively assess their models (Steenbeek et al., 2021).
The global MEM community needs a simple, generic and open-access framework that uses low-tech and free software to support the systematic mass-execution and mass-analysis of data- and computationally demanding scientific tools. Such a framework must allow the execution of software written in any language, as MEMs have been implemented on a broad range of platforms such as .NET, C, Fortran, Matlab, Python and R (e.g., Audzijonyte et al., 2019; Pal et al., 2020; Steenbeek et al., 2016). Such a framework must also support ecosystem modellers in deploying their workflows and toolkits in their original form. Ecosystem modeling is a complex field that combines understanding of marine biology and ecology, biochemistry, hydrology, fisheries dynamics and socio-economics, and that relies on the operation of a wide range of complex software tools to process, generate and analyze data. Thus, rather than requiring that analytical processes be translated into a common notation, a scientific framework must acknowledge this diversity in software tools and support the execution of scientific workflows as they are. Last, to facilitate ease of use, the framework must seamlessly scale up desktop workflows across available hardware.
With these constraints met, such a framework would form the scaffolding for executing computationally demanding applications such as MEM validation, calibration and uncertainty assessments (Figure 3).
Here we present a prototype MEM run framework that we constructed to facilitate the systematic execution of MEMs. We apply this prototype framework to systematically run the seasonal-to-decadal retrospective predictions obtained from two different ESMs, EC-Earth3-CC and NorCPM1, through the mechanistic, spatiotemporally explicit trophodynamic MEM EcoOcean. Through this, we demonstrate the feasibility of the approach as an important step toward making marine ecosystem modeling operational.

Materials and Methods
Here we describe the main design considerations in developing the framework, and we present a case study to demonstrate that the framework can be used to systematically mass-execute MEMs. We then perform an indicative analysis to quantify whether ESM uncertainty has the potential to significantly affect the output of a complex and mechanistic global MEM, examining relevant functional groups within the food web in selected subregions of the global ocean. We outline the ESMs, the MEM and the runtime environment that we used, the application of the framework to perform the simulations, and a cursory analysis of modeling results.

Framework
The aim of the prototype MEM multi-run framework is to demonstrate that computationally heavy mechanistic and spatiotemporal MEMs can be systematically executed and analyzed. Following the recommendations of Steenbeek et al. (2021), one should be able to operate the framework with minimal reliance on technical expertise, funding and specialized hardware, to facilitate global uptake. Thus, instead of adopting an existing workflow management system, we opted to develop a framework from the ground up that focuses solely on the needed functionality, without any additional complexity or restrictions related to funding and intellectual property.
The conceptual structure of the prototype MEM multi-run framework, henceforth referred to as "the framework," is outlined in Figure 4.
Four independent and loosely connected layers (hardware, framework, application and shared storage) interplay as follows:

1. Hardware: The hardware layer can consist of any computing hardware able to run a particular MEM.
2. Framework: The framework layer handles the execution of scientific work across a computing network, and consists of the following components:
• A workload, a text file that describes the scientific work the framework needs to execute. A workload consists of a number of independent computational experiments (Jobs), each in turn consisting of one or more executions of specific modeling scripts (Tasks). The workload also states which wrap-up job should be executed if the workload execution succeeds or fails; wrap-up jobs allow the scientific application to decide on next execution steps, such as dispatching a new workload. For a conceptual example of what a workload could look like, refer to Supporting Information S2, Text S1, inset 1.
• A server, a small piece of software that maintains an active connection to available clients, with whom it can exchange information. Jobs in a workload are dispatched to clients.
• One or more clients, where jobs are executed. Clients maintain an active connection to the central server and exchange information with it.
• Clients and the server can exchange information through a range of communication protocols built into the framework, each catering to different usage scenarios but requiring varying levels of IT expertise to deploy.
• A server-side work dispatcher handles and monitors workload execution: the dispatcher sends Jobs to clients, tracks their execution based on feedback from the clients, keeps track of the overall status of the workload execution and, upon completion, orders the server-side execution of the wrap-up job.
• A client-side job and task runner handles the sequential execution of the tasks within a job. Task execution involves starting scientific software, monitoring its progress, and waiting for its termination (or actively terminating scientific software that has become unresponsive). Job and task execution status updates are sent back to the server. Security measures are in place to ensure that the framework only operates on pre-authorized folders and executables.
3. Application: The application layer consists of the scientific software that has been made available to the framework. Any software can be included as long as it can be parameterized and executed via a command line.
4. Shared storage: The shared storage layer makes sure that server-side input data is made available to client processes, and that scientific output generated at the client side is collated on the server. The prototype framework does not contain facilities to synchronize data, as there are plenty of viable solutions in the form of shared (network) storage, cloud storage providers, and file-sharing services.

Figure 4 (caption): Set up as a server-client structure, the framework (2) dispatches the jobs that are defined within a scientific workload across available hardware (1). The framework loosely interacts with scientific software to execute the tasks within a job (3) and relies on available shared storage solutions (4) to distribute input data to clients and collate resulting output on the server. The "eye" icon reflects the loose interactions where the framework checks upon the state of external software and data without any form of technical integration or dependency. When a workload has been processed, scientific software is notified, and can dispatch a new scientific workload if desired.
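To make the Job/Task structure concrete, the following Python sketch defines and parses a hypothetical workload file. The field names, job names and task names are invented for illustration; the framework's actual workload syntax is described in Supporting Information S2, Text S1.

```python
# Hypothetical workload file: a list of independent Jobs, each with
# one or more Tasks, plus wrap-up actions. Illustrative syntax only.
WORKLOAD = """\
job ecoocean_1980_m1
  task run_ecoocean --esm EC-Earth3-CC --member r6i1p1f1 --start 1980
  task condense_output --run ecoocean_1980_m1
job ecoocean_1980_m2
  task run_ecoocean --esm EC-Earth3-CC --member r7i1p1f1 --start 1980
on_success collate_results
on_failure notify_operator
"""

def parse_workload(text):
    """Parse the hypothetical workload text into Jobs (with Tasks)
    and wrap-up job names for success/failure."""
    jobs, wrapup, current = [], {}, None
    for raw in text.splitlines():
        parts = raw.split()
        if not parts:
            continue
        if parts[0] == "job":
            current = {"name": parts[1], "tasks": []}
            jobs.append(current)
        elif parts[0] == "task":
            current["tasks"].append(parts[1:])
        elif parts[0] in ("on_success", "on_failure"):
            wrapup[parts[0]] = parts[1]
    return jobs, wrapup
```

Parsing the example above yields two independent Jobs (one with two Tasks) and the two wrap-up job names, mirroring the structure described in the text.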
Specific considerations: For ease of deployment and to demonstrate versatility, the prototype framework and any scientific application deployed across it are kept fully independent. Server-side scientific applications place workload text files in a predetermined location for the work dispatcher to find. Upon workload completion, server-side wrap-up jobs can be used to activate the scientific applications once again to analyze execution results, and to dispatch a follow-up workload if desired.
Running jobs and tasks: The framework was designed to launch two types of scientific applications.

The first category comprises stand-alone executables whose runtime behavior can be controlled through the command line and that may execute a programmed script. The framework launches the executable and monitors its progress while capturing standard output and error information (Ritchie, 1984) to aid troubleshooting, and process exit codes (Maleki, 2022) to determine whether a task succeeded or failed. A stand-alone executable that becomes unresponsive can be terminated after a specified time-out. When the stand-alone executable terminates, an exit code of zero indicates that the execution succeeded.
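The behavior described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the framework's actual (.NET) task runner; the function name and default time-out are arbitrary choices.

```python
import subprocess

def run_task(cmd, timeout_s=3600, log=print):
    """Launch a stand-alone executable, capture its output streams,
    and report success based on its exit code (zero = success).
    Unresponsive processes are terminated after `timeout_s` seconds."""
    try:
        proc = subprocess.run(
            cmd, capture_output=True, text=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        log(f"task timed out: {cmd}")
        return False
    if proc.stdout:
        log(proc.stdout)   # standard output, kept for troubleshooting
    if proc.stderr:
        log(proc.stderr)   # standard error, kept for troubleshooting
    return proc.returncode == 0
```

A client-side runner of this shape is all that is needed to drive any command-line-controllable scientific program without technical integration.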
The second category includes internal executions, that is, code that resides within the execution client, via a software engineering mechanism known as "runtime reflection" (Redondo et al., 2008; Schmidt et al., 2000, p. 134). Instead of indicating a physically separate executable, a task alias refers to the name of a specifically formed and recognizable piece of code that resides in the client code base and that implements a task. This code is dynamically looked up, executed and monitored for completion, while standard output and error information (Ritchie, 1984) and the exit code are collected by the framework. In-client code is intended to facilitate running the framework in environments where clients cannot launch separate executables, such as HPC clusters.
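A conceptual Python analogue of this alias-to-code lookup is shown below; the class, method prefix and task names are hypothetical, and Python's `getattr` stands in for .NET runtime reflection.

```python
class TaskError(Exception):
    """Raised when a task alias does not map to any in-client code."""

class Client:
    # An in-client task: recognizable by its "task_" name prefix
    # (a hypothetical convention for this sketch).
    def task_echo(self, message):
        return f"echo: {message}"

    def run_internal(self, alias, *args):
        """Dynamically look up and execute the in-client code that
        implements the task named by `alias`."""
        handler = getattr(self, f"task_{alias}", None)
        if handler is None:
            raise TaskError(f"no in-client task named '{alias}'")
        return handler(*args)
```

The lookup fails loudly for unknown aliases, which the framework would report back to the server as a failed task.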
Information exchange: Because the framework can be deployed over operating systems (OSes) that may be configured differently, scientific software deployed over the framework should use consistent, locale-independent formatting of numbers and date and time fields, and handle OS specifics such as text file line endings consistently.
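For example, a client could emit records in a locale-independent form along these lines. This is an illustrative sketch; the specific formats shown (a "." decimal separator, ISO 8601 timestamps, LF line endings) are conventional choices, not ones prescribed by the framework.

```python
from datetime import datetime

def portable_record(value, when):
    """Format a number and a timestamp so that any client, regardless
    of its OS locale, writes and parses them identically: '.' as the
    decimal separator, an ISO 8601 timestamp, and an LF line ending."""
    return "{:.6f}\t{}\n".format(value, when.strftime("%Y-%m-%dT%H:%M:%SZ"))
```

Python's `str.format` is locale-independent by default, which is exactly the property needed here; software that relies on locale-aware formatting would need equivalent invariant settings.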
Extensibility: For the particular case study outlined here, we implemented the framework in Microsoft Visual Basic .NET, compiled to .NET 6.0, which produces executables that can be installed and natively executed on Windows and Linux OSes. To allow the framework to be customized to future needs and its functioning to be changed and improved, the framework source code is organized as an open-source Application Programming Interface that is open to modifications and extensions.

Installation and deployment:
In order to use the framework, operators will need to prepare target computers with the framework software, cloud storage provider software, and the programs needed for the execution of a scientific workload. This is an unavoidable and possibly challenging task, but for use cases such as the one we present here, where regular desktop computers are interconnected via cloud storage providers, this task should be no more challenging than configuring a desktop computer for regular use.
For additional framework design considerations refer to Supporting Information S2, Text S1.

Earth System Models
For this case study, two contrasting ESMs participating in TRIATLAS, EC-Earth3-CC and NorCPM1, delivered estimates of phytoplankton biomass and sea water temperature for the years 1950-2015. ESM variable names and units were standardized to the Climate Model Output Rewriter (CMOR) standard 3.3 (Nadeau et al., 2018). Both models delivered a single continuous simulation reconstructing the evolution of the global biophysical system, and an ensemble of yearly initialized retrospective predictions characterized by three arbitrarily selected members of their full ensemble. We used three members as a good trade-off to reasonably sample the ESM forecast uncertainty while limiting the computational burden of the MEM simulations. Here we provide a brief technical summary of the two contrasting ESMs and their contribution to the case study. Considering two ESMs with different physical and biogeochemical ocean components, and whose decadal predictions are initialized in different manners, offered the ability to explore the sensitivity of the MEM predictions to the uncertainties in the state variables used as boundary conditions.
EC-Earth3-CC (Döscher et al., 2022) is the ESM version of the global climate model EC-Earth that includes a description of the carbon cycle at its standard resolution. Its atmospheric component is the Integrated Forecast System from the European Centre for Medium-Range Weather Forecasts (ECMWF), run with a T255 horizontal resolution and 91 vertical levels. The ocean component is NEMO3.6 (Madec & the NEMO team, 2023), which includes the sea ice model LIM3 (Rousset et al., 2015) and the ocean biogeochemistry model PISCES (Aumont et al., 2015) integrated in the code. NEMO3.6 is run with an ORCA1 horizontal grid (i.e., nominal one-degree horizontal resolution) and 75 vertical levels. Dynamical vegetation, land use, and terrestrial biogeochemistry are provided by LPJ-GUESS (B. Smith et al., 2014). The OASIS3-MCT library (A. Craig et al., 2017) is used for the coupling of most of the model's components. More detailed information on EC-Earth3-CC and its different components can be found in Döscher et al. (2022).
The predictions from EC-Earth3-CC were performed following the experimental protocol for the DCPP-A experiments (Boer et al., 2016), with start dates on every 1st of November in the period 1980 to 2019. Start dates prior to 1980 were not included, as the quality of the atmospheric/oceanic reanalysis used to initialize the ESM cannot be properly validated for the pre-satellite era (i.e., before 1980) due to the lack of widespread biogeochemical observations. A total of 15 members were produced for each start date, with a forecast length of 7 years instead of 10 to save computational resources.
The initialization protocol is a precursor of the methodology applied for the climate predictions of Bilbao et al. (2021). The ocean physical and biogeochemical conditions come from a reconstruction performed with the ocean component of EC-Earth3-CC (hereafter referred to as RECON), forced at the surface with an atmospheric reanalysis. In this reconstruction, observations of temperature and salinity are assimilated at the surface by adding heat and freshwater fluxes to the energy and salinity conservation equations. At the same time, the interior of the ocean is nudged toward a reference reanalysis product for both temperature and salinity. It is important to note that no observations of ocean biogeochemistry or sea ice are assimilated, such that these fields are left free to evolve in response to ocean physics. More details about the EC-Earth3-CC initialization procedure, as well as about the reference observation products used, can be found in Supporting Information S1, Text S2.
For this application, EC-Earth3-CC delivered monthly vertically integrated large and small phytoplankton carbon concentrations (lphyc and sphyc), and mean potential sea water temperatures (thetao) for the top 150 m, the entire water column, and the bottom. These variables were delivered for a 1980-2015 continuous historical run (RECON) and for an ensemble of three 7-year retrospective predictions (i.e., the r6i1p1f1, r7i1p1f1 and r8i1p1f1 DCPP-A members) with yearly start dates for the whole period 1980-2013.
NorCPM1, short for the Norwegian Climate Prediction Model version 1 (Bethke et al., 2021), is based on the Norwegian ESM version 1 (NorESM1; Bentsen et al., 2013; Iversen et al., 2013), which is in turn based on the Community Climate System Model version 4 (CCSM4; Gent et al., 2010; Vertenstein et al., 2010) after important modifications. Its ocean component uses a standard horizontal grid (gx1v6) with 53 layers in an isopycnic vertical coordinate, and includes prognostic biogeochemical cycling in the form of the HAMburg Ocean Carbon Cycle model (HAMOCC; Maier-Reimer, 1993; Maier-Reimer et al., 2005) adapted to this isopycnic ocean model framework (Tjiputra et al., 2010). The atmospheric component is the Oslo version of the Community Atmosphere Model (CAM4-OSLO; Kirkevåg et al., 2013), which has specialized chemistry-aerosol-cloud-radiation interaction schemes, a two-degree horizontal resolution, and 26 vertical levels with a hybrid sigma-pressure coordinate. The land component (same grid as the atmospheric component) and the sea ice component (same grid as the ocean component) are basically the same as in CCSM4, except for a scheme for dust deposition on snow/sea ice. The overarching execution control of the coupled system and the exchange of information between model components are handled by the CCSM4 coupler CPL7 (A. P. Craig et al., 2012). Detailed descriptions of the NorESM components and its biogeochemical ocean module can be found in Bentsen et al. (2013) and Tjiputra et al. (2010), respectively.
NorCPM1's DCPP-A simulations have start dates on every 15th of October in the period 1960 to 2018. A total of 10 members were produced for each start date, with a forecast length of 10 years (Bethke et al., 2021). Each member of these hindcast experiments ("hindcast-i2") is initialized from the 15 October states of the first 10 members of a data assimilation (DA) simulation ("assim-i2"), which uses oceanic observations to update the ocean and sea ice components. This DA simulation uses a 1950-2010 Sea Surface Temperature (SST) reference climatology for computing anomalies, replacing the climatology of the observations with the model climatology calculated from NorCPM1's 30-member no-assimilation historical experiment, and additionally updates the sea ice state via strongly coupled DA of the observations (Bethke et al., 2021). The DA scheme updates all ocean physical state variables but not the biogeochemical state variables. However, Fransner et al. (2020) showed that the initialization has no important effect on the predictability of ocean biogeochemistry beyond lead year 1, but also that assimilating SST can potentially constrain near-surface primary production and hence the biogeochemical variability.
For this application, NorCPM1 delivered monthly mean integrated phytoplankton carbon (phyc) and mean potential sea water temperatures (thetao) for the top 150 m, the entire water column, and the bottom. These variables were delivered for a 1980-2015 continuous historical run (HIST) and for an ensemble of three 10-year retrospective predictions initialized every year in 1980-2008, corresponding to members r1i2p1f1, r5i2p1f1 and r10i2p1f1 of the DCPP-A ensemble.

Marine Ecosystem Model
The MEM deployed in this case study is EcoOcean, a mechanistic, spatiotemporal ecosystem modeling complex of the global ocean that includes food-web dynamics from primary producers to top predators under the influence of anthropogenic activities and climate change. EcoOcean has at its core the Ecopath with Ecosim modeling approach (Christensen & Walters, 2004), where the spatial-temporal module Ecospace has been heavily modified to represent spatial heterogeneity in fishing and the behavior, growth and movement of functional groups across the world's oceans (Christensen et al., 2015; Coll et al., 2020).
EcoOcean was parameterized and calibrated as described in Coll et al. (2020), as used for the Inter-Sectoral Impact Model Intercomparison Project simulation round 2b to explore how projected climate change might affect future (2016-2100) ocean ecosystems (Tittensor et al., 2021). The EcoOcean MEM operates on a spatial grid of one decimal degree at monthly time steps, with a food web that consists of 52 interconnected functional groups. The functional groups are represented spatially and collectively account for approximately 3,400 underpinning species. Functional groups disperse, gravitating toward cells with more suitable feeding conditions and lower predation risk, where feeding suitability is determined by the Ecospace habitat foraging capacity model (Christensen et al., 2014), modified by cell-specific responses, temperature-adjusted metabolic rates, and species' native ranges that constrain the initial distribution of functional groups to observed occurrences (Coll et al., 2020). Fishing is driven by historical effort for 14 fleets (Rousseau et al., 2019). Historical fishing effort is introduced as a total for each of the 66 Large Marine Ecosystems, within each of which fishing effort is distributed via a simple gravity model that considers the distributions and market value of targeted functional groups versus the cost of fishing in any given location that is not closed to fishing (Christensen et al., 2015).
Relevant to this case study is how EcoOcean utilizes ESM output to drive its global ecosystem dynamics. EcoOcean contains three functional groups of phytoplankton: large, small and diazotrophs, which alongside benthic producers and bacteria act as the nutritional foundation of the food web. When connected to global ESMs, EcoOcean typically overwrites its spatially distributed phytoplankton biomasses with ESM-delivered phytoplankton biomass for matching time steps (Coll et al., 2020; Steenbeek et al., 2013; Tittensor et al., 2018), scaled to the 1950 biomass estimates to which EcoOcean was calibrated. Furthermore, EcoOcean v2 (Coll et al., 2020) drives pelagic, benthopelagic and demersal functional groups with mean temperatures for the top 150 m, the entire water column and the bottom, respectively. This refinement was made to capture temperature fluctuations at depth as delivered by the ESM retrospective predictions.
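As a conceptual sketch (not EcoOcean's actual code), the biomass-forcing step can be thought of as a per-cell rescaling of the ESM phytoplankton field to the calibrated 1950 baseline; the function name and the exact scaling form below are illustrative assumptions.

```python
import numpy as np

def scale_phytoplankton(esm_biomass, esm_baseline, mem_baseline):
    """Rescale an ESM-delivered phytoplankton biomass map so that its
    baseline matches the 1950 biomass the MEM was calibrated to.
    All inputs are 2-D (lat x lon) arrays; cells where the ESM
    baseline is zero (e.g., land) are set to zero."""
    with np.errstate(divide="ignore", invalid="ignore"):
        scaled = esm_biomass * (mem_baseline / esm_baseline)
    return np.where(np.isfinite(scaled), scaled, 0.0)
```

In this framing, ESM anomalies relative to the ESM's own baseline are transferred onto the MEM's calibrated biomass levels, preserving the ESM's temporal signal while respecting the MEM's calibration.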

Runtime Environment
The framework was deployed across a network of computers with varying specifications, as shown in Table 1. All computers hosted 64-bit operating systems and were powerful enough to execute EcoOcean. Machines were located in two physical locations, interlinked via a Dropbox (www.dropbox.com) professional plan with 2 TB of storage space for mass data transfer, and a free Sync (www.sync.com) account for framework communication. A separate framework client was created for every four threads or fewer, which meant that the runtime environment was able to simultaneously perform 24 executions of EcoOcean (Table 1, Nº clients).

Application
The EcoOcean executions were encapsulated in a custom-developed command-line utility, henceforth referred to as the "EcoOcean wrapper," that configured the EcoOcean model for a specific simulation, executed the simulation, intercepted and condensed EcoOcean maps over time into time series, and saved these time series into one ZIP file per run. Through a ZIP output file name adhering to a simple and strict naming protocol, the command-line utility knew exactly how to configure and run EcoOcean, how to name the output ZIP file, and where to place the output ZIP files: directly into a Dropbox folder dedicated to server-side data collation by the multi-run framework.
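To illustrate the idea of a self-describing output name, the following Python sketch decodes a run configuration from a hypothetical file-name convention; the actual protocol used by the EcoOcean wrapper differs, and the fields shown here are invented for illustration.

```python
def parse_run_name(zip_name):
    """Decode a run configuration from a ZIP file name of the
    hypothetical form <esm>_<member>_<startyear>_<F|NF>.zip,
    where F/NF flags a fished or non-fished scenario."""
    stem = zip_name.rsplit(".", 1)[0]        # drop the ".zip" extension
    esm, member, start, fishing = stem.split("_")
    return {"esm": esm, "member": member,
            "start_year": int(start), "fishing": fishing == "F"}
```

Encoding the configuration in the file name keeps the wrapper and the framework loosely coupled: the framework only needs to pass a target file name, and the collation side can reconstruct what each ZIP file contains without any extra metadata channel.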
All EcoOcean simulations started in the year 1950 after a 10-year spin-up (or burn-in) period, and were executed through 2015. EcoOcean output was collated for the period 1980-2015. For both ESMs, EcoOcean was executed with and without fishing, following Coll et al. (2020). ESM data were delivered to EcoOcean in the form of monthly varying maps. The maps representing mean temperatures for the top 150 m, water column and bottom were fed into the EcoOcean habitat foraging capacity model (Christensen et al., 2014), and the maps for large and small phytoplankton were used to force the magnitudes and distributions of the corresponding phytoplankton groups within EcoOcean (Coll et al., 2020).
For the two ESMs and the two fishing scenarios, EcoOcean was driven by ESM historical data to gather simulation baseline output. Then, for each combination of ESM, fishing scenario, retrospective prediction start year and ensemble member, EcoOcean was executed with historical data up to the start year of a retrospective prediction, after which it was driven by the ESM data for that retrospective prediction until the end of the prediction. For the retrospective prediction experiments, output was only collected for the period covered by the 7-year (EC-Earth3-CC) or 10-year (NorCPM1) retrospective predictions.
As EC-Earth3-CC data started in 1980, historical data for the year 1980 were repeated during the EcoOcean spin-up period and for the period from 1950 through 1980. NorCPM1 did not distinguish explicitly between small and large phytoplankton; therefore, the total phytoplankton biomass data were used to proportionally drive large and small phytoplankton dynamics in EcoOcean.
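The experiment design above can be enumerated as a flat job list, one job per EcoOcean wrapper invocation. A sketch is given below; the member identifiers come from the figure captions, but the list of start years is a hypothetical placeholder, since only the 7- and 10-year prediction lengths are stated in the text.

```python
# Sketch of enumerating the baseline and retrospective prediction workload.
# START_YEARS is an assumed range for illustration only.
from itertools import product

ESMS = {"EC-Earth3-CC": 7, "NorCPM1": 10}   # prediction length (years)
MEMBERS = {"EC-Earth3-CC": ["r6i1p1f1", "r7i1p1f1", "r8i1p1f1"],
           "NorCPM1": ["r1i2p1f1", "r5i2p1f1", "r10i2p1f1"]}
FISHING = [True, False]
START_YEARS = range(1990, 2009)             # hypothetical start years

jobs = []
# Baseline runs: one continuous historical simulation per ESM and scenario.
for esm, fishing in product(ESMS, FISHING):
    jobs.append({"esm": esm, "fishing": fishing, "kind": "baseline"})
# Retrospective runs: branch off the historical run at each start year,
# once per ensemble member.
for esm, fishing in product(ESMS, FISHING):
    for year, member in product(START_YEARS, MEMBERS[esm]):
        jobs.append({"esm": esm, "fishing": fishing, "kind": "retro",
                     "member": member, "start": year,
                     "end": year + ESMS[esm]})
```

Each entry in `jobs` maps one-to-one onto an EcoOcean wrapper command line, which is what the framework dispatches to its clients.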

Analysis
EcoOcean produced global 1° gridded maps of biomass and catch (where applicable) by functional group at monthly time steps, which can amount to a file volume upward of 50 GB per simulation. To save storage space while retaining important signals, we condensed EcoOcean output into time series for the hydrological basins of the world (Figure 5; FAO, 2020) and the major fishing areas for statistical purposes (Figure 6; FAO, 2015) as defined by the Food and Agriculture Organization (FAO). Each time series described the mean biomass and catch, per functional group and per region, weighted by cell area. Regional time series were chosen to capture regional variability in ecosystem dynamics for MEM run comparison whilst significantly reducing the volume of model output transferred and analyzed. Although EcoOcean produced global results for 51 functional groups, this prototype case study focused on trends in biomass for only six functional groups: small, medium and large pelagic fish, and small, medium and large demersal fish. The choice of small, medium and large fish allows for detecting direct changes induced by phytoplankton variability (small fish) and trophic cascades (medium and large fish). The different vertical positioning of the selected functional groups could reveal relevant effects at depth. All comparisons were made for fished and non-fished MEM executions.
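The area-weighted condensation step can be sketched as follows, assuming a precomputed cell-area grid and an integer region mask assigning each grid cell to a region; all names are illustrative, not the wrapper's actual code.

```python
# Sketch of condensing monthly 1° biomass maps into area-weighted
# regional time series. cell_area and region_mask are assumed inputs.
import numpy as np


def regional_mean(biomass, cell_area, region_mask, region_id):
    """Area-weighted mean biomass (t/km²) for one region, one time step."""
    sel = region_mask == region_id
    weights = cell_area[sel]
    return float(np.sum(biomass[sel] * weights) / np.sum(weights))


def condense(maps, cell_area, region_mask, region_ids):
    """Turn a sequence of (lat, lon) maps into per-region time series."""
    return {r: [regional_mean(m, cell_area, region_mask, r) for m in maps]
            for r in region_ids}
```

Weighting by cell area matters because 1° cells shrink toward the poles; an unweighted mean would overstate the contribution of high-latitude cells.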
Results were analyzed for three FAO sub-basins: the North, Central and South Atlantic, in line with the aims of the EU Horizon 2020 project TRIATLAS (Figure 5).

Statistical Measures
We explored the utility of a number of simple statistical measures to quantify how ESM uncertainty affects the output of EcoOcean when compared to output generated via the ESM baseline runs. In the formulae below, n = number of observations; yᵢ = observations (EcoOcean output driven by the ESM baselines); ŷᵢ = estimations (EcoOcean output driven by ESM retrospective predictions).
All statistical measures were calculated for three pelagic and three demersal fish functional groups, for both ESMs, for the three TRIATLAS regions (North, Central and South Atlantic), under fished and non-fished scenarios.
1. Root Mean Squared Error or RMSE (Equation 1) measures the average magnitude (t/km²) of the differences between predicted values and observed values. A lower RMSE indicates better predictive performance. Because errors are squared, RMSE penalizes larger errors more heavily than smaller ones and is sensitive to outliers. RMSE is therefore a useful metric to quantify for which ecosystem components, and in which regions, ESM uncertainty most affects the marine ecosystem. Such outliers could indicate direct sensitivities to small perturbations, or ecosystem cascades.
2. Mean Absolute Error or MAE (Equation 2) measures the average magnitude (t/km²) of the absolute differences between predicted values and observed values. Like RMSE, a lower MAE indicates better predictive performance. MAE treats all errors equally and is not as sensitive to outliers as RMSE. MAE is a useful metric to quantify where ESM uncertainty has less impact on MEM predictions.
3. Symmetric Mean Absolute Percentage Error or SMAPE (Equation 3) measures the percentage difference between predicted values and observed values, averaged across all observations. It is symmetric because it considers overestimations and underestimations equally. SMAPE is easy to interpret in percentage terms and is suitable when dealing with data of varying scales. Because SMAPE ignores scale and direction, it is a useful metric to directly compare the relative error, directly or indirectly caused by ESM uncertainty, between functional group predictions for the historical runs and for the runs executed with ESM uncertainty for all regions.
4. Pearson's Correlation Coefficient (Equation 4) measures the linear relationship between predicted values and observed values. It ranges from −1 to 1, where 1 indicates a perfect positive linear correlation, −1 indicates a perfect negative linear correlation, and 0 indicates no linear correlation. A higher absolute value of the correlation coefficient suggests a stronger linear relationship between predictions and observations. Additionally, the Pearson coefficient can reveal hidden correlations for data that are not normally distributed. This coefficient is thus useful to correlate the linearity between historically- and uncertainty-driven MEM simulations, indicating where significant deviations may require further study.
Earth's Future 10.1029/2023EF004295

5. Directional Symmetry or DS (Equation 5) measures the percentage of occurrences where the sign of change, positive or negative, in an observed and a predicted time series is the same. This coefficient is useful to compare the direction of change between historically- and uncertainty-driven MEM simulations.
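The five measures above can be sketched as minimal reference implementations written against the definitions in the text. Since Equations 1-5 are not reproduced here, two conventions are assumed: the common (|y| + |ŷ|)/2 denominator for SMAPE, and first differences for Directional Symmetry.

```python
# Hedged reference implementations of the five statistical measures.
import numpy as np


def rmse(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((yhat - y) ** 2)))


def mae(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs(yhat - y)))


def smape(y, yhat):
    # Assumed convention: mean absolute error relative to (|y| + |ŷ|)/2.
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(100.0 * np.mean(np.abs(yhat - y) / ((np.abs(y) + np.abs(yhat)) / 2)))


def pearson(y, yhat):
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.corrcoef(y, yhat)[0, 1])


def directional_symmetry(y, yhat):
    # Assumed convention: percentage of steps where the step-to-step
    # changes of observations and predictions share a sign.
    dy = np.diff(np.asarray(y, float))
    dyhat = np.diff(np.asarray(yhat, float))
    return float(100.0 * np.mean(np.sign(dy) == np.sign(dyhat)))
```

Note that RMSE and MAE keep the units of the data (t/km²), while SMAPE and DS are unitless percentages, which is why they can be compared across functional groups of very different biomass magnitudes.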

Framework Performance
The prediction experiments resulted in a workload of 384 jobs, each job containing only one task: the invocation of the EcoOcean wrapper command-line utility. The driver data delivered by both ESMs comprised approximately 1 million time-tagged maps at a volume just over 415 GB. EcoOcean produced an estimated volume of 5 TB in output maps that were condensed into time series CSV files by the EcoOcean execution wrapper on the framework clients. The EcoOcean wrapper then compressed the time series CSV files and placed them in the Dropbox output folder for automatic transport to the framework server computer. By using time series, the framework produced a more manageable output volume of 50 GB, which was compressed to 3 GB for file transfer to the server for analysis. The full set of EcoOcean simulations required approximately 2,600 hr of CPU time, but with the framework used here, with a total of 164 computational cores (Table 1), the complete set of simulations was performed in just under 30 hr.
The stability of the framework was assessed by randomly stopping and starting, and randomly adding and removing, computational clients during extensive test runs. The framework recovered from the resulting communication failures within a few minutes, rescheduling interrupted model executions or dispatching work to newly available clients. The use of cloud storage providers for main communication transport was slow but quite reliable. On a few rare occasions, the cloud storage providers stopped synchronizing information entirely, which is an acknowledged remote possibility for both Dropbox (Dropbox.com, 2023) and Sync (Sync.com, 2023). In such cases, the affected client computers were no longer able to participate in a particular simulation run until their local cloud daemons were manually restarted. In one particular simulation test run, the framework server daemon stopped synchronizing, which effectively terminated the entire experiment, since the framework does not (yet) feature server redundancy.
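The rescheduling behavior described above can be illustrated with a heartbeat-and-timeout sketch. The class, method names and the 300 s timeout below are illustrative assumptions, not the framework's actual code; the point is that jobs whose clients stop reporting are simply returned to the pending queue.

```python
# Sketch of server-side heartbeat bookkeeping for rescheduling jobs
# from unresponsive clients. Names and timeout are illustrative.
import time


class Scheduler:
    def __init__(self, jobs, timeout_s=300.0):
        self.pending = list(jobs)   # jobs not yet dispatched
        self.running = {}           # job -> (client, last_heartbeat)
        self.done = []
        self.timeout_s = timeout_s

    def dispatch(self, client, now=None):
        """Hand the next pending job to a client, recording a heartbeat."""
        if not self.pending:
            return None
        job = self.pending.pop(0)
        self.running[job] = (client, now if now is not None else time.monotonic())
        return job

    def heartbeat(self, job, now=None):
        """A client reports it is still working on a job."""
        client, _ = self.running[job]
        self.running[job] = (client, now if now is not None else time.monotonic())

    def reap(self, now=None):
        """Return timed-out jobs to the pending queue for rescheduling."""
        now = now if now is not None else time.monotonic()
        for job, (client, last) in list(self.running.items()):
            if now - last > self.timeout_s:
                del self.running[job]
                self.pending.append(job)

    def complete(self, job):
        del self.running[job]
        self.done.append(job)
```

Because a job is never marked done until its output arrives, a client that vanishes mid-run costs only the wasted partial execution, not the experiment.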
The 384 ecosystem model executions functioned as expected, without errors in accessing and integrating ESM data into the running model, executing the model, extracting and collating output, and placing the output in the desired, pre-configured output locations.

Simulations
The simulations provided four sets of output, for the two different ESMs under fished and non-fished oceans, each featuring time series trends for the 11 ocean sub-basins and 19 ocean statistical areas for fisheries purposes, for 52 functional groups. Figure 7 shows what these data look like when plotted.
Overall, results show that across the food web and the observed regions, the EcoOcean biomass trajectories displayed varying degrees of responsiveness to ESM uncertainty, depending on the position of the selected functional group in the EcoOcean food web, the presence of fishing, the region analyzed, and the choice of ESM linked to EcoOcean. For instance, Figures 8 and 9 show EcoOcean estimates when driven by EC-Earth3-CC and NorCPM1 respectively, for the same functional group, large pelagic fish, which encompasses dolphinfish, sailfish, tuna, mackerel, marlin, swordfish and others. From these plots, a few things become clear.
• EcoOcean output driven by the retrospective predictions for EC-Earth3-CC (Figure 8) tends to deviate from the observationally-constrained reference, while output driven by the retrospective predictions for NorCPM1 (Figure 9) centers around the baseline r1i1p1f1 simulation.
• Fishing severely impacts large pelagic fish, regardless of ESM selected.
• Although the overarching trends are similar between the two ESMs, fishing has a much stronger relative impact on large pelagic fish in the Central Atlantic when EcoOcean is driven with EC-Earth3-CC output than with NorCPM1.
• NorCPM1 appears to introduce higher seasonal variability than EC-Earth3-CC, but this is probably an artifact of driving both small and large phytoplankton with the same single NorCPM1 phytoplankton estimates.
Having both large and small phytoplankton follow exactly the same trend is expected to exaggerate the impact of phytoplankton fluctuations on the EcoOcean food web; this does not happen with EC-Earth3-CC, where large and small plankton compete for the same nutrients.
The statistical measures (Tables 2 and 3) captured these differences, comparing the last 5 years of the baseline simulation ("observations") against the mean model output for the retrospective predictions ("predictions"):
• For both pelagic and demersal components, the RMSE and MAE were lower for EC-Earth3-CC than for NorCPM1, indicating that in absolute terms, EcoOcean output was less affected by the internal sensitivity of EC-Earth3-CC than that of NorCPM1;
• On the other hand, the Symmetric Mean Absolute Percentage Error (SMAPE) was generally lower for NorCPM1 than for the simulations driven by EC-Earth3-CC, indicating that the trends produced by EcoOcean were less sensitive to internal uncertainty in the scenarios driven by NorCPM1 than by EC-Earth3-CC;
• The Pearson's Correlation Coefficient was higher under fishing scenarios, regardless of the ESM used. This indicates that fishing has a much stronger impact on EcoOcean output than ESM internal variability;
• Last, Directional Symmetry was higher for NorCPM1 than for EC-Earth3-CC, indicating that observations and predictions were generally more directionally aligned for NorCPM1 than for EC-Earth3-CC.
Please note that RMSE and MAE measure absolute errors while SMAPE measures relative errors, which is reflected in the range differences across all categories in Tables 2 and 3.
Efforts to relate changes in ESM drivers to the various MEM outputs did not yield any useful signals, and will require a systematic attribution investigation.
A side-by-side comparison of ecosystem trends for regions at different scales shows how different aggregation regions may reveal quite different trends (Figure 10). All side-by-side comparison plots (Figures S1-S36 in Supporting Information S3) are included in the supplementary material to indicate the vast spread of variation that emerges when aggregating MEM output over spatial areas of different sizes.

Conclusions
In this study, we demonstrated that a distributed run framework built from simple technologies can be used to systematically run a MEM, paving the way for systematic assessments that were previously out of reach for those without access to well-supported, powerful hardware and programming experience.

Experience Using the Framework
The framework performed well, despite its conceptual simplicity and reliance on the most basic technologies. The use of cloud storage providers for framework communication was not without caveats. We started out by using one storage provider, Dropbox, to handle all server-client data transfer, but we observed a high number of canceled and restarted EcoOcean runs. Status logs showed that the server often perceived remote clients as having become unresponsive and repeatedly rescheduled their jobs, which we traced back to crucial framework status messages getting intermixed with, and delayed by, the slow transfer of large input and output data files. Swapping over to two separate cloud storage providers (Sync for framework communication, and Dropbox for bulk data transfer), with each provider operating on different folders, solved the issue. An important piece of advice was provided by the Dropbox development team, who recommended using the same cloud provider account on the server and all clients to avoid soaring data usage across accounts. Additionally, in rare cases cloud storage providers may stop synchronizing which, if this occurs at the server, stops the framework from working. Future code could detect a hanging cloud provider and restart it. Although the use of cloud storage providers demonstrates that a framework can be constructed from the most basic technologies, faster and more streamlined communication protocols should be embraced wherever they are available. For this, the framework is of modular design and already hosts a number of faster and more reliable data communication protocols that require some IT skills and network management authority to configure. To avoid any kind of unnecessary complexity, the use of cloud providers was therefore ideal to showcase the framework.
To demonstrate that the run framework can be OS agnostic, our setup included one Linux computer among five Windows computers. We were able to make this setup work as (a) both the framework and the EcoOcean execution wrapper were written in .NET Standard, which runs natively on both OSes; and (b) we were able to fully handle typical OS incompatibilities in our code by enforcing strict data handling conventions. However, as the framework is intended to dispatch workloads that rely on any scientific software, framework operators may find that mixed OS family deployments are very complicated to set up and operate. We recommend that these be avoided, and if mixed OS family setups cannot be avoided, we surmise that it may be technically easiest to containerize (Bentaleb et al., 2022) the framework server and clients to the same operating system across available hardware.
We consider the framework that we present here a rough proof of concept that needs to improve in terms of usability, stability and security. In terms of usability, the framework currently offers only the most basic troubleshooting features, collecting execution and error logs in the formats produced natively by the software executed by the user.
There are plenty of inspirational methodologies in existence that can be easily adopted (Kandan et al., 2020) to find and understand errors. Additionally, because only software logs are collected without observing the state of the system in conjunction, faults caused by the operating system may presently be very hard to identify.
In terms of stability, running a framework and launching tasks directly on a target operating system carries significant risk. During the execution of a workload, launched programs may allocate more than their fair share of available resources (such as processors, runtime memory and disk space). In the worst case, badly behaving programs can crash an operating system. For those with the means and know-how, it would make sense to execute framework clients on virtual machines or containers, which shield the underlying operating system from badly behaving software. Containers increasingly replace virtual machines due to their ability to control resource allocation (Herbein et al., 2016) and to load-balance the use of system resources (e.g., Hota et al., 2019). Such features ensure smoothly flowing executions of hosted software. The framework, or the very concept of the framework, can be easily adapted to execute workloads across more stable environments.
In terms of security, remote execution of software is generally discouraged in the world of computing. Framework activity must therefore be shielded via secure user authentication and industry-standard encryption of all data transferred (e.g., Papadogiannaki & Ioannidis, 2021).

Case Study Application
Aside from demonstrating the utility of the framework, the case study also aimed to investigate whether uncertainty within ESMs, in the form of retrospective predictions, has the potential to significantly affect the output of a MEM as a step toward improving the predictability of MEMs.The brief conclusion is: that depends.
For the functional groups and areas that were explored here, the impact of fishing overwhelmed the impact of ESM uncertainty on EcoOcean results; the natural variability represented in retrospective predictions played a lesser role in affecting EcoOcean outcomes than historical fisheries. For this case study, we did not re-validate EcoOcean's ability to replicate reconstructed catches when driven by the ESMs EC-Earth3-CC and NorCPM1. A comprehensive re-validation will be the subject of the upcoming ISIMIP3a simulations (Blanchard et al., 2023). Follow-up work could even consider uncertainty in reconstructed fishing effort. However, these coarse results underscore that effectively managed oceans should prioritize sustainable fisheries practices (e.g., Maury et al., 2017).
The use of time series to reduce the data volumes analyzed was computationally and storage-wise efficient, but this simplification risks losing important variability in heterogeneous and large areas. As our results showed, aggregating across the entire South Atlantic obscured trends that became clear when assessing the western and eastern parts of the basin separately. Future work should explore how to meaningfully measure the sensitivity and performance of a MEM with regard to selecting meaningful regions that are small enough to capture relevant dynamics, and large enough to facilitate speedy analysis. Regional analysis can focus on areas with ecological, geophysical or environmental similarity (e.g., marine ecoregions; Spalding et al., 2007) or other classes of ecoregions (see Rubbens et al., 2023 and references therein). Time periods for comparison should be carefully selected around known events and regime shifts, and possibly even known effects of seasonality (e.g., Lloret-Lloret et al., 2022) and time-delayed teleconnections (e.g., Gómara et al., 2021; Lehodey et al., 2020). We performed a limited time series analysis using only five simple statistics, but there are plenty of other MEM-specific skill metrics suggested in the literature (e.g., Bennett et al., 2013; Hipsey et al., 2020; Kempf et al., 2023; Stow et al., 2009) that could be put to the test. Additionally, advanced vectorization (Quislant et al., 2022) seems to offer significant potential for analyzing spatiotemporal MEM output to overcome the limitations of using predefined, and possibly poorly chosen, regions.
As Coll et al. (2020) already identified, driving MEM dynamics with alternative ESMs can also come with huge uncertainty. The two ESMs included here differed significantly in their approach to representing past environmental conditions. EC-Earth3-CC historical environmental conditions were available from 1980 onwards, starting 30 years later than the NorCPM1 historical data. EcoOcean is calibrated for 1950; to bridge the gap in driver data for EC-Earth3-CC, we applied 1980 driver data to the 1950-1980 period, thus ensuring that EcoOcean had a much more stable spin-up period than when driven by NorCPM1 data. On the other hand, NorCPM1 offered only one phytoplankton group whereas EC-Earth3-CC offered two; different resolutions in the phytoplankton data also meant that the two ESMs differently affected food availability to the global food web. For the three sub-ocean basins and six functional groups explored here, EcoOcean showed similar trends under fished and hypothetically non-fished oceans when driven by either ESM, but the trends differed greatly in magnitude depending on which ESM was used to drive the environmental conditions for the MEM.
In order to better understand why EcoOcean behaves the way it does, and to quantify whether accounting for ESM uncertainty has the potential to improve the predictability of EcoOcean, a systematic exploration of attribution is needed to quantify which MEM components are sensitive to which aspects of ESM uncertainty. This could be explored by running different ESM/MEM experiments where ESM internal variability is systematically applied to isolated drivers whilst measuring the impact on MEM output (e.g., Heneghan et al., 2021), and whilst properly validating MEM output against available observations (such as regional trends of species biomasses, regional catch statistics, and global reconstructed fisheries catches). This would also require quantifying the relative importance of other types of uncertainty related to, for instance, the trophic structure of the food web and deployed ecological hypotheses (e.g., Coll et al., 2020).

Future Challenges
Up to now, understanding and improving the behavior of MEMs has largely been a manual process of tweaking model settings guided by intuition and analyzing model output (Pethybridge et al., 2019). The framework that we developed here will be the starting point for exploring the effectiveness of proposed skill metrics (Olsen, Fay, et al., 2016; Olsen, Key, et al., 2016; Payne et al., 2016), validation frameworks (Hipsey et al., 2020) and evaluation protocols (Planque et al., 2022); for assessing various types of uncertainty; and, in the long term, for MEM calibration capabilities.
In terms of validation, complex spatial-temporal models are mostly validated by correlating model output with observations (Pethybridge et al., 2019; Spence et al., 2021). However, to ensure that a MEM produces results for the correct reasons, validation should also consider the internal state of a MEM while it executes (e.g., Hipsey et al., 2020; Steenbeek et al., 2021). Indicators of ecosystem dynamics (e.g., network analysis; Ulanowicz, 2004) and measures of ecological expectations (PREBAL; Link, 2010) can be complemented with assessments of internal state variables related to species displacement, predator/prey overlap, changing environmental conditions and the presence of anthropogenic pressures. Together, these can capture whether a MEM produces output for the correct reasons, and can provide modellers with valuable insight into the behavior of their MEMs.
By combining uncertainty assessments with validation strategies that also consider state variables, modellers can systematically disentangle a model's strengths and weaknesses in search of better model calibrations. Here lies the next big challenge for the global modeling community: to work toward (semi-)automated calibration of spatial-temporal MEMs (e.g., Vilas et al., 2023). By allowing any modeling group to mass-execute their models systematically across available hardware, the framework can serve as a scaffolding for orchestrating the great number of runs required, which will involve some form of looped and MEM-specific sensitivity testing, parameter estimation and validation scheme.
Here may lie an opportunity for Machine Learning (ML) approaches that are increasingly applied to marine ecology (Rubbens et al., 2023).While properly designed statistical approaches can distinguish acceptable from unacceptable ecosystem trends for specific MEMs, ML approaches can perhaps expand this understanding to infer full food web dynamics from changing environmental conditions, species distributions and fisheries.Following promising work by Trifonova et al. (2017) and Uusitalo et al. (2018), we hope that ML approaches can, one day, assist in the search for more representative parameterizations of complex and mechanistic ecosystem models.A framework such as ours will be essential to mass-execute and perturb MEMs to generate the training data sets needed, and may be able to act as a foundation for ML-assisted MEM calibration.
Our multi-run framework is no silver bullet. With the expansion of computational capabilities comes the responsibility of using these capabilities wisely. The paradigm "garbage in, garbage out" (A. J. Smith, 1994) is more relevant than ever when scaling up complex model simulations. To avoid wasteful brute-force approaches, one could turn to short press perturbations to identify the most sensitive parameters (Pantus, 2007) and hence dramatically reduce the number of simulations that are really necessary to attain better insights into the workings of complex and mechanistic MEMs.

Wrapping Up
Remote execution frameworks are nothing new, and industry standards greatly surpass the framework described here in all aspects. Our framework shares a number of key principles with Slurm (Yoo et al., 2003), a much more robust and mature, but also much more complicated and technically demanding, framework to install and operate.
Our framework achieves distributed computing capabilities with the simplest of software components, scaling up desktop workflows, across mundane hardware, without the need for IT skills or programming.That, in itself, is a breakthrough achievement that we hope the global modeling community will build on to make ecosystem modeling operational, for anyone.
The most significant benefit of the framework that we have built here is a full separation of technique (e.g., the technical challenges of repeatedly executing a MEM) from application (e.g., the purpose for which the MEM is repeatedly executed). This allows modellers to focus on formulating large-scale scientific workloads that the framework then distributes across any available hardware. However, the most significant value of the framework prototype presented here lies in the ideas within. The simple client/server architecture can be deployed across any hardware configuration: across desktops, virtual machines, Docker containers, web servers, and High-Performance Clusters. Any new deployment may require adapting or entirely rewriting framework components to fully utilize hardware capabilities, and to cater to related security and technical constraints. Tech-savvy users may opt to rewrite the multi-run framework in an existing workflow environment. We set out to dispel the notion that MEMs cannot be easily executed systematically; it is now up to the global modeling community to take our ideas further to make the process of ecosystem modeling operational.
The framework presented here is a first but important step toward making the process of marine ecosystem modeling more operational. By applying the framework to a globally available MEM, we illustrate how it can be useful and how it can be applied to improve our understanding of the uncertainty components of complex modeling frameworks, thus opening the door for scientific management breakthroughs.

Figure 2 .
Figure 2. Schematic of retrospective predictions to assess the impact of chaotic variability on the ability of Earth System Models (ESMs) to predict observations. The Y axis represents any dependent ESM variable included in retrospective predictions.

Figure 1 .
Figure 1. Schematic of model and decision time horizons, from short-term forecasts to medium-term predictions to long-term projections. Short-term forecasts are entirely dependent on starting conditions (the "initial value" problem), whereas long-term projections are mostly affected by external drivers (the "forced boundary conditions" problem). Medium-term predictions are affected by both problems (adapted from Meehl et al., 2009).

Figure 3 .
Figure 3. A schematic overview of the workflow needed to systematically assess Marine Ecosystem Models, here used to perform a hypothetical limited uncertainty assessment. The left panel shows this exercise deployed on a single desktop computer; the right panel shows the same exercise, transparently dispatched across any available hardware.

Figure 4 .
Figure 4. The conceptual structure of the Marine Ecosystem Model multi-run framework. Set up as a server-client structure, the framework (2) dispatches the jobs that are defined within a scientific workload across available hardware (1). The framework loosely interacts with scientific software to execute the tasks within a job (3) and relies on available shared storage solutions (4) to distribute input data to clients, and to collate resulting output on the server. The "eye" icon reflects the loose interactions where the framework checks upon the state of external software and data without any form of technical integration and dependency. When a workload has been processed, the scientific software is notified, which can dispatch a new scientific workload if desired.

Figure 5 .
Figure 5. The 11 ocean sub-basins as defined by the Food and Agriculture Organization, used in the 384 Marine Ecosystem Model simulations to summarize trends in functional group catches and biomasses. In this manuscript, only areas 2, 3 and 4 (North, Central and South Atlantic) are presented.

Figure 6 .
Figure 6. The fishing areas for statistical purposes as defined by the Food and Agriculture Organization, used in the 384 Marine Ecosystem Model simulations to summarize trends in functional group catches and biomasses. In this manuscript, only areas 21 and 27 (northwest and northeast), 31 and 34 (central-west and central-east) and 41 and 47 (southwest and southeast Atlantic) are presented.

Figure 7 .
Figure 7. An example of EcoOcean estimates for medium pelagic fish in the central Atlantic and Mediterranean, when the Marine Ecosystem Model is driven by output from EC-Earth3-CC under historical fishing pressure. The black line represents the EcoOcean output when driven by the continuous Earth System Model baseline, and the three colored lines represent EcoOcean estimates when deviating from the baseline for 7-year retrospective predictions. Ecosystem output is plotted relative to the annual average 1980 value.

Figure 8 .
Figure 8. EcoOcean biomass trends for large pelagic fish in three selected Atlantic Food and Agriculture Organization sub-ocean regions, for the years 1980-2015, when the Marine Ecosystem Model is driven by EC-Earth3-CC historical data r1i1p1f1 (black line, one continuous EcoOcean run) and realizations r6i1p1f1, r7i1p1f1, and r8i1p1f1 (colored lines). The left column shows EcoOcean biomass trends without fishing; the right column includes historical fishing. All plots are scaled relative to their 1980 annual mean to standardize axes and to highlight the relative trends.

Figure 9 .
Figure 9. Average EcoOcean biomass trends for large pelagic fish in three selected Atlantic Food and Agriculture Organization sub-ocean regions, for the years 1980-2015, when the Marine Ecosystem Model is driven by NorCPM1 historical data (black line, one continuous EcoOcean run) and realizations r1i2p1f1, r5i2p1f1, and r10i2p1f1 (colored lines). The left column shows EcoOcean biomass trends without fishing; the right column includes historical fishing. All plots are scaled relative to their 1980 annual mean, with standardized scales to highlight the relative trends.

Figure 10 .
Figure 10. Medium demersal fish biomass time series as predicted by EcoOcean when driven by NorCPM1. The plots show time series for the Food and Agriculture Organization South Atlantic sub-ocean basin (top row), and for its two subdivisions, the southwest Atlantic (middle row) and the southeast Atlantic (bottom row). Time series are shown without fisheries (left column) and with historical fisheries (right column).

Table 1
The Computers Used to Perform the Case Study, With Key Characteristics

Table 2
Statistical Measures to Capture Pelagic Fish Temporal Biomass Dynamics for 2010-2015

Table 3
Statistical Measures to Capture Demersal Fish Temporal Biomass Dynamics for 2010-2015