Graph embedding and transfer learning can help predict potential species interaction networks despite data limitations

Metawebs (networks of potential interactions within a species pool) are a powerful abstraction to understand how large‐scale species interaction networks are structured. Because metawebs are typically expressed at large spatial and taxonomic scales, assembling them is a tedious and costly process; predictive methods can help circumvent the limitations in data deficiencies, by providing a first approximation of metawebs. One way to improve our ability to predict metawebs is to maximize available information by using graph embeddings, as opposed to an exhaustive list of species interactions. Graph embedding is an emerging field in machine learning that holds great potential for ecological problems. Here, we outline how the challenges associated with inferring metawebs line‐up with the advantages of graph embeddings; followed by a discussion as to how the choice of the species pool has consequences on the reconstructed network, specifically as to the role of human‐made (or arbitrarily assigned) boundaries and how these may influence ecological hypotheses.


| INTRODUCTION
The ability to infer potential biotic interactions could serve as a significant breakthrough in our ability to conceptualize networks over large spatial scales (Hortal et al., 2015). Reliable inferences would not only boost our understanding of the structure of species interaction networks, but also increase the amount of information that can be used for biodiversity management. In a recent overview of the field of ecological network prediction, Strydom, Catchen, et al. (2021) identified two challenges of interest to the prediction of interactions at large scales. First, there is a relative scarcity of relevant data in most places globally, which, due to the limitations of most predictive methods, restricts the ability to infer interactions to locations where it is least required (i.e. regions where we already have interaction data), leaving us unable to make inferences in data-scarce regions (where we most need them); second, accurate predictors are important for accurate predictions, and the lack of methods that can leverage a small amount of accurate data is a serious impediment to our predictive ability. In this contribution, we (i) highlight the power of viewing (and constructing) metawebs as probabilistic objects in the context of low-probability interactions, (ii) discuss how a family of machine learning tools (graph embeddings and transfer learning) can be used to overcome data limitations to metaweb inference and (iii) highlight how the use of metawebs introduces important questions for the field of network ecology.
In most places, our most reliable biodiversity knowledge is that of a species pool, the set of potentially interacting species that could occur in a given area: through the analysis of databases like the Global Biodiversity Information Facility (GBIF) or the International Union for the Conservation of Nature (IUCN), it is possible to construct a list of species for a region of interest. Following the definition of Dunne (2006), a metaweb is the ecological network analogue to the species pool; specifically, it inventories all potential interactions between species for a spatially delimited area (and so captures the diversity of interactions as per Poisot et al. (2012)). However, inferring the potential interactions between these species still remains a challenge. And yet the metaweb holds valuable ecological information: it represents the joint effect of functional, phylogenetic and macroecological processes (Carlson et al., 2022; Morales-Castilla et al., 2015, 2021). Specifically, it represents the 'upper bounds' on what the composition of the local networks, given a local species pool, can be (see e.g. McLeod et al., 2021); this information can help evaluate the ability of ecological assemblages to withstand the effects of, for example, climate change (Fricke et al., 2022). These local networks may be reconstructed given an appropriate knowledge of local species composition, and provide information on the structure of networks at finer spatial scales. This has been done, for example, for tree-galler-parasitoid systems (Gravel et al., 2018), fish trophic interactions (Albouy et al., 2019), terrestrial tetrapod trophic interactions (Braga et al., 2019; O'Connor et al., 2020) and crop-pest networks (Grünig et al., 2020).
The metaweb itself is not a prediction of local networks at specific locations within the spatial area it covers: it will have a different structure, notably by having a larger connectance (see e.g. Wood et al., 2015) and complexity (see e.g. Galiana et al., 2022), than any of these local networks. Local networks (which capture the diversity of interactions) are a subset of the metaweb's species and of its realized interactions, and have been called 'metaweb realizations' (Poisot et al., 2015). Differences between local networks and their metawebs are due to chance, species abundance and co-occurrence, local environmental conditions and the local distribution of functional traits, among others. Specifically, although co-occurrence can be driven by interactions (Cazelles et al., 2016), co-occurrence alone is not a predictor of interactions (Blanchet et al., 2020; Thurman et al., 2019), and therefore the lack of co-occurrence cannot be used to infer the lack of a feasible interaction. Yet, recent results by Saravia et al. (2021) strongly suggested that local (metaweb) realizations only respond weakly to local conditions: instead, they reflect constraints inherited from the structure of their metaweb. This sets up the core goal of predictive network ecology as the prediction of metaweb structure, as it is required to accurately produce downscaled, local predictions.

| A METAWEB IS AN INHERENTLY PROBABILISTIC OBJECT
Treating interactions as probabilistic (as opposed to binary) events is a more nuanced and realistic way to represent them. Dallas et al. (2017) suggested that most interactions (links) in ecological networks are cryptic, that is, uncommon or hard to observe. This argument echoes Jordano (2016): sampling ecological interactions is difficult because it requires first the joint observation of two species, and then the observation of their interaction. In addition, it is generally expected that weak or rare interactions will be more prevalent in networks than common or strong interactions (Csermely, 2004); this is notably the case in food chains, wherein many weaker interactions are key to the stability of a system (Neutel et al., 2002). In the light of these observations, we expect to see an over-representation of low-probability (hereafter rare) interactions under a model that accurately predicts interaction probabilities.
Yet, the original metaweb definition, and indeed most past uses of metawebs, was based on the presence/absence of interactions. Moving towards probabilistic metawebs, by representing interactions as Bernoulli events (see e.g. Poisot et al., 2016), offers the opportunity to weigh these rare interactions appropriately. The inherent plasticity of interactions is important to capture: there have been documented instances of food webs undergoing rapid collapse/recovery cycles over short periods of time (e.g. Pedersen et al., 2017).
Furthermore, because the structure of the metaweb cannot be known in advance, it is important to rely on predictive tools that do not assume a specific network topology for link prediction (Gaucher et al., 2021), but are instead able to work on generalizations of the network that capture the statistical processes giving it its structure. These considerations emphasize why metaweb predictions should focus on quantitative (preferentially probabilistic) predictions, and this should constrain the suite of models that are appropriate for prediction.
Binary classifiers based on probabilities come with an extremely robust validation methodology, which applies naturally to the prediction of interactions (Poisot, 2023).
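The Bernoulli view above can be made concrete with a short sketch (in Python, with purely illustrative probabilities; a real metaweb would be estimated from data): each entry of a probabilistic metaweb is the probability that an interaction is feasible, a binary 'metaweb realization' is one independent draw per species pair, and the expected number of links is simply the sum of the probabilities.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical probabilistic metaweb for 4 consumers x 5 resources:
# entries are P(interaction is feasible), not observed frequencies.
P = np.array([
    [0.95, 0.70, 0.10, 0.02, 0.00],
    [0.80, 0.90, 0.05, 0.01, 0.00],
    [0.10, 0.15, 0.60, 0.30, 0.02],
    [0.00, 0.05, 0.20, 0.85, 0.90],
])

# One binary metaweb realization: each link is an independent Bernoulli draw.
realization = rng.random(P.shape) < P

# The expected number of links is the sum of probabilities.
expected_links = P.sum()

# Rare (low-probability) links dominate the *count* of possible links,
# even though each one is individually unlikely to be realized.
rare = (P > 0) & (P < 0.5)
common = P >= 0.5
```

In this toy matrix there are 10 possible rare links against 7 common ones, so any model that scores interactions probabilistically must weigh low-probability pairs carefully rather than thresholding them away.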
It is important to recall that a metaweb is intended as a catalogue of all potential (feasible) interactions, which is then filtered for a given application (Morales-Castilla et al., 2015). It is therefore important to separate the interactions that happen 'almost surely' (repeated observational data), 'almost never' (repeated lack of evidence, or evidence that the link is forbidden through e.g. trait mismatch) and interactions with a probability that lies somewhere in between (Catchen et al., 2023). Although metawebs can (and in practice likely do) include false positives, these are statistically negligible compared to the false negatives. Furthermore, Strydom et al. (2022) showed that t-SVD embedding is extremely robust to (and able to detect) the presence of false positives. In a sense, because most ecological interactions are elusive, we should consider the direct consequences this has on sampling: once the common interactions are documented, the effort required to document each rare interaction will increase exponentially (Jordano, 2016). Recent proposals in other fields relying on machine learning approaches emphasize the idea that algorithms meant to predict, through the assumption that they approximate the process generating the data, can also act as data generators (Hoffmann et al., 2019). High-quality observational data can be used to infer core rules underpinning network structure, and be supplemented with synthetic data coming from predictive models trained on them, thereby increasing the volume of information available for analysis. Indeed, Strydom, Catchen, et al. (2021) suggested that knowing the metaweb may render the prediction of local networks easier, because it fixes an 'upper bound' on which interactions can exist. In this context, a probabilistic metaweb represents an aggregation of informative priors on the biological feasibility of interactions, which is usually hard to obtain yet possibly has the most potential to boost our ability to predict local networks (Bartomeus, 2013; Bartomeus et al., 2016). This would represent a departure from simple rules expressed at the network scale (e.g. Williams & Martinez, 2000) to a view of network prediction based on learning the rules that underpin interactions and their variability (Gupta et al., 2022).

| GRAPH EMBEDDING OFFERS PROMISES FOR THE INFERENCE OF POTENTIAL INTERACTIONS
Graph (or network) embedding (Figure 1) is a family of machine learning techniques whose main task is to learn a mapping function from a discrete graph to a continuous domain (Arsov & Mirceva, 2019; Chami et al., 2022). Their main goal is to learn a low-dimensional vector representation of the graph (an embedding), such that its key properties (e.g. local or global structures) are retained in the embedding space (Yan et al., 2005). The embedding space may, but will not necessarily, have lower dimensionality than the graph. Ecological networks are promising candidates for the routine application of embeddings, as they tend to possess a shared structural backbone (see e.g. Mora et al., 2018), which hints at structural invariants in empirical data. Assuming that these structural invariants are common enough, they would dominate the structure of networks, and therefore be adequately captured by the first (lower) dimensions of an embedding, without the need to measure derived aspects of their structure (e.g. motifs, paths, modularity, …).

| Graph embedding produces latent variables (but not traits)
Before moving further, it is important to clarify the epistemic status of node values derived from embeddings: specifically, they are not functional traits, and therefore should not be interpreted in terms of effects or responses. As per the framework of Malaterre et al. (2019), these values neither derive from, nor result in, changes in organismal performance, and should therefore not be used to quantify, for example, functional diversity. This holds true even when there are correlations between latent values and functional traits: although these enable an ecological discussion of how traits condition the structure of the network, the existence of a statistical relationship does not elevate the latent values to the status of functional traits.
Rather than directly predicting biological rules (see e.g. Pichler et al., 2020 for an overview), which may be confounded by the sparse nature of graph data, learning embeddings works in the low-dimensional space that maximizes information about the network structure. This approach is further justified by the observation, for example, that the macro-evolutionary history of a network is adequately represented by some graph embeddings (random dot product graphs [RDPG]; see Riva et al., 2016). In a recent publication, Strydom et al. (2022) used an embedding (based on RDPG) to project a metaweb of trophic interactions between European mammals, and transferred this information to the mammals of Canada, using the phylogenetic distance between related clades to infer the values in the latent subspace into which the European metaweb was projected. By performing the RDPG step on the reconstructed values, this approach yields a probabilistic trophic metaweb for the mammals of Canada based on knowledge of European species, despite a limited (≈5%) taxonomic overlap, and illustrates how the values derived from an embedding can be used for prediction without being 'traits' of the species they represent.
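The RDPG machinery referred to above can be sketched in a few lines; this is a minimal Python/NumPy illustration on a simulated binary network, not the authors' actual pipeline (which was written in Julia). A truncated SVD splits the adjacency matrix into a left and a right subspace, and their product approximates interaction scores.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated binary metaweb (20 x 20) with a planted low-rank structure,
# so the example is self-contained.
traits = rng.normal(size=(20, 3))
A = (traits @ traits.T > 0.5).astype(float)

# RDPG-style embedding: truncated SVD, with the singular values split
# evenly (square root) between the left (L) and right (R) subspaces.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
rank = 5
L = U[:, :rank] * np.sqrt(s[:rank])           # one row per consumer
R = np.sqrt(s[:rank])[:, None] * Vt[:rank, :]  # one column per resource

# L @ R approximates A; the values are scores, not yet probabilities,
# so downstream use typically clamps them to the unit interval.
A_hat = np.clip(L @ R, 0, 1)
```

Transfer then amounts to estimating rows of the left/right subspaces for species absent from the original pool (e.g. by phylogenetic averaging over relatives) before multiplying the subspaces back together.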

| Ecological networks are good candidates for embedding
Ecological networks are inherently low-dimensional objects, and can be adequately represented with fewer than 10 dimensions (Braga et al., 2019, 2021; Eklöf et al., 2013). Simulation results by Botella et al. (2022) suggested that there is no dominant method to identify architectural similarities between networks: multiple approaches need to be tested and compared to the network descriptor of interest on a problem-specific basis. This matches previous results on graph embedding, wherein different embedding algorithms yield different network embeddings (Goyal & Ferrara, 2018), calling for a careful selection of the problem-specific approach to use. Additionally, Ghasemian et al. (2020) suggest that in some cases node embeddings can be outperformed by other methods, reinforcing the need to thoroughly select the appropriate data analysis technique. In Table 1, we present a selection of common graph and node embedding methods, alongside examples of their use to predict interactions or statistical associations between species. These methods rely largely on linear algebra or pseudorandom walks on graphs. All forms of embeddings presented in Table 1 share the common property of summarizing their objects into (sets of) dense feature vectors that capture the overall network structure, pairwise information on nodes and emergent aspects of the network in a compressed way (i.e. with some information loss, as we later discuss in the illustration). Node embeddings tend to focus on maintaining pairwise relationships (i.e. species interactions), while graph embeddings focus on maintaining the network structure (i.e. emergent properties). Nevertheless, some graph embedding techniques (like RDPG, see e.g. Wu et al., 2021) will provide high-quality node-level embeddings while also preserving network structure.
Graph embeddings can serve as a dimensionality reduction method. For example, RDPG (Strydom et al., 2022) and t-SVD (truncated singular value decomposition; Poisot et al., 2021) typically embed networks using fewer dimensions than the original network (the original network has as many dimensions as species, and as many informative dimensions as trophically unique species; Strydom, Dalla Riva, & Poisot, 2021). However, this is not necessarily the case: indeed, one may perform a PCA (a special case of SVD) to project the raw data into a subspace that improves the efficacy of t-SNE (t-distributed stochastic neighbour embedding; van der Maaten, 2009). There are many dimensionality reduction methods (Anowar et al., 2021) that can be applied to an embedded network should the need for dimensionality reduction arise. In brief, many graph embeddings can serve as dimensionality reduction steps, but not all do; neither do all dimensionality reduction methods provide adequate graph embedding capacities. In the next section (and Figure 1), we show how the amount of dimensionality reduction can affect the quality of the embedding.

FIGURE 1 The embedding process (a) can help to identify links (interactions) that may have been missed within the original community (represented by the orange dashed arrows, b). Transfer learning (d) allows for the prediction of links (interactions) even when novel species (c) are included alongside the original community. This is achieved by using other ecologically relevant predictors (e.g. traits) in conjunction with the known interactions to infer latent values (e). Ultimately, this allows us to predict links (interactions) for species external to the original sample (blue dashed arrows), as well as missing within-sample links (f). Within this context, the predicted (and original) networks, as well as the ecological predictors used (green boxes), are products that can be quantified through measurements in the field, whereas the embedded and imputed matrices (purple box) represent a decomposition of the interaction matrices onto the embedding space.
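How aggressively to truncate can be decided from the data. The following sketch (simulated network, Python/NumPy; the elbow criterion is a crude illustrative stand-in, not a prescribed method) computes the averaged reconstruction loss and the cumulative variance explained at each rank, then locates an inflection by finite differences on the loss curve.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated metaweb: low-rank structure plus noise, binarized.
B = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 30))
A = (B + rng.normal(scale=0.5, size=(30, 30)) > 0).astype(float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

def l2_loss(k):
    """Averaged L2 loss of the rank-k truncated-SVD reconstruction."""
    A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
    return np.mean((A - A_k) ** 2)

losses = np.array([l2_loss(k) for k in range(1, 31)])

# Cumulative variance explained, from the squared singular values.
var_explained = np.cumsum(s ** 2) / np.sum(s ** 2)

# Crude inflection criterion: largest second-order finite difference
# of the (monotonically decreasing) loss curve.
curvature = np.diff(losses, n=2)
inflection_rank = int(np.argmax(curvature)) + 2
```

By the Eckart-Young theorem the loss can only decrease as rank grows, so the useful signal is in the diminishing returns, which is what the second difference picks up.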
TABLE 1 Overview of some common graph embedding approaches, by type of embedded object, alongside examples of their use in the prediction of species interactions. These methods have not yet been routinely used to predict species interactions; most examples that we identified were either statistical associations, or analogues to joint species distribution models.
(For an additional discussion of the potential of graph neural networks for embedding, see Box 1.) a: application is concerned with statistical interactions, which are not necessarily direct biotic interactions; b: application is concerned with a joint-SDM-like approach, which is also very close to statistical associations as opposed to direct biotic interactions.

Given the need to evaluate different methods on a problem-specific basis, the fact that many methods have not been used on network problems is an opportunity for benchmarking and method development. Note that the row for PCA also applies to kernel/probabilistic PCA, which are variations on the more general method of SVD. Note further that t-SNE has been included because it is frequently used to embed graphs, including graphs of species associations/interactions, despite not being, strictly speaking, a graph embedding technique (see e.g. Chami et al., 2022). The popularity of graph embedding techniques in machine learning reflects more than the search for structural invariants: graphs are discrete objects, and machine learning techniques tend to handle continuous data better. Bringing a sparse graph into a continuous, dense vector space (Xu, 2021) opens up a broader variety of predictive algorithms, notably of the sort that are able to predict events as probabilities (Murphy, 2022). Furthermore, the projection of the graph itself is a representation that can be learned; Runghen et al. (2021), for example, used a neural network to learn the embedding of a network in which not all interactions were known, based on the nodes' metadata. This example has many parallels in ecology (see Figure 1c), in which node metadata can be represented by phylogeny, abundance or functional traits. Using phylogeny as a source of information assumes (or strives to capture) the action of evolutionary processes on network structure, which, at least for some networks, has been well documented (Braga et al., 2021; Eklöf & Stouffer, 2016; Riva et al., 2016; Stouffer et al., 2007, 2012); similarly, the use of functional traits assumes that interactions can be inferred from the knowledge of trait-matching rules, which is similarly well supported in the empirical literature (Bartomeus, 2013; Bartomeus et al., 2016; Goebel et al., 2023; Gravel et al., 2013). Relating this information to an embedding rather than a list of network measures would allow us to capture their effect on the more fundamental aspects of network structure; conversely, the absence of a phylogenetic or functional signal may suggest that evolutionary/trait processes are not strong drivers of network structure, therefore opening a new way to perform hypothesis testing.
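Learning latent positions from node metadata can be caricatured with a deliberately simple linear model; in this Python sketch, simulated traits and ordinary least squares stand in for the real predictors and the neural network of Runghen et al. (2021). The idea: fit a map from traits to embedding coordinates on the known species pool, then apply it to new species to obtain predicted interaction scores.

```python
import numpy as np

rng = np.random.default_rng(3)

# --- Source pool: traits drive interactions (hypothetical generative setup).
n_source, n_traits, rank = 40, 3, 3
T_src = rng.normal(size=(n_source, n_traits))     # traits of source species
A_src = (T_src @ T_src.T > 0.5).astype(float)     # source metaweb

# Embed the source metaweb (RDPG-style truncated SVD).
U, s, Vt = np.linalg.svd(A_src, full_matrices=False)
L_src = U[:, :rank] * np.sqrt(s[:rank])           # latent positions (left)

# Learn a linear map from traits to latent positions (least squares).
W, *_ = np.linalg.lstsq(T_src, L_src, rcond=None)

# --- Target pool: only traits are known; infer latent positions.
T_tgt = rng.normal(size=(10, n_traits))
L_tgt = T_tgt @ W

# Predicted interaction scores between target consumers and source resources,
# clamped to the unit interval as in the RDPG illustration.
R_src = np.sqrt(s[:rank])[:, None] * Vt[:rank, :]
scores = np.clip(L_tgt @ R_src, 0, 1)
```

Swapping the least-squares map for a phylogenetic-distance kernel or a neural network changes the estimator, not the logic: metadata predicts latent positions, and latent positions predict interactions.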

| AN ILLUSTRATION OF METAWEB EMBEDDING
In this section, we illustrate the embedding of a collection of bipartite networks collected by Hadfield et al. (2014), using Makie.jl (Danisch & Krumbiegel, 2021) and EcologicalNetworks.jl (Poisot et al., 2019).
In Figure 2, we focus on some statistical checks of the embedding.

BOX 1 Graph neural networks
One prominent family of approaches we do not discuss in the present manuscript is graph neural networks (GNNs; Zhou et al., 2020). GNNs are, in a sense, a method to embed a graph into a dense subspace, but belong to the family of deep learning methods, which has its own set of practices (see e.g. Goodfellow et al., 2016). An important issue with methods based on deep learning is that, because their parameter space is immense, the sample size of the data fed into them must be similarly large (typically thousands of instances); this is a requirement for the model to converge during training. Although the volume of data currently available for ecological networks falls short of this, there is nevertheless the potential for GNNs to become an applicable embedding/predictive technique in the coming years.
In panel (d) of Figure 2, species pairs are classified according to whether the interactions are observed, not observed, or unknown due to lack of co-occurrence in the original dataset. This reveals that the observed interactions have higher predicted weights, although there is some overlap; the usual approach to identify potential interactions based on this information would be a thresholding analysis, which is outside the scope of this manuscript (and is done in the papers cited in this illustration). Because the values returned from RDPG are not bound to the unit interval, we performed a clamping of the weights to the unit space, showing a one-inflation in documented interactions, and a zero-inflation in other species pairs. Panel (d) specifically shows that species pairs with no documented co-occurrence have weights that are not distinguishable from species pairs with no documented interactions; in other words, looking at the embedding, species that do not co-occur are not easily distinguished from species that do not interact. This suggests that (as befits a host-parasite model) the ability to interact is a strong predictor of co-occurrence.
In Figure 3, we relate the values of latent variables for hosts to different ecologically relevant data; we can perform this additional step because the results presented in Figure 2 show that we can extract an embedding of the metaweb that captures enough variance to be relevant. Importantly, this is true for both the L2 loss (indicating that RDPG is able to capture pairwise processes) and the cumulative variance explained (indicating that RDPG is able to capture network-level structure), which suggests that these approaches may allow us to predict both interactions and network structure. In panel (a), we show that hosts with a higher value on the first dimension have fewer parasites.

| THE METAWEB MERGES ECOLOGICAL HYPOTHESES AND PRACTICES
Metaweb inference seeks to provide information about the interactions between species at a large spatial scale, typically a scale large enough to be considered of biogeographic relevance (indeed, many of the examples covered in the introduction span areas larger than a country, some of them global). But as Herbert (1965) rightfully pointed out, '[y]ou can't draw neat lines around planet-wide problems'; any inference of a metaweb must therefore contend with several novel, interwoven families of problems. In this section, we outline three that we think are particularly important, and discuss how they may be addressed with subsequent data analysis or simulations, and how they emerge in the specific context of using embeddings; some of these issues are related to the application of these methods at the science-policy interface. Addressing these considerations as part of the methodological discussion is particularly important, as the construction of metawebs can perpetuate legacies of biases in data (Box 2).

| Identifying the properties of the network to embed
If the initial metaweb is too narrow in scope, notably from a taxonomic point of view, the chances of finding another area with enough related species (through phylogenetic relatedness or similarity of functional traits) to make a reliable inference decrease. This is because transfer requires similarity (Figure 1). A diagnostic for the lack of similar species would likely be large confidence intervals during estimation of the values in the low-rank space; in other words, the representation of the original graph is difficult to transfer to the new problem. Alternatively, if the initial metaweb is too large (taxonomically), then the resulting embeddings would need to represent interactions between taxonomic groups that are not present in the new location. This would lead to a much higher variance in the starting dataset, and to under-dispersion in the target dataset, resulting in the potential under- or overestimation of the strength of newly predicted interactions. Llewelyn et al. (2022) provided compelling evidence for these situations by showing that, even at small spatial scales, the transfer of information about interactions becomes more challenging when areas rich with endemic species are considered.
The lack of well-documented metawebs is currently preventing the development of more concrete guidelines. The question of phylogenetic relatedness and distribution is notably relevant if the metaweb is assembled in an area with mostly endemic species (e.g. a system that has undergone recent radiation, or that has remained in isolation for a long period of time, might not have an analogous system from which to draw knowledge), and as with every predictive algorithm, there is room for the application of our best ecological judgement. Because this problem relates to the distribution of species in geographic or phylogenetic space, it can certainly be approached by assessing the performance of embedding transfer in simulated starting/target species pools.

| Identifying the scope of the prediction to perform
The area for which we seek to predict the metaweb should determine the species pool on which the embedding is performed.
Metawebs can be constructed by assigning interactions in a list of species within specific regions.The upside of this approach is that

BOX 2 Minding legacies shaping ecological datasets
In large parts of the world, boundaries that delineate geographic regions are a legacy of settler colonialism, which drives global disparity in the capacity to collect and publish ecological data. Applying any embedding to biased data does not debias them, but rather embeds these biases, propagating them to the models using embeddings to make predictions. Furthermore, the use of ecological data itself is not an apolitical act (Nost & Goldstein, 2021): data infrastructures tend to be designed to answer questions within national boundaries (therefore placing contingencies on what is available to be embedded), and their use often draws upon, and reinforces, territorial statecraft (see e.g. Barrett, 2005). As per Machen and Nost (2021), these biases are particularly important to consider when knowledge generated algorithmically is used to supplement or replace human decision-making, especially for governance (e.g. enacting conservation decisions on the basis of model predictions). As information on networks is increasingly leveraged for conservation actions (see e.g. Eero et al., 2021; Naman et al., 2022; Stier et al., 2017), the need to appraise and correct biases that are unwittingly propagated from the original data to algorithms is immense. These considerations are even more urgent in the specific context of biodiversity data. Long-term colonial legacies still shape taxonomic composition to this day (Lenzner et al., 2022; Raja, 2022), and much shorter-term changes in the taxonomic and genetic richness of wildlife have emerged through environmental racism (Schmidt & Garroway, 2022). Thus, the set of species found at a specific location is not only the result of ecological processes separate from human influence, but also the result of human-environment interactions and of legislative/political histories.
information relevant for the construction of this dataset is likely to exist, as countries usually set conservation goals at the national level (Buxton et al., 2021), and as quantitative instruments are consequently designed to work at these scales (Turak et al., 2017); specific strategies are often enacted at smaller scales, nested within a specific country (Ray et al., 2021). However, there is no guarantee that these arbitrary boundaries are meaningful. In fact, we do not have a satisfying answer to the question of 'where does an ecological network stop?', the answer to which would dictate the spatial span to embed/predict. Recent results by Martins et al. (2022)

| Putting models in their context
Predictive approaches in ecology, regardless of the scale at which they are deployed and the intent of their deployment, originate in the framework that contributed to the ongoing biodiversity crisis (Adam, 2014) and reinforced environmental injustice (Choudry, 2013; Domínguez & Luoma, 2020). The risk of embedding this legacy in our models is real, especially as the impact of this legacy on species pools is being increasingly documented. This problem can be addressed by re-framing the way we interact with models, especially when models are intended to support conservation actions. Particularly on territories that were traditionally stewarded by Indigenous people, we must interrogate how predictive approaches, and the biases that underpin them, can be put to task in accompanying Indigenous principles of land management (Eichhorn et al., 2019; No'kmaq et al., 2021). The discussion of 'algorithm-in-the-loop' approaches that is now pervasive in the machine learning community provides examples of why this is important. Human-algorithm interactions are notoriously difficult and can yield adverse effects (Green & Chen, 2019; Stevenson & Doleac, 2021), suggesting the need to systematically study them for the specific purpose of, here, biodiversity governance. Improving the algorithmic literacy of decision-makers is part of the solution (e.g. Fernandes et al., 2020; Lamba et al., 2019), as we can reasonably expect that model outputs will be increasingly used to drive policy decisions (Weiskopf et al., 2022). Our discussion of these approaches needs to go beyond the technical and statistical, and into the governance consequences they can have. To embed data is also to embed the historical and contemporary biases that acted on these data, both because they shaped the ecological processes generating them (see Box 2) and because of the global processes leading to their measurement and publication. For a domain as vast as species interaction networks, these biases exist at multiple scales along the way, and a challenge for prediction is not only to develop (or adopt) new quantitative tools, but also to assess the behaviour of these tools in their proper context.

| CONCLUSION
Although promising, the application of embeddings to metaweb prediction still involves several challenges. First, there is a need to understand how to define a metaweb as a single, cohesive unit of ecological organization. This is likely to have very different answers based on the specific taxonomic group, temporal and spatial resolution, and question being investigated. Second, there is a need to understand the scale at which these predictions are relevant.
Although we have documented many cases of using embeddings to fill gaps in the metaweb, these techniques can likely be brought into a spatial (and possibly temporal) context. The validation of these predictions will have to proceed jointly not only with the empirical sampling of interactions, but also with the design of downscaling methods. Finally, there is a need for a greater understanding of how biases in the data propagate to the predictions. Because the volume of available metawebs is currently low, and because graph embeddings have not been commonly applied, we anticipate that this discussion will take place organically in the coming years.
Graph embeddings have been applied to several ecological prediction problems (Chen, Xue, et al., 2017, species-environment interactions; …, 2016, trophic interactions; Poisot et al., 2021, host-virus network prediction). In brief, many graph embeddings can serve as dimensionality reduction steps, but not all of them do, and neither do all dimensionality reduction methods provide adequate graph embedding capacities. In the next section (and Figure 1), we show how the amount of dimensionality reduction can affect the quality of the embedding.
In panel (a), we show that the averaged L2 loss (i.e. the mean of squared errors) between the empirical and the reconstructed metaweb decreases as the number of dimensions (rank) of the subspace increases, with an inflection at 39 dimensions (out of 120 initially) according to the finite difference method. As discussed by Runghen et al. (2021), there is often a trade-off between the number of dimensions to use (more dimensions are more computationally demanding) and the quality of the representation. In panel (b), we show the increase in cumulative variance explained at each rank, and visualize that using 39 ranks explains about 70% of the variance in the empirical metaweb. This provides different information from the L2 loss (which is averaged across interactions), as it works on the eigenvalues of the embedding, and therefore captures higher-level features of the network. In panel (c), we show the positions of hosts and parasites on the first two dimensions of the left and right subspaces. Note that these values largely skew negative, because the first dimensions capture the coarse structure of the network: most pairs of species do not interact, and therefore have negative values. Finally, in panel (d), we show the predicted weight (i.e. the result of the multiplication of the RDPG subspaces at a rank of 39) as a function of the interaction status of the species pair in the metaweb.
This relates to the body size of hosts in the PanTHERIA database (Jones et al., 2009), as shown in panel (b): interestingly, the position on the first axis is only weakly correlated with the body mass of the host; this matches well-established results showing that body size/mass is not always a direct predictor of parasite richness in terrestrial mammals (Morand & Poulin, 1998), a result we observe in panel (c). Finally, in panel (d), we can see how different taxonomic families occupy different positions on the first axis, with, for example, Sciuridae being biased towards higher values. These results show how we can look for ecological information in the output of the embedding.
F I G U R E 2 Validation of an embedding for a host-parasite metaweb, using Random Dot Product Graphs. (a) Decrease in approximation error as the number of dimensions in the subspaces increases. (b) Increase in cumulative variance explained as the number of ranks considered increases; in (a and b), the dot represents the point of inflection in the curve (at rank 39) estimated using the finite differences method. (c) Position of hosts and parasites in the space of latent variables on the first and second dimensions of their respective subspaces (the results have been clamped to the unit interval). (d) Predicted interaction weight from the RDPG based on the status of the species pair in the metaweb. Source: Demonstration of metaweb embedding using RDPG.
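The two rank-selection diagnostics described above (averaged L2 loss and cumulative variance explained, with an elbow located by finite differences) can be sketched as follows. This is a minimal illustration on a synthetic metaweb; the second-order finite-difference elbow finder is one simple way to locate the inflection, not necessarily the exact procedure used for the figures.

```python
import numpy as np

# Synthetic binary metaweb (parasites x hosts), for illustration only.
rng = np.random.default_rng(1)
A = (rng.random((80, 60)) < 0.2).astype(float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Cumulative variance explained at each rank, from the singular values.
var_explained = np.cumsum(s ** 2) / np.sum(s ** 2)

# Averaged L2 loss between the empirical and reconstructed metaweb,
# for every possible rank of the truncated subspace.
losses = np.array([
    np.mean((A - U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]) ** 2)
    for k in range(1, s.size + 1)
])

# A crude elbow finder: the rank where the second-order finite
# difference of the loss curve is largest (sharpest bend).
elbow = int(np.argmax(np.diff(losses, n=2))) + 2
```

The loss curve is guaranteed to be non-increasing in the rank (Eckart-Young), while the variance-explained curve summarizes the spectrum itself; the two diagnostics can therefore disagree on where the "useful" number of dimensions lies.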

F I G U R E 3 Ecological analysis of an embedding for a host-parasite metaweb, using Random Dot Product Graphs. (a) Relationship between the number of parasites and position along the first axis of the right subspace for all hosts, showing that the embedding captures elements of network structure at the species scale. (b) Weak relationship between the body mass of hosts (in grams) and the position along the same dimension. (c) Weak relationship between body mass of hosts and parasite richness. (d) Distribution of positions along the same axis for hosts grouped by taxonomic family. Source: Demonstration of metaweb embedding using RDPG.
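The species-scale relationship in panel (a), between parasite richness and first-axis position, can be checked on any embedded metaweb. A minimal sketch on synthetic data (the per-host interaction probabilities and matrix dimensions are assumptions, chosen only to create degree heterogeneity):

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical metaweb where hosts (columns) differ in how many
# parasites (rows) they support.
p = rng.uniform(0.05, 0.5, size=40)           # per-host interaction probability
A = (rng.random((60, 40)) < p).astype(float)  # parasites x hosts

U, s, Vt = np.linalg.svd(A, full_matrices=False)
host_axis1 = Vt[0, :]              # host positions on the first dimension
parasite_richness = A.sum(axis=0)  # number of parasites per host

# Correlation between first-axis position and parasite richness
# (the sign of a singular vector is arbitrary, hence the absolute value).
corr = abs(np.corrcoef(host_axis1, parasite_richness)[0, 1])
```

Because the leading singular vector of a non-negative adjacency matrix tracks the degree sequence, `corr` is expected to be high here, which is the pattern panel (a) reports; traits such as body mass (panels b and c) need not show the same alignment.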