Bayesian Federated Inference for estimating Statistical Models based on Non-shared Multicenter Data sets

Identifying predictive factors for an outcome of interest via a multivariable analysis is often difficult when the data set is small. Combining data from different medical centers into a single (larger) database would alleviate this problem, but is in practice challenging due to regulatory and logistic problems. Federated Learning (FL) is a machine learning approach that aims to construct from local inferences in separate data centers what would have been inferred had the data sets been merged. It seeks to harvest the statistical power of larger data sets without actually creating them. The FL strategy is not always efficient and precise. Therefore, in this paper we refine and implement an alternative Bayesian Federated Inference (BFI) framework for multicenter data with the same aim as FL. The BFI framework is designed to cope with small data sets by inferring locally not only the optimal parameter values, but also additional features of the posterior parameter distribution, capturing information beyond what is used in FL. BFI has the additional benefit that a single inference cycle across the centers is sufficient, whereas FL needs multiple cycles. We quantify the performance of the proposed methodology on simulated and real life data.


Introduction
Medical data are only occasionally easily shared for research by those who hold them (companies, hospitals, research centers, universities, and consortia). The lack of sharing hampers accurate estimation and inference. Often, in existing data sets the number of patients is small compared to the number of covariates available for prediction, which leads to misidentification of prognostic factors, overfitting of the statistical model and, finally, to inaccurate predictions for future patients. This is often the case for rare diseases, for which typically only small data sets are available for research. To ease the problem, we must either create more effective mechanisms and incentives for data sharing between institutions, or focus on technology to integrate individual analysis outcomes obtained on localized data sets. If we go for the second route, the scientific problem is how to extract and combine information from different analyses on different non-overlapping and exhaustive subsets, to obtain the estimates that would have been found had the subsets been combined into a single data set.
Federated Learning (FL) is a machine learning approach, developed several years ago in the context of interacting mobile devices [1], in which local centers use local data for training machine learning systems on site by optimizing a pre-specified loss function, and only the estimated parameters of the trained systems are sent out for integration at a central server. The data, in contrast, stay at their owners' institutions. This procedure is 'cycled' around the centers iteratively; all centers update their estimates until a convergence criterion is met (see Figure 1, left plot). Typically, deep learning neural networks are employed, where the circulated parameters are strengths of interactions between nodes and activation thresholds of nodes. While FL performs excellently in image analysis [2,3,4], it also has limitations in application. First, in its standard form FL still requires relatively large data sets (given the large numbers of deep learning parameters). Second, FL does not generate rigorous error bars, since only the most probable parameter values are iterated (as opposed to posterior distributions). Third, it is not clear that this process will always converge to a satisfactory final state, since latent heterogeneity across centers may generate prohibitively conflicting gradients for the model parameters in the maximization procedure, leading to an end result that works for none of the local data sets.
In recent years much progress has been made in FL (e.g., improved optimization in each center, aggregation of the local models at the central server, and dealing with heterogeneity of the local populations [5,6,7]). An overview of the recent developments, including references, is given in Liu et al. [8]. Multiple researchers have proposed FL in a Bayesian setting for deep learning models [9,10]. In each local center the posterior distribution is estimated and communicated to the central server, where aggregation takes place. However, this Bayesian procedure turns out to be challenging, especially for deep learning models in which the parameter dimensionality is high. Proposals to address this include approximating the estimated local and global posterior distributions with MCMC [11,12], variational inference [13], or the use of multivariate Gaussian distributions with a Laplace approximation [9,10]. For a linear model (as an example) Al-Shedivat et al. [10] approximated the local posterior by a multivariate Gaussian. Under the assumption of a flat prior for the model parameters in the local centers, the global posterior distribution in the central server is multivariate Gaussian as well, and estimates of its parameters can be computed from the local sample means and covariance matrices. Although this is an interesting idea on paper, the authors themselves note that this way of estimating the global posterior distribution is not feasible in a general setting where models are neural networks with millions of parameters, as it requires estimating and computing the inverse of a d × d matrix, where d is the number of parameters.
In the field of statistical modelling (e.g., via generalized linear models) the challenges in FL are not about the computational burden due to the high dimensionality of the model parameters, but rather about the complexities of (medical) data, such as heterogeneity of populations, random or structural missingness of covariates, presence of confounding factors, overfitting, and the interpretation of the parameters in the estimated models. In this paper a Bayesian Federated Inference (BFI) framework is proposed and developed for arbitrary generalized linear regression models, in which the challenges described above are addressed. More specifically, the posterior distributions in the local centers and for the fictive combined data are approximated by multivariate Gaussian distributions around the maximum a posteriori (MAP) estimate. By choosing either a multivariate Gaussian or an uninformative distribution for the prior distribution, the outcome of the inference on the combined set can be expressed directly in terms of the outcomes of the separate inferences on the local data sets. With the proposed methodology there is no need to do inference on the full data set, as its inference results can be computed a posteriori from the inference results in the subsets in only one round, in contrast to traditional FL, where many iterative inference cycles across the centers are needed (see the plot on the right in Figure 1). The proposed BFI method is also accurate when the sample sizes are small compared to the dimension of the parameter space, by choosing a more informative prior to overcome overfitting of the model.

Figure 1: In all local centers a Bayesian analysis is performed and the results are sent to a server where the results of the local centers are aggregated. The word "Learning" in the name "Federated Learning" refers to the repeated cycling mechanism around the different centers. In contrast, for the proposed method one cycle is enough; inference is performed only once. We therefore chose to name the proposed method Bayesian Federated Inference rather than Bayesian Federated Learning.

This paper is organized as follows. In Section 2 the framework for the BFI strategy is explained in a general setting. Next, in Section 3 we focus on BFI for generalized linear models. Here we also address population heterogeneity and missing covariates. Simulation studies based on real life data have been performed to quantify the efficiency in statistical inference if the data are only locally available for analysis, and the impact of complexities related to heterogeneity and (relatively) small sample sizes in the local centers (Section 4). The paper ends with a discussion in Section 5.

Problem definition and setting
Suppose data from L different centers (e.g., hospitals) are available. Let the random variable Y_ℓi be the i-th observation from the ℓ-th center, where i = 1, . . ., n_ℓ, with n_ℓ defined as the number of observations in center ℓ. We assume that Y_ℓi, i = 1, . . ., n_ℓ, ℓ = 1, . . ., L, are stochastically independent and identically distributed. Specifically, we assume that Y_ℓi |θ is distributed with parametric density function y|θ → p(y|θ) that is known up to the parameter θ, which itself has a density function θ → p(θ). The realisation of Y_ℓi, i.e., the actual observation, is denoted as y_ℓi. The L data sets D_1, . . ., D_L and their union are defined as

D_ℓ = {y_ℓ1, . . ., y_ℓn_ℓ},   D = D_1 ∪ · · · ∪ D_L.

We imagine the scenario where the data from the L sets are prohibited from being combined into one data set D; inference can only be performed for the L sets separately, and only the inference results can be combined. In the next subsection we aim to express the outcome of inference on the combined set D in terms of the outcomes of the L separate inferences on the constituent sets D_ℓ.

Formulae for Bayesian subset inference
In a Bayesian analysis based on any data set D = {y_1, . . ., y_n} from a conditional density function y|θ → p(y|θ) with prior density θ → p(θ), the posterior density equals

p(θ|D) = Z(D)^{-1} p(θ) ∏_{i=1}^n p(y_i|θ),   (1)

where Z(D) denotes the normalizing constant of the posterior distribution. The maximum a posteriori (MAP) estimator [14] for θ is obtained by maximizing the function θ → p(θ|D) with respect to θ. Since the complete data set D is the union of L data sets from different centers, we can write the posterior density function as

p(θ|D) = Z(D)^{-1} p(θ) ∏_{ℓ=1}^L p(D_ℓ|θ).   (2)

The expressions in (1) hold for any data set, so they apply also to the sets D_ℓ, ℓ = 1, . . ., L. By rewriting the expression in (2) with D replaced by D_ℓ, and with the subset priors denoted by p_ℓ(θ), we find that

p(D_ℓ|θ) = Z(D_ℓ) p_ℓ(θ|D_ℓ) / p_ℓ(θ).

By substituting this expression into (2), we obtain

p(θ|D) = C p(θ) ∏_{ℓ=1}^L p_ℓ(θ|D_ℓ) / p_ℓ(θ),   (3)

for C a constant that does not depend on θ. The constant C can always be recovered via normalization. Hence we can, in a relatively simple way, express the posterior parameter density p(θ|D) in terms of the L local densities p_ℓ(θ|D_ℓ). The next question is under which conditions we can obtain accurate (possibly approximate) expressions for the MAP estimator θ̂ of θ (and its accuracy), which is based on data set D, from the L MAP estimators θ̂_ℓ, ℓ = 1, . . ., L (and their accuracy), which are solely inferred from the data subsets D_ℓ. This is the topic of the next subsection.
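Relation (3) can be verified numerically on a toy model. The sketch below is our illustration, not part of the paper: it uses a Bernoulli likelihood with Beta(2, 2) priors, identical in all centers, and checks on a grid that the global prior times the product of the local posteriors divided by the local priors reproduces, after normalization, the posterior of the combined data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid over theta and a Beta(2, 2) prior (up to a constant), used in all centers
theta = np.linspace(1e-4, 1 - 1e-4, 2001)
prior = theta * (1 - theta)

# Three "centers" with local Bernoulli data y_i in {0, 1}
centers = [rng.integers(0, 2, size=n) for n in (8, 15, 30)]

def posterior(data, prior_vals):
    """Posterior p(theta|D) on the grid, normalized to sum to one."""
    s, n = data.sum(), len(data)
    post = theta**s * (1 - theta)**(n - s) * prior_vals
    return post / post.sum()

# Left-hand side of (3): posterior of the (fictive) combined data set
post_combined = posterior(np.concatenate(centers), prior)

# Right-hand side of (3): p(theta) * prod_l p_l(theta|D_l) / p_l(theta);
# normalizing at the end absorbs the constant C
rhs = prior.copy()
for d in centers:
    rhs *= posterior(d, prior) / prior
post_bfi = rhs / rhs.sum()
```

After normalization the two grid functions agree up to floating point error, since (3) is an exact identity, not an approximation.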

Bayesian Federated Inference
If the distribution of the data comes from an exponential family, BFI is straightforward; the MAP estimator based on the data D can be obtained directly from summary statistics in the L subsets. Once models become more complex, approximating the posterior density p(θ|D) by a multivariate Gaussian density may be needed. The Bernstein-von Mises theorem states that, under certain regularity conditions and for sufficiently large sample size n, the posterior distribution can be approximated well by a multivariate Gaussian distribution centered at the maximum likelihood estimator of θ [15]. In practice, the sample sizes in the centers are finite and may even be small compared to the number of covariates. We therefore center around the MAP estimator. We approximate the logarithm of the posterior density (given the combined data set D) by a Taylor expansion up to quadratic order in θ around the MAP estimator θ̂, which is based on the combined data set D:

log{p(θ|D)} = log{p(θ̂|D)} − ½ (θ − θ̂)^t Â (θ − θ̂) + O_p(∥θ − θ̂∥³),   (4)

where Â is equal to minus the curvature matrix of log{p(θ|D)} at θ̂, i.e., the matrix of minus the second order derivatives with respect to θ, evaluated at θ̂. The last term in (4) is equal to ∥θ − θ̂∥³ times a term O_p(1), where O_p(1) represents a term that is bounded in probability as the sample size tends to infinity [15]. Note that there is no linear term in the Taylor expansion (4), because the gradient of log{p(θ|D)} at θ̂ equals zero by definition (as θ̂ is the MAP estimator). From (4) it follows that, seen as a function of θ,

p(θ|D) ≈ p(θ̂|D) exp{−½ (θ − θ̂)^t Â (θ − θ̂)}.

For θ in a sufficiently small neighbourhood of θ̂, the term O_p(∥θ − θ̂∥³) will be close to zero compared to the quadratic term. Then, the posterior density in a small neighbourhood of θ̂ can be approximated by a Gaussian density:

p(θ|D) ≈ ( det(Â)/(2π)^d )^{1/2} exp{−½ (θ − θ̂)^t Â (θ − θ̂)},   (5)

where d = dim(θ) and det(Â) denotes the determinant of the matrix Â.
In the case of high dimensional models this approximation may not be sufficiently accurate, and a higher order Taylor expansion might be needed. This is discussed in Section 5. In a similar way, each of the L posterior distributions for the subsets ℓ = 1, . . ., L is approximately Gaussian around θ̂_ℓ (the MAP estimator based on data set D_ℓ), with covariance matrix equal to the inverse of Â_ℓ. By substituting expression (5) for p(θ|D), and the equivalent expressions for p_ℓ(θ|D_ℓ), ℓ = 1, . . ., L, into relation (3), we obtain a relation between the Gaussian approximations. If the prior densities θ → p(θ) and θ → p_ℓ(θ) are chosen to be Gaussian, with mean zero and covariance matrices Λ^{-1} and Λ_ℓ^{-1}, taking the logarithm on both sides yields, when viewed as a function of θ:

−½ (θ − θ̂)^t Â (θ − θ̂) = B − ½ θ^t Λ θ + Σ_{ℓ=1}^L { −½ (θ − θ̂_ℓ)^t Â_ℓ (θ − θ̂_ℓ) + ½ θ^t Λ_ℓ θ },   (6)

with B representing a term that is a function of the data, but does not depend on θ. The expressions on both sides are now considered as functions of θ. Both sides are quadratic functions of θ, and must have the same expansion coefficients for the linear and quadratic terms in θ. This implies that we can find the estimators θ̂ and Â by solving the equations:

quadratic terms:  Â = Λ + Σ_{ℓ=1}^L (Â_ℓ − Λ_ℓ),    linear terms:  Â θ̂ = Σ_{ℓ=1}^L Â_ℓ θ̂_ℓ.

Solving these equations, by using the fact that they must hold for any value of θ (in a neighbourhood of θ̂), yields:

θ̂ = Â^{-1} Σ_{ℓ=1}^L Â_ℓ θ̂_ℓ,   Â = Λ + Σ_{ℓ=1}^L (Â_ℓ − Λ_ℓ).   (7)

The expression for Â follows from the equality of the quadratic terms in θ, and the expression for θ̂ from the equality of the linear terms. There is no need to do inference on the combined data set D to find (an approximation of) the estimator (θ̂, Â) from the combined set; we can compute approximations a posteriori from the inference results on the subsets. The MAP estimators θ̂_ℓ and the matrices Â_ℓ are found analytically or numerically, depending on the form of the likelihood functions in the local centers. The matrix A is positive definite by definition, so its approximation in (7) is as well (if the approximation is sufficiently accurate) and, thus, is invertible. By its definition as minus the matrix of second derivatives of log{p_ℓ(θ|D_ℓ)} at θ̂_ℓ, the matrix Â_ℓ equals Λ_ℓ plus minus the curvature of the local log-likelihood at θ̂_ℓ. Inserting this expression for Â_ℓ into the expression for Â in (7) gives

Â = Λ + Σ_{ℓ=1}^L Ĥ_ℓ(θ̂_ℓ),

where Ĥ_ℓ(θ̂_ℓ) denotes minus the matrix of second derivatives of the log-likelihood in center ℓ, evaluated at θ̂_ℓ, so that the dependence on Λ_ℓ is via the MAP estimator θ̂_ℓ only. If θ̂_1 = . . . = θ̂_L, the expression for Â equals minus the second derivative of log{p(θ|D)}, the log posterior distribution after combining all data.
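For a linear model with Gaussian noise and Gaussian priors the posterior is exactly Gaussian, so the quadratic expansion is exact and the formulae in (7) reproduce the combined-data MAP estimator without approximation error. The sketch below is our illustration of this; the sample sizes, σ² = 1 and λ = 0.1 are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, sizes = 4, (20, 35, 50)          # parameter dimension, local sample sizes
Lam = 0.1 * np.eye(d)               # global prior precision Λ
Lam_l = 0.1 * np.eye(d)             # local prior precisions Λ_ℓ (identical here)

# Simulate linear-Gaussian data in each center; noise variance is 1.
beta_true = rng.normal(size=d)
centers = []
for n in sizes:
    X = rng.normal(size=(n, d))
    y = X @ beta_true + rng.normal(size=n)
    centers.append((X, y))

# Local Bayesian fits: for this model the posterior is exactly Gaussian,
# with precision matrix A_ℓ and MAP estimator theta_ℓ.
local = []
for X, y in centers:
    A_l = X.T @ X + Lam_l
    theta_l = np.linalg.solve(A_l, X.T @ y)
    local.append((theta_l, A_l))

# BFI aggregation, equation (7)
A = Lam + sum(A_l - Lam_l for _, A_l in local)
theta = np.linalg.solve(A, sum(A_l @ t_l for t_l, A_l in local))

# Reference: MAP fit on the (fictive) combined data set
X_all = np.vstack([X for X, _ in centers])
y_all = np.concatenate([y for _, y in centers])
A_comb = X_all.T @ X_all + Lam
theta_comb = np.linalg.solve(A_comb, X_all.T @ y_all)
```

In this conjugate setting `theta` and `A` coincide with `theta_comb` and `A_comb` up to floating point error; for non-Gaussian likelihoods (7) is only an approximation.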
Even if one only wishes to do MAP inference, one would still need the matrix Â to compute error bars or regions for the MAP estimator θ̂. More specifically, for the k-th element of θ its approximate (1 − 2α)100% credible interval equals θ̂_k ± ξ_α √((Â^{-1})_{k,k}), for ξ_α the upper α-quantile of the standard Gaussian distribution. Hypothesis testing is also straightforward by the asymptotic normality of θ̂.
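Such error bars are straightforward to compute once (θ̂, Â) is available. A minimal sketch, with made-up toy values for θ̂ and Â (not taken from the paper):

```python
import numpy as np
from scipy.stats import norm

# Toy BFI output (illustrative values only)
theta_hat = np.array([0.8, -0.3, 1.2])
A = np.array([[40.0, 5.0, 0.0],
              [5.0, 30.0, 2.0],
              [0.0, 2.0, 25.0]])        # minus curvature of the log posterior

alpha = 0.025                            # (1 - 2*alpha) = 95% intervals
xi = norm.ppf(1 - alpha)                 # upper alpha-quantile of N(0, 1)
sd = np.sqrt(np.diag(np.linalg.inv(A)))  # approximate posterior sds

lower, upper = theta_hat - xi * sd, theta_hat + xi * sd
```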
The formulae in (7) do not say anything about the plausibility of the subsets describing similar subpopulations. However, once we have computed the estimates (θ̂, Â), we should find that θ̂ is compatible with each 'local' estimate θ̂_ℓ, given the error bars coded in the matrices Â and Â_ℓ. One way to address this is to consider the coordinate-wise approximate (1 − 2α)100% credible intervals for the difference between the true θ-value in all subsets except subset ℓ and the true parameter value in subset ℓ:

(θ̂_{−ℓ} − θ̂_ℓ)_k ± ξ_α √( (Â_{−ℓ}^{-1})_{k,k} + (Â_ℓ^{-1})_{k,k} ),

where the subscript −ℓ indicates that the BFI estimator excludes the estimator from subset ℓ.

Nuisance parameters differ across centers
Suppose that the vector θ can be decomposed as θ^t = (θ_a^t, θ_b^t), where θ_a denotes the core parameters of interest, of dimension d_1, and θ_b the nuisance parameters, of dimension d_2, with d_1 + d_2 = d. The parameter vector of interest θ_a is assumed to be equal in the L different sub-populations, but θ_b may vary across the sub-populations. Let θ_{b,ℓ} be the vector of nuisance parameters in population ℓ. Then, the parameter vector in the model for the combined data set is equal to θ^t = (θ_a^t; θ_{b,1}^t, . . ., θ_{b,L}^t), and is of dimension d_1 + L d_2. Let θ̂_{b,ℓ} be the MAP estimate of θ_{b,ℓ} based solely on the data in the ℓ-th subset, and θ̃_{b,ℓ} the MAP estimate based on the full data set. We choose simple priors of the form p(θ) = p(θ_a) ∏_{ℓ=1}^L p(θ_{b,ℓ}) for the combined data set, and p_ℓ(θ_a, θ_{b,ℓ}) = p_ℓ(θ_a) p_ℓ(θ_{b,ℓ}) for subset ℓ, and hence, like in (3),

p(θ|D) = C p(θ_a) ∏_{ℓ=1}^L p(θ_{b,ℓ}) p_ℓ(θ_a, θ_{b,ℓ}|D_ℓ) / { p_ℓ(θ_a) p_ℓ(θ_{b,ℓ}) }.   (8)

The quadratic Taylor approximations of the log posterior densities log{p(θ|D)} and log{p_ℓ(θ_a, θ_{b,ℓ}|D_ℓ)}, for θ in a neighbourhood of θ̂ = (θ̂_a, θ̃_{b,1}, . . ., θ̃_{b,L}) and of (θ̂_{a,ℓ}, θ̂_{b,ℓ}) respectively, take the same form as in (4), where we leave out the third and higher order terms, and where Â_{b,ℓ} and Â_{ab,ℓ} denote the matrices of minus the second derivatives of log{p(θ|D)} with respect to the components of θ_{b,ℓ}, and with respect to both θ_a and θ_{b,ℓ}, respectively. As before, we assume the usual zero-mean Gaussian priors for θ_a, and zero-mean Gaussian priors for θ_{b,ℓ} with inverse covariance matrices Λ_{b,ℓ} in p_ℓ(θ_{b,ℓ}) and Λ_b in p(θ_{b,ℓ}) for every ℓ, so independent of ℓ (this assumption is not required, but makes the notation simpler). Insertion of the above approximations for the log posterior densities and the prior densities of the parameters into (8) gives an equation of the same structure as (6), with B representing a term that does not depend on θ. Both sides in this equation are quadratic functions of θ, and the equation must hold for all θ in a neighborhood of θ̂. We must therefore equate the coefficients of the specific linear and quadratic terms on either side. Identification of the quadratic terms in θ leads to equations for the matrices Â_a and Â_{b,ℓ}, given in (9), whereas identification of the linear terms in θ gives two equations for the estimators. These equations can again be solved, giving formulae for the estimators θ̂_a and θ̃_{b,ℓ} that would be obtained based on the combined data, in terms of the estimators in the separate centers. For the core parameters one finds an explicit expression for θ̂_a, with 1I the d_2 × d_2 unit matrix and the expressions for the matrices Â_a and Â_{b,ℓ} as given in (9). If required, the full set estimators θ̃_{b,ℓ} for the nuisance parameters can subsequently be calculated from the linear-term equations. We conclude that also if nuisance parameters differ across centers, there is no need to combine the data subsets; we can approximate all relevant full set estimates a posteriori from the estimates obtained from the subsets. The above equations simplify if we choose identical nuisance parameter priors throughout, i.e., Λ_{b,ℓ} = Λ_b for all ℓ; in that case the two curvature matrices for θ_{b,ℓ} coincide and the expressions simplify accordingly. In Section 4 an example is given in which the data in three centers follow a logistic regression model, in which the center specific intercepts are the nuisance parameters and the remaining regression parameters are the parameters of interest. This results in an aggregated model that contains an intercept for each center population.
Bayesian regression and subset inference

Bayesian regression
The most relevant problem areas in medicine are those where each data point is a pair (X_ℓi, Y_ℓi) of an input vector X_ℓi (the covariates) and an output value Y_ℓi (e.g., clinical outcome or treatment response). A parametrized model relates X_ℓi to Y_ℓi, and the available data are used to infer the parameters of this model. In this section the BFI methodology described in Section 2 is applied to parametric regression models, and associated challenges, such as heterogeneity across the populations, are addressed. Suppose that Y_ℓi |(X_ℓi = x_ℓi, θ_1) and X_ℓi |θ_2 have densities y|x, θ_1 → p(y|x, θ_1) and x|θ_2 → p(x|θ_2), respectively, so that for θ = (θ_1, θ_2) the pair (X_ℓi, Y_ℓi) has density p(y|x, θ_1) p(x|θ_2). An example is the class of generalized linear models (GLMs). In GLMs the outcome variable Y_ℓi is related to the covariates X_ℓi only via the linear predictor β^t X_ℓi. By setting the first covariate equal to 1, an intercept can be included trivially in the model. Specifically, for h the link function,

h( E(Y_ℓi | X_ℓi) ) = β^t X_ℓi,

where β is the vector of unknown regression parameters and η a vector of unknown nuisance parameters (e.g., the variance of the outcome variable in a linear regression model). For this GLM the parameter θ_1 is defined as θ_1 = (β, η). Let D_ℓ be the ℓ-th data subset, and D the union of the L subsets: D_ℓ = {(x_ℓi, y_ℓi) : i = 1, . . ., n_ℓ} and D = D_1 ∪ · · · ∪ D_L. As in the previous section, we aim to express the outcome of inference on the parameters in the combined set D in terms of the outcomes of L separate inferences on the constituent sets D_ℓ, ℓ = 1, . . ., L.
For mathematical simplicity in subsequent expressions we assume statistically independent θ_1 and θ_2, i.e., p(θ_1, θ_2) = p(θ_1)p(θ_2) and p_ℓ(θ_1, θ_2) = p_ℓ(θ_1)p_ℓ(θ_2) for all ℓ. For the combined data set D and for the subsets D_ℓ, the log posterior density then decomposes into terms that depend on either θ_1 or θ_2 (or on neither), and maximization with respect to θ_1 and θ_2 to obtain their MAP estimators can be performed independently. Similarly as in (3), the link between the log posterior density for the combined data set and for the subsets takes the form

log{p(θ|D)} = log C + log{p(θ)} + Σ_{ℓ=1}^L [ log{p_ℓ(θ|D_ℓ)} − log{p_ℓ(θ)} ].   (12)

Again the log posterior densities log{p(θ|D)} and log{p_ℓ(θ|D_ℓ)} can, for sufficiently large n, be approximated by quadratic expansions for θ near the MAP estimators θ̂ and θ̂_ℓ, leading to Gaussian approximations for the posterior densities themselves.
Since the log posterior densities can both be decomposed into terms dependent on either θ_1 or θ_2 (there are no mixed terms), the matrices Â and Â_ℓ are block diagonal matrices of the form

Â = diag(Â_1, Â_2),   Â_ℓ = diag(Â_{1,ℓ}, Â_{2,ℓ}),

in which the blocks {Â_1, Â_{1,ℓ}} and {Â_2, Â_{2,ℓ}} represent the minus curvature matrices for θ_1 and θ_2, respectively. The quadratic approximation of the log posterior density for p(θ|D) is

log{p(θ|D)} ≈ log{p(θ̂|D)} − ½ (θ_1 − θ̂_1)^t Â_1 (θ_1 − θ̂_1) − ½ (θ_2 − θ̂_2)^t Â_2 (θ_2 − θ̂_2),

and a similar expression holds for p_ℓ(θ|D_ℓ). If all prior parameter densities are chosen to be Gaussian with mean zero and inverse covariance matrices Λ_1 and Λ_{1,ℓ} for θ_1, and Λ_2 and Λ_{2,ℓ} for θ_2, then insertion of these quadratic approximations into (12) gives, for k = 1, 2,

−½ (θ_k − θ̂_k)^t Â_k (θ_k − θ̂_k) = B_k − ½ θ_k^t Λ_k θ_k + Σ_{ℓ=1}^L { −½ (θ_k − θ̂_{k,ℓ})^t Â_{k,ℓ} (θ_k − θ̂_{k,ℓ}) + ½ θ_k^t Λ_{k,ℓ} θ_k },

with B_1 and B_2 not dependent on θ. Expressions for θ̂_1, θ̂_2, Â_1 and Â_2 in terms of their subset analogues can now be extracted similarly as in (7), giving

Â_k = Λ_k + Σ_{ℓ=1}^L (Â_{k,ℓ} − Λ_{k,ℓ}),   θ̂_k = Â_k^{-1} Σ_{ℓ=1}^L Â_{k,ℓ} θ̂_{k,ℓ},

with (13) denoting the pair of expressions for k = 1 and (14) the pair for k = 2. From these formulae one can see that the distribution of the covariates does not affect the estimator (θ̂_1, Â_1); i.e., it does not affect the estimator of the regression parameters. If independence between θ_1 and θ_2 cannot be assumed, the matrices Â and Â_ℓ are not necessarily block matrices and the expressions in (13) and (14) do not hold anymore. The general theory in Section 2 is still applicable, and expressions for θ̂ and Â can be obtained from (7).
It is well known that there is a direct link between Bayesian inference for regression models and penalized regression. If one chooses a Gaussian prior with a diagonal inverse covariance matrix Λ with all diagonal elements equal to λ, the MAP estimator for θ is identical to the ridge penalized ML estimator for θ with regularizer λ [16]. The higher the value of λ, the stronger the penalization, or the smaller the variance of the prior distribution. When the sample size in one or more local subsets is small compared to the number of covariates in the model, the model in that particular center may be overfitted. Increasing the value of the regularizer λ_ℓ (i.e., of the inverse variance of the prior in subset ℓ) can prevent overfitting in the local models and increase the inference accuracy. This is illustrated in Section 4. Note that in most ridge regression analyses only the regression (or association) parameters are penalized, not the intercept and the nuisance parameters. However, by assuming flat priors for the intercept and the nuisance parameters, the two approaches are similar.
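The correspondence with ridge regression can be made concrete for a linear model. The sketch below is our illustration (n, d and λ are arbitrary): the MAP estimator under a N(0, λ⁻¹I) prior with unit noise variance coincides with the ridge solution, and is shrunken relative to the unpenalized least squares fit.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, lam = 15, 10, 5.0            # few observations relative to d
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(size=n)

# MAP estimator under a N(0, lam^{-1} I) prior and unit-variance Gaussian
# noise: solves the normal equations (X^t X + lam I) beta = X^t y
beta_map = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# The same solution obtained as a ridge (penalized least squares) fit,
# via the augmented system || (y, 0) - (X; sqrt(lam) I) beta ||^2
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(d)])
y_aug = np.concatenate([y, np.zeros(d)])
beta_ridge, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)

# Unpenalized least squares, for comparing the amount of shrinkage
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```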

Heterogeneity across populations
So far we have assumed that the distributions of the covariates and the conditional distributions of the outcome given the covariates are the same in all centers. In practice this means that we assumed that the subsets were sampled from statistically identical populations. This assumption might be violated, for instance if the different centers represent different nationalities or hospitals. In that case, the L subsets are sampled from structurally different populations, and the distributions of the covariates and/or of the (conditional) outcomes may well differ across centers. If we assume that the regression parameters θ_1 do not vary across centers, but that the parameters of the covariate distributions, θ_2, may do so, and choose simple priors of the form p(θ) = p(θ_1) ∏_{ℓ=1}^L p(θ_{2,ℓ}), then the same calculations as before yield equation (13), derived earlier, for all estimators related to θ_1. Estimation of θ_1 is hence not affected by having subset-specific parameters θ_2. Equation (14), in contrast, is now replaced by its center-specific analogue, in which θ̃_{2,ℓ}, the MAP estimator of θ_{2,ℓ} found via inference on the full set D, is expressed in terms of the local estimate θ̂_{2,ℓ}, and Ã_{2,ℓ} denotes the matrix of minus the second derivatives of log{p(θ|D)} with respect to θ_{2,ℓ}. By comparing the local estimators θ̂_{2,ℓ} and considering the credible intervals as described in Subsection 2.3, the assumption of identical local populations may be verified.

Sets of covariates differ across centers
The actual set of available covariates may differ across centers, for instance because measuring and registering some medical patient characteristics is, or was, not part of the standard care in one or more centers. As a consequence, some local data sets may not contain data on some covariates. Reconstructing the true (missing) observations is not always possible, because measurements may be time dependent (e.g., blood pressure), the patient may have died, and privacy legislation sometimes makes retrieval impossible, and certainly time-consuming.
In general, if an individual covariate value is missing, imputation methods can be applied. Common methods are single and multiple regression imputation [17]: a value for a missing covariate is predicted by a regression model that was fitted based on the observed covariates and outcome values. If a covariate is completely missing in one of the local data sets, a similar strategy can be applied. A prediction model for the unobserved covariate is fitted in the centers in which the covariate is measured; the local models are combined by means of the BFI strategy (according to the formulae above) to obtain a single regression model, which is subsequently used to predict and impute the covariate values in the centers where the covariate was not measured. After imputation, the final BFI model can be constructed as before. This strategy only works if the joint distributions of the covariates are identical across centers.
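A minimal sketch of this strategy for a continuous covariate follows; it is our illustration, not the paper's implementation, and the linear imputation model, the simulated effect sizes, the sample sizes and λ = 0.1 are all assumptions. Two centers observe covariate x3; a third never measured it, so the two local imputation models are combined with (7) and then used to impute x3 in the third center.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 0.1   # shared prior precision for the imputation model (illustrative)

def local_fit(Z, t, lam):
    """Local Gaussian MAP fit of target t on design Z: returns (theta_l, A_l)."""
    A_l = Z.T @ Z + lam * np.eye(Z.shape[1])
    return np.linalg.solve(A_l, Z.T @ t), A_l

def make_center(n):
    """Simulated center: covariates (x1, x2, x3) and outcome y."""
    x12 = rng.normal(size=(n, 2))
    x3 = 0.5 * x12[:, 0] - 0.8 * x12[:, 1] + 0.3 * rng.normal(size=n)
    y = x12 @ np.array([1.0, -1.0]) + 2.0 * x3 + rng.normal(size=n)
    return x12, x3, y

centers = [make_center(n) for n in (40, 60, 50)]   # center 3 "lacks" x3

# Step 1: in centers where x3 is observed, regress x3 on (x1, x2, y)
fits = []
for x12, x3, y in centers[:2]:
    Z = np.column_stack([x12, y])
    fits.append(local_fit(Z, x3, lam))

# Step 2: combine the local imputation models with the BFI formulae (7),
# here with identical priors Λ = Λ_ℓ = lam * I
A = lam * np.eye(3) + sum(A_l - lam * np.eye(3) for _, A_l in fits)
gamma = np.linalg.solve(A, sum(A_l @ g for g, A_l in fits))

# Step 3: impute x3 in center 3 from its observed (x1, x2, y)
x12_3, x3_true, y_3 = centers[2]
x3_imputed = np.column_stack([x12_3, y_3]) @ gamma
```

After imputation, the outcome model itself would be fitted and aggregated as before.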

Study aims
In this section we describe the results of several simulation studies aimed at quantifying the performance of the BFI methodology.More specifically, we study the agreement between the inference results of the BFI strategy and those obtained based on the combined data of the L subsets.
Regression models are employed for multiple purposes, e.g., for predicting outcomes of new patients or subjects, or for studying the association between a factor and the outcome. In association studies, the estimated regression parameters are of main interest, whereas in prediction studies the focus is on outcome prediction. If the estimates of the regression parameters obtained with the BFI methodology and those based on the combined data are close, predictions based on the two models will be very similar as well. However, if the covariates are correlated, similar predictions may also be obtained even if the regression parameters differ. Therefore, we are interested in prediction agreement as well.
We use an existing real life data set for illustrating the BFI methodology. The data sets come from three locations and have different sample sizes. The outcome of interest is binary and, thus, a logistic regression model is fitted. Data on four covariates are available to predict the outcome. In these three data sets the number of events per covariate (EPV) is below or approximately equal to 10. To avoid overfitting, a minimum EPV of 10 is often advised.
We have access to the data in all three locations, and from that perspective this example can be seen as a toy example to illustrate the methodology; it is not meant for deriving any practical conclusions. Based on the data, we aim to study the performance of the BFI method in the following settings:

1. Homogeneity across populations (local centers): the parameters across the three locations are equal.

2. Small sample sizes in local centers: the sample sizes in the subsets are small compared to the number of covariates in the model.

Real life data
Our data set consists of data of trauma patients from different hospitals. The outcome variable of interest is mortality (binary), and the covariates are age, sex, the Injury Severity Score (ISS, ranging from 1 (low) to 75 (high)), and the Glasgow Coma Scale (GCS, which expresses the level of consciousness, ranging from 3 (low) to 15 (high)). We have data from three (types of) medical centers: a peripheral hospital without a neuro-surgical unit (NSU), a peripheral hospital with NSU, and an academic medical center [18]. The sample sizes in the three centers are 49, 106 and 216 patients, respectively. Because each subset comes from a specific hospital type, the patient populations and the characteristics of the patients' traumas may be different, i.e., the distributions of the covariates and the outcome variable may vary across the centers. This is certainly the case for the mortality rate: 43% in the peripheral hospital without NSU, 40% in the peripheral hospital with NSU, and 22% in the academic medical center. Also the median ISS differs, from 29 in the academic hospital to 41 in the peripheral hospital without NSU. The median age, the percentage of females and the median GCS are similar across the centers (they equal 29 years, 27% females and 12 for GCS in the combined data set). Even if the populations differ, the regression parameters of the covariates may still be equal. More details on the data are given in Appendix A.
We assume that the data in the combined and in the local data sets come from Bayesian logistic regression models (with possibly different parameters). Let Y_ℓi be a stochastic variable that indicates whether patient i from hospital ℓ died (Y_ℓi = 1) or not (Y_ℓi = 0), and let X_ℓi be a vector with the covariates for this patient: 1 (for the intercept), age, sex (0: male, 1: female), ISS, and GCS. We assume that, for a patient, the probability that Y_ℓi = 1, given his covariates and given the vector β, equals

P(Y_ℓi = 1 | X_ℓi = x_ℓi, β) = exp(β^t x_ℓi) / {1 + exp(β^t x_ℓi)},

with the prior distribution β ∼ N_5(0, Λ^{-1}), a 5 dimensional Gaussian distribution with mean zero and a diagonal covariance matrix.
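The local MAP estimation in this model can be sketched as follows. This is our illustration with simulated data, not the trauma data; the true parameter values, the sample size and λ = 0.1 are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n, d, lam = 200, 5, 0.1   # 5 parameters: intercept + 4 normalized covariates

X = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
beta_true = np.array([-0.5, 0.8, -0.2, 1.0, -1.2])   # illustrative values
p = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, p)

def neg_log_posterior(beta):
    """Minus log-likelihood of logistic regression minus the N(0, lam^{-1} I)
    log-prior; minimizing this gives the MAP estimator."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    return -(loglik - 0.5 * lam * beta @ beta)

res = minimize(neg_log_posterior, np.zeros(d), method="BFGS")
beta_map = res.x
```

The matrix Â_ℓ needed for the aggregation step is minus the Hessian of the log posterior at `beta_map`, which BFGS approximates as a by-product, or which can be computed in closed form for the logistic model.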
We combined the data, normalized every covariate (subtracted the sample mean and divided by the sample standard deviation of the full data set), and fitted a Bayesian logistic regression model to the combined data set. The diagonal elements of the covariance matrix were set equal to 10 (the diagonal elements of Λ = Λ_ℓ equal λ = 0.1). The MAP estimates of the regression parameters, with their estimated standard deviations, for the combined set and for the three different subsets are given in Table 1. One should always be careful when interpreting the estimates of the regression parameters in a multivariable model, as the covariates may be correlated. However, from the estimates in the centers, as well as those obtained after combining the data, we can cautiously conclude that the mortality risk increases with age and ISS, but decreases with increasing GCS, which seems reasonable. The regression coefficient for sex is negative, but the corresponding standard deviation is relatively large.

Simulation procedure
In this subsection we study the performance of the BFI methodology for the case where the data in the subsets come from populations with identical characteristics, i.e., with identical covariate distributions and identical true parameter values (including those of the regression parameters). Since this does not seem to hold in our data set (see Table 4), we first randomly assign each patient to one of the three subsets, keeping the sample sizes of the three subsets fixed. After this randomization we can indeed assume that the three patient groups have the same covariate distributions and regression parameters. We then perform regression analyses at subset level and at full data set level, and quantify the agreement between the results obtained on the full combined data set versus those obtained via the BFI protocol for integrating the outcomes of the subset regressions. This procedure is repeated multiple times. Below we describe the quantities used for measuring agreement. For the prediction comparisons some extra steps are taken to ensure that estimation and prediction are not based on the same data. In every cycle and every subset, 10% of the patients are randomly left out of the estimation procedure. After obtaining the estimates of the regression coefficients with the BFI strategy, mortality probabilities are predicted for the 10% of patients that were left out (based on their covariates and the estimated regression parameters). More details are given below.

Quantities to measure agreement
The outcome variable is binary (mortality) and, therefore, a logistic regression model is fitted. The parameter of interest is the vector of regression parameters β (including the intercept). For ease of interpretation of the regression parameters and of the choice of the prior distribution, we first normalize every covariate: from each covariate value in the data set we subtract the sample mean of that covariate in the combined data set, and divide by its sample standard deviation (computed in the combined data set). Note that this sample mean and standard deviation in the combined data set can be computed without combining the local data, because they are functions only of the first and second sample moments in every local subset (and of the local sample sizes). Therefore, the local data themselves do not need to be combined, and standardization is also possible in practice.
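This federated standardization argument can be made concrete: each center shares only its sample size, the sum of the covariate values and the sum of their squares, from which the combined-data mean and standard deviation follow. A sketch (all names ours):

```python
import numpy as np

def pooled_mean_sd(local_stats):
    """Combined-data mean and sample standard deviation of a covariate
    from per-centre summaries only: each centre reports the triple
    (n, sum of x, sum of x^2). No individual-level data leave the centres."""
    n = sum(s[0] for s in local_stats)
    s1 = sum(s[1] for s in local_stats)
    s2 = sum(s[2] for s in local_stats)
    mean = s1 / n
    # sample variance of the pooled data (denominator n - 1)
    var = (s2 - n * mean ** 2) / (n - 1)
    return mean, np.sqrt(var)

# check against direct pooling on simulated local data
rng = np.random.default_rng(2)
centres = [rng.normal(5.0, 2.0, size=m) for m in (216, 105, 49)]
stats = [(len(x), x.sum(), (x ** 2).sum()) for x in centres]
mean, sd = pooled_mean_sd(stats)
pooled = np.concatenate(centres)
```

The result coincides with the mean and standard deviation computed after actually merging the data, which is exactly why standardization does not require data sharing.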
Let (β̂_b, Â_b) and (β̂_c, Â_c) denote the estimates of (β, A) based on the BFI strategy and those found after combining all data (the subscript b stands for BFI, c for combined data). To quantify the agreement between the regression estimates, the mean squared error (MSE) is computed for every regression coefficient:
\[
\mathrm{MSE}_{\beta_k} = \frac{1}{M} \sum_{m=1}^{M} \Big( \hat{\beta}_{b,k}^{(m)} - \hat{\beta}_{c,k} \Big)^2 .
\]
Here k = 1, . . ., d labels the d components of the vector β, and the superscript m labels the M independent randomization cycles of the patients over the three subsets. Note that the values β̂_{c,k} are the same in every cycle (i.e., independent of m), since this estimate is based on the combined data. A small value of MSE_{β_k} means that there is hardly any loss when computing the MAP estimates with the BFI method (i.e., without combining all the data). Furthermore, the MSE is calculated for the square root of every diagonal element of minus the inverse of the curvature matrix (representing estimators of the expected errors in the regression coefficients):
\[
\mathrm{MSE}_{s_k} = \frac{1}{M} \sum_{m=1}^{M} \Big( \hat{s}_{b,k}^{(m)} - \hat{s}_{c,k} \Big)^2 , \qquad \hat{s}_{k} = \sqrt{\big(\hat{A}^{-1}\big)_{kk}} ,
\]
with Â denoting minus the curvature matrix. For testing the accuracy of BFI's patient outcome predictions we use similar quantities. For the 10% of individuals that were left out of the estimation part, the prediction probabilities are estimated. In each cycle m, for an individual i with covariate vector x_i from center ℓ, these probabilities are computed as
\[
\hat{p}_{b,i}^{(m)} = \frac{1}{1 + \exp\big( -x_i^{\top} \hat{\beta}_{b}^{(m)} \big)} , \qquad
\hat{p}_{c,i}^{(m)} = \frac{1}{1 + \exp\big( -x_i^{\top} \hat{\beta}_{c}^{(m)} \big)} . \tag{15}
\]
The estimators β̂_b^{(m)} and β̂_c^{(m)} are both obtained from 90% of the patients (a selection that differs per cycle m, so both estimators depend on m). To compare the results from the two regression routes (regression after data integration versus BFI recombination of subset results), we compute
\[
\mathrm{MSE}_{p} = \frac{1}{M} \sum_{m=1}^{M} \frac{1}{37} \sum_{\ell=1}^{3} \sum_{i \in S_{\ell}^{(m)}} \Big( \hat{p}_{b,i}^{(m)} - \hat{p}_{c,i}^{(m)} \Big)^2 , \tag{16}
\]
where S_ℓ^{(m)} is the set of indices of the individuals selected for prediction in subset ℓ in cycle m. The value 37 equals 10% of the total sample size, and hence gives the number of individuals selected for prediction in every cycle.

Table 2: MSE values for the regression coefficients: 1) for homogeneous populations (rows 1 and 2) as described in Subsection 4.3, where the priors are zero-mean multivariate Gaussian distributions with the same covariance matrix in every subset and in the combined data set; 2) for zero-mean multivariate Gaussian priors with different diagonal covariance matrices in the subsets and the combined data set (row 3), with λ_i the value in subset i and λ the value in the combined data set (Subsection 4.3); 3) for the situation in which the subsets are small (rows 4-6) as described in Subsection 4.4. In all cases M = 1000.
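As a concrete sketch, the agreement measures above can be computed from stored per-cycle estimates as follows; the array shapes are our assumptions, not from the paper:

```python
import numpy as np

def mse_beta(beta_b, beta_c):
    """Per-coefficient MSE between the BFI estimates over M randomization
    cycles (beta_b, shape (M, d)) and the fixed combined-data estimate
    (beta_c, shape (d,)), as in the MSE_beta_k formula."""
    beta_b, beta_c = np.asarray(beta_b), np.asarray(beta_c)
    return ((beta_b - beta_c) ** 2).mean(axis=0)

def mse_pred(p_b, p_c):
    """MSE of predicted probabilities for the held-out patients, averaged
    over cycles and held-out individuals (equation (16); with the same
    number of held-out patients per cycle this equals the overall mean)."""
    return float(((np.asarray(p_b) - np.asarray(p_c)) ** 2).mean())

# toy check: identical estimates give zero disagreement
demo = mse_beta(np.ones((5, 3)), np.ones(3))
```

The same pattern applies to the standard-deviation estimates ŝ by passing their per-cycle values to `mse_beta`.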

Results: Parameter estimation
The prior distribution for the vector of regression parameters β is chosen to be multivariate Gaussian with mean zero and a diagonal covariance matrix Λ^{-1} = Λ_ℓ^{-1}, where every diagonal element of the inverse covariance matrix Λ equals λ. In every subset D_ℓ we fit a logistic regression model with the same prior (i.e., Λ_ℓ = Λ for all ℓ), and we aggregate the results by applying the formulae in (7). The values of MSE_{β_k} are in the upper part of Table 2.
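The aggregation of the local results can be sketched as follows, under the Gaussian approximation used throughout the paper: summing the local quadratic log posteriors and correcting for the prior being counted once per center gives a combined precision and mean. The function name and argument layout are ours, and the exact expressions should be checked against the formulae in (7):

```python
import numpy as np

def bfi_aggregate(betas, As, Lambdas, Lambda):
    """Sketch of the BFI aggregation step (cf. equation (7)).
    betas:   local MAP estimates, each of shape (d,)
    As:      local minus-curvature matrices of the log posteriors, (d, d)
    Lambdas: local prior inverse covariance (precision) matrices, (d, d)
    Lambda:  prior precision intended for the combined analysis
    The combined precision sums the local precisions and corrects so
    that the prior enters once rather than L times."""
    A_b = sum(As) - sum(Lambdas) + Lambda
    beta_b = np.linalg.solve(A_b, sum(A @ b for A, b in zip(As, betas)))
    return beta_b, A_b

# sanity check: with a single center and matching priors, aggregation
# must return the local estimates unchanged
A1 = 2.0 * np.eye(3)
Lam = 0.1 * np.eye(3)
b1 = np.array([1.0, -0.5, 0.2])
bb, Ab = bfi_aggregate([b1], [A1], [Lam], Lam)
```

With identical priors in all subsets and in the combined set (Λ_ℓ = Λ), the correction term reduces to subtracting (L − 1)Λ from the summed local precisions.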
In this table we see that the estimates based on the BFI strategy are accurate. As expected, for higher values of λ the estimates of the regression coefficients are closer to zero, and the estimated standard deviations are smaller. Furthermore, the MSE values are also closer to zero for higher values of λ. However, better agreement between the estimates based on the BFI strategy and those found after combining the data does not necessarily mean that the estimates are closer to the true values.

Results: Prediction
The BFI-estimated logistic regression model is used to predict mortality for new patients. To quantify its prediction performance we carried out simulation studies as described above: in each of the M = 1000 simulation cycles, 10% of the patients were left out of the estimation and used instead to predict the mortality probability. The quantity MSE_p in (16) was calculated, resulting in the values MSE_p = 0.25 × 10^{-3} for λ = 0.01 and MSE_p = 0.17 × 10^{-3} for λ = 0.1. In addition, Figure 2 (left) shows a scatter plot of the predicted probabilities based on the BFI model (vertical axis) against those based on the combined data model (horizontal axis), for λ = 0.1. This plot shows excellent agreement. As expected for the present model definitions, the data points in Figure 2 (left) are not distributed homogeneously along the diagonal, since Gaussian distributed uncertainty in the parameter estimates is mapped nonlinearly to probabilities via the expressions in (15). This effect is more clearly present when the local data sets are smaller (next subsection), where the estimate uncertainties are larger.

Results: Parameter estimation with different priors
We also considered the situation in which the priors for the vector of regression parameters β differ across the subsets. We again took multivariate Gaussian distributions with mean zero, but with inverse covariance matrices with diagonal values λ_1 = 1, λ_2 = 0.1 and λ_3 = 0.01 for the three subsets, and λ = 0.01 for the combined data set (so the smaller the sample size, the stronger the regularisation). The values of MSE_{β_k} are in row 3 of Table 2. The estimates based on the BFI strategy are accurate. We also performed a simulation study for mortality prediction. The results are very similar to those in Figure 2 (left).

Small subsets
If the sample sizes of the subsets are small compared to the number of covariates in the model, there is a risk of overfitting. Our present data set consists of three data subsets, ranging in size from 49 to 216 patients. Especially the subset with 49 patients is small, and overfitting can be addressed by taking values for λ that are not (too) close to zero. In this subsection we consider a more extreme situation in which multiple subsets are small. We create such data subsets from our original hospital data by randomly dividing all patients over eight subsets of size 40 each, plus one additional subset of 51 patients. The combined data set D remains the same, hence the same parameter estimates are found based on the combined data set. We perform the simulation study described in Subsection 4.3 to study the effect of modest subset sample sizes on the estimation and prediction agreement, for different values of λ. The values of MSE_{β_k} are given in Table 2. For the patient outcome predictions we found the following MSE values: MSE_p = 2.95 × 10^{-3} for λ = 0.01, MSE_p = 1.08 × 10^{-3} for λ = 0.1 and MSE_p = 0.44 × 10^{-3} for λ = 1. Upon increasing the value of λ, the estimates of the regression parameters are shrunk towards zero and agreement increases. This can be seen from the decreasing values of MSE_p, and also from the scatter plots in Figure 3. For λ = 0.01 (left plot in Figure 3) the agreement is weaker, and one observes what appears to be overfitting (a rotation of the data cloud away from the diagonal). For larger values of λ the agreement between BFI recombination and regression results on the full data set improves, as expected. As mentioned before, better agreement between the estimates based on the BFI strategy and those found after combining the data does not necessarily mean that the estimates are closer to the true values; we can only conclude that the BFI protocol reliably reproduces the results of regression on the integrated data set. We repeated the simulation study for λ = 0.01 in the combined data set and λ = 0.1 or λ = 1 in the subsets. The results are very similar to those above.

Heterogeneity across populations

Different covariate distributions and effects
In this subsection we no longer use randomizations, but return to the original data to study the agreement between BFI and direct regression on the combined set, in terms of parameter estimates and outcome predictions, in the setting where (as is the case in our original data) the populations from the different hospital types are expected to differ. The combined set priors and subset priors were chosen to be identical: Gaussian with zero mean and Λ = Λ_ℓ for all ℓ, with Λ a diagonal matrix with λ on the diagonal. The parameter estimates obtained via the BFI protocol and those found via regression on the combined data set are given in Table 1. From this table we conclude that, even if there are differences in the covariate effects between the populations from which the subsets were sampled, the BFI-estimated regression parameters are still very similar to those that would have been found upon first combining all data into a single set. For the comparison of the predictions, as before, in every cycle 10% of the individuals are selected randomly for prediction and not used for estimation. We quantify the agreement in patient outcome probabilities again by (16). For λ = 0.1 we found MSE_p = 0.36 × 10^{-3}. A scatter plot comparing the prediction probabilities is shown in Figure 2 (right). It can be seen that the agreement between the predictions is high.

Different prevalence
The mortality rates in the three subsets differ (see Table 4 in Appendix A). This may not be explained completely by the covariates in the model. The prevalence of mortality is reflected in the intercepts of the logistic regression models for the three subsets. To address this observed mortality difference we allow the intercept to vary across hospital types, and we aggregate the estimates from the different subsets as explained in Subsection 2.4. This yields a single logistic regression model that includes a hospital-type-specific intercept, but no general intercept. For comparison, we also fitted a logistic regression model with hospital type intercepts based on the combined data (instead of a model with a general intercept and a reference hospital type).

Table 3: Estimates of the regression coefficients (including estimated standard deviations) computed with the BFI strategy (β̂_b), and those obtained after first combining the data (β̂_c), when taking into account the differences in mortality prevalence in the three hospital types by including subset-specific intercepts.
The combined set priors and subset priors were chosen to be identical: Gaussian with zero mean and Λ = Λ_ℓ for all ℓ, with Λ a diagonal matrix with λ on the diagonal. For λ = 0.1 and λ = 1 the results are shown in Table 3. We see very satisfactory agreement between the estimates. We also see, as expected, that the estimates β̂_c are shrunk towards zero more strongly for λ = 1 than for λ = 0.1.
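A minimal sketch of how the hospital-type-specific intercepts (replacing the single general intercept) can be encoded in the design matrix; the function name and labels are illustrative:

```python
import numpy as np

def design_with_type_intercepts(X, hospital_type, n_types=3):
    """Replace the single general intercept by one indicator column per
    hospital type (no reference category), as in the prevalence analysis.
    X: covariate matrix WITHOUT an intercept column.
    hospital_type: integer labels 0..n_types-1, one per patient."""
    intercepts = np.eye(n_types)[hospital_type]   # one-hot indicator columns
    return np.hstack([intercepts, X])

# three patients from hospital types 0, 2 and 1, one covariate each
X = np.array([[0.5], [-1.0], [2.0]])
D = design_with_type_intercepts(X, np.array([0, 2, 1]))
```

Fitting the logistic model on this design matrix yields one intercept per hospital type and no general intercept, so the differing mortality prevalences are absorbed by the type-specific intercepts.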

Discussion
In this paper we have discussed a one-shot BFI strategy for estimating parameters in parametric models, applicable when the data in multiple centers cannot be combined for analysis. This is for instance of interest in medical settings in which large central registries do not exist and the data sets in individual medical centers or hospitals cannot easily be combined, because of, e.g., privacy legislation. A major advantage of the proposed BFI strategy is that, in contrast to most FL strategies, not only parameter estimates but also the uncertainties of these estimates are computed systematically. These uncertainties reflect the differences and uncertainties in the individual centers. In medicine, complementing parameter estimates with error bars is essential for interpreting the estimation results correctly.
In this study we have performed numerical simulations in multiple settings, with real medical data in which the patient outcomes were binary, and with realistic sample sizes. They reveal very good agreement between the parameter estimates and patient outcome predictions obtained with the BFI procedure and those found after first combining all data into a single integrated set. We conclude that, for the given data and the chosen regression model (logistic), the inference results based on the combined data set can be computed reliably a posteriori from the inference results in the local centers; hardly any information is lost by not being able to combine the data sets.
In the proposed BFI strategy we approximated the posterior parameter distribution around the MAP estimator θ̂ by a Gaussian distribution with mean θ̂ and covariance matrix equal to minus the inverse curvature matrix at θ̂. For large sample sizes, the MAP estimator will approach the maximum likelihood (ML) estimator, and minus the inverse curvature matrix will approach the inverse of the Fisher information matrix (Bernstein-von Mises theorem [15]). If the sample sizes in the centers are small (compared to the number of covariates or the dimensionality of the parameter space), these identifications may no longer hold. In particular, if the sample size is close to or smaller than the number of covariates, the ML estimator will be inaccurate or not even defined, whereas the MAP estimator is still well defined due to the presence of the ridge penalty in the likelihood (for a Gaussian prior). However, from the calculations and the simulation studies it is not clear whether in this situation the Gaussian approximation is sufficiently accurate, i.e., whether the second order Taylor expansion of the logarithm of the posterior density is accurate. A higher order approximation (third or higher order) may give better results, but will change the derivation of the estimators θ̂ and Â. This will be part of a new project in which we aim to consider high dimensional models.
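The Gaussian (Laplace) approximation described above can be sketched for the logistic model, where minus the curvature of the log posterior is available in closed form; this is an illustration under our own naming, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def laplace_logistic(X, y, lam=0.1):
    """Gaussian (Laplace) approximation to the posterior of a logistic
    model with an N(0, lam^{-1} I) prior: the MAP estimate plus minus
    the curvature of the log posterior at the MAP, which here equals
    X'WX + lam*I with W = diag(p(1-p))."""
    d = X.shape[1]

    def nlp(beta):
        z = X @ beta
        return -(np.sum(y * z - np.logaddexp(0.0, z))
                 - 0.5 * lam * beta @ beta)

    beta_map = minimize(nlp, np.zeros(d), method="BFGS").x
    p = expit(X @ beta_map)
    A = X.T @ (X * (p * (1 - p))[:, None]) + lam * np.eye(d)  # minus curvature
    cov = np.linalg.inv(A)        # approximate posterior covariance
    sd = np.sqrt(np.diag(cov))    # error bars on the coefficients
    return beta_map, A, sd

# synthetic illustration
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(150), rng.standard_normal((150, 2))])
y = rng.binomial(1, expit(X @ np.array([0.0, 1.0, -1.0])))
beta_map, A, sd = laplace_logistic(X, y, lam=0.1)
```

Note that the ridge term lam * I keeps A positive definite (and the MAP well defined) even when the sample size is close to the number of covariates.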
If, in one of the local centers, the posterior density does not have a unique maximum, the MAP estimator is not uniquely defined in this center and the BFI methodology cannot be used. This problem can only arise if the sample size is small. Choosing a different matrix Λ may restore the uniqueness of the maximum. If the log posterior density is multimodal (has local maxima), the theory still holds, but one has to be careful when numerically optimizing the log posterior density.
In this paper we have considered two priors: the Gaussian prior and the flat noninformative prior (which in principle equals the Gaussian prior with infinite variance). These priors were not chosen arbitrarily: their form yields a quadratic expression in the parameters in equation (6), which is essential in the derivation of the BFI estimators. Other prior distributions can be used as well, as long as they yield this quadratic expression. The (inverse) covariance matrix of the Gaussian prior can be chosen by the researcher; this choice affects the degree of regularization.
While in most of our simulations we have, for simplicity, used the same inverse covariance matrices for the data subsets and the combined data set, the derivations in Sections 2 and 3 are in fact more general and allow these matrices to be distinct. This freedom to vary the covariances of the priors could, for instance, be employed to regularize/penalize smaller data subsets more strongly than larger ones, as was shown in Subsections 4.3 and 4.4. Also, if the data in some centers are more reliable or "cleaner" than in others, or if the populations in some centers seem more representative of the population of interest (e.g., if the inclusion criteria differ across centers), different prior covariances can be used. By allowing this generalization in the theory and in the software package, it is up to the user to assume equal priors or not.
The main aim of the simulation studies and the data analysis (Section 4) was to illustrate the proposed methodology. The motivation for this research comes from oncology. Some cancers are extremely rare and only small data sets are available, which makes it impossible to study overall survival, as the number of events per data set is low. In this paper we have developed and applied the BFI methodology for parametric (regression) models, and in particular for GLMs. The BFI methodology is also applicable in more complex models and settings, such as models for survival data that are subject to censoring. In particular, the development of BFI protocols for semi-parametric models, like the Cox proportional hazards model [19] which is widely used in medical data analysis, is more complex because of the infinite dimensional parameter space. This will be our next project: to develop the BFI methodology for cancer data with time-to-event outcomes.
In conclusion, we have shown that by harnessing systematically and accurately the cumulative power of multiple disjoint data sets, without actually combining these data sets, the BFI methodology can significantly reduce the data set sizes required for reliably extracting statistically significant predictive or prognostic patterns (if such patterns are present). While with conventional methods only very strong regularities and associations can be identified in small data sets (small compared to the dimension of the model), with the new approach more subtle ones may become detectable.
was to compare hospitals from the three categories in terms of differences in rates of management error [18]. In the data in their present form (which have since been made available for educational purposes) there is no longer any reference to the individual hospitals; there is only a variable indicating in which of the three hospital categories the data were recorded. In the absence of more detailed information we assume, for simplicity and in the terminology of the present study, that there are only three data subsets or centers, corresponding to the three categories. In this paper the data are used only to study the performance of the BFI methodology.
An overview of the population characteristics is given in Table 4.

Figure 1 :
Figure 1: Left: Visualization of an iterative FL procedure. In the local centers, local loss functions are optimized based on local data. The results are aggregated at a central server. Next, the estimates are updated based on the local data and the aggregated information. This procedure is repeated until convergence. Right: Visualization of the BFI procedure proposed in this paper. In all local centers a Bayesian analysis is performed, and the results are sent to a server where the results of the local centers are aggregated. The word "Learning" in the name "Federated Learning" refers to the repeated cycling mechanism across the different centers. In contrast, for the proposed method one cycle is enough; inference is performed only once. We therefore chose to name the proposed method Bayesian Federated Inference rather than Bayesian Federated Learning.

Figure 2 :
Figure 2: Prediction probabilities based on the estimates β̂_b (BFI, vertical axis) versus those based on β̂_c (regression on the combined data set, horizontal axis), for λ = 0.1. Perfect agreement corresponds to all points lying on the diagonal. Left: Homogeneous populations. Right: Heterogeneous populations. In both plots R² = 1.0.

Table 1 :
Estimates of the regression parameters in each center (including estimated standard deviations), after combining the data, and based on the BFI strategy. Here λ = 0.1.
across local centers. The intercepts of the models in the different locations differ. More information on the data is given in the next subsection and in Appendix A.

Table 4 :
Data characteristics in the three subsets (here indicating different hospital types) and in the combined data set.NSU stands for 'neuro-surgical unit'.