A joint Bayesian framework for missing data and measurement error using integrated nested Laplace approximations

Measurement error (ME) and missing values in covariates are often unavoidable in disciplines that deal with data, and both problems have separately received considerable attention during the past decades. However, while most researchers are familiar with methods for treating missing data, accounting for ME in covariates of regression models is less common. In addition, ME and missing data are typically treated as two separate problems, despite practical and theoretical similarities. Here, we exploit the fact that missing data in a continuous covariate is an extreme case of classical ME, allowing us to use existing methodology that accounts for ME via a Bayesian framework that employs integrated nested Laplace approximations (INLA) and thus to simultaneously account for both ME and missing data in the same covariate. As a useful by-product, we present an approach to handle missing data in INLA, since this corresponds to the special case when no ME is present. In addition, we show how to account for Berkson ME in the same framework. In its broadest generality, the proposed joint Bayesian framework can thus account for Berkson ME, classical ME, and missing data, or any combination of these in the same or different continuous covariates of the family of regression models that are feasible with INLA. The approach is exemplified using both simulated and real data. We provide extensive and fully reproducible Supporting Information with thoroughly documented examples using R-INLA and inlabru.


Introduction
Missing data and measurement error (ME) are both cases of information loss. This loss means that, at the very least, the level of uncertainty in our inferences increases, and, depending on the underlying mechanism, the error and missing data might also bias our conclusions in ways that are not immediately obvious (Fuller, 1987; Little and Rubin, 1987; Carroll et al., 2006). Both problems are unavoidable in any field that deals with observational data. A common example is missing responses in surveys, where ME may also be present due to imprecise or untrue answers.
Other examples can be found in exposure assessment in epidemiology (Heid et al., 2004; Goldman et al., 2011), phenotypic ME in quantitative genetics (Ponzi et al., 2018), or errors in expert-coded data in political science (Marquardt, 2020). It is therefore important to understand how to evaluate the impact of ME and missing data on statistical inference, as well as how to adjust for both to ensure valid inference.
Missing data and its implications for statistical analyses have been studied extensively (e.g., Little and Rubin, 1987; van Buuren, 2018; Carpenter and Smuk, 2021). The missingness of values in a covariate can be classified according to three main mechanisms that generate the missingness. The first type is referred to as missing completely at random (MCAR), arising when the missingness occurs entirely at random, not depending on any variables, observed or unobserved. The second type, referred to as missing at random (MAR), occurs when the missingness depends on some other variable(s) that are themselves fully observed. The last type is referred to as missing not at random (MNAR), and occurs when the missingness depends on the values of variables that are themselves unobserved (Little and Rubin, 1987). In a separate branch of research, ME has also been studied extensively (e.g., Fuller, 1987; Gustafson, 2003; Carroll et al., 2006; Buonaccorsi, 2010; Yi, 2017). Apart from increasing uncertainty, ME typically introduces bias into parameter estimates, where the type and direction of the bias depend heavily on the actual error mechanism. As a major distinction, we can classify ME into classical and Berkson error, which influence parameter estimates in fundamentally different ways and must therefore be treated specifically (see, e.g., Berkson, 1950; Carroll et al., 2006). There may be cases where the ME does not influence the conclusions, but this cannot be known without inspection. It is therefore recommended to model the ME or carry out a simulation study when one suspects ME in the data, in order to get more insight into the effect on estimates and their uncertainty (van Smeden et al., 2019; Keogh et al., 2020; Innes et al., 2021).
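To make the three mechanisms concrete, the following self-contained Python sketch simulates MCAR, MAR and MNAR missingness; all distributions and missingness probabilities are illustrative assumptions, not values taken from this paper. It shows that only MCAR leaves the mean of the observed values unbiased:

```python
import random

random.seed(1)
n = 10_000

# z is a fully observed covariate; x is the covariate subject to missingness
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.8 * zi + random.gauss(0, 0.6) for zi in z]

def missing_mask(mechanism):
    """Return a mask (True = missing) under the given missingness mechanism."""
    if mechanism == "MCAR":   # independent of all variables
        return [random.random() < 0.3 for _ in range(n)]
    if mechanism == "MAR":    # depends only on the fully observed z
        return [random.random() < (0.5 if zi > 0 else 0.1) for zi in z]
    if mechanism == "MNAR":   # depends on the unobserved x itself
        return [random.random() < (0.5 if xi > 0 else 0.1) for xi in x]

means = {}
for mech in ("MCAR", "MAR", "MNAR"):
    mask = missing_mask(mech)
    observed = [xi for xi, m in zip(x, mask) if not m]
    means[mech] = sum(observed) / len(observed)
    print(f"{mech}: mean of observed x = {means[mech]:+.3f}")
```

Under MCAR the observed mean stays close to the true mean of zero, while under MAR and MNAR the observed values systematically under-represent large x, which is exactly why the mechanism matters for valid inference.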
When dealing with ME or missing data, Bayesian methods enable us to specify models that mirror the data generating process, and we can easily include models that explain these errors in the data jointly with the actual models we are interested in. An imputation model, for example, can be fit as part of a hierarchical Bayesian model (Erler et al., 2016; Ma and Chen, 2018), which propagates the added uncertainty appropriately through the levels. Similarly, if there is ME, we may specify an error model describing the relation between the correct variable and the observed one, and incorporate any information we have about the ME through priors (Richardson and Gilks, 1993; Dellaportas and Stephens, 1995; Muff et al., 2015).
While most of the literature on missing data and ME does not overlap, the similarities between the two problems have recently been pointed out. As an example, Cole et al. (2006) present multiple imputation for ME correction. In another step forward, Blackwell et al. (2017) treat ME and missing data in the same framework, using the observation that missing data is a limiting case of ME, corresponding to the case when the error variance tends to infinity, that is, the complete absence of information (i.e., missingness). Soon after, Goldstein et al. (2018) described a hierarchical Bayesian model with a classical ME and imputation model that allows the covariate with missing values to depend on other variables, and Keogh and Bartlett (2021) considered ME as a missing data problem by demonstrating multiple ways to utilize this link. Noghrehchi et al. (2020) combined multiple imputation with functional methods for error correction in a joint framework, and most recently van Smeden et al. (2021) compared four different methods that address both ME and missing data, among them a Bayesian model. The integrated nested Laplace approximation (INLA) framework enables Bayesian inference for complex hierarchical models and has over the past decade become a popular tool for a wide variety of statistical models (Rue et al., 2009; Martino and Riebler, 2020; Gómez-Rubio, 2020).
In the context of INLA, both missing data and ME in covariates pose a challenge, as the covariates are part of the latent Gaussian Markov random field (GMRF), and INLA requires the values in this latent field to be fully observed (Gómez-Rubio et al., 2022). No joint treatment of missing data and ME in INLA has previously been proposed, although methodology for missing data and for ME has been suggested separately. In Berild et al. (2022), importance sampling is combined with INLA, which allows for missing data imputation; however, in scenarios where many observations are missing this becomes infeasible. Gómez-Rubio et al. (2022), on the other hand, present a general approach by defining a covariate imputation model as a latent effect with a GMRF structure. In the case of covariate ME, Muff et al. (2015) show how classical and Berkson ME in continuous covariates can be incorporated into the INLA framework.
Here, we propose to take advantage of the connection between ME and missing data to provide joint ME and missing data modeling using INLA. To this end, we use the interpretation of missingness as an extreme case of classical ME in order to employ existing ME models in INLA for missing data imputation. In terms of missing data, our approach is less general than the one by Gómez-Rubio et al. (2022), since their approach also allows for modelling of the missingness mechanism, which is necessary when the missingness is MNAR. However, in many practical applications it is reasonable to assume that the missingness is MAR or even MCAR, in which case the missingness mechanism is ignorable. The model presented in this paper is therefore well suited for those cases, and has the advantage of being able to account for ME simultaneously. The proposed Bayesian model can be fit in any framework for Bayesian modeling, but we especially highlight the necessary adjustments that must be made in INLA. This paper is organized as follows. In Section 2, we describe the Bayesian model used to handle ME and missing data in covariates. In Section 3, we explain how the model is implemented in INLA. In Section 4, we illustrate the model and its implementation with three examples. Lastly, Section 5 provides a discussion of the method, results, limitations and extensions.
2 Model specification

Classical measurement error
In the case of classical ME, we are interested in n observations of a variable x = (x_1, x_2, ..., x_n)^T, but we actually observe (in vector notation) w = x + u_c, with an additive noise term u_c ∼ N(0, τ_{u_c} D_{u_c}) that is independent of x. The subscript c indicates that this is the error term in a classical error model, τ_{u_c} = 1/σ²_{u_c} denotes the precision of the error term, and D_{u_c} may capture heteroscedasticity or dependence structures; in the simplest case, D_{u_c} is the identity matrix, implying constant ME precision across all observations. A characteristic of classical ME is that the variance of the observed variable w can be additively split as σ²_w = σ²_x + σ²_{u_c}, thus the variance of x will be smaller than the variance of the mismeasured version w.
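The attenuation caused by classical error can be checked numerically. The pure-Python sketch below, in which the variances and the true slope are arbitrary illustrative choices, verifies the variance decomposition σ²_w = σ²_x + σ²_{u_c} and shows the naive least-squares slope shrinking by the factor σ²_x/(σ²_x + σ²_{u_c}):

```python
import random
import statistics

random.seed(2)
n = 20_000
sigma_x, sigma_u = 1.0, 0.7      # illustrative values

x = [random.gauss(0, sigma_x) for _ in range(n)]
w = [xi + random.gauss(0, sigma_u) for xi in x]    # classical error: w = x + u_c
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]  # true slope beta_x = 2

def slope(u, v):
    """Least-squares slope of v regressed on u."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

var_x, var_w = statistics.variance(x), statistics.variance(w)
b_true, b_naive = slope(x, y), slope(w, y)

print(f"var(x) = {var_x:.2f}, var(w) = {var_w:.2f}")       # var(w) > var(x)
print(f"slope on x = {b_true:.2f}, slope on w = {b_naive:.2f}")
```

With these values the attenuation factor is 1/(1 + 0.49) ≈ 0.67, so the naive slope lands near 1.34 instead of the true 2.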
When x is a covariate in a regression model, a Bayesian hierarchical model to account for classical ME in the variable x can be formulated as (see e.g., Richardson and Gilks, 1993; Dellaportas and Stephens, 1995; Muff et al., 2015)

g(µ) = η = β_0 + β_x x + Z β_z ,   (1)
w = x + u_c ,   (2)
x = α_0 + Z α_z + ε_x ,   (3)

where the components are:

Regression model (1): g(µ) = η is the linear predictor in a generalized linear model (GLM), given the correct covariate values x as well as other covariates observed without error, stored in the matrix Z. Here, g(·) is the link function and µ is the mean vector. The coefficients β_0, β_x and β_z are the intercept and the slopes for x and Z, respectively, and these may be given Gaussian priors centered at 0 with small precisions.
Classical error model (2): The error model describes how the covariate with ME is related to the correct variable, where u_c is the error in the observed variable w, and x and u_c are assumed to be independent. Importantly, the prior for the precision τ_{u_c}, typically a gamma prior, must be chosen based on expert knowledge, validation data or repeated measurements to ensure identifiability (Gustafson, 2005, 2010).
Imputation model (3): Describes how the covariate with error may depend on other fully observed covariates, stored in the matrix Z (which need not contain the same covariates as the Z in the regression model). Here, ε_x is the residual term in the model for the covariate x, and its precision τ_{ε_x} should be given a carefully chosen prior. The intercept α_0 and slopes α_z for the covariates in Z may be given Gaussian priors centered at 0 with small precisions. In the ME literature, Model (3) is generally referred to as the exposure model.

Missing data in relation to classical measurement error
A convenient feature of the ME model in Section 2.1 is that it can at the same time be used to account for missing values. Imagine that the classical ME variance σ²_{u_c} exists on a continuum where we have an observation with no error when the variance is zero (w_i = x_i), and as the variance increases our ME gradually increases, to the point where we have no information about the observation, which is analogous to missing data (Figure 1). We can thus think of missing data as an extreme case of ME.
Using this, we can reflect the degree of error through the prior of the classical error variance: either there is no error (σ²_{u_c} = 0), some finite error (0 < σ²_{u_c} < ∞), or an observation is completely missing (σ²_{u_c} → ∞). We cover the technical aspects of this in Section 3.2. Note that Figure 1 shows the ME variance σ²_{u_c} for illustration purposes, whereas we use the precision τ_{u_c} = 1/σ²_{u_c} in the rest of the paper.
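This continuum can be made tangible with a small simulation. In the sketch below, where all numeric values are illustrative assumptions, the naive slope of y on the error-prone observation w slides from the true value towards zero, that is, towards "no information", as the error variance grows:

```python
import random
import statistics

random.seed(3)
n = 20_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]   # true slope 2

def slope(u, v):
    """Least-squares slope of v regressed on u."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

# Slide the classical error sd from 0 (exact observation) to 50 (almost pure noise)
slopes = {s: slope([xi + random.gauss(0, s) for xi in x], y)
          for s in (0.0, 0.5, 2.0, 50.0)}
for s, b in slopes.items():
    print(f"error sd {s:5.1f}: naive slope {b:.3f}")
```

At zero error variance the observation is the covariate itself; at very large error variance the observation is essentially uninformative noise, mirroring a missing value.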

Berkson measurement error
The Berkson ME (Berkson, 1950) is fundamentally different from the classical ME. While the classical ME can be thought of as random noise, the Berkson ME arises, for example, in situations where all members of a population are assigned the same value for some property, when in reality the value varies between the members. Berkson error is quite typical in experimental setups, where a variable is intended to be fixed at a given value, such as a concentration or a specific time point, when in reality there is some level of variation due to practical deviations from the target value (see e.g., Buonaccorsi and Lin, 2002). The Berkson error model can be formulated as x = w + u_b, where x is as before the correct value, w is the observed value, and u_b ∼ N(0, τ_{u_b} D_{u_b}) is the Berkson error term. Assuming w and u_b to be independent, the variance of the correct observations splits additively as σ²_x = σ²_w + σ²_{u_b}. A property of the Berkson ME is thus that the variance of the observed values w will be smaller than the variance of the correct values x, as opposed to the classical error situation. As with the classical ME, we can set up a Bayesian hierarchical model to account for Berkson ME in the variable x (Muff et al., 2015),

g(µ) = η = β_0 + β_x x + Z β_z ,   (4)
x = w + u_b ,   (5)

where Model (4) is the regression model of interest (and corresponds to Model (1) above), and Model (5) is the Berkson ME model with independent error term u_b. Note that we do not need an imputation model, as x is defined conditionally on the observations w in the error model (5).
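The reversed variance ordering, and the fact that Berkson error in a simple linear model adds residual noise without attenuating the slope, can be verified directly; all parameter values below are illustrative assumptions:

```python
import random
import statistics

random.seed(4)
n = 20_000
sigma_w, sigma_b = 1.0, 0.7     # illustrative values

w = [random.gauss(0, sigma_w) for _ in range(n)]    # assigned / observed values
x = [wi + random.gauss(0, sigma_b) for wi in w]     # Berkson error: x = w + u_b
y = [2.0 * xi + random.gauss(0, 0.5) for xi in x]   # true slope 2

def slope(u, v):
    """Least-squares slope of v regressed on u."""
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    return (sum((a - mu) * (b - mv) for a, b in zip(u, v))
            / sum((a - mu) ** 2 for a in u))

var_w, var_x = statistics.variance(w), statistics.variance(x)
b_w = slope(w, y)

print(f"var(w) = {var_w:.2f} < var(x) = {var_x:.2f}")  # ordering reversed vs classical ME
print(f"slope of y on w = {b_w:.2f}")                  # close to the true slope 2
```

This illustrates why the two error types must be treated differently: classical error biases the slope, while Berkson error in this linear setting mainly inflates the residual variance.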

A generalized measurement error model
In some cases both error types occur together, for instance in exposure assessment in epidemiology (see e.g., Reeves et al., 1998; Heid et al., 2004; Deffner et al., 2018). By combining the hierarchical classical error model given by components (1)-(3) with the Berkson ME model (4)-(5), it is possible to handle classical and Berkson ME in the same covariate. We again denote the correct variable by x and the observed version by w, but we need to introduce an additional latent variable r (Muff et al., 2017), where the relation between r and the correct covariate x is the Berkson error model

x = r + u_b .   (6)

This leads to the combined model formulated as

w = r + u_c ,   (7)
r = α_0 + Z α_z + ε_x ,   (8)

together with the regression model (1) for the response given x. Note that the imputation model is defined for r, as this is the latent variable entering the classical ME model, whereas the model for the latent "correct" variable x is given through the Berkson ME model (6). To better understand the role of the latent variable r, it may help to imagine what would happen if one of the error terms was "turned off" in the model. If we have u_b = 0, then (6) simply says x = r, and we are left with the classical error model introduced in Section 2.1. On the other hand, if there is no classical error (u_c = 0), then w = r, and we are only left with the Berkson error model.

3 Technical considerations for missing and mismeasured covariates in INLA
A requirement to use INLA is that the model can be expressed as a latent Gaussian Markov random field, which makes INLA slightly more restrictive than standard sampling approaches. There are also some practical issues that arise when fitting the particular model described in Section 2.4, which we will now address.

How are missing values treated in INLA?
The technical issue of how missing values are handled in INLA is not addressed in the original INLA publication (Rue et al., 2009), but the details were later discussed, for example in question 7 of the FAQ at https://www.r-inla.org/faq: "How does inla() deal with NA's in the data argument?". First of all, it is crucial to discriminate between missing values in the response and missingness in the covariates. If a value is missing in the response, that data point gives no contribution to the likelihood, while the predictive distributions for these missing values in the response are computed automatically (Gómez-Rubio, 2020, Chapter 12.3). If a value in a covariate is missing, on the other hand, that entry is internally re-coded to 0. This way of handling missing values in the covariates may lead to biased posteriors, and it is the case we address here.
As mentioned in the introduction, a few publications have previously tackled missing values in covariates in INLA. Gómez-Rubio and Rue (2018) combine Markov chain Monte Carlo with INLA to this end, and Berild et al. (2022) show how to combine importance sampling with INLA, and both use missing covariate imputation to illustrate their approaches. However, in both cases the missing observations are treated as additional parameters of the model, so the model becomes very computationally expensive when the number of missing observations is large, which makes it of limited practical use. The method proposed in Gómez-Rubio et al. (2022), on the other hand, is based purely on INLA, and is much more widely applicable. They also include a logistic model to describe the actual missingness mechanism, which means that it is possible to do a sensitivity analysis to examine the influence of the missingness. This is advisable when one suspects that the missingness mechanism may be MNAR, that is, when a value is missing depending on the value of the variable itself. However, in the cases where we have reason to believe that the missingness is MCAR or MAR, it is not necessary to include a model for the missingness mechanism itself (van Buuren, 2018), and we can employ a simpler model than Gómez-Rubio et al. (2022). In that case, we can take advantage of the interpretation of missing data as an extreme form of ME used here.

Scaling the classical error precision for the case of missing values
In Section 2.2 we described the interpretation of missingness as an extreme form of ME. Conceptually, scaling the classical error precision to a large value (e.g., 10^12) indicates that the error is negligible, while scaling it to a very small value (e.g., 10^{-12}) corresponds to a missing observation, since the uncertainty is then very large. Values of the precision between these two extremes indicate a regular classical ME. The respective information can thus be incorporated directly into the corresponding element of the (scaled) diagonal matrix τ_{u_c} D_{u_c}. However, in practice it is not even necessary to set the precision to a small value if a covariate entry is missing, because the values that are completely missing in w do not contribute to the likelihood in the error model component (7), thanks to the way missing values are handled in INLA (see Section 3.1). We thus only need to scale the precision according to whether a given observation has ME or not. In the case when the value is missing, the imputation model will impute the missing value for w_i based on the other covariates through the imputation and error models in (8) and (7).
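The bookkeeping described above can be sketched as follows; the container names, the sentinel value None for missing entries, and all numeric values are hypothetical, and the snippet only illustrates how the per-observation diagonal scaling of τ_{u_c} D_{u_c} would be assembled:

```python
NO_ERROR = 1e12        # huge precision: entry treated as observed without error
tau_u = 1.0 / 0.49     # assumed finite precision where classical ME is present

w_obs = [3.2, None, 2.8, 4.1, None]          # None marks a missing covariate entry
has_me = [False, False, True, True, False]   # entries that carry classical ME

# Per-observation diagonal of the scaled error-model precision. Missing entries
# do not contribute to the error-model likelihood anyway, so any finite value
# works for them; here they simply inherit NO_ERROR.
d_scale = [tau_u if me else NO_ERROR for me in has_me]
print(d_scale)
```

The point is that only the observed-with-ME entries need a carefully chosen precision; the missing entries are imputed through the imputation model regardless of the value placed in the scaling vector.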

Reformulating sub-models that have latent variables as responses
In both missing data and ME problems, the actual covariate x is unobserved and therefore part of the latent field. In a standard Bayesian hierarchical model, the imputation model imputes the unobserved values based on other available covariates. However, imputation models like the one presented in Equation (8) cannot be formulated directly in INLA, since the framework does not allow latent variables in the response. For the case of ME, a solution to this problem has already been suggested by Muff et al. (2015): the imputation model in Equation (8) is reformulated with pseudo-observations 0 as the response, which circumvents the problem, as the imputation model can then be handled as part of the observation model. The same workaround needs to be used for the specification of a Berkson ME model, where the unobserved variable x would otherwise stand as the response. Our full model as formulated in INLA then becomes

g(µ) = η = β_0 + β_x x + Z β_z ,
w = r + u_c ,
0 = -x + r + u_b ,
0 = -r + α_0 + Z α_z + ε_x ,

with the Berkson model (6) and the imputation model (8) rewritten with pseudo-observations 0 as the response. The same approach is directly applicable to the case with missing data. Another issue that arises is the estimation of the coefficient β_x, which is not straightforward since x is unobserved. In fact, although both x and β_x are assumed to be Gaussian distributed, their product is not, while INLA requires that the latent field is Gaussian. Again, a solution to this was presented in Muff et al. (2015) for the ME case, and thus the approach is directly applicable to the case with missing data as well.
In brief, instead of estimating β x as a part of the latent field, we let it be a hyperparameter, which ensures that the latent field is still Gaussian, conditioned on β x .

Applications
In this section we examine two real-world data sets and carry out a simulation study. Code for all three examples is available in the Supplementary Material, where each example is presented in a separate document with extensive explanations.

Missing covariate imputation: a linear model for cholesterol
In order to illustrate how to model missing covariate imputation in INLA using the framework of a classical ME model, we consider the common case where only missing data is present, but no ME. We consider the nhanes2 (National Health and Nutrition Examination Survey) data set from the mice R package (van Buuren and Groothuis-Oudshoorn, 2011), which has often been used to evaluate missing data imputation methods (see, e.g., van Buuren and Groothuis-Oudshoorn, 2011; Gómez-Rubio and Rue, 2018; Gómez-Rubio et al., 2022; Berild et al., 2022). The data contains 25 observations of four variables: age group (categorical with three levels), BMI, hypertension and cholesterol. Note that this data set is not large enough to draw reliable inference; the example is provided as an illustration of how to implement the method for missing covariate data, since the data set is commonly used for this purpose. Here, we fit a linear regression model with cholesterol (chl) as the response, and age group and BMI as covariates. The response cholesterol and the covariate BMI have 10 and 9 missing observations, respectively, but since missingness in the response is handled automatically by INLA (see Section 3.1), the problem reduces to imputing the missing values in the BMI variable.
We specify the joint Bayesian model with a regression model for cholesterol (9), the error model (10) and the imputation model (11), where the latter describes how BMI depends on age category:

chl_i = β_0 + β_bmi bmi_i + β_age2 age_{2,i} + β_age3 age_{3,i} + ε_i ,   (9)
bmi^obs_i = bmi_i + u_i ,   (10)
bmi_i = α_0 + α_age2 age_{2,i} + α_age3 age_{3,i} + ε^(x)_i .   (11)

Here, bmi^obs_1, ..., bmi^obs_n are our observations, which contain missing values, whereas bmi_1, ..., bmi_n are the latent variables, and age is dummy-coded for the three age groups. The coefficients in Model (9) and Model (11) are all given zero-mean Gaussian priors with precision 10^{-6}. The priors for the precisions of the residuals ε and ε^(x) are τ_y ∼ Gamma(2, 846.8) and τ_x ∼ Gamma(2, 16.8), chosen based on information from the data (this is described in detail in the Supplementary Material), while the precision for the error term, τ_u, is fixed to 1. Since we have no ME in this example, and since the missing values for bmi^obs_i (the response of the ME model (10)) do not contribute to the likelihood, we can in fact set all the diagonal elements of D to 10^12.
For comparison, we fit two additional models: a complete case model that uses only the observed BMI values, and a simpler imputation model, where the imputation is independent of age (Figure 2). In this simpler model, the missing body mass values have a simple Gaussian prior centered at the average of the observed values and with variance four times the variance of the observed values, as in Gómez-Rubio and Rue (2018). For completeness, we also report the exact results from Gómez-Rubio and Rue (2018). Across all coefficients, the posterior means for the complete case model are consistently larger than when an imputation model is included. It is to be expected that uncertainties decrease when we are able to include more observations; however, the very small credible intervals in this case are more likely an artifact of the very small sample size. Overall, the example illustrates that our approach implemented in INLA is a valid alternative to existing tools for missing data imputation.

Imprecise and missing systolic blood pressure measurements
We now consider a case of ME and missing values in the covariates of a survival model, using the framework described in Section 2. The data is originally from the Third National Health and Nutrition Examination Survey (NHANES III), but we use the data as pre-processed and provided by Bartlett and Keogh (2018). They linked the NHANES III data to data from the US National Death Index, with information about the mortality status of each participant in 2011. Following Keogh and Bartlett (2021), we consider a Weibull survival model with the continuous covariates systolic blood pressure (SBP) and age, and the binary covariates diabetes status, sex and smoking status.
The response is time until death by cardiovascular disease. Deaths by other causes are treated as censored. Measurements of SBP are known to vary substantially within the same patient, and SBP is therefore measured twice for some participants, enabling us to estimate the variance of the error.
However, for some participants there is only one measurement, and for others the SBP is completely missing. Note that the smoking status is also missing for around half of the participants, but since we only consider continuous ME, all observations with missingness in the (binary) smoking status are removed for this illustrative example, resulting in n = 3433 observations.
To model the survival time we use a Weibull survival model with hazard h(t_i) = r t_i^{r-1} λ_i^r, where r is the shape parameter of the Weibull distribution and t_i is the time until death by cardiovascular disease for individual i. The linear predictor η_i for individual i is linked to the rate parameter by log(λ_i) = η_i. The joint model including the ME model for SBP is then specified as in Section 2, where SBP^(1) = (SBP^(1)_1, ..., SBP^(1)_n)^T and SBP^(2) = (SBP^(2)_1, ..., SBP^(2)_n)^T are the first and second blood pressure measurements, respectively, with SBP being the latent error-free version of the blood pressure, and the remaining four covariates are assumed to be without error: sex, age, smoking status and diabetes status. The ME terms u^(1) and u^(2) and the residual term of the imputation model are assumed to be independently distributed as u^(1), u^(2) ∼ N(0, τ_u I) and ε^(x) ∼ N(0, τ_x I), respectively. For the analysis using INLA to be comparable to that of Keogh and Bartlett (2021), we used the same priors: the coefficients of the model of interest (the betas) are given N(0, 10^{-6}) priors, while the coefficients of the imputation model (the alphas) are given N(0, 10^{-4}) priors, and τ_u ∼ Gamma(0.5, 0.5) and τ_x ∼ Gamma(0.5, 0.5).
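The hazard just defined is easy to sanity-check numerically; the helper below is a direct transcription of h(t_i) = r t_i^{r-1} λ_i^r with log(λ_i) = η_i, and the argument values are arbitrary:

```python
import math

def weibull_hazard(t, r, eta):
    """Weibull hazard h(t) = r * t**(r - 1) * lam**r, with log(lam) = eta."""
    lam = math.exp(eta)
    return r * t ** (r - 1) * lam ** r

# With shape r = 1 and eta = 0 (so lam = 1) the hazard is constant at 1,
# i.e. the exponential special case
h_const = weibull_hazard(0.5, 1.0, 0.0)
print(h_const)  # 1.0

# With r > 1 the hazard increases over time
print(weibull_hazard(1.0, 1.5, 0.0), "<", weibull_hazard(2.0, 1.5, 0.0))
```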
The model that accounts for ME and missing values (n = 3433) is compared to a naive complete case analysis (n = 2667) (Figure 3), as well as to the same ME model fit on the complete case data only, in order to study the effect of accounting for the ME but not the missing observations. The naive complete case analysis is applied only to the individuals with no missing values in SBP^(1), and treats SBP^(1) as the correct value for SBP. We see that the estimated posterior mean for β_sbp increases in the ME models, which indicates that there may be some attenuation due to the ME. Comparing the ME adjusted complete case model with the model that also imputes the missing values, the coefficients for age and diabetes increase slightly, while the one for SBP shows only a very small increase. It is notable that in this case the missingness seems to influence coefficient estimates other than the coefficient of the covariate that actually has the missingness. This may often be the case for both ME and missing data when the covariate with the ME and missingness is correlated with other covariates. Finally, we compared the runtime of the model fit by Keogh and Bartlett (2021) in JAGS to the runtime of the model implemented in R-INLA. The model fit in JAGS took 31 minutes (10000 iterations, of which the first 5000 were discarded as burn-in), while INLA took 20 seconds (both were run on a 2 GHz Quad-Core Intel Core i5 processor). This improvement shows the significant practical advantage of being able to fit these complex models in R-INLA.

Simulation study: Berkson and classical measurement error alongside missing data
For this example, we simulated a linear regression model with a mismeasured covariate x, observed as w, as well as an error-free covariate z. For the simulation, we first generated the error-free covariate as z ∼ N(0, I).
Keeping with the notation used in Section 2.4, the latent variable with Berkson error was then generated from the imputation model r = α_0 + α_z z + ε_x, and the correct value of the covariate was generated by adding further variation to r through the Berkson error model x = r + u_b. The classical ME was then added to r to obtain the variable that is actually observed, w = r + u_c. Next, the probability for an observation of w to be missing was assumed to be MAR, depending only on the fully observed covariate z, and the response y was generated from the correct covariate x as y = β_0 + β_x x + β_z z + ε, with (β_0, β_x, β_z) = (1, 2, 2). The simulation was repeated 100 times, with n = 1000 observations per iteration. For each data set we fit three different models: a complete case model without error correction using w as covariate, a model that accounts for both ME (classical and Berkson) and missing data (and thus reflects the data generating process), and a best-case model that regresses y on x and z, which shows how this basic regression model would perform if we did not have any ME or missing data. As for the other examples, complete code to reproduce this simulation is available in the Supplementary Material. We also provide an implementation using inlabru rather than R-INLA, since some may find the syntax of inlabru more user-friendly.
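The data-generating steps above can be sketched as follows; since the exact error variances and imputation-model coefficients are not stated in this excerpt, the numeric values below are illustrative assumptions only:

```python
import random

random.seed(7)
n = 1000
beta0, beta_x, beta_z = 1.0, 2.0, 2.0   # coefficients as given in the text

z = [random.gauss(0, 1) for _ in range(n)]
r = [1.0 + 0.5 * zi + random.gauss(0, 0.5) for zi in z]   # imputation model for r
x = [ri + random.gauss(0, 0.5) for ri in r]               # Berkson part: x = r + u_b
w = [ri + random.gauss(0, 0.5) for ri in r]               # classical part: w = r + u_c
y = [beta0 + beta_x * xi + beta_z * zi + random.gauss(0, 1)
     for xi, zi in zip(x, z)]

# MAR missingness: the probability that w is missing depends only on the
# fully observed covariate z
w_obs = [None if (zi > 0 and random.random() < 0.3) else wi
         for wi, zi in zip(w, z)]
n_missing = sum(wi is None for wi in w_obs)
print(n_missing, "of", n, "entries of w are missing")
```

Note that the analyst only ever sees (y, w_obs, z); the latent r and the correct x are exactly what the joint model has to recover.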
The posterior means of the coefficients in the ME model are very close to those actually used for the data generation (Figure 4), and are clearly improved compared to the naive model, for which the posterior mean for β_x is over-estimated, while the posterior mean for β_z is under-estimated.
Although z is observed without error, the errors in x will also greatly bias the estimate for β_z, since z and x are correlated.

Discussion
In this paper we have addressed how existing ME modelling approaches for INLA can be used directly to impute missing covariates. By using a model motivated by a ME problem, we are able to work around the practical issue with covariate imputation in INLA. The resulting model formulation provides a way to deal with missingness in a covariate, but it also gives a conceptually simple way to handle cases where ME and missingness both occur in the same variable. In addition to classical ME and missing data, we show how the model can be generalized to account for Berkson ME at the same time. We show that the computational efficiency of INLA allows us to fit the models significantly faster than previously possible in sampling-based software for Bayesian inference. To reach our goal, we are combining two previously disconnected lines of research in a synergistic way: the first line of research is treating missing data as an extreme case of ME, and the second is doing missing data imputation in INLA. A few publications cited here have used a ME model for missing data imputation in sampling-based inference, but the idea has until now not been incorporated in INLA. Simultaneously, there have been a few different approaches to missing data imputation in INLA, but these have typically not been feasible when the number of missing observations is large. By using the ME model to enable imputation in INLA, we have addressed the computational challenge that was previously a limiting factor. Moreover, even though we here only considered examples with a single continuous covariate, the model can easily be expanded to accommodate multiple covariates with missing or mismeasured values. On the other hand, it is not straightforward to account for missingness or ME in categorical covariates, since in INLA the latent variables must be assumed to be Gaussian. Modelling the misclassification and missingness of a categorical variable might be achieved by combining this method with MCMC within INLA. In summary, we have shown how a joint Bayesian model for ME and missingness allows us to account for both classical and Berkson error in continuous covariates of regression models, as well as providing a new way to do covariate imputation in INLA. A Bayesian analysis gives us the flexibility we need to reflect the error-generating process directly in the model, as well as the option to include prior knowledge about the error variance. Our approach to handling missing data and ME in INLA enables significantly faster inference than sampling-based methods, which makes these complex Bayesian models for ME and missing data feasible for a wide range of practical applications.

Figure 1: The scale of classical measurement error variance. The figure is adapted with permission from Blackwell et al. (2017) (copyright 2015 by the authors).

Figure 2: Posterior means and 95% credible intervals for the parameters in Section 4.1. The complete case analysis (red) is a model fit using only the complete cases (n = 16). The simple imputation model (yellow) uses a Gaussian prior for the missing observations, and is identical to the model used in Gómez-Rubio and Rue (2018) (orange) (n = 25). The full imputation model (blue) also depends on the age of the patient for the imputation of the missing BMI values (n = 25).

Figure 3: Posterior means and 95% credible intervals for the parameters in Section 4.2. The naive complete case analysis (red) is a model fit using only the complete cases (n = 2667), and with no ME adjustment. The ME adjusted complete case analysis (yellow) is fit on the same complete case data, but accounts for the ME in the systolic blood pressure. Finally, the ME adjusted and imputation model (blue) uses all the observations (n = 3433) and accounts for ME and missing data. A dotted vertical line is drawn from the posterior means of the full imputation and ME model to ease comparison.

Figure 4: Sample means and 95% quantile intervals of the posterior means from fitting three different models to 100 different simulated data sets.The dots are the posterior means for each of the runs, and are jittered along the y-axis to reduce overplotting.The first model (red) is the naive complete case analysis, simply fitting a regression model using w instead of x, the second model (yellow) adjusts Gryparis et al., 2008)ficients of the imputation model will be estimated, but it will not feed back into the model, so in practice it does not affect x.If we have Berkson error and missing data, we need to keep the classical error model in order to to impute the missing values, but we scale the classical ME precision to a large value for the values that are observed to indicate that there is no classical ME.The handling of the missingness is then done just as described in Section 2.2.imputation models need to be formulated for each such covariate.If the error in one covariate is heteroscedastic, then the elements of D uc can correspondingly be adjusted to reflect the differing precisions for the different observations.Biased errors can be specified in the error model by introducing an offset.Due to the modular structure of Bayesian hierarchical models, the components may be combined with any other model formulations that are possible within a Bayesian framework, for example random effects or spatial effects (see e.g.,Gryparis et al., 2008).
Berkson ME model.To better understand the role of the latent variable r, it may help to imagine what would happen if one of the error terms was "turned off" in the model.If we have u b = 0, then (6) simply says x = r, and we are left with the classical error model we introduced in Section 2.1.On the other hand, if there is no classical error (u c = 0) then w = r, and we are only left with the Berkson error model.ME, and observations missing at random in the same variable, with the regression model of interest being any generalized linear model that is compatible with the assumptions INLA takes.There are some straightforward extensions to this -if there is ME or missingness in multiple covariates, then separate error and order to illustrate how to model missing covariate imputation in INLA using the framework of a classical ME model, we consider the common case where only missing data is present, but no ME.We consider the nhanes2 (National Health and Nutrition Examination Survey) dataset from the mice R package (van Buuren and Groothuis-Oudshoorn, 2011), which has often been used to evaluate missing data imputation methods, (see, e.g., van Buuren and Groothuis-Oudshoorn, 2011; Gómez-Rubio and this paper we have addressed how existing ME modelling approaches for INLA can be used directly to impute missing covariates.By using a model motivated by a ME problem, we are able to work around the practical issue with covariate imputation in INLA.The resulting model formulation provides a way to deal with missingness in a covariate, but it also gives a conceptually simple way to handle cases where ME and missingness both occur in the same variable.In addition to classical ME and missing data, we show how the model can be generalized to account for Berkson ME at the same time.We show that the computational efficiency of INLA allows us to fit the models significantly faster than previously possible in sampling-based software for Bayesian inference.To reach our goal, we are 
combining two previously disconnected lines of research in a synergistic way: the first line of research is treating missing data as an extreme case of ME, and the second is doing missing data imputation in INLA.A few publications cited here have used a ME model for missing data imputation in sampling-based inference, but the idea has until now not been incorporated in INLA.Simultaneously, there have been a few different approaches to missing data imputation in INLA, but these have typically not been feasible if the number of missing observations is large.By using the ME model to enable imputation in INLA, we have addressed the computational challenge that was previously a limiting factor.Moreover, even though we here only considered examples with a single continuous covariate, the model can easily be expanded to accommodate multiple covariates with missing or mismeasured variables.On the other hand, it is not straightforward to account for missingness or ME in categorical covariates, since in INLA the latent variables must be assumed to be Gaussian.Modelling the misclassification and missingness of a categorical variable might be achieved by combining this method with MCMC within INLA.In summary, we have shown how a joint Bayesian model for ME and missingness allows us to account for both classical and Berkson error in continuous covariates of regression models, as well as providing a new way to do covariate imputation in INLA.A Bayesian analysis gives us the flexibility we need to reflect the error-generating process directly in the model, as well as the option to include prior knowledge about the error variance.Our approach to handle missing data and ME in INLA enables significantly faster inference compared to sampling-based methods, which makes these complex Bayesian models for ME and missing data feasible for a wide range of practical applications.vanSmeden,M., Penning deVries, B. B., Nab, L., and Groenwold, R. H. 
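The precision-scaling device used for combining Berkson error with missing data, where observed entries get a very large classical-ME precision so that they carry effectively no classical error, amounts to building a heteroscedastic precision vector for the error model. A hypothetical base-R sketch with made-up values and names (the actual R-INLA interface is documented in the Supporting Information):

```r
# Hypothetical illustration of scaling the classical ME precision:
# observed entries of w get a huge precision (effectively no classical ME),
# while missing entries keep the ordinary ME precision so that the
# classical error model can impute them.
w        <- c(4.2, NA, 5.1, NA, 3.8)    # covariate with missing values
tau_uc   <- 1 / 0.5^2                   # assumed classical ME precision
scaling  <- ifelse(is.na(w), 1, 1e12)   # large scaling where observed
prec_vec <- tau_uc * scaling            # per-observation precisions (diagonal of D_uc)
```

Such a vector plays the role of the diagonal of D_uc discussed above; the same mechanism covers heteroscedastic classical ME by supplying observation-specific scalings.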