Objective priors in the empirical Bayes framework

When dealing with Bayesian inference, the choice of the prior often remains a debatable question. Empirical Bayes methods offer a data-driven solution to this problem by estimating the prior itself from an ensemble of data. In the nonparametric case, the maximum likelihood estimate is known to overfit the data, an issue that is commonly tackled by regularization. However, the majority of regularizations are ad hoc choices which lack invariance under reparametrization of the model and result in inconsistent estimates for equivalent models. We introduce a nonparametric, transformation-invariant estimator for the prior distribution. Being defined in terms of the missing information, similarly to the reference prior, it can be seen as an extension of the latter to the data-driven setting. This implies a natural interpretation as a trade-off between choosing the least informative prior and incorporating the information provided by the data, a symbiosis between the objective and empirical Bayes methodologies.


INTRODUCTION
Inferring a parameter θ ∈ Θ from a measurement x ∈ 𝒳 using Bayes' rule requires prior knowledge about θ, which is not given in many applications. This has led to a lot of controversy in the statistical community and to harsh criticism concerning the objectivity of the Bayesian approach. In the past decades, Bayesian statisticians have given (at least) two solutions to the dilemma of missing prior information: (A) (noninformative) objective priors: Objective Bayesian analysis, and reference priors in particular (Berger & Bernardo, 1992; Berger, Bernardo, & Sun, 2009; Bernardo, 1979), apply mostly information-theoretic ideas to construct priors that are invariant under reparametrization and can be argued to be noninformative. (B) empirical Bayes methods: If independent measurements x_m ∈ 𝒳, m = 1, …, M, are given for a large number M of "individuals" with individual parametrizations θ_m ∈ Θ, which is the case in many statistical studies, empirical Bayes methods (Carlin & Louis, 1996; Casella, 1985; Efron, 2010; Maritz & Lwin, 1989; Robbins, 1956) use this knowledge to construct an informative prior in a first step and then apply it for the Bayesian inference of the individual parametrizations θ_m (or any future parametrization θ* with measurement x*) in a second step. A typical application is the retrieval of patient-specific parametrizations in large clinical studies, see Neal and Kerckhoffs (2009). However, as discussed below, many of these methods fail to be consistent under reparametrization.
The aim of this article is to extend the construction of reference priors to the empirical Bayes framework in order to derive transformation-invariant and informative priors from such "cohort data". We will perform this construction along the lines of the definition of reference priors in Berger et al. (2009) and use a similar notation. Likewise, we concentrate mainly on problems with one continuous parameter (Θ ⊆ ℝ^d, d = 1); possible generalizations to the multiparameter case d > 1 are addressed briefly in Section 7.
This article is organized as follows. After introducing the notation in Section 2, we discuss empirical Bayes methods and the inconsistency of maximum penalized likelihood estimation (MPLE) under reparametrization, which is the main motivation for our work, in Section 3. Section 4 provides a solution to this issue by following the same ideas as in the construction of reference priors, resulting in a prior estimate we term the empirical reference prior. A rigorous definition and analysis of empirical reference priors is given. Section 5 provides two algorithms which show how the empirical reference prior can be implemented in practice (under the assumption of asymptotic normality), one based on optimization and the other on a fixed point iteration. In Section 6 we then apply our methodology to a synthetic data set, illustrating invariance of the empirical reference prior under reparametrizations, as well as to the famous baseball data set of Efron and Morris (1975), where we compare our approach with the James-Stein estimator. In Section 7 we discuss possible generalizations of our approach to the multiparameter case d > 1, followed by a short conclusion in Section 8.

FIGURE 1 Graphical model and schematic representation of the underlying probabilistic model. Such a setup will be referred to as the empirical Bayes framework

SETUP AND NOTATION
We will work in the empirical Bayes framework described above and visualized in Figure 1. Specifically, denoting the likelihood model by ℳ = {p(x|θ), x ∈ 𝒳, θ ∈ Θ}, the data X = (x_1, …, x_M) are generated by a two-stage process, where the "individual" parametrizations θ_1, …, θ_M are independent draws from the unknown prior distribution p(θ) and each data point x_m ∈ 𝒳 is drawn independently from p(x|θ_m), m = 1, …, M. Note that ℳ denotes the model corresponding to the complete observation vector x ∈ 𝒳 ⊆ ℝ^n and that the data X = (x_1, …, x_M) consists of M observation vectors x_1, …, x_M ∈ 𝒳. This convention is necessary because our theory requires the introduction of (artificial) independent replications of the entire experiment, denoted by the model ℳ^k. We adopt the standard abuse of notation, denoting all density functions by the letter p and letting the argument indicate which random variable it belongs to, for example, p(x) is the marginal density of x, while p(θ|x) denotes the posterior density of θ given x. In addition, π(θ) will denote any other possible (proposed, guessed, or estimated) prior on Θ from some class 𝒫 of admissible priors, p(x|π) := ∫_Θ p(x|θ) π(θ) dθ the corresponding prior predictive distribution and π(θ|x) ∝ π(θ) p(x|θ) the corresponding posterior. Furthermore, π_true(θ) := p(θ) denotes the "true" data-generating prior.
The prior π may be viewed as a hyperparameter π ∈ 𝒫, in which case we assume conditional independence of θ and x given π. The marginal likelihood of π given the entire data X = (x_1, …, x_M) is given by

L(π) = L(π | ℳ, X) := ∏_{m=1}^M p(x_m | π) = ∏_{m=1}^M ∫_Θ p(x_m | θ) π(θ) dθ.    (1)

We will also assume both the parametric and the hyperparametric model to be identifiable, see van der Vaart (1998, section 5.5), that is,

θ ≠ θ′ ⟹ p(⋅|θ) ≠ p(⋅|θ′)  and  π ≠ π̃ ⟹ p(⋅|π) ≠ p(⋅|π̃),    (2)

since otherwise there would be no chance to recover the true distribution π_true(θ) = p(θ) from X, no matter how many measurements.
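To make the marginal likelihood concrete, here is a minimal numerical sketch (our own illustration, not the paper's code, assuming a normal location model x|θ ∼ 𝒩(θ, 0.5²), a uniform prior on [0, 2], and made-up data X) that evaluates L(π) by quadrature:

```python
import math

# Grid on Theta = [0, 2] for numerical integration (midpoint rule).
K = 2_000
h = 2.0 / K
grid = [(k + 0.5) * h for k in range(K)]

# Assumed likelihood model: x | theta ~ N(theta, 0.5^2).
lik = lambda x, t: math.exp(-2.0 * (x - t) ** 2) / math.sqrt(math.pi / 2.0)
prior = lambda t: 0.5                      # pi = Unif([0, 2])

X = [0.4, 1.1, 1.7]                        # made-up data

# L(pi) = prod_m p(x_m | pi) with p(x|pi) = int p(x|theta) pi(theta) dtheta
p_marg = [sum(lik(x, t) * prior(t) for t in grid) * h for x in X]
L = math.prod(p_marg)
print(p_marg, L)
```

Each factor p(x_m|π) averages the likelihood over the prior, which is exactly what makes the marginal likelihood a function of the prior alone.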

INCONSISTENCY OF EMPIRICAL BAYES METHODS
Roughly speaking, empirical Bayes methods perform statistical inference in two steps: first, they estimate the prior from the entire data X = (x_1, …, x_M), and second, they apply Bayes' rule with that prior for each x_m separately.¹ We concentrate on the first step, which is often performed by maximizing the marginal likelihood L(π), for example, by means of the EM algorithm introduced by Dempster, Laird, and Rubin (1977). This procedure can be viewed as an interplay of frequentist and Bayesian statistics: The prior is chosen by maximum likelihood estimation (MLE), the actual individual parametrizations are then inferred using Bayes' rule. However, in the nonparametric case (NPMLE), that is, if no finite-dimensional parametric form of the prior is assumed, it can be proven that the marginal likelihood L(π) is maximized by a discrete distribution π_MLE with at most M nodes, see Laird (1978, theorems 2-5) or Lindsay (1995, theorem 21). This typical issue of overfitting the data is often dealt with by subtracting a roughness penalty (or regularization term) Φ(π) = Φ(π|ℳ) from the marginal log-likelihood function log L(π), such that smooth or noninformative priors are favored, resulting in the so-called maximum penalized likelihood estimate (MPLE):

π* = arg max_{π ∈ 𝒫} [ log L(π) − λ Φ(π) ].    (5)

The constant λ > 0 balances the trade-off between goodness of fit and smoothness or noninformativity of the prior. This approach can be viewed from the Bayesian perspective as choosing a hyperprior p(π) ∝ e^{−λΦ(π)} for the hyperparameter π and performing a maximum a posteriori (MAP) estimation for π.

¹ For theoretical purposes one should rather think of applying Bayes' rule to some future measurement x* to infer its parametrization θ* in order to avoid reusing the data. In practice and for a large number M of measurements this distinction is of little relevance, since the influence of one data point x_m on the prior estimation can usually be neglected.

Desirable properties of the roughness penalty function Φ(π) = Φ(π|ℳ) are:
(a) noninformativity: without any extra information about the parameter or the prior, we want to keep our assumptions to a minimum (in the sense of objective Bayes methods);
(b) invariance under transformations of the parameter space Θ (reparametrizations);
(c) invariance under transformations of the measurement space 𝒳;
(d) convexity: since log L(π) is concave in π in the NPMLE case (Lindsay, 1995, section 5.1.3), a convex penalty function Φ(π) would guarantee a concave optimization problem (5);
(e) natural and intuitive justification.
The penalty functions currently used are mostly ad hoc and rather brute-force solutions that confine amplitudes, for example, ridge regression (McLachlan & Krishnan, 2008, section 1.6), or derivatives (Good & Gaskins, 1971; Silverman, 1982) of the prior, which are neither invariant under reparametrizations nor have a natural derivation. A more contemporary alternative is to use Dirichlet process hyperpriors p(π), which have similar limitations. In order to incorporate the notion of noninformativity, Good (1963) suggested to use the (negative differential) entropy as a roughness penalty, Φ_H(π) := −H(π) = ∫_Θ π(θ) log π(θ) dθ, which is a natural approach from an information-theoretic viewpoint, since high entropy corresponds to high uncertainty and thereby noninformativity of the prior. However, Φ_H is not invariant under reparametrizations, making it, as Good puts it, "somewhat arbitrary" (Good, 1963, p. 912):

"It could be objected that, especially for a continuous distribution, entropy is somewhat arbitrary, since it is variant under a transformation of the independent variable."
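Good's objection is easy to verify numerically. The following sketch (our own construction, assuming a uniform prior on [0, 2] and a hypothetical reparametrization φ(θ) = e^θ) computes the differential entropy of a density and of its pushforward by the midpoint rule; the two values disagree although they describe the same distribution:

```python
import math

K = 200_000  # grid size for the midpoint rule

def midpoint_entropy(density, a, b):
    """Approximate H = -int density*log(density) over [a, b] by the midpoint rule."""
    h = (b - a) / K
    total = 0.0
    for k in range(K):
        p = density(a + (k + 0.5) * h)
        total -= p * math.log(p) * h
    return total

# Prior pi(theta) = 1/2 on Theta = [0, 2]; its entropy is log 2.
H = midpoint_entropy(lambda t: 0.5, 0.0, 2.0)

# Pushforward under phi(theta) = e^theta: density 1/(2t) on [1, e^2],
# with entropy log 2 + 1 -- a different value for the "same" distribution.
H_push = midpoint_entropy(lambda t: 0.5 / t, 1.0, math.e ** 2)

print(H, H_push)
```

Penalties built on H therefore favor different estimates depending on the chosen parametrization, which is exactly the inconsistency discussed above.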
Following Shannon's derivation of the entropy H, we will explain why the concepts of mutual information and missing information, both of which are invariant under transformations, are far more natural quantities to use in our setup. Prior to that, let us make the notion of invariance more precise.

Invariance under transformations
Invariance under transformations guarantees consistency of the resulting probability density estimate and follows the principle that if two statisticians use equivalent models to explain equivalent data, their results must be consistent, an idea axiomatized by Shore and Johnson (1980). Here and in the following, for transformations φ : Θ → Θ̃ of the parameter space and ψ : 𝒳 → 𝒳̃ of the measurement space, φ_* and ψ_* denote the pushforward of some measure (or density) under φ and ψ, respectively.
If we wish to use a function F for the estimation of the prior (as in Equation (5) for F = log L − λΦ), invariance of F causes the diagrams in Figure 2 to commute. Note that, if we restrict the considerations to some class 𝒫 of admissible priors, then this class needs to be transformed correspondingly, 𝒫̃ := {φ_*π : π ∈ 𝒫}, such that the considered class of priors is consistent under reparametrization.
One class of functions that fulfills these invariance properties is introduced in the following theorem, where we also show that the marginal likelihood L(π) is transformation-invariant up to a constant.

Theorem 1. Let X = (x_1, …, x_M) be data stemming from a model ℳ = {p(x|θ), x ∈ 𝒳, θ ∈ Θ} and define

F_g(π) := ∫_Θ ∫_𝒳 π(θ) p(x|θ) g( p(x|θ) / p(x|π) ) dx dθ

for some measurable function g : ℝ → ℝ, such that the above integral is well defined. Then F_g is invariant under transformations of θ and x. Furthermore, the marginal likelihood L(π) = L(π | ℳ, X) defined by (1)

is invariant under transformations of θ and x up to a multiplicative constant.
Proof. This is a straightforward application of the change of variables formula. ▪

FIGURE 2 Commutative diagrams illustrating the consistency of the density estimate π_est(θ). If the estimation is performed in a transformed parameter space Θ̃ = φ(Θ) or measurement space 𝒳̃ = ψ(𝒳) (the model ℳ, the class 𝒫 of admissible priors and the data X = (x_1, …, x_M) are transformed accordingly), the results should be consistent. Note that, in the empirical Bayes framework, the data X enter in the estimation of the prior, hence the commutativity of the right diagram is not trivial

OBJECTIVE BAYESIAN APPROACH TO EMPIRICAL BAYES METHODS
The lack of invariance of common empirical Bayes methods described above will now be tackled by an approach similar to the construction of reference priors and performed along the lines of Berger et al. (2009). The two key ingredients for defining reference priors are permissibility, which yields a rigorous justification for dealing with improper priors, and the maximizing missing information (MMI) property, which is derived from information theoretic considerations and can be argued to guarantee the least informative prior.
Permissibility allows for the (positive and continuous) prior to be improper as long as it yields a proper posterior for each measurement (which is the object Bayesian statisticians are actually interested in) and these posteriors can be approximated by using proper priors arising from restricting the prior to compact subspaces Θ i ⊆ Θ.
In the empirical Bayes framework, where the aim is to approximate the true (proper) prior true ( ) = p( ), improper priors are less of an issue and we will limit ourselves to proper priors. Furthermore, it is completely unclear how to deal with improper priors in this framework, since the "restriction property" of reference priors is neither achievable nor desirable, see the remark and example below. For this reason and since the concept of permissibility has been elaborated extensively in Berger et al. (2009), we will just state its definition.

Definition 2. A strictly positive continuous function π(θ) is a permissible prior for model ℳ if
(i) for each x ∈ 𝒳, the posterior is proper, that is, ∫_Θ p(x|θ) π(θ) dθ < ∞;
(ii) for some increasing sequence of compact subsets Θ_1 ⊆ Θ_2 ⊆ … with ⋃_i Θ_i = Θ, the posteriors π_i(θ|x) ∝ 1_{Θ_i}(θ) π(θ) p(x|θ) arising from the restricted priors satisfy lim_{i→∞} D_KL(π_i(⋅|x) || π(⋅|x)) = 0 for each x ∈ 𝒳.

Here, 1_{Θ_i} is the indicator function of the subset Θ_i ⊆ Θ and D_KL(⋅||⋅) denotes the Kullback-Leibler divergence defined by

D_KL(q || p) := ∫ q(θ) log( q(θ) / p(θ) ) dθ.

While permissibility is rather a technicality for dealing with improper priors, the MMI property should be seen as the defining property of reference priors and will now be discussed in more detail.
As motivated in the introduction, penalizing by means of entropy provides a natural approach to incorporate the idea of noninformativity about the parameter into the inference process. However, if we follow Shannon's derivation of the entropy H, we see that it is not the appropriate notion in our setup. Shannon (1948) derived the entropy H from the insight that the proper way to quantify the information gain, when an event with probability p ∈ [0, 1] actually occurs, is −log(p) ∈ [0, ∞]. He then defined the entropy as the expected information gain. However, the continuous analogue to this notion, the differential entropy H given by (7), faces several complications:
• The information gain −log(π(θ)) from observing the value θ, as well as the entropy H itself, can become negative, which is difficult to interpret.
• H is variant under transformations of θ, leading to an inconsistent notion of information.
• The information gain −log(π(θ)) relies on a direct and exact (error-free) measurement of θ, which is not plausible in the continuous case.
The last point becomes even more relevant in the empirical Bayes framework, where θ is not (and usually cannot be) measured directly, but is inferred from the measurement x of another quantity. The appropriate notion for the information gain in this setup is the Kullback-Leibler divergence D_KL(π(⋅|x) || π) between posterior and prior (Kullback & Leibler, 1951). Its expected value (from one observation of model ℳ), the so-called expected information or mutual information of θ and x, is always nonnegative and invariant under transformations.
Definition 3. The expected information gained from one observation of model ℳ on a parameter θ with prior π(θ) is given by

I[π | ℳ] := ∫_𝒳 D_KL(π(⋅|x) || π) p(x|π) dx = ∫_Θ ∫_𝒳 π(θ) p(x|θ) log( p(x|θ) / p(x|π) ) dx dθ.

The expected information has very appealing properties for a penalty term:

Theorem 2. The expected information I[π | ℳ] is concave in π and invariant under transformations of θ and x.
Proof. Concavity is proven in Cover and Thomas (2006, theorem 2.7.4), while invariance follows directly from Theorem 1. ▪

As argued in Bernardo (1979), the quantity I[π | ℳ^k], the expected information on θ gained from k independent observations of ℳ, describes the missing information on θ as k goes to infinity: By performing infinite replications of ℳ one would get to know precisely the value of θ. Thus, I[π | ℳ^∞] measures the amount of missing information about θ when the prior is π(θ). It seems natural to define "vague initial knowledge" about θ as that described by the density π(θ) which maximizes the missing information in the class 𝒫.² Following this idea, maximizing the missing information (MMI) results in the least informative prior, making Φ(π) = −I[π | ℳ^∞] an appealing penalty term in (5). It is now tempting to define empirical reference priors by

π* = arg max_{π ∈ 𝒫} [ log L(π) + λ I[π | ℳ^∞] ].    (10)

However, since I[π | ℳ^k] typically diverges for k → ∞, the following detour around the optimization formulation (10) appears necessary (as we will see in Section 4.3, some simplifications are possible under certain regularity conditions):

Definition 4. Let ℳ = {p(x|θ), x ∈ 𝒳, θ ∈ Θ} be a model, 𝒫 be a class of prior functions with ∫_Θ p(x|θ) π(θ) dθ < ∞ and X = (x_1, …, x_M) be the data consisting of M independent samples from p(x). The function π ∈ 𝒫 is said to have
(I) the MMI = MMI(ℳ, 𝒫) property for model ℳ given 𝒫 if, for any compact set Θ_0 ⊆ Θ and any π̃ ∈ 𝒫,

lim inf_{k→∞} ( I[π_0 | ℳ^k] − I[π̃_0 | ℳ^k] ) ≥ 0,    (11)

where π_0 and π̃_0 denote the (renormalized) restrictions of π and π̃ to Θ_0;
(II) the MMIL(X) = MMIL(ℳ, 𝒫, X, λ) property for model ℳ given X, 𝒫 and λ > 0 if π is a proper probability density and if, for any proper probability density π̃ ∈ 𝒫,

lim inf_{k→∞} ( λ ( I[π | ℳ^k] − I[π̃ | ℳ^k] ) + log L(π) − log L(π̃) ) ≥ 0.    (12)

Here, MMIL stands for "maximizing (a trade-off between) missing information and log-likelihood". Both definitions are only useful if all the values of the expected information in (11) and (12) are finite, which we assume in the following.
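The invariance asserted in Theorem 2 can be checked numerically. The sketch below (all names our own; a midpoint-rule approximation, not the paper's code) computes I[π|ℳ] for a Bernoulli model with a uniform prior on Θ = [0, 1], then recomputes it after an assumed smooth reparametrization φ(θ) = (e^θ − 1)/(e − 1):

```python
import math

K = 20_000                       # grid size for the midpoint rule
h = 1.0 / K
grid = [(k + 0.5) * h for k in range(K)]

def mutual_info(prior, lik):
    """I[pi|M] = sum_x int pi(t) p(x|t) log(p(x|t)/p(x|pi)) dt, x in {0, 1}."""
    total = 0.0
    for x in (0, 1):
        px = sum(prior(t) * lik(x, t) for t in grid) * h   # p(x|pi)
        total += sum(prior(t) * lik(x, t) * math.log(lik(x, t) / px)
                     for t in grid) * h
    return total

# Original parametrization: theta ~ Unif([0,1]), x|theta ~ Bernoulli(theta).
I_orig = mutual_info(lambda t: 1.0, lambda x, t: t if x == 1 else 1.0 - t)

# Reparametrization phi(theta) = (e^theta - 1)/(e - 1), inverse log(1+(e-1)u).
inv = lambda u: math.log(1.0 + (math.e - 1.0) * u)
# Pushforward prior density and transformed likelihood on the new scale.
push = lambda u: (math.e - 1.0) / (1.0 + (math.e - 1.0) * u)
I_push = mutual_info(push, lambda x, u: inv(u) if x == 1 else 1.0 - inv(u))

print(I_orig, I_push)   # the two values agree up to discretization error
```

For this model the exact value is log 2 − 1/2, and both parametrizations recover it, in contrast to the differential entropy.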

4.1 The formal definition of empirical reference priors

Similar to Berger et al. (2009), and in accordance with their definition of reference priors, we now define empirical reference priors, which constitute the main contribution of this article.

Definition 5. A strictly positive continuous function π_ref(θ) = π_ref(θ | ℳ, 𝒫) is a reference prior for model ℳ given prior class 𝒫 if it is permissible and has the MMI property. A probability density π_erp(θ) = π_erp(θ | ℳ, 𝒫, X, λ) is an empirical reference prior for model ℳ given prior class 𝒫, data X = (x_1, …, x_M) and smoothing parameter λ > 0, if it has the MMIL(X) property.
Let us now formulate and prove the key properties of empirical reference priors.
Theorem 3 (Invariance of the empirical reference prior). The empirical reference prior is invariant under transformations φ of θ and ψ of x in the following sense:

π_erp(⋅ | ℳ̃, 𝒫̃, X̃, λ) = φ_* π_erp(⋅ | ℳ, 𝒫, X, λ),

where we adopted the notation from the definition of invariance and 𝒫̃ := {φ_*π : π ∈ 𝒫}.
Proof. Since log L(π) is invariant under transformations of θ and x up to an additive constant by Theorem 1, this is a direct consequence of Theorems 1 and 2. ▪

Theorem 4 (Compatibility with sufficient statistics). If the model ℳ = {p(x|θ), x ∈ 𝒳, θ ∈ Θ} has a sufficient statistic t = t(x) ∈ 𝒯, then

π_erp(θ | ℳ, 𝒫, X, λ) = π_erp(θ | ℳ_t, 𝒫, T, λ),

where ℳ_t := {p(t|θ), t ∈ 𝒯, θ ∈ Θ} denotes the model in terms of t and T := (t_1, …, t_M), t_m := t(x_m), the correspondingly transformed data.

Proof. Since t is a function of x and a sufficient statistic for θ, the factorization theorem yields p(x|θ) = g(t(x)|θ) h(x) for suitable functions g and h. This implies that the marginal log-likelihoods log L(π) and log L_t(π) agree up to an additive constant, where L_t(π) = ∏_{m=1}^M p(t_m | π), t_m := t(x_m), denotes the marginal likelihood in terms of t. Since the expected information is also invariant under such transformations, Berger et al. (2009, theorem 5), this proves the claim. ▪

Remark 1. Reference priors have the appealing property that their restrictions to any compact subset Θ_0 coincide with the reference priors on Θ_0, see Berger et al. (2009, section 5):

π_ref(θ | ℳ, 𝒫)|_{Θ_0} ∝ π_ref(θ | ℳ_0, 𝒫_0),

where ℳ_0 and 𝒫_0 denote the restrictions to Θ_0. However, unlike for objective priors in the absence of data, this property is not desirable in the empirical Bayes framework, as explained in the example below, and usually will not be fulfilled by empirical reference priors. Therefore, a definition of MMIL(X) using restrictions of possibly improper priors (as in the definition of MMI) is not meaningful and we are forced to limit ourselves to proper priors. This limitation is not too restrictive since the aim of empirical Bayes methods is to approximate the true prior π_true(θ) and improper priors do not play a major role. For compact parameter spaces Θ (and in all other cases for which the reference prior turns out to be proper), empirical reference priors provide a meaningful generalization of reference priors, which then correspond to the case M = 0, the absence of data X.
Example 1. Let the true data-generating prior be the uniform prior π_true ≡ 1/2 on Θ = [0, 2], that is, θ ∼ Unif([0, 2]), and let ℳ be the location model given by x|θ ∼ 𝒩(θ, 0.5²). For a "large" data set X consisting of M = 100 measurements, the empirical reference prior π_erp(θ | ℳ, 𝒫, X) can be expected to provide a good approximation of π_true(θ), hence its restriction to Θ_0 = [0, 1] will be approximately uniform, see Figure 3. However, the empirical reference prior π_erp(θ | ℳ_0, 𝒫_0, X) on Θ_0 has to put much more weight on values close to 1 in order to explain the many measurements x_m which are larger than 1.
Hence, unlike for reference priors, the equality π_erp(θ | ℳ, 𝒫, X)|_{Θ_0} = π_erp(θ | ℳ_0, 𝒫_0, X) is neither fulfilled nor desirable. Of course, in practice, the parameter space Θ should agree with (or at least include) the domain of the true prior in order to be consistent with the data-generating distribution.
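The compatibility with sufficient statistics (Theorem 4) rests on the marginal log-likelihoods in x and in t = t(x) agreeing up to an additive constant. This can be verified directly for a Bernoulli model, where the number of successes is sufficient (a sketch with data and priors of our own choosing):

```python
import math

# Grid for midpoint-rule integration over Theta = [0, 1].
K = 5_000
h = 1.0 / K
grid = [(k + 0.5) * h for k in range(K)]

N = 10
T = [3, 7, 5, 9, 2]   # sufficient statistics t_m = number of successes in N trials

def log_L(prior, with_binom_coeff):
    """Marginal log-likelihood; the binomial coefficient distinguishes the
    model in the raw trial sequence x from the model in t = t(x)."""
    total = 0.0
    for t in T:
        integral = sum(prior(th) * th ** t * (1.0 - th) ** (N - t)
                       for th in grid) * h
        total += math.log(integral)
        if with_binom_coeff:
            total += math.log(math.comb(N, t))   # theta-independent factor
    return total

pi1 = lambda th: 1.0                       # uniform prior
pi2 = lambda th: 6.0 * th * (1.0 - th)     # Beta(2, 2) prior

# The difference between the two priors' log-likelihoods is identical in
# both models: the log-likelihoods agree up to an additive constant.
dx = log_L(pi1, False) - log_L(pi2, False)
dt = log_L(pi1, True) - log_L(pi2, True)
print(dx, dt)
```

Since the binomial coefficients do not depend on the prior, they cancel in any comparison of priors, so the MPLE (and the empirical reference prior) is unchanged by passing to the sufficient statistic.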
4.2 Choice of the smoothing parameter λ

A natural generalization of the MMIL(X) property is to replace the trade-off λ I[π | ℳ^k] + log L(π) by Ψ(I[π | ℳ^k], log L(π)) in (12), where Ψ : ℝ² → ℝ can be any concave function that is monotonically increasing in both arguments. We will stick to the former formulation and choose λ via likelihood cross-validation (Silverman, 1986, equation (3.43)),

λ* = arg max_{λ>0} ∑_{m=1}^M log p( x_m | π_erp(⋅ | ℳ, 𝒫, X_{−m}, λ) ),    (13)

where X_{−m} := {x_{m′} ∈ X | m′ ≠ m} denotes the data set X with the mth point x_m left out.

Proposition 1. The smoothing parameter λ* defined by (13) is invariant under transformations of θ and x, as long as the class 𝒫 of admissible priors is transformed accordingly.
Proof. From (8), (9), and Theorem 3 we obtain, for transformed data X̃ = ψ(X),

p( x̃_m | π_erp(⋅ | ℳ̃, 𝒫̃, X̃_{−m}, λ) ) = C(x_m) p( x_m | π_erp(⋅ | ℳ, 𝒫, X_{−m}, λ) ),

where we adopted the notation from the definition of invariance and C(x) = |det(ψ^{−1})′(x)| > 0 does not depend on λ. This proves the claim. ▪

4.3 Empirical reference priors under asymptotic normality
As proven in Clarke and Barron (1994), the reference prior coincides with the Jeffreys prior (Jeffreys, 1946, 1961) under certain regularity conditions, which basically ensure asymptotic posterior normality. Let us recall the basic results, which we state for any dimension d ∈ ℕ of the parameter space Θ.

Condition 1. For each x ∈ 𝒳, the first and second partial derivatives of log p(x|θ) with respect to θ exist and their expectations are finite and continuous in θ. The Fisher information matrix i(θ) defined by (14),

i(θ) := E[ ∇_θ log p(x|θ) ∇_θ log p(x|θ)ᵀ ],    (14)

is positive definite for each θ ∈ Θ and equals ĩ(θ) := −E[ D² log p(x|θ) ], where D² denotes the Hessian matrix with respect to θ. The model ℳ is identifiable as defined by (2), the parameter space Θ is compact and 𝒫 consists of all positive and continuous probability density functions on Θ (this implies that 𝒫 is a convex set).

Proposition 2. Under Condition 1, the expected information satisfies the asymptotic expansion

I[π | ℳ^k] = (d/2) log( k / (2πe) ) + ∫_Θ π(θ) log( √(det i(θ)) / π(θ) ) dθ + o(1),  k → ∞.    (15)

Proof. See Clarke and Barron (1994). ▪

Since the first term on the right-hand side of (15) does not depend on the prior π(θ) and the second one is independent of k, the reference prior coincides with the Jeffreys prior π_J(θ) ∝ √(det i(θ)) and the definition of empirical reference priors recovers the form of the MPLE (5):

Corollary 2. Under Condition 1, the empirical reference prior is given by the following MPLE (5),

π_erp = arg max_{π ∈ 𝒫} [ log L(π) − λ Φ_ℳ(π) ],  Φ_ℳ(π) := D_KL(π || π_J) = ∫_Θ π(θ) log( π(θ) / π_J(θ) ) dθ,    (16)

where the penalty term Φ_ℳ will be referred to as the missing information penalty.
Theorems 1 and 2 imply desirable properties of the missing information penalty Φ_ℳ and, in particular, the existence of a unique empirical reference prior that is invariant under transformations of θ and x.

Corollary 3. Under Condition 1, the optimization problem (16) is strictly concave in π and invariant under transformations of θ and x. Hence, since 𝒫 is convex, it has a unique solution π_erp.
Proof. For the concavity of log L(π) see Lindsay (1995, section 5.1.3), while the strict convexity of Φ_ℳ follows from van Erven and Harremoës (2014, theorem 11). Invariance is a direct consequence of Theorems 1 and 2. ▪

In order to realize the optimization formulation (16), we now compute analytically the gradient of the functional to be maximized. In addition, we characterize π_erp as the unique fixed point of a certain function F* : 𝒫 → 𝒫, which motivates the fixed point iteration in Algorithm 2.
Theorem 5. Let Condition 1 hold. Then the gradient of the functional Ψ(π) := log L(π) − λ Φ_ℳ(π) in (16), with respect to ⟨⋅, ⋅⟩_{L²(Θ)}, is given by

∇Ψ(π)(θ) = ∑_{m=1}^M p(x_m|θ) / p(x_m|π) − λ ( log( π(θ) / π_J(θ) ) + 1 ).    (18)

It follows that the empirical reference prior π_erp is the unique fixed point of F* : 𝒫 → 𝒫,

F*(π)(θ) := Z(π)^{−1} π_J(θ) exp( λ^{−1} ∑_{m=1}^M p(x_m|θ) / p(x_m|π) ),  Z(π) := ∫_Θ π_J(θ) exp( λ^{−1} ∑_{m=1}^M p(x_m|θ) / p(x_m|π) ) dθ.    (19)

Proof. We consider perturbations of Ψ(π) in arbitrary directions v ∈ C(Θ) with ∫_Θ v(θ) dθ = 0, keeping the perturbed prior normalized. For such v,

d/dε Ψ(π + εv)|_{ε=0} = ⟨ v, ∑_{m=1}^M p(x_m|⋅)/p(x_m|π) − λ( log(π/π_J) + 1 ) ⟩_{L²(Θ)},

which proves (18). The above inner product is zero for any v ∈ C(Θ) with ∫ v = 0 if and only if its second argument is constant in θ. Hence, at the maximizer, λ log(π(θ)/π_J(θ)) − ∑_{m=1}^M p(x_m|θ)/p(x_m|π) is constant in θ, that is, π ∝ π_J exp( λ^{−1} ∑_m p(x_m|⋅)/p(x_m|π) ), which proves the second claim. Note that F* : 𝒫 → 𝒫, since we assumed the parameter space Θ to be compact, and that the uniqueness of the fixed point follows from the strict concavity of Ψ (Corollary 3). ▪

PRACTICAL REALIZATION OF THE EMPIRICAL REFERENCE PRIOR
In this section we demonstrate how the empirical reference prior can be computed in the asymptotically normal case (i.e., under Condition 1). We suggest two algorithms, one being a straightforward optimization of the strictly concave functional in (16) and the other a fixed point iteration, whose convergence, however, is not yet fully understood.
For simplicity, we treat only the case where Θ = [a, b] is an interval and is discretized by the equidistant grid G_K := {θ_k = a + (k − 1/2)(b − a)/K : k = 1, …, K}. Integrals over Θ will be approximated by the midpoint rule,

∫_Θ f(θ) dθ ≈ ((b − a)/K) ∑_{k=1}^K f(θ_k).

Furthermore, we assume that the Jeffreys prior π_J(θ) can be evaluated pointwise in each θ_k, either because its analytic form is available or by a numerical approximation of the integral in (14). Finally, we consider the computation of π_erp(θ) = π_erp(θ | ℳ, 𝒫, X, λ) only for a given smoothing parameter λ. The choice of λ requires another optimization as discussed in Section 4.2, this time in λ, where the algorithms below have to be executed many times for the computation of π_erp(θ | ℳ, 𝒫, X_{−m}, λ) with varying data sets X_{−m}, see the likelihood cross-validation formula (13). While this "brute force" solution seems computationally challenging, the overall procedure could be performed in all our examples within just a few seconds (mainly due to the strict concavity of the optimization problem (16)). A more elegant solution which combines the two optimization problems is imaginable, but goes beyond the scope of this article. In the two algorithms below, we identify each prior π with its discretized version π̂ = (π(θ_1), …, π(θ_K)) (and similarly for π_J, π_t, etc.).
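As a quick check of this discretization (a sketch with an assumed integrand, not part of the paper's code), the midpoint rule on G_K already integrates a smooth density accurately for moderate K:

```python
import math

def midpoint(f, a, b, K):
    """Midpoint rule on the equidistant grid G_K over [a, b]."""
    h = (b - a) / K
    return sum(f(a + (k - 0.5) * h) for k in range(1, K + 1)) * h

# Integrate the N(1, 0.5^2) density over Theta = [0, 2].
pdf = lambda t: math.exp(-2.0 * (t - 1.0) ** 2) / math.sqrt(math.pi / 2.0)
approx = midpoint(pdf, 0.0, 2.0, 200)
exact = math.erf(math.sqrt(2.0))      # P(0 <= x <= 2) for N(1, 0.25)
print(approx, exact)
```

With K = 200 the quadrature error is already far below the statistical error of any realistic prior estimate.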

Algorithm 1 (Optimization). (I) Evaluate the Jeffreys prior π_J on the grid G_K. (II) Optimize Ψ(π̂) as a function of π̂ by an optimization algorithm of your choice (we use the method of moving asymptotes [MMA] of Svanberg (2002)).
Motivated by the fixed point characterization of π_erp in Theorem 5, it is tempting to implement the fixed point iteration π_t = F*(π_{t−1}), t ∈ ℕ. However, our empirical studies showed that the resulting sequence (π_t)_{t∈ℕ} often fails to converge, indicating that F* is, in general, not a contraction. This suggests to decelerate the iteration by choosing π_t = (1 − α_t) π_{t−1} + α_t F*(π_{t−1}) with step sizes α_t ∈ [0, 1]. While this gave satisfactory results in our examples, it is hard to characterize the step sizes for which the new sequence converges.
Algorithm 2 (Fixed point iteration). 1. Choose an arbitrary positive probability density π_0, for example, the (normalized) Jeffreys prior π_J. 2. Iterate π_t = (1 − α_t) π_{t−1} + α_t F*(π_{t−1}) until the residual ‖F*(π_t) − π_t‖ falls below a prescribed tolerance.
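A damped iteration of this kind can be sketched as follows (our own minimal implementation on a grid, assuming the fixed point map of Theorem 5, a normal location model with constant Jeffreys prior, made-up data, and a constant step size α_t = 0.5; the actual implementation is the cited Julia package):

```python
import math

a, b, K = 0.0, 2.0, 200
h = (b - a) / K
grid = [a + (k + 0.5) * h for k in range(K)]
lam = 10.0                               # smoothing parameter lambda
piJ = [1.0 / (b - a)] * K                # constant Jeffreys prior (location model)

# made-up data spread over [0, 2]
X = [0.12, 0.35, 0.48, 0.61, 0.77, 0.83, 1.02, 1.15, 1.24, 1.38,
     1.47, 1.55, 1.69, 1.74, 1.88, 0.25, 0.95, 1.62, 0.55, 1.30]

lik = lambda x, t: math.exp(-2.0 * (x - t) ** 2) / math.sqrt(math.pi / 2.0)
likmat = [[lik(x, t) for t in grid] for x in X]   # precompute likelihood matrix

def F_star(pi):
    """F*(pi) proportional to piJ * exp(sum_m p(x_m|.)/p(x_m|pi) / lam), normalized."""
    marg = [sum(l * p for l, p in zip(row, pi)) * h for row in likmat]
    s = [sum(row[k] / mx for row, mx in zip(likmat, marg)) for k in range(K)]
    smax = max(s)                                 # shift for numerical stability
    new = [q * math.exp((sk - smax) / lam) for q, sk in zip(piJ, s)]
    Z = sum(new) * h
    return [p / Z for p in new]

pi = list(piJ)                                    # step 1: initialize with pi_J
for _ in range(400):                              # step 2: damped iteration
    target = F_star(pi)
    pi = [0.5 * p + 0.5 * q for p, q in zip(pi, target)]   # alpha_t = 0.5

residual = max(abs(p - q) for p, q in zip(pi, F_star(pi)))
print(residual, sum(pi) * h)   # small fixed-point residual; pi integrates to 1
```

Each update stays a proper density by construction, and the damping suppresses the oscillations that the undamped iteration typically exhibits.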

NUMERICAL COMPUTATIONS
In the following, we apply the empirical reference prior approach to a synthetic data set as well as to the famous real-life data set of Efron and Morris (1975). Our code implements Algorithm 1 and is available in the form of a Julia package at https://github.com/axsk/ObjectiveEmpiricalBayes.jl.
Both optimization problems (5) and (16) are solved by Algorithm 1 using the method of moving asymptotes (MMA) of Svanberg (2002) from the NLopt package provided by Johnson (n.d.), after discretizing the parameter space with 200 equidistant grid points. All priors are estimated using the same data consisting of M = 100 synthetic measurements. In both cases, the Jeffreys prior can be computed analytically. We obtain a constant Jeffreys prior for the likelihood model in (20) and π_J(θ̃) ∝ |θ̃|^{−1} for the one in (21).
As expected from the theory in Section 4, we observe how the lack of invariance of conventional penalty terms is resolved by the missing information penalty Φ_ℳ, without losing the effect of regularization, see Figures 4 and 5. In addition, the empirical reference prior is strictly positive on the whole interval, making Φ_ℳ preferable to Tikhonov regularization from an objective Bayesian viewpoint: Excluding certain parameter values completely based on a finite number of measurements appears unreasonable.

FIGURE 4 Conventional penalty terms, here the Tikhonov regularization, are variant under transformations φ : Θ → Θ̃, resulting in inconsistent prior estimates (see also Figure 2). If the estimation is performed in a transformed space Θ̃, it gives a different estimate than the pushforward of the estimate in Θ, π̃_est ≠ φ_*π_est. Dashed lines indicate transformed densities. Here, L² denotes the MPLE using Tikhonov regularization as penalty. In order to demonstrate the lack of invariance of the density estimate, we chose the same smoothing parameter λ in both spaces

FIGURE 5 Due to the transformation invariance of the missing information penalty Φ_ℳ, the empirical reference prior estimates are consistent under reparametrization (up to a negligible numerical error), π̃_erp = φ_*π_erp

Example with real-life data
To illustrate our approach on real-life data, let us consider the historical batting averages (or baseball) example of Efron and Morris (1975), which is one of the most famous small data sets in statistics. The batting averages x_m (number of successful hits divided by the number of tries) for M = 18 major league baseball players early in the 1970 season, to be precise, the first N = 45 at bats of each, are used to estimate their true success rates θ_m ∈ Θ = [0, 1], m = 1, …, M, which are taken as the averages over the remaining season (around 370 at bats for each player). This example was used by Efron and Morris (1975) to illustrate the strength of the James-Stein (JS) estimator θ̂_JS, a particular parametric empirical Bayes method, compared with the MLE θ̂_MLE = x in terms of the (empirical) mean squared error

MSE(θ̂) := (1/M) ∑_{m=1}^M (θ̂_m − θ_m)².

Since the JS estimator assumes both the prior and the likelihood model to be normal, the natural binomial likelihood model ℳ_Bin given by

N x_m | θ_m ∼ Bin(N, θ_m)    (22)

is replaced by its normal approximation ℳ_𝒩,

x_m | θ_m ∼ 𝒩(θ_m, σ²),    (23)

where σ² := x̄(1 − x̄)/N is chosen to approximate the variance estimate of the binomial distribution and x̄ := M^{−1} ∑_{m=1}^M x_m. Note that in the original paper by Efron and Morris (1975) an additional arcsin transformation is applied to each of the measurements x_m in order to stabilize their variance. For simplicity and following the presentation in Efron (2010, section 1.2), we omit this technical detail. We will apply our empirical reference prior approach both to the original formulation (23) using ℳ_𝒩 and to the more meaningful likelihood model ℳ_Bin in (22). In both cases, the Jeffreys prior can be computed analytically: We obtain a constant Jeffreys prior for the likelihood model ℳ_𝒩 in (23) and π_J(θ) ∝ |θ(1 − θ)|^{−1/2} for ℳ_Bin in (22).
We compare these two approaches to the JS estimator, which assumes a normal prior π_true = 𝒩(μ, τ²), making it a parametric empirical Bayes method with only the parameters μ and τ² of the prior to be estimated. A wonderful derivation of the JS estimator and its application to the baseball data set can be found in Efron (2010, section 1.2), with the resulting estimates

θ̂_JS,m = x̄ + ( 1 − (M − 3)σ² / S )( x_m − x̄ ),  S := ∑_{m=1}^M (x_m − x̄)²,

where, for m = 1, …, M, the estimates θ̂_JS,m approximate θ_m by the individual posterior means.

Note: As illustrated in Figure 6, the three empirical Bayes methods perform very similarly, both in terms of the individual estimates θ̂_JS ≈ θ̂_erp^𝒩 ≈ θ̂_erp^Bin and in terms of the overall mean squared error (MSE), and strongly outperform the "overconfident" maximum likelihood estimate θ̂_MLE = x. The ground truth for θ is taken as the batting average of each player over the remaining season.
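For concreteness, the James-Stein shrinkage just described can be sketched as follows (our own illustration with made-up batting averages, using the positive-part variant of the shrinkage factor for robustness; not the paper's code or the original data):

```python
# James-Stein estimator (positive-part variant) for binomial proportions,
# following the normal approximation x_m ~ N(theta_m, sigma^2).
N = 45                                   # at bats per player
x = [0.400, 0.378, 0.356, 0.333, 0.311, 0.311, 0.289, 0.267, 0.244, 0.244,
     0.222, 0.222, 0.222, 0.222, 0.222, 0.200, 0.178, 0.156]  # made-up averages
M = len(x)

xbar = sum(x) / M
sigma2 = xbar * (1.0 - xbar) / N         # binomial variance approximation
S = sum((xm - xbar) ** 2 for xm in x)

# positive-part shrinkage factor: 0 = shrink fully to xbar, 1 = no shrinkage
shrink = max(0.0, 1.0 - (M - 3) * sigma2 / S)
theta_js = [xbar + shrink * (xm - xbar) for xm in x]

print(shrink)
print(theta_js[:3])   # estimates pulled from the raw averages toward xbar
```

Every estimate lies between the raw average x_m and the grand mean x̄, which is the "shrinkage toward the mean" effect visible in Figure 6.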
The results are given in Table 1, while Figure 6 illustrates the data as well as the various point estimates, together with the three estimated priors (note that these are not density estimates in the classical sense, since the parameters θ_m, and not the measurements x_m, are samples from π_true).
All three approaches provide very similar results. The main reason is that, as is apparent from Figure 6, the assumption of a normal prior made by the James-Stein approach is a meaningful parametric choice in this specific example, where the parameter θ represents the performance (in terms of batting success rates) of baseball players. If this were not the case, for example, if another parametrization were chosen by, say, measuring the performance on a logarithmic scale, our nonparametric approach is likely to outperform the James-Stein estimator. In fact, our approach would provide exactly the same results for any reparametrization of the model. To the best of our knowledge, this is a unique feature among all empirical Bayes methods. While the results of the empirical reference prior approach and the JS approach are comparable, the normal prior π_JS clearly fails to be invariant under reparametrizations. Hence, the quality of the JS estimates could change drastically if, say, the players' performance were measured on a logarithmic scale. The "shrinkage toward the mean" effect of empirical Bayes estimates is clearly observable. The first impulse, that a smaller shrinkage would provide better results, is deceptive: this would reduce the (squared) error for some players, but drastically increase the (squared) error for others, a modification that is likely to increase the overall MSE.

THE MULTIPARAMETER CASE d > 1
In the case of several parameters, the reference prior π_ref is no longer defined as the prior which maximizes the missing information, but by the sequential scheme presented in Berger and Bernardo (1992). This scheme applies the procedure described in Section 4 successively to the conditional priors π(θ_i | θ_1, …, θ_{i−1}), i = 1, …, d, after a convenient ordering of the parameters. This leads us to three possible generalizations of empirical reference priors to the multiparameter case. The construction from Section 4, in particular the definition of the empirical reference prior, will be referred to as the one-parameter construction.
(A) Adopt the definitions from Section 4 exactly as they are. In the asymptotically normal case given by Condition 1, this corresponds to the optimization problem (16), where π_J(θ) = √(det i(θ)) denotes the (arbitrarily scaled) Jeffreys prior. This construction provides an extension of the Jeffreys prior, not of reference priors. The reasons why reference priors are favored over the Jeffreys prior in dimension d > 1 are marginalization paradoxes and inconsistencies of the latter, see Bernardo (2005) and references therein. It is yet unclear to what extent these arguments remain valid in the presence of data; hence, this approach might be justified in the empirical Bayes framework.
(B) In light of (16) and with the intention of generalizing reference priors, replace π_J by π_ref in the penalty term. This yields an extension of reference priors and agrees with the one-parameter construction in the asymptotically normal case (Condition 1), but not necessarily in the general case.
(C) Define a sequential scheme similar to the one used for reference priors. For simplicity, we restrict the presentation to the case of d = 2 parameters θ_1, θ_2, Θ = Θ_1 × Θ_2, but the construction can easily be extended to any number of parameters. As is common practice for reference priors, the parameters have to be ordered by "inferential importance". We perform steps similar to the ones in Bernardo (2005, section 3.8). Note that, as in the case of reference priors, this scheme lacks objectivity since it requires an ordering of the parameters, which is a heuristic element and not unambiguous in many applications.
(A) and (B) are straightforward generalizations of the theory presented in Section 4. It would be interesting to analyze the connection between the approaches (B) and (C).
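For option (A), the multiparameter Jeffreys prior π_J(θ) ∝ √(det i(θ)) can be approximated numerically by estimating the Fisher information matrix as the second moment of the score. The sketch below does this by Monte Carlo for the normal location-scale model θ = (μ, σ), where the analytic value is √2/σ²; the function name and sample size are illustrative choices, not from the paper.

```python
import numpy as np

def jeffreys_normal_mc(mu, sigma, n=200_000, seed=0):
    """Monte Carlo sketch of pi_J(theta) ∝ sqrt(det i(theta)) for N(mu, sigma^2),
    estimating the Fisher information i(theta) = E[score score^T] from samples.
    Analytic value for this model: sqrt(2) / sigma^2."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mu, sigma, size=n)
    s_mu = (x - mu) / sigma**2                      # d/dmu   log p(x | mu, sigma)
    s_sig = ((x - mu) ** 2 - sigma**2) / sigma**3   # d/dsigma log p(x | mu, sigma)
    scores = np.stack([s_mu, s_sig])
    info = scores @ scores.T / n                    # empirical E[s s^T]
    return np.sqrt(np.linalg.det(info))
```

Such a numerical approximation is useful whenever det i(θ) is not available in closed form.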
Remark 2. Note that while all three of the above options are invariant under transformations of the measurement x, only (A) truly guarantees invariance under transformations of the parameter θ = (θ_1, …, θ_d), which it inherits from the invariance of the log-likelihood (up to an additive constant, Theorem 1) and of the Jeffreys prior. The other two options (B) and (C) depend on the particular choice of how the parameter components θ_1, …, θ_d are ordered. Hence, invariance is only given under reparametrizations that transform each component separately, under the assumption that their ordering is not altered.

CONCLUSION
We successfully applied the approach used in the construction of reference priors to determine a transformation-invariant penalty term for MPLE which favors noninformativity of the prior, namely the missing information I[π | X^k] as k → ∞. This interaction of objective Bayesian analysis and empirical Bayes methods results in a consistent and informative prior estimate, which we termed the empirical reference prior π_erp. The distinctive feature of π_erp is its invariance under reparametrization, which is, to the best of our knowledge, unique among all empirical Bayes methodologies. The smoothing parameter tunes the amount of information contained in the prior: the data, represented by the marginal likelihood L(π), yield information about the distribution of θ, but maximizing L(π) alone overfits the data. The penalty term Φ(π), on the other hand, favors noninformative priors. We performed this trade-off by likelihood cross-validation, which we also proved to be invariant under transformations (Proposition 1).
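The trade-off described above can be illustrated on a grid: maximize the log marginal likelihood of the data minus a penalty that pulls the prior toward noninformativity. In the sketch below, the penalty γ·KL(π ‖ π_J) toward the (binomial) Jeffreys prior is only a simple stand-in for the paper's missing-information penalty Φ(π), and all names, grid sizes, and step sizes are illustrative.

```python
import numpy as np

def penalized_prior(counts, N, gamma=1.0, K=50, iters=2000, lr=0.1):
    """Sketch of maximum *penalized* likelihood estimation of a discretized prior.

    Maximizes  sum_m log(L_m . pi)  -  gamma * KL(pi || pi_J)
    over probability vectors pi on a grid, via gradient ascent in
    softmax coordinates.  The KL penalty toward the Jeffreys prior is a
    stand-in for the missing-information penalty Phi(pi) of the paper.
    """
    theta = (np.arange(K) + 0.5) / K                  # grid on (0, 1)
    pi_J = 1.0 / np.sqrt(theta * (1 - theta))         # binomial Jeffreys prior
    pi_J /= pi_J.sum()
    counts = np.asarray(counts)
    # binomial likelihood matrix L[m, k] = p(counts_m | theta_k), constants dropped
    L = theta[None, :] ** counts[:, None] * (1 - theta[None, :]) ** (N - counts[:, None])
    w = np.zeros(K)                                   # softmax parameters
    for _ in range(iters):
        pi = np.exp(w - w.max()); pi /= pi.sum()
        marg = L @ pi                                 # marginal likelihood per datum
        grad_pi = (L / marg[:, None]).sum(axis=0) - gamma * (np.log(pi / pi_J) + 1)
        g = pi * (grad_pi - pi @ grad_pi)             # chain rule through softmax
        w += lr * g
    pi = np.exp(w - w.max())
    return theta, pi / pi.sum()
```

With γ = 0 this reduces to the overfitting nonparametric MLE; large γ pushes the estimate toward the noninformative prior, which is exactly the trade-off tuned by the smoothing parameter.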
Besides invariance, our method has further favorable properties such as compatibility with sufficient statistics and, under the assumption of asymptotic normality, strict concavity of the resulting optimization problem (16). So far, our approach lacks an explicit formula for the empirical reference prior.³ We applied our methodology to a synthetic data set to illustrate the invariance property of the empirical reference prior, as well as to a real-life example, the famous baseball data set of Efron and Morris (1975), collected to demonstrate the advantages of the James-Stein estimator, a parametric empirical Bayes method with which we compare our method. In the latter case, both approaches gave nearly identical results, the most natural explanation being that the parametric choice of the prior used in the James-Stein approach is a good approximation in this specific example with this specific parametrization. Our nonparametric empirical reference prior approach is likely to provide better results in situations where the "true" prior π_true cannot be well approximated by a Gaussian. Our code for the numerical computations is available at https://github.com/axsk/ObjectiveEmpiricalBayes.jl.
³ An explicit formula for reference priors (Berger et al., 2009, theorem 7) exists, so far, only in the one-parameter case d = 1 and under rather restrictive assumptions. Furthermore, it requires the numerical approximation of integrals in the possibly high-dimensional measurement space X as well as the computation, or at least an estimate, of the limit k → ∞.
The generalization of our approach to several dimensions is not unambiguous and has been discussed in Section 7.