Estimating absorption time distributions of general Markov jump processes

The estimation of absorption time distributions of Markov jump processes is an important task in various branches of statistics and applied probability. While the time-homogeneous case is classic, the time-inhomogeneous case has recently received increased attention due to its added flexibility and advances in computational power. However, existing approaches assume commuting sub-intensity matrices, which in various cases limits the parsimony of the resulting representations. This paper develops the theory required to solve the general case through maximum likelihood estimation, in particular using the expectation-maximization algorithm. A reduction to a piecewise constant intensity matrix function is proposed in order to provide succinct representations, where a parametric linear model binds the intensities together. Practical aspects are discussed and illustrated through the estimation of notoriously demanding theoretical distributions and real data, from the perspective of matrix analytic methods.


Introduction
In this paper, we consider statistical estimation of distributions which arise as absorption times of general Markov jump processes, also known as inhomogeneous phase-type distributions (IPH). The data are the absorption times generated by independent samples of Markov jump processes run until absorption, though the paths are not observed. The incompleteness of the data is therefore handled with an expectation-maximization (EM) algorithm, which allows for effective maximum likelihood estimation. For practical purposes, we consider and implement the important special case where the underlying transition rates are piecewise constant.
Though time-inhomogeneous Markov jump processes have classically been used in many contexts, IPHs were only formally introduced in [2] as the distributions of absorption times of time-inhomogeneous Markov jump processes taking values in a finite state space where one state is absorbing and the remaining states are transient. They are a generalization of the classic phase-type distributions (PH), where the underlying Markov jump process is time-homogeneous (see e.g. [8] for an overview of the latter). These distributions may be used in situations where modeling tail behaviors different from the exponential, e.g. heavy tails, is a concern, cf. the examples in [2], where a subclass consisting of IPHs generated by intensity matrices given in terms of a single matrix scaled by some real non-negative function is considered. Within this subclass, the intensity matrices commute over time and thereby provide a link to the corresponding time-homogeneous PH distributions in terms of a parameter-dependent transformation. In this special case, the theory simplifies significantly and allows for more direct analysis. This is, for example, the case regarding statistical estimation, where [4] develops an EM algorithm based on the parameter-dependent transformation, so that the main engine is essentially the conventional EM algorithm known from PH fitting in [7].
Since IPHs are absorption times of time-inhomogeneous Markov jump processes, they may naturally also be used for modeling processes that conceptually can be represented as evolving through states, e.g. in multi-state Markovian life insurance models (see e.g. [11,16]), where states (phases) relate to the different conditions of a policyholder in a time-dependent manner. This time-dependence would in general require non-commutative intensity matrices to provide meaningful models. Somewhat related, [3] considers mortality modeling using IPHs, including age and time effects, though only the subclass of commuting matrices is examined there.
For time-inhomogeneous Markov jump processes, parametric modeling and maximum likelihood estimation of the transition rates based on the associated multivariate counting process is well-established in the literature; see e.g. [6] for an overview. By assuming piecewise constant transition rates on a time grid (as an approximation), these methods are known to reduce to Poisson regressions based on aggregated occurrences and exposures in the different time intervals, cf. e.g. [1, Section 5]. This connection is particularly important in situations with aggregated data pooled into periodic intervals, like yearly observations. For example, Poisson regression based on yearly observations is used in the Danish FSA's benchmark model for mortality risk, considered in [13, Appendix 1], which is implemented in Danish life insurance and pension companies.
In this paper, we extend the statistical fitting of IPHs from [4] to the general class of IPHs, using these well-established techniques for parametric inference of time-inhomogeneous Markov jump processes as the starting point; they constitute our (unobserved) complete data framework, which generates the observations of IPHs and for which an EM algorithm is developed. This is in contrast to the approach in [4], where the underlying homogeneous PH observations are seen as the building blocks. The general setting is, consequently, not reducible to the homogeneous case, and a non-trivial extension of the algorithm is required. In particular, the E-step is stated abstractly in terms of solutions of certain differential equations, referred to as product integrals (see [10,14]), and the M-step involves numerical optimization.
Similarly to the completely observed data case, we identify the simplifications that arise in our EM algorithm from assuming piecewise constant transition rates on a time grid, whereby the E-step can be stated in terms of products of matrix exponentials used to calculate a set of expected occurrences and exposures, and the M-step can be stated as performing maximum likelihood estimation in Poisson regressions akin to those of [1, Section 5]. This fully explicit algorithm allows for computational simplifications similar to those obtained in the complete data case and yields increased computational performance while retaining flexibility. We also implement this algorithm and show some numerical examples of mortality modeling of Danish lifetimes, as well as examples of fitting to theoretical distributions, confirming that the class of models does not suffer from some of the drawbacks that usual matrix analytic methods have. Another reason for allowing different intensity matrices in different regions of the support is more pragmatic: it allows for fitting data that traditionally requires higher-order IPHs, e.g. multi-modal or skewed data. In such cases, we may obtain adequate fits in a discretized model of much lower dimension.
One additional extension of our model appears during the M-step: the classic EM algorithm of [7] has an explicit solution (number of jumps divided by total time spent in states, the so-called occurrence-exposure rates), while in our case we require a parametrization of the transition rates to perform the required Poisson regressions. The canonical parametrization consisting of an intercept agrees with the simpler explicit solution. Fortunately, the added computational burden is low, since standard software deals with generalized linear models in a stable and effective manner.
The remainder of the paper is structured as follows. In Section 2, we recall the inhomogeneous phase-type distribution (IPH). Then, in Section 3, we start out with an exposition of parametric inference of time-inhomogeneous Markov jump processes, which constitutes the complete data case. Subsequently, we tackle the incomplete data problem and develop EM algorithms for general IPHs and for those with piecewise constant transition rates. In Section 4, we consider an approach to a strong approximation of IPHs with piecewise constant transition rates using PH distributions, which may be useful when a homogeneous representation is required. Section 5 is then devoted to numerical examples of our results. Finally, in Section 6, we present some possible extensions of our model, including a case where a pre-specified tail behavior is required.

Inhomogeneous phase-type distributions
Let X = {X(t)}_{t ≥ 0} be a time-inhomogeneous Markov jump process taking values in the finite state space E = {1, ..., p, p+1}, p ∈ N, where the states 1, ..., p are transient and state p+1 is absorbing. Denote by α = (π, 0) = (π_1, ..., π_p, 0) the initial distribution of X, and by Λ(t) = {μ_{ij}(t)}_{i,j ∈ E} the intensity matrix of X. The intensity matrix Λ(t) is then of the form
\[
\Lambda(t) = \begin{pmatrix} T(t) & t(t) \\ 0 & 0 \end{pmatrix},
\]
where T(t) is the sub-intensity matrix function describing transitions between the transient states, and t(t) = -T(t)e consists of the transition rates to the absorbing state. Let τ denote the time until absorption of X, i.e.
\[
\tau = \inf\{ t \geq 0 : X(t) = p+1 \}.
\]
The transition probability matrix P(s, t) = {p_{ij}(s, t)}_{i,j ∈ E} of X, with elements p_{ij}(s, t) = P(X(t) = j | X(s) = i), is given in terms of the product integral of the transition intensity matrix (see [10,14]):
\[
P(s, t) = \prod_s^t \big( I + \Lambda(u)\,\mathrm{d}u \big). \tag{2.1}
\]
The density and survival function of τ are then
\[
f_\tau(x) = \pi \prod_0^x \big( I + T(u)\,\mathrm{d}u \big)\, t(x), \tag{2.2}
\]
\[
\bar F_\tau(x) = \pi \prod_0^x \big( I + T(u)\,\mathrm{d}u \big)\, e. \tag{2.3}
\]
In this paper, we consider the statistical fitting of IPHs based on independent observations. Although an expectation-maximization (EM) algorithm was devised in [4] for the case where T(t) = λ(t)T (for parametric intensity functions λ(t)), which implies that the matrices T(t) commute across t, no statistical model in which T(·) is non-commutative has been considered in the literature. This is a drawback of significant concern for certain applications, which we seek to remedy in this paper as our main contribution; we provide a general EM algorithm and implement it in the case of a piecewise constant intensity matrix function.
2.1. IPHs with piecewise constant intensity matrices. We now consider a discretization of the time axis, where a different constant intensity matrix is defined on each sub-interval. The purpose of this specification is two-fold. First, we seek to provide a statistical methodology for the non-commutative case, which will ease the fitting of heterogeneous data with lower matrix dimensions than previously considered. Second, and perhaps less obvious, is the generalization of discretized non-matrix versions of our model, which require a large number of intervals to provide a satisfactory approximation to the behavior of real data. In this context, the introduction of matrix parameters will allow for more flexible interpolation within sub-intervals, reducing the mesh size of the discretization.

Construct a grid 0 = s_0 < s_1 < ⋯ < s_K, on which the sub-intensity matrix function is constant in each sub-interval,
\[
T(t) = T_k, \quad t \in (s_{k-1}, s_k], \quad k = 1, \dots, K, \tag{2.4}
\]
and introduce k(x) as the unique k ∈ {1, ..., K} satisfying x ∈ (s_{k-1}, s_k]. The product integral formula (2.1) for the (sub-)probability matrix between the transient states then reduces to a product of matrix exponentials:
\[
\prod_0^x \big( I + T(u)\,\mathrm{d}u \big) = \Bigg( \prod_{k=1}^{k(x)-1} e^{T_k (s_k - s_{k-1})} \Bigg) e^{T_{k(x)} (x - s_{k(x)-1})},
\]
with the convention that the empty product equals the identity matrix. The density (2.2) and survival function (2.3) then in particular reduce to
\[
f_\tau(x) = \pi \Bigg( \prod_{k=1}^{k(x)-1} e^{T_k (s_k - s_{k-1})} \Bigg) e^{T_{k(x)} (x - s_{k(x)-1})}\, t_{k(x)}, \qquad
\bar F_\tau(x) = \pi \Bigg( \prod_{k=1}^{k(x)-1} e^{T_k (s_k - s_{k-1})} \Bigg) e^{T_{k(x)} (x - s_{k(x)-1})}\, e,
\]
where t_k = -T_k e. These expressions may be regarded as discrete approximations to their corresponding product integral expressions in the general case, but they have the advantage of being computationally much lighter to evaluate. Indeed, algorithms for computing the exponential of a matrix are varied and efficient, while product integrals must be computed by numerically solving differential equations of increased complexity.
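Concretely, the density and survival function require only a handful of ordinary matrix exponentials. The following is a minimal sketch of this evaluation (the helper name and its interface are ours, not from the paper; NumPy and SciPy are assumed):

```python
import numpy as np
from scipy.linalg import expm

def iph_density_survival(x, pi, T_list, grid):
    """Density and survival function of a piecewise-constant IPH at x.

    pi     : initial distribution over the p transient states
    T_list : sub-intensity matrices T_1, ..., T_K on (s_{k-1}, s_k]
    grid   : breakpoints s_0 = 0 < s_1 < ... < s_K
    """
    p = len(pi)
    e = np.ones(p)
    # k(x): index of the sub-interval (s_{k-1}, s_k] containing x
    k = int(np.searchsorted(grid, x, side="left"))
    k = min(max(k, 1), len(T_list))
    # Product of matrix exponentials over the full sub-intervals before k(x)
    prod = np.eye(p)
    for j in range(1, k):
        prod = prod @ expm(T_list[j - 1] * (grid[j] - grid[j - 1]))
    # Partial sub-interval from s_{k(x)-1} up to x
    prod = prod @ expm(T_list[k - 1] * (x - grid[k - 1]))
    t_exit = -T_list[k - 1] @ e  # exit-rate vector t_k = -T_k e
    return pi @ prod @ t_exit, pi @ prod @ e
```

With a single interval (K = 1), this collapses to the familiar homogeneous phase-type formulas, which provides a simple sanity check.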
The density of τ may be discontinuous at the interval endpoints that define the constant matrices. Indeed, consider e.g. f_τ(s_1−) and f_τ(s_1+). Since the matrix exponential is continuous, we have that
\[
f_\tau(s_1-) = \pi\, e^{T_1 s_1} t_1, \qquad f_\tau(s_1+) = \pi\, e^{T_1 s_1} t_2.
\]
Hence f_τ(s_1−) and f_τ(s_1+) may differ if t_1 ≠ t_2, and similarly for all the other grid points. On the other hand, if all t_k = t, then the density of τ is continuous. Similarly, a sufficient condition for differentiability at all grid points is that −T_k^2 e does not depend on k.
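This continuity criterion is easy to verify numerically. The snippet below (with illustrative matrices of our own choosing, not from the paper) builds two different sub-intensity matrices sharing the same exit-rate vector and checks that the left and right density limits at s_1 agree:

```python
import numpy as np
from scipy.linalg import expm

# Left/right limits of the density at the first breakpoint s_1:
#   f(s_1-) = pi expm(T_1 s_1) t_1,   f(s_1+) = pi expm(T_1 s_1) t_2
pi = np.array([0.6, 0.4])
s1 = 1.0
T1 = np.array([[-3.0, 1.0], [0.5, -2.0]])
T2 = np.array([[-4.0, 2.0], [0.5, -2.0]])  # different matrix, same row sums
t1, t2 = -T1 @ np.ones(2), -T2 @ np.ones(2)

left = pi @ expm(T1 * s1) @ t1
right = pi @ expm(T1 * s1) @ t2
# Equal exit vectors imply a continuous density across the breakpoint
assert np.allclose(t1, t2) and np.isclose(left, right)
```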

Estimation
This section introduces the main contribution of the paper, namely the maximum likelihood estimation of general IPHs through the expectation-maximization (EM) algorithm, with special emphasis on the case of piecewise constant transition rates.
We proceed sequentially: first, the completely observed case is reviewed; second, the incomplete data setting is built using the estimators from the previous case; finally, a simplified algorithm with piecewise constant transition rates is presented.
3.1. The complete data case. We now review some methods known from the inference of time-inhomogeneous Markov jump processes on finite state spaces, based on complete observations of their trajectories. We refer to [6] for a detailed exposition.
Suppose that we observe N ∈ N i.i.d. realizations of the time-inhomogeneous Markov jump process X on some time interval [0, T], where T > 0 is a given and fixed time horizon; represent the data by X = (X^{(1)}, ..., X^{(N)}). Denote by N = (N^{(1)}, ..., N^{(N)}) the corresponding data of the multivariate counting process, where N^{(n)}, n = 1, ..., N, have components
\[
N^{(n)}_{ij}(t) = \#\{ s \in (0, t] : X^{(n)}(s-) = i,\; X^{(n)}(s) = j \}, \quad i, j \in E,\; j \neq i.
\]
Parametrizing the transition rates with a parameter vector θ ∈ Θ, where Θ is some finite-dimensional parameter space with non-empty interior, such that μ_{ij}(t) = μ_{ij}(t; θ), we have that the likelihood function for the joint parameter (π, θ) is given by
\[
L(\pi, \theta) = L_0^{X}(\pi) \prod_{i,j \in E,\, j \neq i} L_{ij}^{X}(\theta), \tag{3.1}
\]
where, for i ∈ E and n ∈ {1, ..., N},
\[
L_0^{X}(\pi) = \prod_{i=1}^p \pi_i^{B_i}, \qquad
L_{ij}^{X}(\theta) = \prod_{n=1}^N \exp\Bigg( \int_0^T \log \mu_{ij}(s; \theta)\, \mathrm{d}N^{(n)}_{ij}(s) - \int_0^T I_i^{(n)}(s)\, \mu_{ij}(s; \theta)\, \mathrm{d}s \Bigg).
\]
Here, I_i^{(n)}(s) indicates whether the n'th observation has a sojourn in state i at time s, and B_i denotes the total number of observations with initial state i; only the latter can be aggregated over observations, due to the initial distribution not having a time-dependency.
Maximizing (3.1) yields the MLE of (π, θ). The product structure of the likelihood (3.1) (equivalently, the additive structure of the log-likelihood) in π and θ, via L_0^X respectively L_{ij}^X, i, j ∈ E, j ≠ i, enables us to estimate these separately. Regarding π, we may note (or confirm by direct calculation) that the likelihood L_0^X is proportional to the likelihood obtained from viewing (B_1, ..., B_p) as an observation from the Multinomial(N, π) distribution, where N is considered fixed. This gives a closed-form expression for the MLE:
\[
\hat\pi_i = \frac{B_i}{N}, \quad i = 1, \dots, p.
\]
For θ, a closed-form expression for the MLE is not available in general, and numerical methods are required for the optimization.

3.2. The complete data case with piecewise constant transition rates. We now assume that the transition rates μ_{ij}(·; θ) are piecewise constant of the form (2.4). The likelihood (3.1) then simplifies to
\[
L(\pi, \theta) = \prod_{i=1}^p \pi_i^{B_i} \prod_{i,j \in E,\, j \neq i} \prod_{k=1}^K \mu_{ij}^k(\theta)^{O_{ij}(k)} \exp\big( -\mu_{ij}^k(\theta)\, E_i(k) \big), \tag{3.4}
\]
where O_{ij}(k) is the total number of occurrences of transitions from state i to j in the time interval (s_{k-1}, s_k], and E_i(k) is the total time spent in state i in the time interval (s_{k-1}, s_k], the so-called local exposure:
\[
O_{ij}(k) = \sum_{n=1}^N \big( N^{(n)}_{ij}(s_k) - N^{(n)}_{ij}(s_{k-1}) \big), \qquad
E_i(k) = \sum_{n=1}^N \int_{s_{k-1}}^{s_k} I_i^{(n)}(s)\, \mathrm{d}s. \tag{3.5}
\]
Remark 3.1. The likelihood (3.4) can be seen to reduce to the likelihood considered in [7] by taking K = 1 (corresponding to homogeneity) and no parametrization of the transition rates.
Thus, in the case of piecewise constant transition rates, the occurrences and exposures in the different time intervals, along with the number of initiations in the different states, are sufficient statistics. In fact, the resulting likelihood (3.4) is proportional to the likelihood obtained from the independent observations
\[
(B_1, \dots, B_p) \sim \mathrm{Multinomial}(N, \pi), \qquad O_{ij}(k) \sim \mathrm{Poisson}\big( \mu_{ij}^k(\theta)\, E_i(k) \big),
\]
with N and E_i(k) considered fixed. Consequently, the MLE of π is (still) given by π̂_i = B_i / N, while the MLE of θ is obtained from Poisson regressions of the occurrences against the different times on the grid, which can be carried out using standard software packages. For example, if μ_{ij}^k(θ) is an exponential function of θ, a Poisson regression with log-link function and log-exposures as offsets can be carried out, corresponding to the fitting of the models
\[
\log \mathbb{E}[O_{ij}(k)] = \log E_i(k) + \theta_{ij}^\top f^{(2)}(s_k), \tag{3.8}
\]
for some suitable known function f^{(2)}, with a common choice being the identity. The predictions at s_k and at unit exposure are then the estimates of the transition rates. In the case where the parameters in θ act as the (unknown) piecewise constant transition rates themselves, i.e. θ = (θ_{ij}^k)_{k=1,...,K, i,j∈E, j≠i}, the MLE of θ simplifies to the so-called occurrence-exposure rates:
\[
\hat\theta_{ij}^k = \frac{O_{ij}(k)}{E_i(k)}.
\]
This is a special case where transition rates are estimated directly in a "non-parametric" way, and it can be retrieved by considering s_k as a categorical (instead of numeric) variable in (3.8). The assumption of piecewise constant transition rates is often seen as an approximation to the general continuous versions, obtained when the number of grid points tends to infinity. However, the resulting estimated models may be favorable even for coarser grid mesh sizes.
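As a small self-contained illustration of the occurrence-exposure special case (with made-up occurrences and exposures, not data from the paper), one can verify numerically that the ratios O_ij(k)/E_i(k) maximize the interval-wise Poisson likelihood:

```python
import numpy as np

# Hypothetical occurrences of a fixed transition i -> j and exposures in
# state i, over K = 3 grid intervals
O = np.array([5.0, 8.0, 2.0])
E = np.array([10.0, 4.0, 1.0])

mu_hat = O / E  # occurrence-exposure rates

def poisson_loglik(mu):
    # log-likelihood of O_ij(k) ~ Poisson(mu^k_ij E_i(k)), up to constants
    return np.sum(O * np.log(mu * E) - mu * E)

# Perturbing any rate away from the occurrence-exposure value lowers it
assert poisson_loglik(mu_hat) > poisson_loglik(mu_hat * 1.01)
assert poisson_loglik(mu_hat) > poisson_loglik(mu_hat * 0.99)
```

The same estimates are recovered by a Poisson GLM with a categorical time covariate, which is why standard statistical software suffices for the M-step.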
Proof. Let j ∈ {1, ..., p} and t, s ≥ 0 with 0 ≤ t ≤ s < τ be given. It then follows from the law of iterated expectations and the Markov property of X that, for y > s, the conditional survival probability satisfies
\[
\mathbb{E}\big[ e_{X(s)}^\top P(s, y; \theta)\, e \,\big|\, \mathcal{F}^X(t) \big] = e_{X(t)}^\top P(t, s; \theta)\, e_j\, e_j^\top P(s, y; \theta)\, e,
\]
from which we obtain the transition probabilities for Y; here, we use the continuity of the transition (sub-)probability matrix (that is, continuity of product integrals) in the last equality.
Remark 3.3. In [12,16], conditional distributions similar to those of Lemma 3.2 are derived. While they consider conditional distributions given future states, we consider conditional distributions given the time of absorption, a slight extension in which we include (particularly simple) future jump times in the conditioning.
The result shows that developing an EM algorithm for general IPH distributions significantly increases the computational complexity compared with the homogeneous case [7] as well as the commuting inhomogeneous case [4]. Indeed, since we no longer have a set of sufficient statistics for the different states and transitions, we must in the E-step compute the conditional expected log-likelihoods L^{(m)}_{ij} directly. Evaluating these in a parameter θ ∈ Θ involves a collection of product integral calculations, as opposed to the matrix exponential calculations known from the two existing algorithms. Moreover, the subsequent M-step is no longer explicit with simple expressions, inherited from the fact that the complete data MLE is not explicit in general, and numerical optimization methods are therefore required to carry out the M-step.
As one may note from Subsections 2.1 and 3.2, the above-mentioned computational complexities can be remedied by assuming piecewise constant transition rates of the form (2.4). We shall therefore assume this in the following to obtain our main algorithm and the corresponding numerical examples; for completeness, we still provide the general EM algorithm in Appendix A, since different simplifications may be drawn from the general case in the future.
Consider the complete data likelihood (3.4) in the case of piecewise constant transition rates, and recall the sufficient statistics (3.5) for the different states and transitions. Since the corresponding log-likelihood is linear in these sufficient statistics, the E-step for the transitions simplifies so that it now suffices to compute the following conditional expected sufficient statistics, for k = 1, ..., K,
\[
\bar O_{ij}(k) = \mathbb{E}_\theta\big[ O_{ij}(k) \,\big|\, \boldsymbol{\tau} \big], \qquad \bar E_i(k) = \mathbb{E}_\theta\big[ E_i(k) \,\big|\, \boldsymbol{\tau} \big], \tag{3.14}
\]
and then the M-step for updating θ simplifies to the Poisson regression mentioned in Subsection 3.2, but with the occurrences and exposures replaced by their conditional expectations computed in the E-step.
Based on Theorem 3.4 for the general case, we immediately obtain these conditional expectations in Corollary 3.5 below. For notational convenience, we let k^{(n)} = k(τ^{(n)}) denote the place on the grid in which the n'th observation lies, and, for k_1, k_2 ∈ {1, ..., K}, k_2 ≥ k_1, we define the corresponding products of matrix exponentials, so that the (sub-)probability matrix on the transient states (3.10) under some parameter (π, θ), as well as the corresponding density (3.11), can be written as in (3.16).

Corollary 3.5. Suppose that the sub-intensity matrix function T is piecewise constant of the form (2.4). Then the conditional expected sufficient statistics (3.14) are given, for i, j ∈ {1, ..., p}, j ≠ i, by explicit expressions with P and f as in (3.16).

To compute the matrix C_k^{(n,m)} for fixed n ∈ {1, ..., N}, k ∈ {1, ..., K}, and m ∈ N_0, one must evaluate integrals of matrix exponentials, which may be computationally heavy. However, by defining the block matrix
\[
C = \begin{pmatrix} T_k & e_i e_j^\top \\ 0 & T_k \end{pmatrix},
\]
we obtain from [18] that the upper-right block of e^{Cu} equals \(\int_0^u e^{T_k s}\, e_i e_j^\top\, e^{T_k (u-s)}\, \mathrm{d}s\), which reduces the computation to a single matrix exponential calculation. Similar simplifications were noted in [4, Remark 2].
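This block-matrix identity (Van Loan's classical result on computing integrals involving matrix exponentials, which we take to be reference [18]) is straightforward to verify numerically; the helper below is an illustrative sketch, with names of our own choosing:

```python
import numpy as np
from scipy.linalg import expm

def expm_integral(T, i, j, u):
    """Return J = int_0^u expm(T s) e_i e_j' expm(T (u - s)) ds, computed
    as the upper-right block of expm([[T, e_i e_j'], [0, T]] * u)."""
    p = T.shape[0]
    B = np.zeros((p, p))
    B[i, j] = 1.0
    C = np.block([[T, B], [np.zeros((p, p)), T]])
    return expm(C * u)[:p, p:]
```

A midpoint-rule quadrature of the integral reproduces the block to high accuracy, confirming that a single (doubled-dimension) matrix exponential suffices.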

An approximate homogeneous representation
In full generality, a phase-type approximation for any distribution is possible through the construction of [15], where Erlang weights are constructed according to the increments of the target cumulative distribution function.However, when the target distribution arises as an absorption time of an inhomogeneous Markov jump process, recent developments in [9] provide an alternative pathwise approximation yielding strong approximants which are directly parametrized by the intensity matrix Λ.Since phase-type distributions enjoy explicit formulas which their inhomogeneous counterparts may lack, such an approximation is practically relevant, and thus we outline it below.Section 5 presents some numerical examples of such an approximation.
One implicit assumption which is relevant when applying the approximation is that n must be large enough to make T (n,m) a proper sub-intensity matrix, which depends on the maximal absolute value of the diagonal elements of the T k matrices.Additionally, the choice of m should be such that m ≥ n • max i=1,...,N {τ i }.

Numerical examples
This section presents some numerical illustrations of the above model on theoretical distributions as well as real data. In both cases, we require a straightforward extension of Algorithm 1 to the case where each data point has an associated weight. Practically speaking, this is straightforwardly dealt with by providing a weight for each contribution to the conditional expectations of the E-step and replacing N with the sum of the weights in the E-step. This extension allows for the estimation of histograms, of known distributions (considering a discrete version of the theoretical density), or for more efficient calculations when we have repeated values. We provide examples of the latter two uses. In all cases, we consider piecewise IPH distributions with continuous densities.
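For fitting a known distribution, the weights can be taken as discretized density heights on a grid. A minimal sketch of constructing such weights (the grid and target are illustrative, matching the left-truncated normal of the next subsection and reading N(2, 1/2) as mean 2, standard deviation 1/2, which is an assumption on the paper's notation):

```python
import numpy as np
from scipy.stats import norm

# Discretize the left-truncated normal target on a grid of midpoints and
# use its density heights as log-likelihood weights in the E-step
dt = 0.05
x = np.arange(dt / 2, 4.0, dt)            # grid midpoints on [0, 4]
Z = norm.sf(0.0, loc=2.0, scale=0.5)      # truncation constant at 0
w = norm.pdf(x, loc=2.0, scale=0.5) / Z   # density heights as weights
w = w / w.sum()                           # normalized; their sum replaces N
```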
5.1. Fitting to a given distribution. It is well known that phase-type distributions struggle to fit peaked distributions where the peak does not occur close to the origin; that is, a large number of phases is required for adequate estimation. Thus, we first consider the estimation of the theoretical N(2, 1/2) distribution (left truncated at 0, so as to have only positive values) by: (1) A piecewise IPH with large K and small p.
(2) A PH approximation to the piecewise IPH fit, as per Theorem 4.1.
(3) A small and large homogeneous PH, for comparison.
By "small" and "large," we have used subjective judgment, but we are somewhat limited by computational power for any dimensions far exceeding the ones presented here.
The idea is thus to use the density heights as weights for a given grid (here, we take the mesh size to be Δt = 0.05), which are used as the observations in Algorithm 1. The result of applying this procedure can be seen in the left panel of Figure 5.1. We see that a very small phase-type dimension (p = 2) suffices to provide a good fit if we allow for piecewise constant rates on a fine grid, in this case considering 41 sub-intervals on the interval [0, 4]. Since the matrices in the sub-intervals are intrinsically linked through Equation (3.8), the number of parameters is kept low. We also see how an effective phase-type approximation is possible using the construction of Theorem 4.1, providing a representation that is visually indistinguishable from its piecewise counterpart and which enjoys a pathwise convergence interpretation.
Note that the maximal absolute value of all diagonal entries across sub-intervals for the piecewise IPH fit is 1056.8, which from the expression (4.1) implies that n should be chosen above this value to obtain a proper phase-type sub-intensity matrix. We have thus chosen n = 1500, and then m = n · 4.01, so the approximation is expected to be faithful up to the value 4.01. The resulting phase-type approximation thus has state space dimension p × m = 12,030, though the distribution remains easy to manipulate, since formula (4.3) involves matrix calculus only in terms of the original state space dimension p. In contrast, the right panel of Figure 5.1 shows that a 30-dimensional phase-type distribution cannot provide a similar quality of fit (let alone the 2-dimensional case).
The EM algorithm required in this case (implemented as in [7]) is comparatively slow for growing dimensions, and prohibitively slow for dimensions around p = 50 to 150, depending on the language of implementation.
We now consider a more challenging setting with the aim of further showcasing the capabilities of our algorithm. Thus, we focus our attention on the mixture of N(2, 1/2) and N(4, 1/2) distributions with a mixture weight of 0.55 (left truncated at 0, so as to have only positive values), and we estimate two models: (1) A piecewise IPH with small K and medium p.
For this multimodal density, we chose the breakpoints around the valleys and summits of the theoretical density. Interestingly, choosing a breakpoint directly at the low point of a valley or exactly at a summit does not seem to be as effective.
Given the chosen sub-intervals, we use p = 10, since it seems to be the smallest dimension that captures both modes correctly. A PH approximation to the piecewise IPH fit, as per Theorem 4.1, is not possible in this setting, since the estimated sub-intensity matrices across all sub-intervals have an overall largest absolute diagonal value of about 7.5 · 10^11, which implies that m is of the order of magnitude of 10^12, too large to make the computation of (4.3) feasible. As a general warning, we have found that for the most challenging density shapes, Theorem 4.1 holds only theoretically, since in practice it requires too many phases. This also confirms that reasonably sized phase-type distributions do not suffice (including when using the EM algorithm) in these cases.
The result of the estimation for this second case is provided in Figure 5.2, which shows the full strength of using piecewise IPHs for heterogeneous data. We remark that the dimension p and the number of sub-intervals K work together to provide an adequate fit, and that a large K with small p does not work in this setting as it did for the previous unimodal distribution, since the linear specification of f^{(2)} in Equation (3.8) is no longer sufficient here. An alternative would be to consider spline specifications or higher-order polynomial terms. Here, we chose to increase the degrees of freedom by directly increasing p (in this case, to 10).
Another feature that arises for estimated piecewise IPH distributions is the possible kink of the density at the endpoints of each sub-interval. These are not discontinuities, and they usually occur when the decreasing nature of a curve is not exponential, as is the case for Gaussian decay. When examining the cumulative distribution function, the joining of the sub-interval densities is differentiable; thus, the effect is not observable at that scale. These kinks also appear in the application to mortality modeling in the next section.

5.2. Mortality modeling of Danish lifetimes. The Human Mortality Database (https://www.mortality.org/) provides, among other things, mortality rates at a yearly resolution for several countries. We presently analyze the case of Danish males and females, from 2000 up to 2020. As before, we use as log-likelihood weights the implied density from the mortality rates (calculated as the death-to-exposure ratio) and use the midpoints between ages as the observed ages (corresponding to the data τ). For numerical purposes, we divided all data by 100 when estimating; however, in the empirical versus fitted plots, we have used the original scale (in any case, piecewise IPHs are closed under scaling).
We have chosen the sub-intervals to provide more divisions for rapidly changing regions in the lifetime density, resulting in K = 9.We see from Figure 5.3 that, despite some possible kinks at the endpoints of intervals, the fit is remarkably well behaved, especially given the specific features that make modeling the entire lifetime distribution challenging: the sharp decrease after birth and the disruptions happening at around age 20 for both males and females.The increased mortality at the right endpoint also poses a challenge.The Gompertz-like behavior from around 30 to 100 is not in line with exponential decay; thus, regular sub-interval splits were required in this period.Finally, the resulting piecewise constant transition rates (in the log scale) are provided in Figure 5.4 for females and in Figure 5.5 for males, which are of interest for some disciplines that require mortality rate estimates, such as life insurance and pension applications.

Extensions
In this section, we discuss some possible extensions of theoretical and practical relevance that may be incorporated into our work, but which are outside the scope of the present paper.
6.1. EM for IPHs with a pre-specified tail behavior. The focal point of the paper is the handling of general IPHs with non-commutative sub-intensity matrix functions, using piecewise constant transition rates as an approximation as the grid becomes finer. For a finite number of grid points, this construction implicitly imposes an exponential tail behavior on the IPH distribution beyond the last grid point, which may not be suitable for applications to heavy-tailed data, e.g. non-life insurance data. However, it is straightforward to adapt our framework to intrinsically obtain a non-exponential tail behavior, using methods from [2]. The procedure goes as follows. Define a function λ in terms of some non-negative function h, and a function g given in terms of its inverse by
\[
g^{-1}(x) = \int_0^x \lambda(u)\, \mathrm{d}u.
\]
Then τ = g(τ̃), where τ̃ ∼ IPH(π, T(·)) with T piecewise constant. Hence, an extension of Algorithm 1 is possible for the model (6.1) with a pre-specified tail behavior governed by the function h. Indeed, it suffices to apply the transformation g^{-1} to the data at the beginning of each step to reduce to the piecewise constant case, apply one EM step of Algorithm 1, and then optimize the parameters of the h function.
6.2.Censoring and truncation.In survival and event history analysis, one must take into account censoring and truncation mechanisms in the statistical estimation, see, e.g., [5] for a survey.This naturally also applies to the estimation of IPHs and PHs, as these are absorption times of Markov jump processes.
Incorporation of censoring mechanisms has long been established for the estimation of PHs, cf. [17], while the case of commuting matrices for IPHs is considered in [4]. As we have adapted [7] to the inhomogeneous case by taking methods from [6] as the onset, we believe it is straightforward to incorporate the censoring mechanisms of [17] into our model by adapting said paper to the inhomogeneous case, taking methods from [5] as the onset.
To the best of our knowledge, the incorporation of truncation mechanisms has not yet been established for the estimation of PHs or IPHs. We do not believe that this extension is straightforward in either framework, as one would need to consider conditional distributions of PHs and IPHs in developing the EM algorithm. These conditional distributions do not simplify to path-independent distributions, as is seen for fully observed Markov processes.
6.3.Covariate information.It is straightforward to include time-independent covariate information in our statistical model.Indeed, in the Poisson regressions in the EM algorithms presented for the piecewise-constant transition rate case, one may incorporate any (possibly transformed) covariate vector linearly, though each individual would have their own intensity matrices (which the other parts of the algorithm need to keep track of).The mortality modeling of Danish lifetimes in Subsection 5.2 is an example where sex could be used as a covariate.

Proof. By inserting the expressions for O_{ij}(k) and E_i(k) from (3.5) into (3.14), the results follow from a direct application of Theorem 3.4. By writing out the exact expressions for P and f as given in (3.16), we end up with Algorithm 1, which by Corollary 3.5 produces the required MLE estimation for IPHs with piecewise constant transition rates. Remark 3.6. To compute the matrix C_k^{(n,m)}

Figure 5.3. Fitted versus empirical mortality curves using piecewise IPH distributions for the Danish male and female populations from 2000 to 2020.