Assurance Methods for designing a clinical trial with a delayed treatment effect

An assurance calculation is a Bayesian alternative to a power calculation. One may be performed to aid the planning of a clinical trial, specifically to set the sample size or to support decisions about whether or not to perform a study. Immuno-oncology is a rapidly evolving area in the development of anticancer drugs. A common phenomenon that arises in trials of such drugs is one of delayed treatment effects, that is, there is a delay in the separation of the survival curves. To calculate assurance for a trial in which a delayed treatment effect is likely to be present, uncertainty about key parameters needs to be considered. If uncertainty is not considered, the number of patients recruited may not be enough to ensure we have adequate statistical power to detect a clinically relevant treatment effect, and the risk of an unsuccessful trial is increased. We present a new elicitation technique for when a delayed treatment effect is likely and show how to compute assurance using these elicited prior distributions. We provide an example to illustrate how this can be used in practice and develop open-source software to implement our methods. Our methodology has the potential to improve the success rate and efficiency of Phase III trials in immuno-oncology and for other treatments where a delayed treatment effect is expected to occur.


INTRODUCTION
Assurance calculations are growing in popularity as an aid for the design of clinical trials. An assurance calculation is a Bayesian alternative to a power calculation: instead of assuming parameters (e.g. related to treatment effects) take particular values, we elicit prior distributions for them, enabling us to derive a probability of a successful trial outcome, accounting for uncertainty about treatment effects. The concept of an assurance calculation was first considered by Spiegelhalter and Freedman, 1 then developed by O'Hagan et al, 2 who coined the term 'assurance'. Note that the assurance method has been given other names, such as average power, expected power and predictive power. 3 To calculate assurance, we sample from the elicited prior distributions for the unknown parameters and then simulate clinical trials using these sampled values. The prior predictive probability that the trial will be 'successful' is the proportion of simulated trials that meet our stated success criteria. These success criteria are not fixed by the assurance method; instead, they are set independently by the sponsor and can be any criteria that the sponsor wishes to consider (e.g., that the observed treatment effect will be positive, statistically significant, or exceed a clinically relevant threshold). More recently, assurance has been used to calculate the probability of obtaining regulatory approval with clinically relevant effects on key endpoints after Phase IIb. 4,5
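As a concrete illustration of this procedure, the sketch below computes assurance for a two-arm trial with a Normally distributed endpoint, taking 'success' to be rejection of the null hypothesis by a one-sided z-test. All numerical values (prior mean, prior standard deviation, sample size) are hypothetical and chosen purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def assurance(n_per_arm, prior_mean=0.5, prior_sd=0.2,
              sigma=1.0, alpha=0.025, n_sims=100_000):
    """Monte Carlo assurance for a two-arm trial with a Normal endpoint.

    'Success' is rejecting H0 with a one-sided z-test at level alpha;
    the treatment effect is given a Normal(prior_mean, prior_sd) prior.
    """
    theta = rng.normal(prior_mean, prior_sd, n_sims)  # draws from the prior
    se = sigma * np.sqrt(2.0 / n_per_arm)             # SE of the difference in means
    # Power of the z-test conditional on each sampled theta, then average
    power_given_theta = norm.sf(norm.ppf(1 - alpha) - theta / se)
    return power_given_theta.mean()

# Conventional power at theta = 0.5 with 64 patients per arm is about 80%;
# averaging over the prior typically gives a lower, more realistic figure.
print(round(assurance(64), 3))
```

Note that averaging the conditional power over the prior, as here, corresponds to the special case where 'success' is simply rejection of the null hypothesis; other success criteria require simulating the trial data explicitly.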
Note that the method for the trial data analysis is also specified independently of the assurance method; the same analysis method would be assumed as that used in the power calculation.
Assurance provides a more realistic assessment of the probability that a trial will give rise to a successful outcome compared to a conventional power calculation. The high failure rates of clinical trials are well-documented 6 and there are several examples where promising results from early phase trials have not been replicated in subsequent Phase III trials. 7 Assurance calculations for Phase III trials should capture the strength of the available evidence after mitigating the selection bias often inherent in early phase data when a necessary condition for progress is positive Phase Ib or Phase II results. They should also account for any limitations in the available data in light of planned shifts in the patient population, outcome or treatment strategies between phases.
Accurate and reliable evaluations of risk can be used to optimize trial design and analysis plans. For example, assurance can be used to support decisions regarding study sample size, and to quantitatively measure how effective various trial setups are at reducing risk, such as the timing and number of planned interim analyses. 8,9 Furthermore, assurance evaluations can enable better informed decisions on whether or not to conduct a study. Of course, sponsors such as pharmaceutical companies and public funding bodies may choose to fund a Phase III clinical trial (or indeed a program of Phase III clinical trials) even when assurance is low, if the corresponding expected net present value (eNPV) is sufficiently high, thus targeting resources towards research programs with the greatest expected impact for patients.
Immuno-oncology (IO) is a rapidly evolving area in the development of anticancer drugs. In trials of IO therapies, time-varying treatment effects that deviate from the proportional hazards (PH) assumption have been observed on time-to-event endpoints such as progression-free survival (PFS) and overall survival (OS); see, for example, CheckMate 017. 10 A systematic review 11 of 63 confirmatory randomized controlled trials (RCTs) of anti-programmed cell death protein-1 and anti-programmed death-ligand 1 therapies identified 15 studies with suspected nonproportional hazards, for reasons including crossing of the OS survival curves 12,13 or a lag before the PFS survival curves separated. 14 In what follows, we focus on the latter scenario and refer to this as a delayed treatment effect (DTE).
There are several challenges associated with the design and analysis of trials with nonproportional hazards. Firstly, the primary estimand should be defined with a clinically interpretable measure used to summarize the benefit of the test treatment versus control, 15 and an unbiased estimator should be selected to target it. Secondly, the test of the null hypothesis of no benefit of treatment versus control should be carefully selected, acknowledging the impact of potential deviations from PH on the attained power of commonly applied procedures, such as the log-rank test. 16 Where we suspect (but are not certain) that there will be a delay in the treatment effect, and are furthermore uncertain about the length of the delay if there is one, the target event number and corresponding sample size need to be carefully chosen to provide confidence that the trial will be able to meet its objectives in light of these uncertainties.
As IO trials are becoming more common, so are trials in which a DTE is observed. However, to the best of our knowledge, there has been no published work on eliciting prior distributions and calculating assurance for when a DTE is likely to be present in a clinical trial with time-to-event endpoints. In this article, we propose a method for eliciting the relevant parameters for such a trial and for performing an assurance calculation.
In Section 2, we briefly discuss the assurance method and how it is used in practice. In Section 3, we define DTEs, present an elicitation method and signpost the open-source software we have developed for use in this situation. In Section 4, we illustrate how our method can be used to calculate assurance. In Section 5, we investigate the robustness of our parameterisation, and lastly we conclude with a brief summary in Section 6.

ASSURANCE
Suppose that an RCT is to be conducted to compare an experimental treatment with a control; we assume that this is the current standard of care, but it could also be a placebo. We want to test the null hypothesis H_0 that the treatment effect θ = 0 versus the alternative hypothesis H_1 that θ ≠ 0. For a power calculation, the sample size is chosen to solve

P(reject H_0 | θ = θ_d) = P*,   (1)

for some desired probability P* (usually 80% or 90%) and treatment effect θ_d, typically chosen to represent a plausible and clinically relevant effect. The power of the test of H_0 at θ_d is the probability of rejecting H_0 if θ is as large as θ_d. However, since θ may differ from θ_d, the attained power of the test may deviate from the target P*. Assurance is the unconditional probability that the trial will end with the desired outcome:

assurance = ∫ P(successful trial | θ) π(θ) dθ,   (2)

where π(θ) is the prior distribution for θ. If a successful trial simply corresponds to rejecting H_0, Equation 2 is the expected power, interpreting θ_d in Equation 1 as the true value of the treatment effect, rather than some minimum clinically relevant difference.
If the desired outcome is to reject H_0 with data which favour the experimental treatment, then the event 'successful trial' may be defined as 'reject H_0 with θ > 0'. When calculating assurance, a key question is how to define the prior distribution π(θ) for the unknown treatment effect. One approach would be to take π(θ) as the posterior distribution for θ resulting from using clinical data from an early Phase II trial to update a weakly informative prior distribution. However, this approach may fail to incorporate other sources of relevant information, and may become challenging if there are differences between the treatment effect studied in Phase II and the quantity of interest in the future trial. Alternatively, the prior distribution(s) for the parameters of interest could be elicited from a group of experts, in light of the Phase IIb trial data and any other information that is deemed relevant, such as data from drugs with a similar mechanism of action or knowledge about the disease area. For a detailed discussion of the method of eliciting parameters in these contexts, see Dallow et al. 8 The reason expert elicitation is useful in these circumstances is that it bridges the gap between data from the completed Phase IIb trial and the quantities of interest in the planned Phase III trial. For example, the future trial may consider different endpoints, the patient population may change, or a different dose/dosing regimen may be proposed. 17 Also, when working in a rare disease setting, there may be limited data available. In this context, expert elicitation is useful as it allows the study team to combine heterogeneous sources of information (RCTs, case series, observational data) when a formal mathematical synthesis of these data would be very complex. 18
In the context of assurance methods, O'Hagan et al 2 considered eliciting beliefs for clinical trials with Normally distributed and dichotomous endpoints. Gasparini et al 19 also considered Normally distributed endpoints, and Alhussain and Oakley 21 considered eliciting uncertainty about the variance of Normally distributed endpoints. For time-to-event outcomes, Spiegelhalter et al 27 considered stipulating a Normal prior distribution for the log hazard ratio under a PH assumption, as did Hiance et al. 28 Ren and Oakley 20 considered expert elicitation for both parametric and non-parametric models. Azzolina et al 22 produced a comprehensive literature review of assurance methods that use expert elicitation (both theoretical and applied).

Delayed treatment effects
Figure 1 shows a Kaplan-Meier plot from a Phase III trial, CheckMate 017, 10 in which a DTE was observed. The trial enrolled patients with advanced squamous-cell non-small-cell lung cancer (NSCLC) and compared the current standard of care, docetaxel, against an experimental treatment, nivolumab. The plot is based on the reconstructed individual patient data 29 derived from published Kaplan-Meier survival curves. We see that both the control and experimental treatment curves follow the same trajectory for some time (approximately 3 months), after which they separate.
In a survival trial, suppose we have two groups: the control group and the experimental treatment group. We denote the hazard function for the control group by h_c(t) and the hazard function for the experimental treatment group by h_e(t). In a typical survival trial, H_1 assumes that the hazard function for the experimental treatment group is less than or equal to the hazard function for the control group at all time points, that is, H_1: h_e(t) ≤ h_c(t), ∀t. This suggests that patients in the experimental treatment arm immediately benefit from the intervention compared to those in the control arm.
In a trial in which a DTE is thought likely to occur, we make a different assumption. We assume that the hazard function for the experimental treatment group is the same as that of the control group until a certain time T, which represents the delay in the experimental treatment taking effect. After time T, we assume that the experimental treatment group starts experiencing some benefit relative to the control group:

h_e(t) = { h_c(t), t ≤ T;  h*_e(t), t > T },   (3)

where h*_e(t) ≤ h_c(t) describes the benefit of the experimental treatment relative to control. 30 In survival trials, the PH assumption is often made. It underpins Cox regression, and the log-rank test, a standard statistical test in survival trials, is most powerful under this assumption. However, when DTEs are present, this assumption is violated because the hazard ratio becomes time-dependent. This poses challenges for the design and analysis of trials with DTEs.
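The piecewise hazard described above can be written directly as a function. In this sketch, the control hazard and the post-delay hazard are illustrative constants (a hazard reduction after a 3-month delay), not values from any trial.

```python
import numpy as np

def dte_hazard(t, h_c, h_e_star, T):
    """Experimental-arm hazard under a delayed treatment effect: equal to
    the control hazard up to the delay T, then the (smaller) post-delay
    hazard h_e_star thereafter."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= T, h_c(t), h_e_star(t))

# Illustrative hazards: constant 0.10/month on control, 0.06/month post-delay
h_c = lambda t: np.full_like(np.asarray(t, dtype=float), 0.10)
h_e_star = lambda t: np.full_like(np.asarray(t, dtype=float), 0.06)

# Control hazard applies at month 1 (before the delay), reduced hazard at month 5
print(dte_hazard([1.0, 5.0], h_c, h_e_star, T=3.0))
```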
The design and analysis of trials with DTEs have been discussed extensively. 31-40 However, most of these discussions focus on regaining statistical power lost due to the delay by using alternative analysis methods, such as weighted log-rank tests or the difference in restricted mean survival times (RMST). 41 These methods aim to account for the time-dependent hazard ratio without assuming PH or the specific shape of the underlying survival curves.

Assurance for delayed treatment effects
We propose an elicitation technique and parameterisation to calculate assurance in these circumstances. By doing so, we are able to capture experts' uncertainty about the relevant parameters and provide a more realistic judgement of the probability of success of the proposed trial.
We suppose that the survival times in the control group follow a Weibull distribution with hazard function

h_c(t) = (γ_c / λ_c)(t / λ_c)^(γ_c − 1),   (4)

and corresponding survival function

S_c(t) = exp{−(t / λ_c)^γ_c}.   (5)

We assume survival times in the experimental treatment group, after a delay of length T, also follow a Weibull distribution, with different parameters to the control. This induces the hazard function

h_e(t) = { (γ_c / λ_c)(t / λ_c)^(γ_c − 1), t ≤ T;  (γ_e / λ_e)(t / λ_e)^(γ_e − 1), t > T }.   (6)

Prior to time T, we assume the experimental treatment group has the same survival function as the control (Equation 5). Thus, the survival function for the experimental treatment group is

S_e(t) = { S_c(t), t ≤ T;  exp{−(T / λ_c)^γ_c − (t / λ_e)^γ_e + (T / λ_e)^γ_e}, t > T }.   (7)

The hazard ratio of the two groups (derived from Equations 4 and 6) is

HR(t) = h_e(t) / h_c(t) = { 1, t ≤ T;  (γ_e λ_c^γ_c / (γ_c λ_e^γ_e)) t^(γ_e − γ_c), t > T }.   (8)
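Survival times can be drawn from this model by inverse-transform sampling: a unit-exponential draw is interpreted as the total cumulative hazard at the event, which accrues at the control rate before the delay and at the experimental-arm rate afterwards. The sketch below assumes the shape-scale Weibull parameterisation S(t) = exp(-(t/λ)^γ), and the numerical values are illustrative only.

```python
import numpy as np

def sample_dte_times(n, lam_c, gam_c, lam_e, gam_e, T, rng):
    """Inverse-transform sampling of experimental-arm survival times under
    the DTE model, assuming S(t) = exp(-(t/lam)^gam) (scale lam, shape gam)."""
    e = rng.exponential(size=n)              # total cumulative hazard at event
    cum_T = (T / lam_c) ** gam_c             # control cumulative hazard at the delay
    t = lam_c * e ** (1.0 / gam_c)           # valid when the event is before T
    late = e > cum_T                         # events occurring after the delay
    t[late] = lam_e * (e[late] - cum_T + (T / lam_e) ** gam_e) ** (1.0 / gam_e)
    return t

rng = np.random.default_rng(42)
# Illustrative values: lam_e > lam_c, i.e. longer survival after the delay
times = sample_dte_times(100_000, lam_c=8, gam_c=1.2, lam_e=12, gam_e=1.2,
                         T=3, rng=rng)
# Fraction surviving past T should match the control survival S_c(T)
print((times > 3).mean(), np.exp(-(3 / 8) ** 1.2))
```

The piecewise construction is continuous at T: when the exponential draw equals the control cumulative hazard at T, both branches return exactly T.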

Constructing the prior distributions
From Equations 5 and 8, we see that there are five unknowns: T, λ_c, γ_c, λ_e and γ_e. To calculate assurance, prior distributions are required for these parameters. In the following sections we propose a method for eliciting these priors, including the questions to ask.

Prior(s) for λ_c and γ_c
We first elicit judgements on the two parameters for the survival times in the control group: λ_c, also known as the scale parameter, and γ_c, the shape parameter. We assume that there exists some historical data on the control so that we can derive

π(λ_c, γ_c | x_hist),   (9)

where x_hist is the historical data for the control group intervention. Schmidli et al 42 consider using a meta-analytic-predictive (MAP) prior for control group parameters. Bertsche et al 43 extend this method to specifically consider time-to-event data. Alternatively, to include expert elicitation at this stage, see Ren and Oakley, 20 who consider eliciting beliefs when survival times are assumed to follow a Weibull distribution.
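In code, one pragmatic stand-in for π(λ_c, γ_c | x_hist) is a bootstrap of the Weibull maximum likelihood estimates fitted to the historical data. This is only a sketch: the example in Section 4 instead uses MCMC, the data here are simulated, and censoring is ignored for brevity.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

# Simulated stand-in for historical control-arm survival times x_hist
x_hist = weibull_min.rvs(c=1.3, scale=9.0, size=300, random_state=rng)

# Bootstrap the Weibull MLE to represent uncertainty about (lam_c, gam_c)
draws = []
for _ in range(200):
    resample = rng.choice(x_hist, size=x_hist.size, replace=True)
    gam_hat, _, lam_hat = weibull_min.fit(resample, floc=0)  # (shape, loc, scale)
    draws.append((lam_hat, gam_hat))
draws = np.array(draws)
print(draws.mean(axis=0))  # roughly recovers the generating (scale, shape)
```

The collection of (λ_c, γ_c) draws can then be fed directly into the assurance algorithm in place of posterior samples.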

Prior for T
We propose a hierarchical procedure for eliciting judgements about T, λ_e and γ_e, as shown in Figure 2. The existence of a DTE presupposes that the treatment has an effect in the first place, but the experts may not be certain of this. Hence we first need to elicit a probability that the treatment has any effect, and then elicit judgements about T conditional on the assumption that the treatment has some effect. To avoid ambiguity, we define "a treatment effect" as any separation between the survival curves for the control and treatment groups: they are not equal. Hence the first question we ask is

FIGURE 2
The proposed elicitation scheme as described in Sections 3.3.2 and 3.3.3.

"What is your probability that the population survival curves separate at some point in time?"
We define S to be the proposition that the population survival curves separate, and p_S to be the elicited probability that this proposition is true. Note that the proposition S refers to the unobserved true population distributions of survival, and not the sampled Kaplan-Meier curves that would be observed in a trial. We now elicit judgements about T, conditional on S. Given S, we allow for the possibility of no delay in the treatment effect; we propose a prior of the form

T | S = { 0, with probability 1 − p_DTE;  T_delay, with probability p_DTE },   (10)

with T_delay ∼ Gamma(a, b). Any non-negative distribution could be used for T_delay, but we expect the Gamma distribution to be sufficiently flexible. We therefore need questions that an expert would be willing to answer and from which we can identify values for p_DTE, a and b. We elicit judgements about the probability p_DTE by asking the following question: "If we suppose that the population survival curves separate at some time, what is your probability that there is a delay before they separate?" Finally, we elicit judgements about the distribution of T_delay by stating "Suppose that the population survival curves separate with a delay. We want you to consider your uncertainty about the delay." We then use a standard method for eliciting a univariate distribution for T_delay. Methods for eliciting univariate distributions can be found in O'Hagan et al 44 and implemented using the Sheffield Elicitation Framework (SHELF). 45 SHELF is a package of protocols, templates and guidance documents for conducting expert elicitation. There are various methods that SHELF uses to elicit distributions that involve asking an expert to provide quantile judgements (e.g.
a median; tertiles) or probability judgements (the probability of the uncertain quantity lying in some interval). In either case, the expert is, in effect, specifying points on their cumulative distribution function. Parametric distributions are fitted to these judgements using a least squares procedure: the parameters are chosen to ensure the points on the fitted cumulative distribution function are as close as possible to the elicited judgements. Feedback (additional quantiles or probabilities from the fitted distribution) is then provided to the expert to check the adequacy of the elicited distribution.
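The fitting step can be sketched as a small least squares problem. The function below mimics the SHELF fitting idea (it is not the SHELF implementation) for a Gamma distribution parameterised by shape a and rate b, fitted to elicited quartiles.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def fit_gamma_to_quartiles(q25, q50, q75):
    """Least-squares fit of a Gamma(shape a, rate b) distribution to
    elicited quartiles (a sketch of the SHELF-style fitting step)."""
    probs = np.array([0.25, 0.50, 0.75])
    target = np.array([q25, q50, q75])

    def loss(log_params):
        a, b = np.exp(log_params)          # log-scale keeps a and b positive
        fitted = gamma.ppf(probs, a, scale=1.0 / b)
        return np.sum((fitted - target) ** 2)

    res = minimize(loss, x0=np.log([2.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)

# e.g. a median of 4 with lower and upper quartiles of 3 and 5
a, b = fit_gamma_to_quartiles(3.0, 4.0, 5.0)
print(round(a, 2), round(b, 2))
```

The fitted distribution's own quantiles can then be reported back to the expert as feedback, exactly as described above.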

Prior(s) for λ_e and γ_e
The final two parameters for which we need to elicit distributions are the two treatment parameters, λ_e and γ_e. We would not expect an expert to make judgements about these parameters directly. We instead follow the usual practice of eliciting judgements about observable quantities, 46 from which a prior for λ_e and γ_e can be inferred. Some possible choices of observable quantities are
• the median survival time on the experimental treatment;
• the survival probability at time t; and
• the greatest distance between the survival curves, and how big this difference is.
In practice, we have found experts have a preference for making judgements about hazard ratios, for example,
• the hazard ratio at time t; and
• the maximum hazard ratio, and when this occurs.
To elicit λ_e and γ_e, we require the expert to provide their beliefs for at least two of the above quantities, which is likely to be a difficult task for the expert. We can simplify the elicitation task by making the assumption that γ_e = γ_c. We then have a piecewise-constant hazard ratio

HR(t) = { 1, t ≤ T;  (λ_c / λ_e)^γ_c, t > T }.   (11)

We can rearrange Equation 11 for the case when t > T to obtain

λ_e = λ_c HR(t)^(−1/γ_c).   (12)

Hence, conditional on λ_c and γ_c, we can elicit a distribution for the hazard ratio for t > T, from which a distribution for λ_e can be derived. We make a standard modelling assumption that the treatment effect, as described by the hazard ratio, is independent of the control group response as determined by the parameters λ_c and γ_c. We investigate the implications of the assumption γ_e = γ_c in Section 5.
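The inversion described above (rearranging Equation 11 for the treatment scale parameter) can be checked numerically. This sketch assumes the shape-scale Weibull parameterisation S(t) = exp(-(t/λ)^γ), under which the post-delay hazard ratio is (λ_c/λ_e)^γ_c; the numerical values are illustrative.

```python
import numpy as np

def lam_e_from_hr(lam_c, gam_c, hr):
    """With gam_e = gam_c, invert the piecewise-constant hazard ratio
    hr = (lam_c / lam_e)**gam_c to recover the treatment scale lam_e."""
    return lam_c * hr ** (-1.0 / gam_c)

def weibull_hazard(t, lam, gam):
    # h(t) = (gam / lam) * (t / lam)**(gam - 1) under this parameterisation
    return (gam / lam) * (t / lam) ** (gam - 1.0)

lam_c, gam_c, hr_star = 8.0, 1.2, 0.6    # illustrative values
lam_e = lam_e_from_hr(lam_c, gam_c, hr_star)
t = np.linspace(4.0, 24.0, 5)            # times after the delay
ratio = weibull_hazard(t, lam_e, gam_c) / weibull_hazard(t, lam_c, gam_c)
print(ratio)                             # constant and equal to hr_star
```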
We denote the post-delay hazard ratio by HR* (where it is assumed that S is true). We propose a prior

HR* | S ∼ Gamma(c, d).   (13)

Again, any non-negative distribution could be used for HR*. As with T, we need questions that an expert would be willing to answer and from which we can identify values for c and d. We elicit judgements about the distribution of HR* by stating "Suppose that the population survival curves separate. We now want you to consider your uncertainty about the hazard ratio once the experimental treatment begins to take effect." We would then use a standard method for eliciting a univariate distribution for HR*, as described in Section 3.
with p_S completing the prior specification. For algorithmic convenience, if S is not true, we set T = 0 and HR* = HR = 1.
We do not expect S to be informative for the control group survivor function, so we assume π(λ_c, γ_c | S, x_hist) = π(λ_c, γ_c | x_hist).
We have assumed an expert's judgements about HR* | S are conditionally independent of T, given S. If the expert wanted to incorporate dependence between these parameters, a more complicated elicitation method could be used, such as the SHELF extension method, 45 illustrated in Holzhauer et al. 17
The elicitation technique discussed above assumes a single expert, but it is likely that in practice multiple experts will be consulted. Eliciting a distribution from multiple experts typically involves either eliciting a distribution from each expert separately and then aggregating the results, or alternatively getting the experts to agree on a single distribution. The SHELF method involves a combination of the two: experts first make judgements independently, which are then shared with the group. Following a facilitated discussion, the experts are then asked to agree on a single distribution reflecting the perspective of a "Rational Impartial Observer". Other methods for eliciting distributions from multiple experts are available. 47,48

Computing assurance under the DTE model
We use these elicited distributions to calculate assurance for various sample sizes using Algorithm 1. This algorithm incorporates free parameters that can be adjusted to reflect operational constraints in a clinical trial: the control and treatment group sample sizes n_c and n_e, and the total number of required events m. We let m be a free parameter in the algorithm as it is common to run event-driven survival trials. This is because, for a time-to-event endpoint, statistical information for the log hazard ratio is a function of m, and therefore attained power will be determined by the number of events observed at the analysis time. Changing these free parameters will have consequences for assurance, so it is important to consider different combinations of trial designs in order to find the one which best suits the needs of the sponsor. For example, for fixed n_c and n_e, if we increase m (the number of events) we may increase assurance at the cost of needing to run the trial for longer.
The recruitment schedule and analysis technique in Algorithm 1 (and Algorithm 2 in Section 5.2) are left unspecified, as these choices are not part of the assurance methodology; they can be selected separately. In the example of Section 4, we use a Fleming-Harrington weighted log-rank test for the analysis (as there is high prior belief that the separation of the survival curves will be subject to a delay). By default, we assume uniform recruitment for 12 months.
To implement our methods we have developed an R Shiny app, which is available both as an offline R package and hosted online. Instructions are provided in the Appendix. The Shiny app allows users to choose from two recruitment schedules: piecewise constant and the power method (taken directly from the nphRCT 49 R package). Different testing approaches have been proposed for use in the non-proportional hazards setting (e.g. the max-combo test, 11 weighted log-rank tests, 50 the difference in RMST, 41 and more 51 ). Our app offers two of these statistical tests: a standard log-rank test and a Fleming-Harrington weighted log-rank test (taken from the nph 36 R package). Finally, the elicitation process may inform refinements to the analysis plan.
Algorithm 1: calculating assurance when a DTE is likely to be present in a clinical trial.
Inputs: sample sizes n_c and n_e, the elicited priors π(λ_c, γ_c | x_hist), π(T | S) and π(HR* | S), the probability of the survival curves separating p_S, the number of events m (we require m ≤ n_c + n_e) and the number of iterations M.
For i = 1, …, M:
1. Sample λ_c and γ_c from π(λ_c, γ_c | x_hist).
2. With probability 1 − p_S, the curves do not separate: set T = 0 and HR* = 1. Otherwise, sample T from π(T | S) and HR* from π(HR* | S).
3. Set γ_e = γ_c and compute λ_e from HR* by rearranging Equation 11.
4. Simulate survival times for the n_c control and n_e treatment patients, apply the recruitment schedule, and follow patients up until m events have been observed.
5. Analyse the simulated data and record whether the trial meets the success criteria.
The assurance is then estimated as the proportion of the M simulated trials that are successful.
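A condensed Python sketch of this algorithm is given below, using the example priors from Section 4 (p_S = 0.9, p_DTE = 0.7, T_delay ~ Gamma(7.29, 1.76), HR* ~ Gamma(29.6, 47.8)). Several simplifying assumptions are made for brevity: the control-parameter prior is an arbitrary Normal placeholder (the example uses MCMC samples), the shape-scale Weibull parameterisation S(t) = exp(-(t/λ)^γ) is assumed, and a standard (unweighted) log-rank test replaces the Fleming-Harrington weighted test.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(2024)

def sim_arm(n, lam1, gam1, lam2, gam2, T):
    """Survival times with S(t) = exp(-(t/lam)^gam): parameters (lam1, gam1)
    up to the delay T, then (lam2, gam2) afterwards."""
    e = rng.exponential(size=n)
    cum_T = (T / lam1) ** gam1
    t = lam1 * e ** (1.0 / gam1)
    late = e > cum_T
    t[late] = lam2 * (e[late] - cum_T + (T / lam2) ** gam2) ** (1.0 / gam2)
    return t

def logrank_z(time, event, group):
    """Unweighted log-rank Z statistic; negative values favour group 1."""
    o_minus_e, var = 0.0, 0.0
    for s in np.unique(time[event]):
        at_risk = time >= s
        n_tot, n1 = at_risk.sum(), (at_risk & group).sum()
        dead = (time == s) & event
        d_tot, d1 = dead.sum(), (dead & group).sum()
        o_minus_e += d1 - d_tot * n1 / n_tot
        if n_tot > 1:
            var += d_tot * (n1 / n_tot) * (1 - n1 / n_tot) * (n_tot - d_tot) / (n_tot - 1)
    return o_minus_e / np.sqrt(var)

def assurance(n_c, n_e, m, M=200, p_S=0.9, p_DTE=0.7):
    successes = 0
    for _ in range(M):
        # 1. Control parameters: illustrative prior standing in for MCMC draws
        lam_c, gam_c = rng.normal(8.0, 0.5), rng.normal(1.2, 0.05)
        # 2. Do the curves separate, and is there a delay?
        if rng.uniform() < p_S:
            T = gamma.rvs(7.29, scale=1 / 1.76, random_state=rng) if rng.uniform() < p_DTE else 0.0
            hr = gamma.rvs(29.6, scale=1 / 47.8, random_state=rng)
        else:
            T, hr = 0.0, 1.0
        # 3. gam_e = gam_c; invert the post-delay hazard ratio for lam_e
        lam_e = lam_c * hr ** (-1.0 / gam_c)
        # 4. Simulate the trial: uniform recruitment over 12 months,
        #    analysis when the m-th death has occurred
        entry = rng.uniform(0, 12, n_c + n_e)
        group = np.r_[np.zeros(n_c, bool), np.ones(n_e, bool)]
        t = np.r_[sim_arm(n_c, lam_c, gam_c, lam_c, gam_c, 0.0),
                  sim_arm(n_e, lam_c, gam_c, lam_e, gam_c, T)]
        calendar = entry + t
        cutoff = np.sort(calendar)[m - 1]
        event = calendar <= cutoff
        obs = np.where(event, t, np.maximum(cutoff - entry, 0.0))
        # 5. Success: log-rank test significant in favour of the new treatment
        successes += logrank_z(obs, event, group) < norm.ppf(0.025)
    return successes / M

print(assurance(n_c=150, n_e=150, m=240, M=100))
```

Repeating the calculation over a grid of (n_c, n_e, m) values produces assurance curves of the kind shown in Section 4.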

EXAMPLE
In this section, we illustrate the proposed method with a hypothetical example where we design a two-arm Phase III superiority trial to test whether a new drug is beneficial versus the current standard of care, docetaxel, in patients with advanced non-small-cell lung cancer (NSCLC). As the drug is in the IO area, we expect a DTE. The primary efficacy endpoint is OS; we assume uniform recruitment for 12 months, 1:1 allocation, and that the data will be analysed with a Fleming-Harrington weighted log-rank test with ρ = 0 and γ = 1, as we have a high probability that the treatment will be subject to a delay and we want to place more weight on late differences in the survival curves. We assume that the trial's final analysis will take place when 80% of patients have died.

FIGURE 3
Reconstructed Kaplan-Meier curves for the docetaxel arm in three different trials: ZODIAC, 52 REVEL 53 and INTEREST. 54 We see that the three curves are similar and we assume exchangeability of the trials in our example.

Prior distribution(s) for the control parameters
There exists historical data on docetaxel, so we are able to use this to generate a prior distribution for the control group parameters. Bertsche et al 43 found three trials in which docetaxel was used as the control in a clinical trial: ZODIAC, 52 REVEL 53 and INTEREST. 54 We also use the results from these three trials, but we use the published Kaplan-Meier curves to reconstruct the individual patient data. 29 The three Kaplan-Meier curves can be seen in Figure 3. Since survival in all three trials appears similar, we choose to pool the data from all three trials and use this to update non-informative priors for λ_c and γ_c, using Markov chain Monte Carlo (MCMC) to sample from the posterior distributions. The generated MCMC samples are then used as a prior distribution for the future trial of interest.

Eliciting the prior distribution for the length of delay
We now need to elicit the expert's probability that the population survival curves separate at some point in time. Suppose the expert specifies this as 90%, that is, p_S = 0.9. The expert is then asked for their uncertainty about the length of delay, given that the survival curves do separate. Suppose the expert's probability that the effect of the experimental treatment will be subject to a delay is 70%, that is, p_DTE = 0.7. The expert is then asked about their beliefs about T, conditional on there being a delay.
The expert provides a median of 4 months and two quartiles (25% and 75%) of 3 and 5 months, respectively. A Gamma(a, b) distribution is fitted to these judgements, so that T_delay ∼ Gamma(7.29, 1.76). Combining these beliefs using Equation 10, we have the following mixture prior distribution:

T | S = { 0, with probability 0.3;  Gamma(7.29, 1.76), with probability 0.7 }.
The fitted quartiles of the Gamma(7.29, 1.76) distribution are 3.03, 3.95 and 5.05. These would be presented to the expert for feedback.

Eliciting the prior distribution for the post-delay hazard ratio
The second quantity of interest is HR* | S. Suppose the expert provides a median of 0.6 and two quartiles (25% and 75%) of 0.55 and 0.7, respectively. Again, we fit a Gamma distribution to these judgements, so that, as per Equation 13, HR* | S ∼ Gamma(29.6, 47.8). The fitted quartiles of the Gamma(29.6, 47.8) distribution are 0.54, 0.61 and 0.69, and again, these would be presented to the expert for feedback.
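Both sets of feedback quartiles can be reproduced in a few lines (using the shape-rate convention for the Gamma distribution):

```python
from scipy.stats import gamma

# Quartiles implied by the two fitted priors (shape a, rate b)
for label, a, b in [("T_delay | delay", 7.29, 1.76), ("HR* | S", 29.6, 47.8)]:
    q = gamma.ppf([0.25, 0.5, 0.75], a, scale=1.0 / b)
    print(label, q.round(2))
```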

Calculating assurance
We use these elicited prior distributions to calculate assurance for this example using Algorithm 1. In Figure 4, an assurance curve is plotted to inform the sample sizes required for this clinical trial. Also seen in Figure 4 are three other power/assurance curves. The two power curves correspond to including no uncertainty in the parameters, with the control parameters, λ_c and γ_c, being the maximum likelihood estimates from the three pooled data sets, as discussed in Section 4.1. The values for T and HR* are the median values given by the experts. For one of the power curves, we have assumed that T is 0, and therefore it does not account for the fact that the treatment effect may be subject to a delay. Also shown is an assurance curve corresponding to a more flexible approach to calculating assurance; this approach is presented in Section 5.2. The distributions/values for the first three curves are given in Table 1. We kept the recruitment schedule, analysis method, etc., as described at the start of Section 4, constant across all four scenarios.

TABLE 1 Distributions/values for the parameters, for the first three scenarios seen in Figure 4.
In Figure 4, we see that both of the power calculations are much more optimistic than the other two scenarios: at all sample sizes, far fewer patients appear to be required for the same power. This highlights the importance of incorporating uncertainty into the trial parameters. However, we must reiterate that the assurance method is not simply used for setting sample sizes for the proposed trial. We anticipate the assurance method being used as one step in a thorough process to decide whether or not to go ahead with the trial, and, if the trial does go ahead, to define its characteristics: length, number of patients, number of events, etc. For example, if we required a quicker trial, we may choose to decrease the number of events, m, that we need to observe before stopping the trial, but this will come at the cost of reducing the assurance/power. Therefore, it is important that a number of different trial designs are considered; assurance curves can then be plotted to help inform the ultimate decision(s).

SIMPLIFIED PRIOR DISTRIBUTION: DISCUSSION
To elicit the parameters in this scenario, we selected a method that would be both effective and straightforward for the experts. As a result, we simplified the original parameterisation by fixing γ_e = γ_c. By doing so, we were able to focus on hazard ratios as the basis for our questioning, thus ensuring that the elicitation process was both easy and intuitive for the experts. However, it is important to investigate the robustness of this simplification. The results of our investigation are presented in the following section. Finally, we provide an alternative method for calculating assurance, designed to accommodate situations in which the aforementioned simplification may not be preferred.

Robustness of the parameterisation
The investigation aimed to assess the impact of the simplification we introduced, γ_e = γ_c, by comparing two parameterisation methods: Method A and Method B. Method A incorporated the simplification, while Method B allowed γ_e to vary. Historical data from three clinical trials with an observed DTE (Checkmate 017, 10 Checkmate 141, 55 and Checkmate 017 and Checkmate 057 combined; 56 the Kaplan-Meier plots for these trials are seen in Figure 5) were used to estimate the five unknown parameters (from Equations 5 and 8) using both methods. For clarity, Table 2 shows how the two methods estimate the five unknown parameters.

FIGURE 4 Power/assurance curves for the example given in Section 4. We see that the sample size required for 80% power/assurance differs greatly under the different scenarios, highlighting the importance of including uncertainty in the design stage of a clinical trial.

FIGURE 5
Kaplan-Meier plots for the data sets introduced in Section 5. For (a) the data set is from trial Checkmate 017, 10 for (b) the data set is from trial Checkmate 141 55 and for (c) the data set is from trials Checkmate 017 and Checkmate 057 combined. 56

The estimated parametric survival curves generated by both methods are presented in Figure 6. Observing all three examples, we can see that the parametric treatment survival curve produced by Method B exhibits a marginally superior fit compared to the treatment curves derived from Method A. This is what we would intuitively expect, as Method B incorporates two free parameters, λ_e and γ_e, while Method A employs only a single free parameter, λ_e. However, despite Method B giving a better fit to the data, Method A still approximates the data well. These findings suggest that the simplification introduced by Method A would likely have minimal practical impact on real decision-making processes.

FIGURE 6
The same Kaplan-Meier plots as in Figure 5, with estimates for both Method A and B (as introduced in Section 5) overlaid.For each data set,  has been visually estimated.The control parameters,   and   , have been estimated from the data (the overlaid blue line).In Method A, the simplification has been made (  =   ) and then   has been estimated using a least squares procedure.In Method B, both the treatment parameters (  ,   ) have been simultaneously estimated.
Power calculations were performed to quantify the impact of the difference in the fitted experimental treatment survival curves. The results, depicted in Figure 7, show almost indistinguishable power curves for the two methods across all three datasets. This suggests that, in these examples, the assumption γ_e = γ_c does not lead to different practical outcomes.
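A power curve such as those in Figure 7 is typically built by simulating many trials and recording how often the analysis is significant. As a minimal, self-contained sketch of the analysis step, the function below computes an unweighted log-rank Z-statistic for two samples of uncensored event times. The exact analysis method and any weighting used in the paper are not reproduced here, so treat this as illustrative only.

```python
import math

def logrank_z(times_a, times_b):
    """Unweighted log-rank Z-statistic for two groups of uncensored event times.
    Positive values indicate earlier events (worse survival) in group a."""
    o_minus_e = 0.0  # observed minus expected events in group a
    var = 0.0
    for t in sorted(set(times_a) | set(times_b)):
        d_a = times_a.count(t)                    # events in group a at time t
        d = d_a + times_b.count(t)                # total events at time t
        r_a = sum(1 for x in times_a if x >= t)   # at risk in group a
        r = r_a + sum(1 for x in times_b if x >= t)
        if r > 1:
            o_minus_e += d_a - d * r_a / r
            var += d * (r_a / r) * (1 - r_a / r) * (r - d) / (r - 1)
    return o_minus_e / math.sqrt(var)

# A simulated trial counts as "significant" if |Z| exceeds, e.g., 1.96;
# estimated power is the proportion of simulated trials meeting that criterion.
```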

A more flexible approach to evaluating assurance
We have demonstrated that, for the three historical trials considered, the simplification does not appear to have any practical implications. However, this simplification constrains the possible experimental treatment survival curves to align with the shape of the control survival curve. In Figure 8(a), the experimental treatment survival curves appear 'parallel' to the control curve because the shape parameters are fixed to be the same for both curves (γ_e = γ_c).
It is important to acknowledge that in certain trial designs, practitioners may feel uneasy about using this method, particularly if they believe that the experimental treatment survival curve would not run parallel to the control curve. In response to this concern, we have developed an alternative assurance calculation, referred to as Algorithm 2. Algorithm 2 aims to generate the experimental treatment curves that would be obtained had we not made the simplification, allowing the curves to be sampled from a Weibull(λ_e, γ_e) distribution after the delay occurs. The process is depicted in Figure 9.
FIGURE 7 Power curves for the two methods considered in Section 5, for all three of the trials. In all three cases, the power curves are very similar to each other, indicating that the simplification does not make any practical difference in these examples. Method A is the simplification we made in the elicitation process.

To implement Algorithm 2, we utilize the elicited mixture prior distributions for T and HR*|T. First, we sample a large number, N, of experimental treatment curves. Next, we sample a value for T from the elicited prior distribution, as shown in Figure 9(a). We define t' as the time at which the control curve reaches a survival probability of 0.01. Then, we independently sample two survival probabilities, S_1 and S_2, at times 0.25t' and 0.6t' through the trial, respectively, as depicted in Figures 9(b) and 9(c). The only condition imposed is that S_1 > S_2. Using T, S_1 and S_2, we apply a least squares procedure to fit a Weibull distribution to these points, as illustrated in Figure 9(d).
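Under the assumed Weibull survival function S(t) = exp(−λt^γ), the time at which the control curve reaches 0.01 has the closed form t' = (−log 0.01 / λ_c)^{1/γ_c}, and fitting a Weibull through the sampled points reduces to a least-squares line fit of log(−log S) on log t. The sketch below strings these steps together. The prior for T and the pools of candidate survival probabilities are hypothetical stand-ins for the elicited quantities, and the function names are ours, not the paper's R implementation.

```python
import math
import random

def fit_weibull(points):
    """Least-squares Weibull fit through (t, S(t)) points, using
    log(-log S(t)) = log(lam) + gam * log(t)."""
    xs = [math.log(t) for t, s in points]
    ys = [math.log(-math.log(s)) for t, s in points]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    gam = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
           / sum((x - xbar) ** 2 for x in xs))
    return math.exp(ybar - gam * xbar), gam

def sample_flexible_curve(lam_c, gam_c, sample_T, pool_1, pool_2, rng):
    """One draw of Algorithm 2's curve-construction step.
    pool_1/pool_2: candidate survival probabilities at 0.25*t' and 0.6*t'
    (in the paper these come from curves sampled under Algorithm 1)."""
    t_dash = (-math.log(0.01) / lam_c) ** (1 / gam_c)  # control curve hits 0.01
    T = sample_T(rng)                                  # delay time from its prior
    s_T = math.exp(-lam_c * T ** gam_c)                # curves agree up to T
    while True:                                        # enforce S1 > S2
        s1, s2 = rng.choice(pool_1), rng.choice(pool_2)
        if s1 > s2:
            break
    lam_e, gam_e = fit_weibull([(T, s_T), (0.25 * t_dash, s1), (0.6 * t_dash, s2)])
    return T, lam_e, gam_e

rng = random.Random(1)
T, lam_e, gam_e = sample_flexible_curve(
    lam_c=0.05, gam_c=1.2,
    sample_T=lambda r: r.uniform(2, 5),     # hypothetical prior for the delay
    pool_1=[0.80, 0.85, 0.90], pool_2=[0.55, 0.60, 0.65],
    rng=rng)
```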
The sampled experimental treatment curves obtained from Algorithm 2 can be used to calculate assurance in the same manner as those from Algorithm 1. Figure 8(c) displays 10 experimental treatment curves sampled using this more flexible method, and Figure 8(d) shows the pointwise confidence intervals of the sampled experimental treatment curves. Comparing these panels to Figures 8(a) and (b), we observe that the pointwise confidence intervals are very similar, indicating that the sampled experimental treatment curves fall within the same boundaries. However, the alternative method samples a more diverse range of curves, providing increased flexibility.
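Pointwise intervals such as those in Figure 8(b) and (d) can be computed directly from the sampled curves: at each time point on a common grid, take empirical quantiles across curves. A minimal sketch (our own helper, using simple nearest-index quantiles without interpolation):

```python
def pointwise_band(curves, lo=0.1, hi=0.9):
    """curves: list of equal-length lists of survival probabilities,
    one list per sampled curve, evaluated on a common time grid.
    Returns (lower, upper) quantile bands over the grid."""
    n = len(curves)
    lower, upper = [], []
    for j in range(len(curves[0])):
        col = sorted(c[j] for c in curves)       # survival probs at grid point j
        lower.append(col[int(lo * (n - 1))])     # lower empirical quantile
        upper.append(col[int(hi * (n - 1))])     # upper empirical quantile
    return lower, upper

# 10 toy "sampled curves" on a three-point time grid
curves = [[1.0, 0.9 - 0.05 * i, 0.5 - 0.03 * i] for i in range(10)]
band_lo, band_hi = pointwise_band(curves)
```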
We have implemented Algorithm 2 in the example presented in Section 4, and the results are shown in Figure 4. The flexible assurance curve closely resembles the assurance curve obtained using Algorithm 1. This demonstrates that the more flexible assurance method may not significantly impact decision-making, but it may make practitioners more comfortable if they believe that the imposed constraint is not representative of the potential experimental treatment curves observed in practice. It is worth noting that the selection of time points (0.25t' and 0.6t') for sampling survival probabilities in Algorithm 2 is somewhat arbitrary. These values were chosen to produce realistic experimental treatment survival curves; however, if more restrictive or more flexible curves are desired, the two points can be adjusted accordingly (e.g., moved closer together or further apart).

SUMMARY
In conclusion, assurance calculations have emerged as a valuable tool in the design and analysis of clinical trials. By incorporating Bayesian principles and considering prior distributions for unknown parameters, assurance calculations provide a more realistic assessment of a trial's probability of success than traditional power calculations. This approach acknowledges the inherent uncertainties in clinical research and allows trial outcomes to be simulated from sampled prior distributions. Assurance calculations offer several advantages for trial design and decision-making. They assist in optimizing sample size, assessing risks, and evaluating the effectiveness of different trial setups, including the timing and number of planned interim analyses. Furthermore, assurance evaluations enable better-informed go/no-go decisions about study conduct, directing resources towards the research programs with the highest expected impact for patients.

In the rapidly evolving field of immuno-oncology, assurance calculations have the potential to address challenges associated with time-varying or delayed treatment effects on time-to-event endpoints. We have extended the assurance method to survival trials in which a delayed treatment effect is likely to occur. Overall, assurance calculations provide a robust framework for quantifying the probability of success in clinical trials while accounting for uncertainty. By incorporating Bayesian methods and accommodating complexities in trial design, they contribute to more informed decision-making, improved trial design and, ultimately, more effective and impactful clinical research.
Algorithm 2: calculating flexible assurance when a DTE is likely to be present. Inputs: sample sizes n_c and n_e; the elicited priors π(λ_c, γ_c | hist), π(T), π(HR*|T); the probability of the survival curves separating, P_S; the number of events d (we require d ≤ n_c + n_e); the maximum trial length t_max; the number of initial samples N; and the number of iterations M.
3. For j = 1, …, N: i. sample λ_c,j, γ_c,j from π(λ_c, γ_c); ii. define t'_j to be the time at which the survival probability in the control group equals 0.01 (using λ_c,j, γ_c,j and Equation 5); iii. sample a survival probability, S_1,j, from the column in matrix M which corresponds to 0.25t'_j; iv. sample a survival probability, S_2,j, from the column in matrix M which corresponds to 0.6t'_j (we require S_2,j < S_1,j); xvi. define A_j = 1 if the data give rise to a 'successful' outcome (0 otherwise).
The assurance is then estimated as the proportion of simulated trials that are successful, (1/N) ∑_{j=1}^{N} A_j.

FIGURE 1 Kaplan-Meier plot of a Phase III trial, Checkmate 017,10 in which DTEs are present. The control and experimental treatment curves follow the same trajectory for approximately 3 months, after which they separate.

FIGURE 8 (a) 10 sampled experimental treatment survival curves, generated by Algorithm 1; (b) pointwise confidence intervals (0.1 and 0.9) for 500 of these sampled experimental treatment curves; (c) 10 sampled experimental treatment survival curves, generated by Algorithm 2; (d) pointwise confidence intervals (0.1 and 0.9) for 500 of these sampled experimental treatment curves.

FIGURE 9 The process of sampling an experimental treatment curve in Algorithm 2: (a) sample the delay time T; (b), (c) sample survival probabilities S_1 and S_2 at 0.25t' and 0.6t'; (d) fit a Weibull distribution to these points by least squares.

TABLE 2
How each of the five parameters is estimated in both of the methods introduced in Section 5. Method A is the simplification we made in the elicitation process (MLE = maximum likelihood estimation).
v. sample T_j from π(T); vi. simultaneously solve these equations to find the best fitting values of λ_e,j and γ_e,j (can use nleqslv()); vii. sample survival times for the control group t_{1,j}, …, t_{n_c,j} using the sampled λ_c,j, γ_c,j (can use rweibull()); viii. sample survival times for the experimental treatment group t_{1,j}, …, t_{n_e,j} using the sampled T_j and λ_e,j, γ_e,j (can use inversion sampling); ix. sample recruitment times r_{1,j}, …, r_{n_c+n_e,j} from the pre-specified recruitment schedule; x. add the survival times from each group to the recruitment times to obtain pseudo event times e_{1,j}, …, e_{n_c+n_e,j}; xi. order the pseudo event times and define t_d to be the time at which d events have been observed; xii. remove any observation for which the recruitment time r_{i,j} > t_d; xiii. censor any observation for which the pseudo event time e_{i,j} > t_d; xiv. for any censored observation, redefine the survival time to be t_d − r_{i,j}; xv. perform the method of analysis on the data t_{1,j}, …, t_{n_c,j} and t_{1,j}, …, t_{n_e,j};
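Step viii's inversion sampling can be sketched as follows, assuming the survival function S(t) = exp(−λt^γ) throughout: the experimental arm follows the control Weibull up to the delay T and the fitted Weibull(λ_e, γ_e) thereafter (continuity at T holds only approximately, via the least-squares fit). The parameter values below are illustrative, not from the paper.

```python
import math
import random

def invert_weibull(u, lam, gam):
    """Solve exp(-lam * t**gam) = u for t."""
    return (-math.log(u) / lam) ** (1 / gam)

def sample_dte_time(u, lam_c, gam_c, lam_e, gam_e, T):
    """Inversion sampling for one experimental-arm survival time.
    If u >= S_c(T), the event happens before the delay, on the control curve;
    otherwise it falls on the post-delay fitted treatment curve."""
    s_T = math.exp(-lam_c * T ** gam_c)   # survival probability at the delay
    if u >= s_T:
        return invert_weibull(u, lam_c, gam_c)
    return invert_weibull(u, lam_e, gam_e)

rng = random.Random(7)
times = [sample_dte_time(rng.random(), 0.05, 1.2, 0.03, 1.2, T=3.0)
         for _ in range(5)]
```

When λ_e = λ_c and γ_e = γ_c this reduces to ordinary Weibull inversion sampling, which is a useful sanity check on the branch logic.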