Guaranteed in‐control performance of the EWMA chart for monitoring the mean

Research on the performance evaluation and the design of the Phase II EWMA control chart for monitoring the mean, when parameters are estimated, has mainly focused on the marginal in-control average run-length (ARL_IN). Recent research has highlighted the high variability in the in-control performance of these charts, which has led to the recommendation to study the conditional in-control average run-length (CARL_IN) distribution. We study the performance and the design of the Phase II EWMA chart for the mean, using the CARL_IN distribution and the exceedance probability criterion (EPC). The CARL_IN distribution is approximated by the Markov chain method and Monte Carlo simulations. Our results show that, in order to design charts that guarantee a specified EPC, more Phase I data are needed than previously recommended in the literature. A method for adjusting the Phase II EWMA control chart limits, to achieve a specified EPC for the amount of data at hand, is presented. This method does not involve bootstrapping and produces results that are about the same as some existing results. Tables and graphs of the adjusted constants are provided, along with an in-control and out-of-control performance evaluation of the adjusted-limits EWMA chart. Results show that, for moderate to large shifts, the performance of the adjusted-limits EWMA chart is quite satisfactory. For small shifts, an in-control versus out-of-control performance trade-off can be made to improve performance.

into account the random variability of the FAR, the so-called "practitioner to practitioner" variability, which is inherent to parameter estimation. Motivated by this, Saleh et al 2 examined the CARL_IN distribution of the EWMA chart as a function of the number of Phase I subgroups (m), the subgroup size (n), and λ. Based on the standard deviation (SD_CARL_IN) of the CARL_IN, they concluded, contrary to Jones et al, 1 that much larger Phase I sample sizes are required to design Phase II EWMA charts with larger λ than with smaller λ.
In his seminal work on the prospective application of the Phase II X̄ chart, Chakraborti 3 was among the first authors to highlight the variation present in the conditional run-length distribution and hence the importance of examining the practitioner to practitioner variability via the conditional run-length distribution. He emphasized how the conditional false alarm rate (CFAR) behaves as a random variable when parameters are estimated and used to construct Phase II charts. Inspired by this, for the Phase II S and S² charts, Epprecht et al 4 examined the CFAR distribution as a function of the Phase I sample size mn. They then made recommendations about the minimum size of the Phase I sample required to guarantee, with a high probability 1 − p, that the CFAR will not exceed some specified nominal value (denoted CFAR_0). This is the exceedance probability criterion (EPC), introduced by Albers et al 5 and Gandy and Kvaloy, 6 which sets an upper prediction bound on the CFAR. In the same spirit, we examine the CARL_IN distribution of the EWMA chart as a function of λ and m, with n = 5, and set a lower prediction bound on the CARL_IN. We then make recommendations about the value of m required to guarantee, with a specified high probability 1 − p, that the CARL_IN will exceed a nominally specified value, denoted ARL_0. This approach has been recommended in the recent literature because the CARL_IN (like the CFAR) is a random variable with high variability, which is the cause of the practitioner to practitioner variation. Our results reveal that, in order for the EWMA chart to meet the EPC specification, even more Phase I data are needed than was previously recommended by Saleh et al 2 and Jones et al. 1 Moreover, consistently with Jones et al 1 but contrary to Saleh et al, 2 it will be seen that small values of λ require larger Phase I sample sizes than large values of λ.
However, in practice, it may be difficult and expensive to obtain such huge amounts of Phase I data. Hence, control limits are adjusted as a function of the amount of available data. Jones 7 adjusted the control limits of the Phase II EWMA chart to achieve a certain nominally specified marginal or unconditional in-control ARL (ARL_0). This is the unconditional approach. Values of the charting constant (L) were given graphically for different values of ARL_0, m, and n, and for choices of λ ranging from 0.02 to 1. However, the unconditional approach ignores the practitioner to practitioner variability (the variation in the CARL_IN distribution). To this end, Saleh et al 2 used the EPC and bootstrapping to design the EWMA chart when parameters are estimated. The EPC does not ignore the variation in the CARL_IN distribution but controls it, with a high probability, in the form of a prediction interval. The EPC was popularized by Jones and Steiner 8 and Gandy and Kvaloy; 6 since then, the EPC and the associated bootstrap approach have been used by many authors. We mention, among others, Saleh et al, 9 Aly et al, 10 Faraz et al, 11,12 and Hu and Castagliola. 13 However, bootstrapping is computer intensive and may be somewhat difficult to apply in practice. This is exacerbated by the fact that, even though the underlying problem and the chart performance specifications may be the same, repeated applications of the bootstrap approach will almost surely result in different adjusted limits, which can lead to comparability issues. Hence, it is not surprising that, with the exception of Faraz et al 11 and Hu and Castagliola, 13 the authors who have used the bootstrap approach did not provide tables of their new charting constants. Each of the Hu and Castagliola 13 charting constants was found by running the bootstrap approach 100 times and averaging the results, which takes a lot of time on an average computer.
Hence, without such tables, coming up with the charting constant can be frustrating for a practitioner.
Under the assumption that the process output is normally distributed, bootstrapping is not necessary to apply the EPC. For example, Goedhart et al 14-16 provided analytical results, in the form of numerical solutions and approximations, for the Shewhart charts for the mean and provided tables of the charting constants. For the EWMA chart, however, such analytical approximations are difficult to obtain because the charting statistics are dependent. Consequently, this paper presents a different method of adjusting the Phase II control limits according to the EPC, which guarantees, with a specified high confidence, that the CARL_IN of the EWMA chart exceeds a nominal ARL_0. Our approach is based on the simple idea of approximating the CARL_IN distribution by an empirical distribution, obtained by generating many Phase I subgroups and using the Markov chain method to calculate the corresponding CARL_IN values. It will be seen that this approach requires less computational effort than the bootstrap approach, yet it produces results that are as accurate as some known analytical results. Thus, tables and graphs of the required charting constants are provided to help practitioners implement the EWMA chart with estimated parameters easily in practice. A program to implement our method, written in R, is available from the authors on request.
The paper is organized as follows. Section 2 introduces the notation and terminology used, gives an overview of the EWMA chart and the Markov chain technique, and presents the estimators used to estimate the unknown process parameters. Section 3 evaluates the traditional EWMA chart in terms of the EPC and provides rough guidelines on the number of Phase I subgroups required to guarantee, with a high probability, CARL_IN values at or above a reasonable nominal value. Section 4 presents the new charting constants (adjusted control limits) so that the Phase II EWMA chart has a guaranteed nominal IC performance according to the EPC. Section 5 gives a detailed evaluation of the IC and OOC performance of the EWMA chart with the new constants (the EPC-adjusted limits) and compares it, according to the EPC, with the performance of the traditional Phase II EWMA chart with unadjusted limits (limits calculated for Case K). Finally, a summary and some conclusions are given.

2 | EWMA CHART WITH ESTIMATED PARAMETERS
Let X_ij, i = 1, 2, …, m and j = 1, 2, …, n denote the IC Phase I data from a normal distribution with an unknown mean μ_0 and an unknown standard deviation σ_0. For a smoothing constant 0 < λ ≤ 1, starting at sampling stage i = m + 1, m + 2, …, the standardized plotting statistic for the Phase II EWMA chart with estimated parameters is given by

W_i = λ (X̄_i − μ̂_0)/(σ̂_0/√n) + (1 − λ)W_{i−1}, with starting value W_m = 0,

where X̄_i is the ith Phase II sample mean and μ̂_0 and σ̂_0 are the Phase I estimators of the unknown parameters μ_0 and σ_0, respectively. It is also assumed that the Phase II data are normally distributed and, for generality, let μ and σ denote the mean and the standard deviation, respectively, of this distribution. In this paper, we use the grand mean of the Phase I data to estimate μ_0 and the pooled sample standard deviation S_p to estimate σ_0; using the unbiased version S_p/c_4(m(n − 1) + 1) (see 20 Schoonhoven et al 21,22) is equivalent, because m(n − 1) is typically quite large in our applications and hence the constant c_4(m(n − 1) + 1) is indistinguishable from 1.
We write the statistic W_i in its canonical form

W_i = λ (T_i + δ√n − Z/√m)/Q + (1 − λ)W_{i−1},

where T_i = √n(X̄_i − μ)/σ_0, Z = √(mn)(μ̂_0 − μ_0)/σ_0, Q = σ̂_0/σ_0, and δ = (μ − μ_0)/σ_0 is the standardized shift in the mean (for the sustained mean shifts considered here, σ = σ_0, and δ = 0 when the process is IC). Note that the random variables T_i and Z are standard normal variables that are mutually independent and are also independent of Q. Because the Phase II distribution of W_i depends on the Phase I data only through Q and Z, the conditional run-length performance of the chart can be studied through these two random variables. For simplicity, we use the asymptotic (steady-state) control limits

UCL/LCL = ±L√(λ/(2 − λ)),

where L is the charting constant to be found for a given λ value and some chart design/performance metric. The performance metric is usually some property of the IC run-length distribution, eg, the ARL_IN. For example, for a given value of λ and a nominal ARL_IN = ARL_0, when parameters are known, the L values can be found in Crowder 23 or in the R package "spc." Often, these Case K L values are used to construct the Phase II EWMA chart when estimated parameters are used in the control limits. It is recognized in the literature that this is a problem, in the sense of getting many more false alarms than nominally expected, particularly when the amount of Phase I data is small to moderately large. We provide some solutions for correcting this problem. The performance of a control chart is often evaluated by the run-length distribution and its associated characteristics, eg, the mean (the expected value), the standard deviation, and percentiles. The conditional run-length distribution is the run-length distribution calculated for the values of Q and Z obtained from a given Phase I analysis. The expected value of this distribution, denoted CARL, is itself a random variable with its own distribution, and the expected value of the distribution of the CARL is the unconditional ARL, denoted ARL. The conditional run-length distribution and the CARL of an EWMA chart may be calculated (approximated) using the Markov chain method; see Brook and Evans 24 and Lucas and Saccucci, 25 among others.
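As a concrete illustration of the chart just described, consider the following minimal Python sketch (our own illustration, not the paper's R program; the function name and the values λ = 0.2 and L = 2.86 are merely illustrative). It computes the Phase I plug-in estimates, the standardized plotting statistic W_i, and the asymptotic limits:

```python
import numpy as np

def ewma_chart(phase1, phase2_means, lam=0.2, L=2.86):
    """Standardized Phase II EWMA chart with Phase I plug-in estimates.

    phase1: (m, n) array of IC Phase I subgroups.
    phase2_means: sequence of Phase II sample means X-bar_i.
    Returns the 1-based indices of signaling samples and the limit h.
    """
    m, n = phase1.shape
    mu0_hat = phase1.mean()                          # grand mean estimates mu_0
    sp = np.sqrt(phase1.var(axis=1, ddof=1).mean())  # pooled std dev estimates sigma_0
    h = L * np.sqrt(lam / (2.0 - lam))               # asymptotic (steady-state) limit
    w, signals = 0.0, []                             # W_m = 0 starting value
    for i, xbar in enumerate(phase2_means, start=1):
        w = lam * (xbar - mu0_hat) / (sp / np.sqrt(n)) + (1 - lam) * w
        if abs(w) > h:
            signals.append(i)
    return signals, h
```

With a large sustained shift, the statistic crosses the limit within the first few Phase II samples, as expected for an EWMA with moderate λ.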
Thus, applying the Markov chain method, conditional on Q and Z, the CARL of the Phase II EWMA chart can be conveniently written as

CARL(Q, Z, m, n, λ, L, t) = v′_e (I − P)^{−1} u_e, (4)

where t (which is generally taken to be an odd integer) represents the number of transient states in the state space of the Markov chain, v′_e is the 1 × t row vector with a one in the middle position (for an odd integer t, the middle position is unique) and zeros elsewhere, u_e is a t × 1 column vector of ones, I is the t × t identity matrix, and P = [p_lk], l, k = 1, 2, …, t, is the t × t "essential" (conditional) transition probability matrix. The transition probabilities p_lk of the essential conditional transition probability matrix are calculated, under normality and conditional on Q and Z, as

p_lk = Φ(Q(S_k + w/2 − (1 − λ)S_l)/λ − δ√n + Z/√m) − Φ(Q(S_k − w/2 − (1 − λ)S_l)/λ − δ√n + Z/√m), (5)

where S_l is the midpoint of the lth transient state, w = 2L√(λ/(2 − λ))/t is the width of each state, and Φ(·) denotes the standard normal cdf.
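For concreteness, the Markov chain computation of the CARL can be implemented in a few lines. The sketch below is a Python analogue of this computation (the paper's own implementation is in R; the number of states t = 51 and the function name are our illustrative choices):

```python
import numpy as np
from scipy.stats import norm

def conditional_arl(lam, L, Q=1.0, Z=0.0, m=50, n=5, delta=0.0, t=51):
    """CARL of the standardized Phase II EWMA chart via the Brook-and-Evans
    Markov chain method, conditional on the Phase I summaries Q and Z."""
    h = L * np.sqrt(lam / (2.0 - lam))      # asymptotic control limit
    w = 2.0 * h / t                          # width of each transient state
    s = -h + (np.arange(t) + 0.5) * w        # state midpoints S_l
    # Transition probabilities p_lk, conditional on Q and Z (rows: l, columns: k)
    hi = Q * (s[None, :] + w / 2 - (1 - lam) * s[:, None]) / lam \
        - delta * np.sqrt(n) + Z / np.sqrt(m)
    lo = Q * (s[None, :] - w / 2 - (1 - lam) * s[:, None]) / lam \
        - delta * np.sqrt(n) + Z / np.sqrt(m)
    P = norm.cdf(hi) - norm.cdf(lo)
    # CARL = v'_e (I - P)^{-1} u_e, starting in the middle (restart) state
    arl = np.linalg.solve(np.eye(t) - P, np.ones(t))
    return arl[t // 2]
```

A quick sanity check: with λ = 1 the chart reduces to a Shewhart-type chart, so with known parameters (Q = 1, Z = 0) and L = 3 the function returns approximately 370.4, and underestimating σ_0 (Q < 1) shortens the IC run length.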
More information on the derivation of result (5) can be found in Saleh et al. 9 From Equation 4, for fixed m, n, δ, λ, and L, it is clearly seen that the CARL depends on the random variables Q and Z, and hence the CARL is a random variable. Saleh et al 2 studied the effect of m and the Phase I estimates on the distribution of the CARL_IN (the CARL when δ = 0). They found that unless the Phase I parameter estimates are "close" to the true but unknown parameter values, the CARL_IN values can vary widely and can be far from the nominal ARL_0. However, a practitioner will almost never know where his/her estimates are in relation to the unknown process parameters. Thus, when parameters are estimated, using the charting constants for Case K to design Phase II EWMA charts is a risky proposition, because it can result in very low CARL_IN values, which will almost surely call into question the process monitoring regime. This risk can be somewhat reduced by increasing m. However, as will be seen in the next section, the value of m required to reduce the probability of low CARL_IN values can be very large. Hence, many control charts in the recent literature with estimated parameters are now designed such that

P(CARL_IN(Q, Z, m, n, λ, L, t) > ARL_0) ≥ 1 − p. (6)

It follows that the ARL_0 is the 100pth percentile value of the distribution of the CARL_IN. This is the EPC approach that we use to evaluate and design the EWMA chart in the following sections.

3 | PERFORMANCE ASSESSMENT OF A STANDARD PHASE II EWMA CHART USING THE EPC
Recall that a standard Phase II EWMA chart uses the charting constants obtained in the known parameter case, with the parameter estimates plugged in to form the Phase II EWMA chart. Thus, for a given p ∈ (0, 1) and given m, n, λ, L, and t, we want to find the 100pth percentile, CARL_IN,p, of the distribution of CARL_IN(Q, Z, m, n, λ, L, t). Once found, CARL_IN,p is compared with ARL_0, which is the theoretical value that must be exceeded, in an application, with a high probability 1 − p. The comparison between CARL_IN,p and ARL_0 is based on the percentage difference (PD), which we define as

PD = (CARL_IN,p − ARL_0)/ARL_0 × 100.

The algorithm for the evaluation of the traditional Phase II EWMA chart using the EPC is given in Appendix B. Table 1 shows the CARL_IN,p values of the standard Phase II EWMA charts for ARL_0 = 100, 200, 370, 500; n = 5; and different combinations of λ, m, and p. From Table 1, it can be seen that when m is small, the PD values (shown in brackets in each cell) are very high in absolute value. This means, for example, that for λ = 0.1, m = 30, p = 0.05, and ARL_0 = 500, we have CARL_IN,p = 50, which is 90% below (PD = −90%) the nominal ARL_0 = 500. Thus, in this case, we expect the CARL_IN of the chart to be at least 50 with 95% probability (and, conversely, at most 50 with 5% probability). Ideally, we would like the chart to deliver at least a large CARL_IN value (say ARL_0 = 500) with 95% certainty. The value CARL_IN,p = 50 is far too low, and the risk of getting a number that low is very high. It can also be seen that, as m increases, the CARL_IN,p values increase to within 6% below the nominal ARL_0 values. The convergence is faster for λ = 0.5 than for λ = 0.1. Furthermore, it can be seen that larger values of p and/or λ are associated with larger CARL_IN,p values. Table 2 shows the number of subgroups m required to guarantee, with a specified high probability 1 − p, that the CARL_IN exceeds a specified lower bound.
Mathematically, this is written as

P(CARL_IN(Q, Z, m, n, λ, L, t) > ARL_0(1 − ε)) ≥ 1 − p, (7)

where ε ≥ 0% is a nominally specified PD value. Note that ε ≥ 0 because, in general, CARL_IN,p < ARL_0 (see Table 1). Note also that if ε = 0%, then CARL_IN,p = ARL_0, and Equation 7 reduces to Equation 6. Looking at Table 2, for fixed ARL_0, λ, and p, it can be seen that decreasing ε from 20% to 0% increases the number of Phase I subgroups m required to achieve adequate IC EPC performance. It can also be seen that, for fixed ARL_0, λ, and ε, decreasing p from 0.10 to 0.05 increases the value of m. Thus, decreasing ε or p or both improves the IC chart performance, while increasing ε or p or both degrades the IC chart performance. This also shows the flexibility of the EPC formulation (Equation 7), which can be used to improve the IC chart performance or to balance it with the OOC chart performance by manipulating ε or p or both. Later, we will provide an example of how this balance can be achieved. From Table 3, it can be seen that for p = 0.05, 0.10; ε = 0%; n = 5; and all λ, it will take more than 10 000 Phase I subgroups to guarantee (with a high probability) that the nominal ARL_0 value will be exceeded. Thus, based on the EPC, significantly more Phase I data are required than previously recommended by both Jones et al 1 and Saleh et al. 2 Furthermore, even when CARL_IN,p is allowed to be ε = 10% or ε = 20% below the ARL_0, a large number of subgroups is still required to guarantee, with high certainty, that the CARL_IN exceeds that lower bound.

4 | ADJUSTMENT OF THE STANDARD PHASE II EWMA CHART LIMITS FOR GUARANTEED CONDITIONAL PERFORMANCE
We have seen that, to achieve adequate EPC performance with the standard Phase II EWMA chart limits, a very large number of Phase I subgroups is required. In practice, it may be difficult and expensive to come up with this many Phase I subgroups. Thus, for a given amount of Phase I data (number of Phase I subgroups, with a fixed sample size), the control limits need to be adjusted. Consider again the EPC: P(CARL_IN(Q, Z, m, n, λ, L, t) > ARL_0(1 − ε)) ≥ 1 − p, which is equivalent to stating that the cdf of CARL_IN(Q, Z, m, n, λ, L, t) at ARL_0(1 − ε) must be less than or equal to p. Then, given ε, p, ARL_0, m, n, λ, and t, we want to solve this equation for L. Because a closed-form analytical expression for the cdf of the CARL_IN is not available, we solve for L numerically, using the empirical CARL_IN distribution (see the algorithm in Appendix C). Table 4 gives the L values that guarantee, with probability 1 − p, that the CARL_IN will exceed a specified lower bound ARL_0. Looking at Table 4, for ARL_0 = 370, λ = 1, and m = 50, 100, 300, 1000, it can be seen that our constants L = 3.24, 3.16, 3.09, 3.05 are exactly equal to those in Goedhart et al. 14,16 The constants in Goedhart et al 14,16 were obtained analytically and are regarded as an improvement on the computationally intensive bootstrap approach. This validates our method. In addition to Table 4 for ARL_0 = 100, 200, 370, 500, we have generated four figures from which the practitioner may find his/her constant L, for given m and λ values, by means of interpolation. These are shown below.
Looking at Figures 1-4, for any given ARL_0, m, and λ values, it can be seen that the adjusted L values are all greater than the corresponding Case K L values. It can also be seen that, for a given ARL_0 and λ, the adjusted L values decrease as m increases and converge to the known parameter (unadjusted/standard) L value. Consequently, Phase II EWMA charts that are designed using the new L values will have wider control limits, which leads to better IC performance than that of charts designed using the Case K L values. This widening of the limits can, however, lead to some deterioration of the OOC chart performance. This has been noted in the literature (see, eg, Goedhart et al 15) as the price to pay for satisfactory nominal IC chart performance with a high probability. However, it is possible with our approach to relax the IC behavior of the EWMA chart. This can be done by increasing ε or p or both in Equation 7. As will be seen in the next section, the result is narrower adjusted limits, which improve the OOC performance.

5 | IC AND OOC PERFORMANCE ANALYSIS AND A COMPARISON OF THE ADJUSTED AND THE UNADJUSTED LIMITS
Using the bootstrap approach, Saleh et al 2 constructed an EWMA chart such that P(CARL_IN > 200) = 0.90 for λ = 0.1, m = 50, and n = 5. They then evaluated the IC and OOC conditional performance of this chart. However, their performance evaluations were done only for this chart and were limited to δ = 0 and δ = 1. In this section, we make a much more detailed evaluation and comparison between the performance of the EWMA charts with the proposed adjusted limits and that of the standard (unadjusted) limits chart, for various shifts δ. We also compare our results with the performance results of the bootstrap-based (adjusted limits) EWMA chart in Saleh et al. 2 Furthermore, we use the flexibility of the EPC formulation in Equation 7 to adjust the trade-off between the IC and OOC performance of the EWMA chart. By trade-off, we mean, for example, sacrificing a little IC performance for better OOC performance. Table 5 shows the CARL_IN,p values for various combinations of p, λ, δ, and m, for both the adjusted and the unadjusted limits. Again, for a given λ and δ = 0, the adjusted limits were obtained such that P(CARL_IN > 200) = 1 − p = 0.90 (so that the 10th percentile of the CARL_IN distribution should be close to 200), while the unadjusted limits were obtained from the R package "spc," such that ARL_0 = 200 in the known parameters case. Looking at Table 5, for δ = 0 and all λ, it can be seen that the IC performance of the chart with the adjusted limits is as specified. For example, for m = 50, p = 10%, and λ = 0.1, 0.2, 0.5, 1, it can be seen (see the bolded row at perc = 10%) that CARL_IN,p = 202, 198, 205, 205, respectively, and for m = 100, we have CARL_IN,p = 200, 208, 196, 199. All of these CARL_IN,p values are very close to the nominal ARL_0 = 200. Moreover, for δ = 0 and all λ, p, and m values, the CARL_IN,p values for the adjusted limits are always higher than the corresponding values for the unadjusted limits.
So, for all percentiles (perc), the adjusted limits charts always guarantee, with a probability close to the nominal, larger CARL_IN values than the unadjusted limits charts. Thus, the good IC performance of the adjusted limits charts is not limited to p = 10% but extends over the entire range of percentiles.
However, as mentioned before, because the adjusted limits are wider, they can be less sensitive to true process shifts than the unadjusted limits. We explore this for the cases δ = 0.25, 0.5, 1, 1.5. Looking at Table 5 for m = 50, small shifts δ = 0.25, 0.5, and all λ values, it can be seen that the medians (perc = 50%) of the CARL distributions for the unadjusted and the adjusted limits charts are radically different. The largest difference occurs at λ = 0.1, while the smallest difference occurs at λ = 1. Decreasing perc and/or increasing m reduces the differences slightly, but the pattern remains the same. Thus, for a small shift δ ≤ 0.50, the OOC performance of the adjusted limits EWMA chart is not as good as that of the unadjusted limits chart, particularly when λ = 0.1. But, of course, the point is that the IC performance of the unadjusted limits chart is a much bigger problem. However, for larger shifts, such as δ = 1 or δ = 1.5, and for all perc and λ values, the CARL percentiles for the unadjusted and the adjusted limits charts are quite close. This is even more so when m = 100. Thus, for moderate to large values of δ, the OOC performance of the adjusted limits EWMA chart is comparable to that of the EWMA chart with the unadjusted limits, that is, the limits for the known parameter case.
Note that, in the literature (eg, Saleh et al 2), authors who use the bootstrap approach often compare the OOC behavior of the unadjusted and the adjusted limits charts solely on the basis of a shift of size δ = 1. Figure 5 shows the boxplots of the OOC CARL distributions of the EWMA chart with the adjusted and the unadjusted limits for λ = 0.1, δ = 1, m = 50, and n = 5. Based on such figures, it has typically been concluded that the OOC performance of the bootstrap-adjusted limits is not radically different from that of the unadjusted limits. However, we have shown, through the CARL_IN,p values in Table 5, that this only holds when δ is moderate to large. Therefore, widening the control limits by the EPC criterion makes the chart somewhat insensitive to small process shifts but guarantees a nominal IC performance with high probability. This may be the trade-off one has to accept. However, it is possible to adjust this trade-off to get better OOC performance, by sacrificing a bit of IC performance. From Figure 6, it can be seen that increasing ε from 0% to 35% leads to a slight loss of IC performance. For example, when ε = 35%, the proportion of CARL_IN values that are less than 200 is 20%. But this is still far better than the 85% that occurs when the unadjusted limits (ε = 69%) are used. From Figure 7, it can also be seen that increasing ε from 0% to 35% leads to improved OOC performance, in the sense that the median of the OOC CARL distribution for ε = 35% is closer to the median (the dotted vertical line) of the OOC CARL distribution for ε = 69%. Thus, by sacrificing a bit of IC performance, it is possible to improve the EWMA chart's ability to detect small shifts.

6 | SUMMARY AND CONCLUSIONS
We study the impact of practitioner to practitioner variability on the performance of the Phase II EWMA chart. As in Epprecht et al, 4 we use the EPC to evaluate the performance of a Phase II EWMA chart with limits for the known parameter case and give recommendations about the number of Phase I subgroups required to achieve nominal performance. Our results show that, in order to attain or exceed a specified lower bound on the CARL_IN (given by ARL_0) with a specified high probability, more Phase I data are required than previously recommended by Saleh et al 2 and Jones et al. 1 Moreover, consistently with Jones et al 1 but contrary to Saleh et al, 2 our results also show that smaller values of λ may require a larger number of Phase I subgroups, that is, more Phase I data.
Because it is expensive and sometimes impractical to obtain such large amounts of Phase I data to estimate the process parameters and construct Phase II charts that guarantee a high probability of high CARL_IN values under the EPC, the control limits are adjusted as a function of the available Phase I data. In this regard, where analytical methods cannot be conveniently used, eg, for the EWMA chart or where normality cannot be assumed, the bootstrap approach has been an attractive choice. However, many SPC practitioners and researchers have felt that the bootstrap approach may be somewhat complex and have looked for an alternative. In this paper, we presented an alternative method that can be used instead of the bootstrap approach. Our method produces essentially the same results as the bootstrap approach, but it is faster. Based on the new method, tables and graphs of the adjusted charting constants are provided to help practitioners implement the Phase II EWMA chart with estimated parameters more easily in practice. The new charting constants are larger than the traditional ones commonly used for Case K. Thus, the EWMA charts constructed using these new constants have wider limits, particularly for small λ and/or small m.
Adjusting the limits of the EWMA chart using our new constants guarantees, with high probability, that the CARL_IN performance will be as nominally specified. However, there is some concern about the deterioration in the OOC CARL performance relative to using the unadjusted limits, because the adjusted limits are wider. This is of course true for all types of control charts with estimated parameters and has been observed, for example, for the Shewhart charts (see Goedhart et al 15). The extent of the deterioration depends on the size of the shift δ and on m. For moderate to large shifts (say δ = 1 and more), the difference in the OOC CARL performance between the adjusted and unadjusted limits is negligible. However, for small shifts (say δ = 0.25, 0.50) and small m, the difference is not negligible. Thus, adjusting the control limits can make the chart somewhat insensitive to small shifts. This insensitivity may be reduced by sacrificing some IC chart performance, as illustrated in Figures 6 and 7. Nonetheless, it is important to keep in mind that the IC chart performance is perhaps the most important one to have high confidence in, so sacrificing some OOC performance may be the price one has to pay when a given amount of Phase I data is used to estimate the parameters to construct a control chart.
Step 3: Calculate the 100pth percentile CARL_IN,p of the empirical CARL_IN distribution.
Step 4: Calculate PD = (CARL_IN,p − ARL_0)/ARL_0 × 100, the percentage difference between CARL_IN,p and ARL_0.
Interpretation of PD: A negative PD value means that CARL_IN,p is below ARL_0 by |PD| percentage points; a positive PD value means that CARL_IN,p is above ARL_0 by PD percentage points.
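The evaluation steps above can be sketched as a self-contained Python fragment (our own illustration, not the authors' R program). It assumes, consistent with the model in Section 2, that Z ~ N(0, 1) and that m(n − 1)Q² follows a χ² distribution with m(n − 1) degrees of freedom for the pooled standard deviation estimator (the c_4 constant is ignored, being indistinguishable from 1 here); λ = 0.2 with L = 2.86 and ARL_0 = 370 are illustrative values:

```python
import numpy as np
from scipy.stats import norm

def conditional_arl(lam, L, Q, Z, m, n, t=51):
    # In-control CARL (delta = 0) via the Markov chain method
    h = L * np.sqrt(lam / (2.0 - lam))
    w = 2.0 * h / t
    s = -h + (np.arange(t) + 0.5) * w
    hi = Q * (s[None, :] + w / 2 - (1 - lam) * s[:, None]) / lam + Z / np.sqrt(m)
    lo = Q * (s[None, :] - w / 2 - (1 - lam) * s[:, None]) / lam + Z / np.sqrt(m)
    P = norm.cdf(hi) - norm.cdf(lo)
    return np.linalg.solve(np.eye(t) - P, np.ones(t))[t // 2]

# Steps 1-2: simulate B Phase I samples through their summaries (Z, Q)
# and compute the corresponding empirical CARL_IN distribution
rng = np.random.default_rng(1)
lam, L, m, n, B = 0.2, 2.86, 50, 5, 1000
Zs = rng.standard_normal(B)
df = m * (n - 1)
Qs = np.sqrt(rng.chisquare(df, B) / df)
carls = np.array([conditional_arl(lam, L, q, z, m, n) for q, z in zip(Qs, Zs)])

# Step 3: the 100p-th percentile of the empirical distribution (p = 0.05)
carl_p = np.percentile(carls, 5)

# Step 4: the percentage difference relative to a nominal ARL_0 = 370
pd = (carl_p - 370.0) / 370.0 * 100.0
```

For moderate m such as 50, the 5th percentile falls well below the nominal value, so PD is negative, illustrating the effect documented in Table 1.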

APPENDIX C
A STEP-BY-STEP ALGORITHM FOR FINDING L USING THE EPC APPROACH
Step 1: Fix ε, p, ARL_0, m, n, λ, t, and a value of L in the search interval L ∈ [L_Case K, ∞).
Step 2: Generate the empirical distribution of CARL_IN (see Appendix A).
Step 3: Calculate the 100pth percentile CARL_IN,p from the empirical distribution.
Step 4: If CARL_IN,p > ARL_0(1 − ε), stop and use the current value of L; otherwise, increment L and return to Step 2.
To find L quickly for a given set of m values, start with the largest m: because the required L decreases as m increases, the solution for a larger m is a good starting value for the search at a smaller m.
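The search above can be sketched in Python as follows (our own illustrative sketch, not the authors' R program; the grid step of 0.05, B = 500 Phase I draws, and the starting value are assumptions of ours). Reusing the same simulated Phase I summaries for every candidate L keeps the empirical percentile monotone in L and speeds up the search:

```python
import numpy as np
from scipy.stats import norm

def conditional_arl(lam, L, Q, Z, m, n, t=51):
    # In-control CARL (delta = 0) via the Markov chain method
    h = L * np.sqrt(lam / (2.0 - lam))
    w = 2.0 * h / t
    s = -h + (np.arange(t) + 0.5) * w
    hi = Q * (s[None, :] + w / 2 - (1 - lam) * s[:, None]) / lam + Z / np.sqrt(m)
    lo = Q * (s[None, :] - w / 2 - (1 - lam) * s[:, None]) / lam + Z / np.sqrt(m)
    P = norm.cdf(hi) - norm.cdf(lo)
    return np.linalg.solve(np.eye(t) - P, np.ones(t))[t // 2]

def adjusted_L(lam, arl0, m, n, p=0.1, eps=0.0, L_start=3.0, step=0.05, B=500, seed=1):
    """Smallest L (on a grid) such that the empirical 100p-th percentile of
    CARL_IN exceeds arl0 * (1 - eps), following Steps 1-4 of Appendix C."""
    rng = np.random.default_rng(seed)
    Zs = rng.standard_normal(B)                   # one set of Phase I draws,
    df = m * (n - 1)                              # reused for every candidate L
    Qs = np.sqrt(rng.chisquare(df, B) / df)
    L = L_start                                   # Step 1: start near the Case K value
    while True:
        carls = [conditional_arl(lam, L, q, z, m, n)
                 for q, z in zip(Qs, Zs)]         # Step 2: empirical CARL_IN values
        if np.percentile(carls, 100 * p) > arl0 * (1 - eps):
            return L                              # Step 4: EPC condition met
        L += step                                 # otherwise increment L

L_adj = adjusted_L(lam=1.0, arl0=370, m=50, n=5, p=0.1, L_start=3.0)
```

For λ = 1, ARL_0 = 370, and m = 50, the returned constant exceeds the Case K value of 3.0, in line with the adjustment direction reported in Table 4.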