When the Shewhart control chart was introduced, the process parameters were assumed to be known. The in-control Average Run Length (ARL) and the probability of a false alarm (*P*) were introduced as metrics of in-control performance. These two metrics are related when the process data are i.i.d. normally distributed: the ARL equals 1/*P*. When the process parameters are unknown and have to be estimated, a similar relation holds for each estimated control chart, but the relation between the *expected* ARL (the average of the ARLs of all possible estimated charts) and the *expected* *P* is different. Control charts based on estimates are often designed such that the in-control ARL equals a predefined value. This paper shows that the expected in-control ARL is a less suitable design criterion. Copyright © 2016 John Wiley & Sons, Ltd.
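For i.i.d. normal data with known parameters, the ARL = 1/*P* relation is easy to verify numerically. The sketch below uses the classic 3-sigma Shewhart limits for a standardized normal statistic; this is a textbook illustration of the relation, not a computation taken from the paper:

```python
import math

def false_alarm_prob(k=3.0):
    """P: probability that an in-control N(0,1) statistic falls outside +/- k."""
    phi_k = 0.5 * (1.0 + math.erf(k / math.sqrt(2.0)))  # standard normal CDF at k
    return 2.0 * (1.0 - phi_k)

p = false_alarm_prob(3.0)   # ~0.0027 for 3-sigma limits
arl = 1.0 / p               # in-control ARL ~370.4 when parameters are known
```

With estimated parameters, the paper's point is that this identity holds per estimated chart but not between the *expected* ARL and the *expected* *P*.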

Toyota has long been recognized as a leader in lean manufacturing and production quality through a dedicated practice of continuous process improvement and waste elimination, embodied in its Toyota Production System (TPS) and the ‘Toyota Way’ principles. Toyota's long list of successes and quality achievements has inspired companies in all industry sectors, not just automotive, to apply the coveted TPS to their own process models in the hope of achieving the rewards that lean production promises. However, a recent series of automotive recall announcements associated with Toyota quality control has led many industry experts and students to reflect on possible inadequacies of the TPS House model. This article seeks to identify potential structural shortcomings and deficiencies of the TPS House in light of the root causes of the recall crisis, and to suggest a restructuring that better achieves continuous improvement.

In this paper, we propose four control charts for simultaneous monitoring of the mean vector and covariance matrix in multivariate multiple linear regression profiles in Phase II. The proposed control charts include the sum of squares exponentially weighted moving average (SS-EWMA) and sum of squares cumulative sum (SS-CUSUM) charts for monitoring the regression parameters and the corresponding covariance matrix, and the SS-EWMARe and SS-CUSUMRe control charts for monitoring the mean vector and covariance matrix of the residuals. The proposed methods are able to identify the out-of-control parameter responsible for a shift. The performance of the proposed control charts is compared with an existing method through Monte Carlo simulations. Moreover, the diagnostic performance of the proposed control charts is evaluated through simulation studies. The results show that the proposed control charts outperform the competing control chart. Finally, the applicability of the proposed control charts is illustrated using a real case of a calibration application in the automotive industry.

We present and discuss a stochastic model describing the wear process of cylinder liners in a marine diesel engine. The model is based on a stochastic differential equation, and Bayesian inference is illustrated. Corrosive action and measurement error, both quite small, are modeled with a Wiener process, whereas a jump process is used to describe the contribution of soot particles to the wear process. The model can be used to forecast the wear process and, consequently, to plan condition-based maintenance activities. In the paper, we provide a critical illustration of the mathematical and computational aspects of the model. We propose a strategy that, implemented for simulated and real data, allows for stable parameter estimation and forecasts.
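The general shape of such a wear model, drift plus Wiener noise plus a jump term for soot particles, can be sketched with a simple Euler simulation. All parameter values and the exponential jump-size law below are illustrative assumptions, not the paper's fitted model:

```python
import math, random

def simulate_wear(t=1.0, n_steps=100, drift=0.1, sigma=0.05,
                  jump_rate=2.0, jump_mean=0.05, rng=None):
    """One path of wear = drift*t + sigma*Brownian + compound-Poisson jumps."""
    rng = rng or random.Random()
    dt = t / n_steps
    w = 0.0
    for _ in range(n_steps):
        w += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        # Poisson(jump_rate*dt) jump count approximated by a Bernoulli for small dt
        if rng.random() < jump_rate * dt:
            w += rng.expovariate(1.0 / jump_mean)  # exponential jump size
    return w

rng = random.Random(42)
paths = [simulate_wear(rng=rng) for _ in range(4000)]
mean_wear = sum(paths) / len(paths)
# theoretical mean wear at t=1: drift*t + jump_rate*jump_mean*t = 0.1 + 0.1 = 0.2
```

Forecasting and condition-based maintenance planning then amount to propagating such paths forward and reading off the first-passage time to a wear threshold.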

With many predictors in regression, fitting the full model can induce multicollinearity problems. Ridge regression therefore provides a beneficial means of stabilizing the coefficient estimates in the fitted model. Outliers can distort many measures in data analysis and statistical modeling, while influential points can have a disproportionate impact on the estimated values of model parameters. Graphical summaries, called *firework plots*, are simple tools for evaluating the impact of outliers and influential points in regression. Variations of the plots allow visualization of the impact on the estimated parameters and variability. This paper describes how three-dimensional and pairwise firework plots, as well as scalable waterfall–firework plots, can be used to increase understanding of the contributions of individual observations and to complement other regression diagnostic techniques in the ridge regression setting. Using these firework plots, we can find outliers and influential points and their impact on model parameters, and show how, in some applications, the type of analysis used changes the impact of various observations. We illustrate the methods with two examples.

In public health surveillance, control charts based on the daily number of hospitalizations may be monitored to detect outbreaks and/or to plan the offer of health assistance. A generalized linear model with a negative binomial distribution is proposed for the number of hospitalizations; it depends on the exposed population and on covariates such as the day of the week and sine and cosine terms describing the seasonality. The objective of this study is to compare (in terms of *ARL*_{1}) the exponentially weighted moving average and the cumulative sum control charts for monitoring daily counts, based on simulations of the daily number of hospitalizations due to respiratory diseases for people over 65 years old in São Paulo city (Brazil).
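The mean structure of such a model is a log-linear predictor with a population offset, day-of-week effects, and harmonic seasonal terms. The sketch below shows that structure only; every coefficient value (and the population figure) is a hypothetical placeholder, not an estimate from the paper:

```python
import math

# hypothetical coefficients, for illustration only
BETA0 = -8.0                                        # intercept
DOW = [0.0, 0.05, 0.04, 0.03, 0.02, -0.10, -0.15]   # Mon..Sun effects
A1, B1 = 0.30, 0.10                                 # annual harmonic (seasonality)

def expected_hospitalizations(t, day_of_week, population=1_500_000):
    """Mean of the negative binomial daily count (log link, population offset)."""
    eta = (math.log(population) + BETA0 + DOW[day_of_week]
           + A1 * math.sin(2 * math.pi * t / 365.25)
           + B1 * math.cos(2 * math.pi * t / 365.25))
    return math.exp(eta)
```

The EWMA and CUSUM charts compared in the paper would then monitor the observed counts against this fitted mean.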

Degradation analysis is very useful in reliability assessment for complex systems and highly reliable products, because few or even no failures are expected within a reasonable life-test span. To further our study of degradation analysis, a novel Wiener process degradation model subject to measurement errors is proposed. Two transformed time scales are involved to depict the evolution of the statistical properties over time. A situation is discussed in particular where one transformed time scale gives a linear form for the degradation trend and the other gives a generalized quadratic form for the degradation variance. A one-stage maximum likelihood estimation of the parameters is constructed, and the statistical inferences of this model are further discussed. The proposed method is illustrated and verified in a comprehensive simulation study and in two real applications, for indium tin oxide (ITO) conductive film and light-emitting diode (LED) devices. The Wiener process model with mixed effects is considered as a reference. Comparisons show that the proposed method is more general and flexible and can provide reasonable results, even with considerably small sample sizes.
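Under the structure described (a linear transformed time scale for the trend, a generalized quadratic one for the variance, plus measurement error), the first two moments of the observed degradation take a simple closed form. The sketch below uses hypothetical parameter values to illustrate that structure; it is not the paper's fitted model:

```python
def degradation_mean(t, a=0.5):
    """Trend: drift a times a linear transformed time scale Lambda(t) = t."""
    return a * t

def degradation_var(t, sigma2=0.04, c1=1.0, c2=0.3, tau_eps2=0.01):
    """Variance: sigma^2 * tau(t) plus a constant measurement-error variance,
    with a generalized quadratic transformed scale tau(t) = c1*t + c2*t^2."""
    return sigma2 * (c1 * t + c2 * t * t) + tau_eps2
```

At t = 0 only the measurement-error variance remains, which is what distinguishes this model from an error-free Wiener process.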

This paper develops an adaptive exponentially weighted moving average (EWMA) chart that can be used either as a p chart for monitoring significant departures from in-control non-homogeneous probabilities of failure or success, or as a risk-adjusted control chart for the success or failure of an event. An example of a risk-adjustment process is monitoring the performance of a particular surgery over time, where we need to adjust for temporal changes in the patient case mix. If the magnitude of the shift is known in advance, as would be the case in some hypothesis testing applications, then the paper offers a way of selecting the appropriate exponential weights to be efficient at detecting such a variable shift. The adaptive EWMA p chart is tested using extensive simulations, and processes for its efficient design are offered. The example application offers practitioners a means of evaluating a trial in real time rather than the traditional approach of evaluating the trial at the end of the study period. This is helpful in deciding how long the trial should run, as well as in potentially adapting the design over time as more is understood about the trial uncertainties. This may be particularly useful in evaluating expensive trials.

This study focuses on the simulation of automotive warranty data, using a two-dimensional (2D) parametric approach based on copulas. We start with the description of a real automotive warranty database and follow it through to the building of a 2D parametric model for warranty prediction. The tasks of fitting the parameters and measuring the quality of the fit are also addressed. The accuracy of the proposed model is compared with previously published non-parametric models. Finally, using this 2D parametric model, data are generated and compared with the original data.

In Weibull analysis, the key variable to be monitored is the lower reliability index (*R*(*t*)), which is completely determined by the lower scale parameter (*η*) and the lower shape parameter (*β*). Based on the direct relationships of *η* and *β* with the log-mean (*μ*_{x}) and the log-standard deviation (*σ*_{x}) of the analyzed lifetime data, a pair of control charts to monitor a Weibull process is proposed. Moreover, because right-censored data are common in Weibull analysis and introduce uncertainty into the estimated Weibull parameters, in the proposed charts *μ*_{x} and *σ*_{x} are estimated from the conditional expected times of the related Weibull family. Both *μ*_{x} and *σ*_{x} are then used to monitor the Weibull process. In particular, *μ*_{x} is set as the lower control limit to monitor *η*, and *σ*_{x} is set as the upper control limit to monitor *β*. Numerical applications show how the charts work.
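The direct relationship the abstract relies on is the standard one: if T follows a Weibull distribution with scale η and shape β, then X = ln T follows a smallest-extreme-value distribution with location ln η and scale 1/β, so μ_x = ln η − γ/β and σ_x = π/(β√6), with γ the Euler–Mascheroni constant. A minimal round-trip sketch (censoring and the conditional-expected-time estimation are the paper's contribution and are not reproduced here):

```python
import math

EULER_GAMMA = 0.5772156649015329

def log_moments(eta, beta):
    """mu_x and sigma_x of X = ln T for T ~ Weibull(scale=eta, shape=beta)."""
    mu_x = math.log(eta) - EULER_GAMMA / beta
    sigma_x = math.pi / (beta * math.sqrt(6.0))
    return mu_x, sigma_x

def weibull_params(mu_x, sigma_x):
    """Invert: recover (eta, beta) from the log-mean and log-standard deviation."""
    beta = math.pi / (sigma_x * math.sqrt(6.0))
    eta = math.exp(mu_x + EULER_GAMMA / beta)
    return eta, beta
```

Monitoring (μ_x, σ_x) is therefore equivalent to monitoring (η, β), which is what makes the proposed pair of charts work.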

The main challenge in maintenance planning lies in the realistic modeling of the maintenance policy. This paper focuses on the maintenance optimization of complex repairable systems using Bayesian networks. A new policy is developed for periodic imperfect preventive maintenance with minimal repair at failure; this policy allows us to take into consideration several types of preventive maintenance with different efficiency levels. Bayesian networks are used for complex system modeling, allowing the evaluation of the model parameters. The Weibull parameters and the maintenance efficiency are evaluated with the proposed methodology using Bayesian inference. The approach developed in this paper is applied to a real system, to determine the optimal maintenance plan for a turbo-pump in the oil industry.

When optimizing a product or process with multiple responses, a two-stage Pareto front approach is a useful strategy to evaluate and balance trade-offs between different estimated responses while seeking optimum input locations for achieving the best outcomes. After objectively eliminating non-contenders in the first stage by finding a Pareto front of superior solutions, graphical tools can be used in the second, subjective stage to compare options, match them with user priorities, and identify a final solution. Until now, there have been limitations on the number of response variables and input factors that could effectively be visualized with existing graphical summaries. We present novel graphical tools that can be more easily scaled to higher dimensions, in both the input and response spaces, to facilitate informed decision making when simultaneously optimizing multiple responses. A key aspect of these graphics is that the potential solutions can be flexibly sorted to investigate specific queries and that multiple aspects of the solutions can be considered simultaneously. Recommendations are made about how to evaluate, in higher dimensions, the impact on decision making of the uncertainty associated with the estimated response surfaces.

Because of its advantages of design, performance, and effectiveness in reducing the effect of patients' prior risks, the risk-adjusted Bernoulli cumulative sum (CUSUM) chart is widely applied to monitor clinical and surgical outcome performance. In practice, it is beneficial to obtain evidence of improved surgical performance using the lower risk-adjusted Bernoulli CUSUM chart. However, it has been shown that the in-control performance of charts with constant control limits varies considerably across patient populations. In our study, we apply the dynamic probability control limits (DPCLs) developed for the upper risk-adjusted Bernoulli CUSUM charts to the lower and two-sided charts and examine their in-control performance. The simulation results demonstrate that the in-control performance of the lower risk-adjusted Bernoulli CUSUM charts with DPCLs can be controlled for different patient populations, because these limits are determined for each specific sequence of patients. In addition, practitioners can run upper and lower risk-adjusted Bernoulli CUSUM charts with DPCLs side by side and obtain the desired in-control performance for the two-sided chart for any particular sequence of patients for a surgeon or hospital.
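The charting statistic underneath is the standard Steiner-type risk-adjusted CUSUM recursion; the sketch below shows that recursion only (the dynamic probability control limits, which are the paper's focus, require simulation over the patient sequence and are not reproduced here). The odds ratio and risk values are illustrative:

```python
import math

def cusum_weight(p, y, odds_ratio=2.0):
    """Log-likelihood-ratio weight for a patient with pre-operative risk p and
    outcome y (y = 1 for failure), testing odds ratio R against 1."""
    denom = 1.0 - p + odds_ratio * p
    return math.log(odds_ratio / denom) if y == 1 else math.log(1.0 / denom)

def run_cusum(risks, outcomes, odds_ratio=2.0):
    """Upper risk-adjusted Bernoulli CUSUM path: S_t = max(0, S_{t-1} + W_t)."""
    s, path = 0.0, []
    for p, y in zip(risks, outcomes):
        s = max(0.0, s + cusum_weight(p, y, odds_ratio))
        path.append(s)
    return path
```

A failure pushes the statistic up and a success pulls it down, with the size of each step adjusted by the patient's prior risk; the DPCLs then replace the constant signaling threshold.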

A commonly used model to analyze experiments with normal responses does not distinguish between replicates and repeats. The same problem arises with binary and count responses where we can use a generalized linear model. In this article, we propose using models that explicitly allow for two sources of variation, that due to replicates and that due to repeats. In addition, for experiments carried out on high-volume, existing processes, there are often large amounts of data, collected in different ways, that are available to aid in the planning and analysis of the experiment. We demonstrate the value of using these available data with two detailed examples. We finish with a brief summary and raise some further issues.

A phase-I study is generally used when the population parameters are unknown. The performance of any phase-II chart depends on the preciseness of the control limits obtained from the phase-I analysis. The performance of phase-I bivariate dispersion charts has mainly been investigated for the bivariate normal distribution. However, this assumption is seldom fulfilled in reality. The current work develops and studies the performance of phase-I |*S*| and |*G*| charts for monitoring the process dispersion of bivariate non-normal distributions. The necessary control charting constants are determined for the bivariate non-normal distributions at a nominal false alarm probability (*FAP*_{0}). The performance of these charts is evaluated and compared for samples generated by the bivariate logistic, bivariate Laplace, bivariate exponential, or bivariate *t*_{5} distribution. The analysis shows that proper consideration of the underlying bivariate distribution in the construction of phase-I bivariate dispersion charts is essential to obtain a true picture of the in-control or out-of-control status of the process.

Complex physical systems are increasingly modeled by computer codes which aim at predicting reality as accurately as possible. During the last decade, code validation has attracted considerable interest within the scientific community because of the requirement to assess the uncertainty affecting the code outputs. Drawing on past contributions to this task, a testing procedure is proposed in this paper to decide whether a pure code prediction or a discrepancy-corrected one should be used to provide the best approximation of the physical system.

In the particular case where the computer code depends on uncertain parameters, this problem of model selection can be carried out in a Bayesian setting. It requires the specification of proper prior distributions, which are well known to have a strong impact on the results. An alternative is to specify non-informative priors. However, these are sometimes improper, which is a major barrier to computing the Bayes factor. A way to overcome this issue is to use the so-called *intrinsic Bayes factor* (IBF) to replace the ill-defined Bayes factor when improper priors are used. For computer codes which depend linearly on their parameters, the computation of the IBF is made easier thanks to explicit marginalization. In the paper, we present a special case where the IBF is equal to the standard Bayes factor when the *right-Haar* prior is specified on the code parameters and the scale of the code discrepancy.

On simulated data, the IBF has been computed for several prior distributions. A confounding effect between the code discrepancy and the linear code is pointed out. Finally, the IBF is computed for an industrial computer code used for monitoring power plant production.

The failure mode and effect analysis (FMEA) is a widely applied technique for prioritizing equipment failures in the maintenance decision-making domain. Recent improvements to the FMEA have largely focussed on addressing the shortcomings of the conventional FMEA, in which the risk priority number is incorporated as a measure for prioritizing failure modes. In this regard, considerable research effort has been directed towards addressing uncertainties associated with the risk priority number metrics, that is, occurrence, severity and detection. Despite these improvements, assigning these metrics remains largely subjective and mostly relies on expert elicitation, more so in instances where empirical data are sparse. Moreover, the FMEA results remain static and are seldom updated as new failure information becomes available. In this paper, a dynamic risk assessment methodology based on hierarchical Bayes theory is proposed. In the methodology, posterior distribution functions are derived for the risk metrics associated with equipment failure, where each posterior combines a prior function elicited from experts with observed evidence based on empirical data. Thereafter, the posterior functions are incorporated as input to a Monte Carlo simulation model from which the expected cost of failure is generated, and failure modes are prioritized on this basis. A decision scheme for selecting an appropriate maintenance strategy is proposed, and its applicability is demonstrated in a case study of thermal power plant equipment failures.

Because of the characteristics of a system or process, several prespecified changes may happen in some statistical process control applications. Thus, one possible and challenging problem in profile monitoring is detecting changes away from the ‘normal’ profile toward one of several prespecified ‘bad’ profiles. In this article, to monitor prespecified changes in linear profiles, two two-sided cumulative sum (CUSUM) schemes are proposed based on Student's *t*-statistic, which use two separate statistics and a single statistic, respectively. Simulation results show that the CUSUM scheme with a single statistic uniformly outperforms that with two separate statistics. In addition, both CUSUM schemes perform better than alternative methods in detecting small shifts in prespecified changes, and are comparable in detecting moderate or large shifts when the number of observations in each profile is large. To overcome a weakness of the proposed CUSUM methods, two modified CUSUM schemes are developed using the *z*-statistic and studied when the in-control parameters are estimated. Simulation results indicate that the modified CUSUM chart with a single charting statistic slightly outperforms that with two separate statistics in terms of the average run length and its standard deviation. Finally, illustrative examples indicate that the CUSUM schemes are effective.

The statistical performance of traditional control charts for monitoring process shifts is questionable when the underlying process does not follow a normal distribution. In such situations, nonparametric control charts are considered an efficient alternative. In this paper, a nonparametric exponentially weighted moving average (EWMA) control chart is developed based on the Wilcoxon signed-rank statistic using ranked set sampling. The average run length and some other associated characteristics are used to evaluate the performance of the proposed chart. A major advantage of the proposed nonparametric EWMA signed-rank chart is the robustness of its in-control run length distribution. Moreover, the proposed version of the EWMA signed-rank chart using ranked set sampling shows better detection ability than some of its competing counterparts, including the EWMA sign chart, the EWMA signed-rank chart, and the usual EWMA control chart using a simple random sampling scheme. An illustrative example is also provided for practical consideration.
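The building blocks of such a chart can be sketched as a Wilcoxon signed-rank statistic per subgroup fed into an EWMA recursion. This simplified version assumes no ties and no zero deviations, and omits the ranked-set-sampling step and the control limits, which follow the paper's design:

```python
def signed_rank(sample, target=0.0):
    """Wilcoxon signed-rank statistic: sum of sign(d_j) * rank(|d_j|),
    with d_j the deviations from the target (no ties assumed)."""
    d = [x - target for x in sample]
    order = sorted(range(len(d)), key=lambda j: abs(d[j]))
    sr = 0.0
    for rank, j in enumerate(order, start=1):
        sr += rank if d[j] > 0 else -rank
    return sr

def ewma_signed_rank(samples, lam=0.1, target=0.0):
    """EWMA of per-subgroup signed-rank statistics:
    Z_i = lam * SR_i + (1 - lam) * Z_{i-1}, starting from Z_0 = 0."""
    z, zs = 0.0, []
    for s in samples:
        z = lam * signed_rank(s, target) + (1.0 - lam) * z
        zs.append(z)
    return zs
```

Because the signed-rank statistic depends on the data only through signs and ranks, the in-control run length distribution of the resulting chart does not depend on the (symmetric) process distribution, which is the robustness property the abstract emphasizes.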

Full automation of metal cutting processes has been a long-held goal of the manufacturing industry. One key obstacle to achieving this ambition has been the inability to completely monitor the condition of the cutting tool in real time, as premature tool breakage and heavy tool wear can result in substantial costs through damage to the machinery and an increased risk of non-conforming items that have to be scrapped or reworked. Instead, the condition of the tool has to be indirectly monitored using modern sensor technology that measures the acoustic emission, sound, spindle power and vibration of the tool during a cut. An online monitoring procedure for such data is proposed. Firstly, the standard deviation is extracted from each sensor signal to summarise the state of the tool after each cut. Secondly, a multivariate autoregressive state space model is specified for estimating the joint effects and cross-correlation of the sensor variables in Phase I. Then we apply a distribution-free monitoring scheme, based on binomial-type statistics, to the model residuals in Phase II. The proposed methodology is illustrated using a case study of titanium alloy milling (a machining process used in the manufacture of aircraft landing gears) from the Advanced Manufacturing Research Centre in Sheffield, UK, and is demonstrated to outperform alternative residual control charts in this application. © 2016 The Authors Quality and Reliability Engineering International Published by John Wiley & Sons Ltd.

This paper deals with the optimization of industrial asset management strategies, whose profitability is characterized by the Net Present Value (NPV) indicator, assessed by a Monte Carlo simulator. The developed method consists of building a metamodel of this stochastic simulator, which allows the NPV probability distribution to be obtained for a given model input without running the simulator. The present work concentrates on emulating the quantile function of the stochastic simulator by interpolating well-chosen basis functions and metamodeling their coefficients (using the Gaussian process metamodel). This quantile function metamodel is then used to treat a maintenance strategy optimization problem (four systems installed on different plants), in order to optimize an NPV quantile. Within the Gaussian process framework, an adaptive design method (called quantile function expected improvement) is defined by extending the well-known efficient global optimization algorithm to our case. This yields an “optimal” solution using a small number of simulator runs.

Its wide application in practice makes the monitoring of the rate of rare events a popular research topic. Recently, a researcher proposed plotting the counts between events on an individuals *X*-chart with an upper control limit to detect process improvement, and plotting the reciprocals of the counts on an *X*-chart to detect process deterioration. He also used the median as the center line and the median moving range to obtain the control limits of both charts, to address the inflation of the standard deviation estimate caused by extreme values. In our paper, we investigate the statistical performance of the four proposed approaches using simulation. We find that using the mean results in a high proportion of ineffective control limits, while using the median avoids the issue of ineffective control limits but produces an unacceptably high proportion of false alarms.
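The median-based limits under discussion can be sketched as follows. The factor 3.145 (roughly 3/0.954, the conventional median-moving-range constant derived for normal data) is used here purely as an illustration of the construction; it is one of the assumptions whose consequences for geometric counts the paper examines:

```python
import statistics

def median_mr_limits(counts):
    """Individuals-chart limits from the median and the median moving range.
    Returns (LCL, center line, UCL)."""
    center = statistics.median(counts)
    moving_ranges = [abs(b - a) for a, b in zip(counts, counts[1:])]
    mmr = statistics.median(moving_ranges)
    return center - 3.145 * mmr, center, center + 3.145 * mmr
```

The paper's finding is that for skewed counts between rare events this construction, while immune to extreme-value inflation, yields too many false alarms.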

The Taguchi robust design method traditionally deals with single-characteristic problems. Various methods have been developed for extending the Taguchi single-characteristic robust design method to the case of multi-characteristic robust design problems. However, most of those methods have shortcomings in that they do not properly consider the variance–covariance structures among performance characteristics and/or do not preserve the original properties of the Taguchi signal-to-noise ratio for single-characteristic robust design problems. To overcome these shortcomings, this paper develops a multivariate loss function approach to multi-characteristic robust design problems with an appropriately defined signal-to-noise ratio. Its performance is evaluated using simulated examples, and the results indicate that it generally outperforms existing representative methods for correlated as well as uncorrelated experimental data.
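For reference, the single-characteristic signal-to-noise ratios whose properties the multivariate extension aims to preserve are the standard textbook forms (the paper's own multivariate S/N ratio is not reproduced here):

```python
import math, statistics

def sn_smaller_the_better(y):
    """Taguchi S/N ratio, smaller-the-better: -10*log10(mean of y^2)."""
    return -10.0 * math.log10(sum(v * v for v in y) / len(y))

def sn_nominal_the_best(y):
    """Taguchi S/N ratio, nominal-the-best: 10*log10(ybar^2 / s^2),
    with s^2 the sample variance."""
    ybar = statistics.mean(y)
    s2 = statistics.variance(y)
    return 10.0 * math.log10(ybar * ybar / s2)
```

Maximizing either ratio trades off mean performance against variability, which is the property a multivariate loss function must reproduce across correlated characteristics.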

This paper investigates regularization for continuously observed covariates that resemble step functions. The motivating examples come from operational test data from a recent US Department of Defense test of the Shadow Tactical Unmanned Aircraft system. The response variable, the quality of video provided by the Shadow to friendly ground units, was measured on an ordinal scale continuously over time. The functional covariates, altitude and distance, can be well approximated by step functions. Two approaches for regularizing these covariates are considered: a thinning approach commonly used within the Department of Defense to address autocorrelated time series data, and a novel ‘smoothing’ approach, which first approximates the covariates as step functions and then treats each ‘step’ as a uniquely observed data point. Datasets resulting from both approaches are fit using a mixed-model cumulative logistic regression, and we compare their results. While the thinning approach identifies altitude as having a significant impact on video quality, the smoothing approach finds no evidence of an effect. This difference is attributable to the larger effective sample size produced by thinning. System characteristics make it unlikely that video quality would degrade at higher altitudes, suggesting that the thinning approach has produced a Type I error. By accounting for the functional characteristics of the covariates, the novel smoothing approach has produced a more accurate characterization of the Shadow's ability to provide full motion video to supported units.

In this paper, motivated by a multiple profile monitoring problem, we introduce general functional exponentially weighted moving average (EWMA) control charts. When the functional data to be monitored are smooth enough to be representable by a finite-dimensional basis, a particular version of these functional EWMAs is shown to be a multivariate EWMA applied to the basis coefficients. Hence, it is called f-EWMA for monitoring single profiles and f-MEWMA for multiple profiles. The use of f-MEWMA is illustrated in connection with health monitoring of a steam sterilizer during its life cycle. Indeed, each sterilization run gives several profiles related to machine health, and degradation of the steam sterilizer during its life cycle modifies profile curvature in an unpredictable way. Hence, a control chart capable of monitoring multiple sterilization profiles during the sterilizer life cycle is needed. The f-EWMA thresholds, or control limits, have been computed using Monte Carlo simulations. Moreover, the f-EWMA performance has been assessed using experimental data generated in the laboratory according to anomalies considered relevant to the sterilizer maintenance program. Consequently, the average run length for these anomalies has been computed by applying Monte Carlo simulation to the experimental results.

Test laboratories with International Organization for Standardization/International Electrotechnical Commission 17025:2005 accreditation are obliged to calculate measurement uncertainty and declare the calculated value. Furthermore, they have to ensure the quality of the test results, and their participation in interlaboratory comparisons is mandatory for the accreditation. To this end, a standard procedure is available, and the laboratory's performance is also assessed by comparing its results with the reference value.

While several studies consider the problem of analyzing interlaboratory comparison data, the problem remains of how to include in the analysis all the measurements (containing uncertainties and outliers) and all the dispersion effects arising during the test activity. This paper aims to improve the analysis of interlaboratory comparison data by focusing on an error measurement model that considers the declared measured values and the corresponding uncertainties, and by also accounting for other dispersion effects involved in the interlaboratory activity. The problems of the small sample size and the presence of outliers are taken into account through the calculation of confidence intervals, also evaluating the contribution of the variances estimated for the uncertainties, namely, through the signal-to-noise and reliability ratios. Moreover, the laboratory's performance is assessed by discriminating between outliers related to the reference value and/or to the uncertainty. The results are satisfactory in view of the issues addressed in this study, especially considering the specific kind of data.

Component-based software development is now a widely used software development technique. In this paper, we propose a reliability evaluation model for component-based software systems, focusing on analyzing the effects of different components on software reliability. Our model applies complex network theory within the state-based evaluation approach. First, a detailed analysis is made to identify the components used in a software system. Next, the most influential node discovery algorithm from complex network theory is used to calculate the impact factor of each component. Finally, the reliability of the software system is evaluated based on the impact factors. Results show that the proposed model achieves better accuracy than conventional models by using the internal structure of the software system during evaluation.

This work proposes a method to improve the quality of service (QoS) provided to internet users by website servers. In particular, the goal is to minimize the expected number of browsing steps (clicks), also known as the expected path length, required to reach a website page by a community of users. We use Markov chain modeling to represent the transition probabilities from one webpage to another, which can be retrieved from web server logs, and the first passage from page to page. The proposed method deletes links among webpages to minimize the expected path length of the website. Three different methods for achieving this goal are examined: (i) a greedy deletion heuristic; (ii) an approximated branch and bound algorithm; and (iii) a cross-entropy metaheuristic. Numerical studies show that the proposed greedy heuristic finds the optimal solution in more than 60% of the tested cases, while in almost 90% of the cases the obtained solution is within 10% of the optimal solution. Both the approximated branch and bound and the cross-entropy methods achieved optimality in more than 80% of the tested cases; however, this came at a much higher computational cost.
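The expected path length objective is a standard first-passage computation: with Q the transition matrix restricted to non-target pages, the vector of expected clicks solves t = 1 + Q t, i.e. t = (I − Q)⁻¹ 1. The sketch below solves this for a tiny hypothetical site (the transition probabilities are made up; the link-deletion optimization itself is the paper's contribution):

```python
def expected_clicks(Q):
    """Solve (I - Q) t = 1 by Gaussian elimination with partial pivoting;
    Q is the transition matrix restricted to transient (non-target) pages."""
    n = len(Q)
    # augmented matrix [I - Q | 1]
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):                          # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    t = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution
        t[i] = (A[i][n] - sum(A[i][j] * t[j] for j in range(i + 1, n))) / A[i][i]
    return t

# hypothetical site: from page 0 go to page 1 w.p. 0.5, else to the target;
# from page 1 go to the target w.p. 1 (columns for the target are omitted)
Q = [[0.0, 0.5],
     [0.0, 0.0]]
t = expected_clicks(Q)  # expected clicks from each page to the target
```

Deleting a link redistributes a row of Q, so each candidate deletion can be scored by recomputing t, which is what the three search methods in the paper do at scale.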

Customer satisfaction is usually measured by questionnaires with statements scored on an anchored scale. Responses to such surveys can be treated as compositional data (CoDa) by considering the frequency distribution of ratings across questions or respondents. By CoDa, we mean vectors whose elements carry relative information, that is, whose total sum is not informative. In this paper, we explore the contribution of CoDa methodology to the analysis of customer satisfaction surveys. Compositional methods are based on the principle of working on coordinates, that is, values obtained as logarithms of ratios of the parts in a composition. We present common compositional tools such as descriptive statistics, control charts and principal component analysis (among others), and show an example of application to the annual customer satisfaction survey of the ABC Company. We highlight the advantage of the compositional approach in dealing with non-response, which turns into a difficulty when dealing with zeros. We finally underline the pros and cons of the proposed analysis. Copyright © 2016 John Wiley & Sons, Ltd.
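
As a small illustration of 'working on coordinates', the centered log-ratio (clr) transform is one standard CoDa coordinate system; the rating frequencies below are invented, not from the ABC survey:

```python
import numpy as np

def clr(parts):
    """Centered log-ratio coordinates of a composition (all parts > 0)."""
    x = np.asarray(parts, dtype=float)
    g = np.exp(np.log(x).mean())      # geometric mean of the parts
    return np.log(x / g)

# Invented frequency distribution of ratings 1..5 for one survey item.
counts = np.array([2.0, 5.0, 10.0, 30.0, 53.0])
coords = clr(counts / counts.sum())   # closing to proportions first
# clr coordinates always sum to zero and are invariant to rescaling,
# reflecting that only the ratios between parts carry information.
```

Note the zero problem mentioned in the abstract: a rating category with zero frequency makes the logarithm undefined and must be treated before transforming.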

The coating of materials plays an important role in various fields of engineering. Essential properties such as wear protection can be improved by a suitable coating technique. One of these techniques is high-velocity oxygen-fuel spraying. A drawback of high-velocity oxygen-fuel spraying is that it lacks reproducibility due to effects that are hard to measure directly. However, coating powder particles are observable over time during their flight towards the material and contain valuable information about the state of the process. Because of their smooth nature, measures of temperature and velocity can be treated as target variables in generalized function-on-scalar regression. We propose methods to perform residual analysis in this framework, aiming at the detection of individual residual functions that deviate from the majority of residuals. These methods help to detect anomalies in the process and hence improve the estimators. Functional target variables result in functional residuals whose analysis is barely explored. One reason might be that ordinary residual plots would have to be inspected at each observed point in time. We circumvent this infeasible procedure by the use of functional depths, which help to identify unusual residuals and thereby gain deeper insight into the data-generating process. In a simulation study, we find that a good depth for detecting trend outliers is the *h*-modal depth, as long as the link function is chosen correctly. In the case of shape outliers, the rFUNTA pseudo-depth performs well. Copyright © 2016 John Wiley & Sons, Ltd.

Multivariate control charts are well known to be more sensitive than univariate charts to the occurrence of variation in processes with two or more correlated quality variables. The use of separate univariate control charts to monitor a multivariate process can be misleading, as it ignores the correlation between the quality characteristics. The application of multivariate control charts allows for the simultaneous monitoring of the quality characteristics in a single chart. The charts operate on the assumption that process observations are normally distributed, but in practice this is not always the case. In this study, we examine and present multivariate dispersion control charts for detecting shifts in the covariance matrix of normal and non-normal bivariate processes. These control charts, referred to as *SMAX*, *QMAX*, *MDMAX* and *MADMAX*, rely on dispersion estimates, namely the sample standard deviation (*S*), interquartile range (*Q*), average absolute deviation from median (*MD*) and median absolute deviation (*MAD*), respectively. We compare the performances of these charts to the existing multivariate generalized variance |**S**| and

Health data are collected predominantly through sensors mounted at different locations in a system, and optimization of the sensor network has a significant influence on the reliability of the system health prognostics process. In this research, the effect of sensor reliability on placement optimization is studied. Sensors are treated as components in the system failure model. The study uses the ‘Priority AND’ (PAND) gate to evaluate the effect of the time dependencies of sensor and component failures on the optimal sensor placement. To this end, a PAND gate is added to the fault tree between each sensor and its corresponding component to develop the failure model of each sensor placement scenario. For calculating the probability of the top event, a Monte Carlo-based algebraic approach is proposed. In the algebraic approach, the temporal operator ‘BEFORE’ is used to calculate the probability of the PAND gate, which yields an analytical solution for the probability of each cut sequence. Because of the complexity of the analytical solution in practical problems, Monte Carlo simulation is applied to it, and the probability of each cut sequence is calculated. Consequently, the probability of the top event for each scenario is obtained, and all scenarios are ranked based on their top event probabilities. As a case study, optimization of sensor placement is demonstrated on a steam turbine, and the results are discussed. Copyright © 2016 John Wiley & Sons, Ltd.
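
The probability attached to a PAND gate, P(input A fails before input B and both fail within the mission time), can be estimated by Monte Carlo in the spirit the abstract describes. The sketch below assumes exponential failure times; the rates and mission time are illustrative, not the paper's data:

```python
import random

def pand_probability(rate_a, rate_b, mission_time, n=200_000, seed=1):
    """Monte Carlo estimate of P(A fails BEFORE B and both fail within
    the mission time) -- the event represented by a PAND gate -- for
    exponentially distributed failure times."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        ta = rng.expovariate(rate_a)   # failure time of input A
        tb = rng.expovariate(rate_b)   # failure time of input B
        if ta < tb <= mission_time:    # the 'A BEFORE B' ordering
            hits += 1
    return hits / n

est = pand_probability(rate_a=1.0, rate_b=1.0, mission_time=2.0)
# Analytic value for these rates: (1 - e^-4)/2 - e^-2 (1 - e^-2) ≈ 0.3738
```

For a full cut sequence, the same sampling loop would check the ordering constraints of all basic events jointly rather than one gate at a time.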

Correlation analysis is one of the standard and most informative descriptive statistical tools when studying relationships between variables in bivariate and multivariate data. However, when data are contaminated with outlying observations, the standard Pearson correlation might be misleading and result in erroneous outcomes.

In this paper, we propose three new approaches to find linear correlation based on a nonparametric method designed to analyse time series data, singular spectrum analysis. In these proposals, the correlation is obtained after removing the noise from the data by using singular spectrum analysis based methods. The usefulness of our proposals for contaminated data is assessed by Monte Carlo simulation with different schemes of contamination, and with applications to real data from the aluminium industry and to synthetic sparse data. In addition, model comparisons are made with robust hybrid filtering methods. Copyright © 2016 John Wiley & Sons, Ltd.

It is well known that measurement error of numerical measurements can be divided into a systematic and a random component and that only the latter component is estimable if there is no gold standard or reference standard available. In this paper, we consider measurement error of nominal measurements. We motivate that, on a nominal measurement scale too, measurement error has a systematic and a random component and only the random component is estimable without gold standard.

Especially in the literature about binary measurement error, it is common to quantify measurement error by ‘false classification probabilities’: the probabilities that measurement outcomes are unequal to the correct outcomes. These probabilities can be split into a systematic and a random component. We quantify the random component by ‘inconsistent classification probabilities’ (*ICP*s): the probabilities that a measurement outcome is unequal to the modal (instead of correct) outcome. Systematic measurement error then is the event that this modal outcome is unequal to the correct outcome.

We introduce an estimator for the *ICP*s and evaluate its properties in a simulation study. We end with a case study that demonstrates not only the determination and use of the *ICP*s but also how the proposed modeling can be used for formal hypothesis testing. Things to test include differences between appraisers and random classification by a single appraiser. Copyright © 2016 John Wiley & Sons, Ltd.

Dynamic fault trees (DFTs) are powerful tools to model industrial systems having dynamic failure mechanisms, such as sequence- and function-dependent failure behaviors. Yet for large and complex DFTs, quantitative analysis still poses great challenges. Up to now, many researchers have presented approaches to deal with this problem, among which the sum of disjoint products (SDP) methods, such as the dynamic binary decision tree, the sequential binary decision diagram (SBDD), and the improved SBDD, have proven to be efficient. In SDP methods, negating a generalized cut sequence is an unavoidable task. Yet, for a complex cut sequence expression in which normal, cold-spare and warm-spare basic events coexist, the negating operation is still difficult and needs to be studied further. In this paper, based on the De Morgan theorem, improved explicit formulas for negating a generalized cut sequence are presented. The new concept of the universal set of basic events and its operating rules are proposed to deduce simplified expressions of general enforcing occurring cut sequences and warm-spare occurring cut sequences. To validate the presented approaches, a typical system DFT is analyzed. The results indicate the reasonability and effectiveness of the improved negating formulas. Copyright © 2016 John Wiley & Sons, Ltd.

The zero-inflated Poisson distribution serves as an appropriate model when there is an excessive number of zeros in the data. This phenomenon frequently occurs in count data from high-quality processes. Usually, it is assumed that these counts exhibit serial independence, while a more realistic assumption is the existence of an autocorrelation structure between them. In this work, we study control charts for monitoring correlated Poisson counts with an excessive number of zeros. Zero-inflation in the process is captured via appropriate integer-valued time series models. Extensive numerical results are provided regarding the performance of the considered charts in the detection of changes in the mean of the process as well as the effects of zero-inflation on them. Finally, a real-data practical application is given. Copyright © 2016 John Wiley & Sons, Ltd.
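
A minimal sketch of the zero-inflated Poisson model underlying these charts, with serially independent counts for simplicity (the autocorrelated time-series versions studied in the paper are not reproduced here); the mixing probability and rate are invented:

```python
import random, math

def zip_sample(pi0, lam, n, seed=7):
    """n zero-inflated Poisson counts: a structural zero with probability
    pi0, otherwise a Poisson(lam) count (Knuth's method, fine for small lam)."""
    rng = random.Random(seed)
    def poisson():
        L, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1
    return [0 if rng.random() < pi0 else poisson() for _ in range(n)]

data = zip_sample(pi0=0.8, lam=2.0, n=100_000)
mean = sum(data) / len(data)   # theoretical mean: (1 - pi0) * lam = 0.4
```

The excess of zeros is visible in that the sample mean sits far below the Poisson rate lam, which is exactly what a chart designed for ordinary Poisson counts would misjudge.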

Regular two-level fractional factorial designs have complete aliasing in which the associated columns of multiple effects are identical. In this article, we show how Bayesian variable selection can be used to analyze experiments that use such designs. Bayesian variable selection naturally incorporates heredity in addition to sparsity and hierarchy. This prior information is used to identify the most likely combinations of active terms. The method is demonstrated on simulated and real experiments. Copyright © 2016 John Wiley & Sons, Ltd.

Multivariate nonparametric control charts can be very useful in practice and have recently drawn a lot of interest in the literature. Phase II distribution-free (nonparametric) control charts are used when the parameters of the underlying unknown continuous distribution are unknown and must be estimated from a sufficiently large Phase I reference sample. While a number of recent studies have examined the in-control (IC) robustness question related to the size of the reference sample for both univariate and multivariate normal theory (parametric) charts, in this paper, we study the effect of parameter estimation on the performance of the multivariate nonparametric sign exponentially weighted moving average (MSEWMA) chart. The in-control average run-length (ICARL) robustness and the out-of-control shift detection performance are both examined. It is observed that the required amount of Phase I data can be very (perhaps impractically) high if one wants to use the control limits given for the known parameter case and maintain a nominal ICARL, which can limit the implementation of these useful charts in practice. To remedy this situation, using simulations, we obtain the “corrected for estimation” control limits that achieve a desired nominal ICARL value when parameters are estimated for a given set of Phase I data. The out-of-control performance of the MSEWMA chart with the corrected control limits is also studied. The use of the corrected control limits with specific amounts of available reference data is recommended. Otherwise, the performance of the MSEWMA chart may be seriously affected under parameter estimation. Copyright © 2016 John Wiley & Sons, Ltd.

When dealing with practical problems of stress–strength reliability, one can work with fatigue life data and make use of the well-known relation between stress and cycles until failure. For some materials, this kind of data can involve extremely large values. In this context, this paper discusses the problem of estimating the reliability index *R* = *P*(*Y* < *X*) for stress–strength reliability, where stress *Y* and strength *X* are independent *q*-exponential random variables. This choice is based on the *q*-exponential distribution's capability to model data with extremely large values. We develop the maximum likelihood estimator for the index *R* and analyze its behavior by means of simulated experiments. Moreover, confidence intervals are developed based on parametric and nonparametric bootstrap. The proposed approach is applied to two case studies involving experimental data: The first one is related to the analysis of high-cycle fatigue of ductile cast iron, whereas the second one evaluates the specimen size effects on gigacycle fatigue properties of high-strength steel. The adequacy of the *q*-exponential distribution for both case studies and the point and interval estimates based on maximum likelihood estimator of the index *R* are provided. A comparison between the *q*-exponential and both Weibull and exponential distributions shows that the *q*-exponential distribution presents better results for fitting both stress and strength experimental data as well as for the estimated *R* index. Copyright © 2016 John Wiley & Sons, Ltd.
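
The index *R* = *P*(*Y* < *X*) can also be checked by straightforward Monte Carlo. The sketch below uses ordinary exponential variates (the *q* → 1 limit of the *q*-exponential) purely for illustration, since their closed form *R* = rate_y/(rate_x + rate_y) makes the estimate easy to verify; it is not the paper's maximum likelihood procedure:

```python
import random

def stress_strength_R(rate_x, rate_y, n=200_000, seed=3):
    """Monte Carlo estimate of R = P(Y < X) for independent exponential
    strength X (rate rate_x) and stress Y (rate rate_y)."""
    rng = random.Random(seed)
    wins = sum(
        rng.expovariate(rate_y) < rng.expovariate(rate_x) for _ in range(n)
    )
    return wins / n

R = stress_strength_R(rate_x=1.0, rate_y=2.0)
# Closed form for exponentials: R = rate_y / (rate_x + rate_y) = 2/3
```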

The in-control performance of any control chart is highly associated with the accuracy of estimation for the in-control parameter(s). For the risk-adjusted Bernoulli cumulative sum (CUSUM) chart with a constant control limit, it has been shown that the estimation error could have a substantial effect on the in-control performance. In our study, we examine the effect of estimation error on the in-control performance of the risk-adjusted Bernoulli CUSUM chart with dynamic probability control limits (DPCLs). Our simulation results show that the in-control performance of the risk-adjusted Bernoulli CUSUM chart with DPCLs is also affected by the estimation error. The most important factors affecting estimation error are the specified desired in-control average run length, the Phase I sample size, and the adverse event rate. However, the effect of estimation error is uniformly smaller for the risk-adjusted Bernoulli CUSUM chart with DPCLs than for the corresponding chart with a constant control limit under various realistic scenarios. In addition, we found a substantial reduction in the mean and variation of the standard deviation of the in-control run length when DPCLs are used. Therefore, use of DPCLs has yet another advantage when designing a risk-adjusted Bernoulli CUSUM chart. Copyright © 2016 John Wiley & Sons, Ltd.
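
For orientation, one common formulation of the risk-adjusted Bernoulli CUSUM (in the style of Steiner et al.) scores each patient by a log-likelihood ratio that depends on the predicted risk *p* and the odds ratio under test; the constant control limit, odds ratio, and risks below are illustrative assumptions, and the dynamic-limit variant studied in the paper is not reproduced here:

```python
import math

def cusum_weight(y, p, odds_ratio):
    """Log-likelihood-ratio score for a 0/1 outcome y with risk-adjusted
    predicted probability p, testing a shift by `odds_ratio` in the odds."""
    denom = 1.0 - p + odds_ratio * p
    return math.log(odds_ratio / denom) if y else math.log(1.0 / denom)

def run_cusum(outcomes, risks, odds_ratio=2.0, limit=4.5):
    """First time the upper CUSUM exceeds a constant control limit."""
    c = 0.0
    for t, (y, p) in enumerate(zip(outcomes, risks), start=1):
        c = max(0.0, c + cusum_weight(y, p, odds_ratio))
        if c > limit:
            return t
    return None

# A run of adverse events at predicted risk 0.1 drives the chart upward.
signal_time = run_cusum([1] * 20, [0.1] * 20)
```

Each adverse outcome adds a positive score and each good outcome a negative one, so a low-risk patient who suffers an adverse event moves the chart more than a high-risk one.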

To monitor a Weibull process with individual measurements, methods such as the power transformation, the inverse erf function, and the Box–Cox transformation have been used to transform the Weibull data to a normal distribution. In this study, we conduct a simulation study to compare their performances in terms of bias and mean square error, and a practical guide is recommended. Additionally, we present the maximum exponentially weighted moving average chart based on the transformation method to monitor a Weibull process with individual measurements. We compare the average run lengths of the proposed chart and the combined individual and moving range charts under three cases: mean shifts, sigma shifts, and simultaneous mean and sigma shifts. It is shown that the proposed control chart outperforms the combined individual and moving range charts in all three cases. Moreover, two examples are used to illustrate the applicability of the proposed control chart. Copyright © 2016 John Wiley & Sons, Ltd.
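
One of the transformations compared, the power transformation, can be sketched as follows: for Weibull data, *X*^shape is exponential, and an exponential variate raised to the power 1/3.6 is approximately normal (Nelson's power transformation). The shape value below is an illustrative assumption, not one of the paper's cases:

```python
import random, math

def skewness(xs):
    """Sample skewness (third standardized moment)."""
    m = sum(xs) / len(xs)
    s2 = sum((x - m) ** 2 for x in xs) / len(xs)
    return sum((x - m) ** 3 for x in xs) / len(xs) / s2 ** 1.5

rng = random.Random(5)
shape = 1.5                                # illustrative Weibull shape
# Inverse-CDF Weibull sampling: X = (-ln U)^(1/shape), scale 1.
raw = [(-math.log(rng.random())) ** (1.0 / shape) for _ in range(100_000)]
# X**shape is exponential; exponential**(1/3.6) is approximately normal.
transformed = [x ** (shape / 3.6) for x in raw]
# skewness(raw) is clearly positive; skewness(transformed) is near zero.
```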

Non-stochastic simulation models, such as finite element or computational fluid dynamics models, often support real experiments in industrial research. It has become common practice to provide a meta-model, as computer experiments can be highly complex and time-consuming and the design space is often broad. The meta-model is an approximation of the computer experiment's response, adapted both globally and locally over the design space in order to capture local minima/maxima. The Kriging model, first proposed in geostatistics, is undoubtedly the most popular meta-model because of its recognized ability to provide high-quality predictions. The underlying correlation structure can be evaluated either by estimating the correlation parameters or by means of a variogram. In this paper, the performance of the Kriging model is compared with that of an Artificial Neural Network meta-model in order to determine which model guarantees higher accuracy in predicting the results of four-dimensional computational fluid dynamics experiments for low-pressure turbines, where energy loss values are provided. Copyright © 2016 John Wiley & Sons, Ltd.

Generalized equations for calculating the probability of failure on demand (PFD) in accordance with the IEC 61508 standard, together with a model based on Markov processes taking into account common cause failures, are proposed in this paper. The solutions presented in the standard and in many references concentrate on simple *k*-out-of-*n* architectures; the equations proposed in the standard concern cases for *n* ≤ 3. In safety-related systems applied in industry, architectures with a number of elements *n* larger than three often occur. For this reason, a generalized equation for calculating PFD is proposed. For the cases presented in the standard, the proposed equation provides identical results. The presented simplified Markov model allows the determination of the system availability (*A*(*t*)) and unavailability (1–*A*(*t*)) as well as their values in the steady state (*A* and 1–*A*). This model can be an alternative method of PFD calculation for various *k*-out-of-*n* architectures with self-diagnostic elements. Calculations performed according to the proposed models provide very similar results. The developed models are suitable for practical implementation in calculations of the safety integrity level. Copyright © 2016 John Wiley & Sons, Ltd.
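
For identical, independent elements, the *k*-out-of-*n* availability at the heart of such PFD formulas reduces to a binomial tail sum. The sketch below is only this independence baseline; it deliberately ignores the common cause failures and self-diagnostics that the paper's models account for:

```python
from math import comb

def k_out_of_n_availability(k, n, p):
    """Probability that at least k of n identical, independent elements
    work, each available with probability p (no common cause failures)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

A = k_out_of_n_availability(2, 3, 0.9)   # 2-out-of-3 voting: 0.972
unavailability = 1 - A                   # steady-state 1 - A
```

The generalized equations in the paper serve precisely the cases this baseline cannot: large *n*, dependent failures, and imperfect diagnostics.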

Exponentially weighted moving average (EWMA) control charts are commonly used for the detection of small shifts, in contrast to Shewhart charts, which are commonly used for the detection of large shifts in the process. Many interesting features of EWMA charts have been studied for complete data in the literature. The aim of the present study is to introduce and compare the double exponentially weighted moving average (DEWMA) and EWMA control charts under type-I censoring for the Poisson-exponential distribution. The monitoring of mean level shifts using censored data is of great interest in many applied problems. Moreover, a new idea of the conditional median is introduced and compared with the existing conditional expected values approach for monitoring small mean level shifts. The performance of the DEWMA and EWMA charts is evaluated using the average run length, expected quadratic loss, and performance comparison index measures. Optimum sample size comparisons for the specified and unspecified parameters are also part of this study. Two applications for practical consideration are also discussed. It is observed that different censoring rates and the size of the shift significantly affect the performance of the EWMA and DEWMA charts. Copyright © 2016 John Wiley & Sons, Ltd.
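
The EWMA and DEWMA statistics themselves are simple recursions, the DEWMA smoothing the EWMA sequence a second time. The sketch below shows only these recursions; censoring and the Poisson-exponential model are omitted, and the data and smoothing constant are invented:

```python
def ewma(xs, lam, z0):
    """EWMA recursion z_t = lam * x_t + (1 - lam) * z_{t-1}."""
    z, out = z0, []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

def dewma(xs, lam, z0):
    """Double EWMA: apply the same smoothing to the EWMA sequence."""
    return ewma(ewma(xs, lam, z0), lam, z0)

stats = dewma([4.8, 5.1, 5.0, 6.5, 6.6], lam=0.2, z0=5.0)
```

With lam = 1 the EWMA reduces to the raw observations, and small lam values give the long memory that makes these charts sensitive to small shifts.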

Cognitive Reliability and Error Analysis Method (CREAM) is a common second-generation Human Reliability Analysis (HRA) method. In this paper, to improve the capabilities of CREAM, we propose a probabilistic method based on a Bayesian Network (BN) to determine the control mode and quantify the Human Error Probability (HEP). The BN development process is described in a four-phase methodology including (i) definition of the nodes and their states; (ii) building the graphical structure; (iii) quantification of the BN through assessment of the Conditional Probability Table (CPT) values; and (iv) model validation. Intractability of knowledge acquisition for large CPTs is the most significant limitation of existing BN models of CREAM. Thus, the main contribution of this paper lies in its application of the Recursive Noisy-OR (RN-OR) gate to treat the assessment of large CPTs and ease knowledge acquisition. RN-OR allows the combination of dependent Common Performance Conditions (CPCs). Finally, a quantitative HEP analysis is applied to enable more precise estimation of HEP through a probabilistic approach. Copyright © 2016 John Wiley & Sons, Ltd.

In this paper, we propose a new acceptance sampling plan based on the exponentially weighted moving average (EWMA) with the yield index for simple linear profiles with one-sided specifications. The EWMA model provides information about the quality characteristics of the current and preceding lots. The plan parameters are determined according to the smoothing constant of the EWMA statistic and the various risks between the producer and the customer. When the smoothing constant equals one, the proposed plan reduces to the traditional single sampling plan. The number of profiles required for lot sentencing using the proposed method is more economical than for the traditional single sampling plan: the smaller the value of the smoothing constant, the lower the number of profiles required. A practical example from wind turbine manufacturing is used to illustrate the performance of the proposed approach. Copyright © 2016 John Wiley & Sons, Ltd.

The purpose of this paper is to select appropriate maintenance strategies for each failure mode of the functionally significant items of a conventional milling machine. To describe the criticality analysis of the machine, this paper presents a study of reliability-centered maintenance with fuzzy logic and compares it with the conventional method. On the basis of fuzzy logic, failure mode and effect analysis is integrated with the fuzzy linguistic scale method. The weighted Euclidean distance formula and centroid defuzzification are then used to calculate the risk priority number. Based on the risk priority number values, a criticality ranking was decided, and appropriate maintenance strategies were suggested for each failure mode. The results also show that a more accurate ranking can be obtained by applying fuzzy logic with linguistic rules to failure mode and effect analysis. Copyright © 2016 John Wiley & Sons, Ltd.

In the last 5 years, research on distribution-free (nonparametric) process monitoring has registered phenomenal growth. A Google Scholar database search in early September 2015 reveals 246 articles on distribution-free control charts during 2000–2009 and 466 articles in the following years. These figures are about 1400 and 2860, respectively, if the word ‘nonparametric’ is used in place of ‘distribution-free’. Distribution-free charts do not require any prior knowledge about the process parameters. Consequently, they are very effective in monitoring various non-normal and complex processes. Traditional process monitoring schemes use two separate charts, one for monitoring process location and the other for process scale. Recently, various schemes have been introduced to monitor process location and process scale simultaneously using a single chart. The performance advantages of such charts have been clearly established. In this paper, we introduce a new graphical device, namely, the circular-grid chart, for simultaneous monitoring of process location and process scale based on Lepage-type statistics. We also discuss the general form of Lepage statistics and show that a new modified Lepage statistic is often better than the traditional Lepage statistic. We offer a new and attractive post-signal follow-up analysis. A detailed numerical study based on Monte-Carlo simulations is performed, and some illustrations are provided. A clear guideline is offered to help practitioners select the best chart among the various alternatives for simultaneous monitoring of location and scale. The practical application of the charts is illustrated. Copyright © 2016 John Wiley & Sons, Ltd.

Big data is big news, and large companies in all sectors are making significant advances in their customer relations, product selection and development and consequent profitability through using this valuable commodity. Small and medium enterprises (SMEs) have proved themselves to be slow adopters of the new technology of big data analytics and are in danger of being left behind. In Europe, SMEs are a vital part of the economy, and the challenges they encounter need to be addressed as a matter of urgency. This paper identifies barriers to SME uptake of big data analytics and recognises their complex challenge to all stakeholders, including national and international policy makers, IT, business management and data science communities.

The paper proposes a big data maturity model for SMEs as a first step towards an SME roadmap to data analytics. It considers the ‘state-of-the-art’ of IT with respect to usability and usefulness for SMEs and discusses how SMEs can overcome the barriers preventing them from adopting existing solutions. The paper then considers management perspectives and the role of maturity models in enhancing and structuring the adoption of data analytics in an organisation. The history of total quality management is reviewed to inform the core aspects of implanting a new paradigm. The paper concludes with recommendations to help SMEs develop their big data capability and enable them to continue as the engines of European industrial and business success. Copyright © 2016 John Wiley & Sons, Ltd.

Multivariate capability analysis has been the focus of study in recent years, during which many authors have proposed different multivariate capability indices. In the operative context, capability indices are used as measures of the ability of the process to operate according to specifications. Because the numerical value of the index is used to draw conclusions about the capability of the process, it is essential to bear in mind that this value is almost always obtained from a sample of process units. It is therefore necessary to know the properties of the indices when they are calculated from sample information, in order to assess the goodness of the inferences made from them.

In this work, we conduct a simulation study to investigate the distributional properties of two existing indices: the NMCpm index, based on a ratio of volumes, and the *Mp*_{2} index, based on principal component analysis. We analyze the relative bias and the mean square error of the estimators of the indices, and we also obtain their empirical distributions, which are used to estimate the probability that the indices correctly classify a process as capable or incapable. The results allow us to recommend the use of one of these indices, as it has shown better properties. Copyright © 2016 John Wiley & Sons, Ltd.
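
The kind of simulation described, estimating the bias of a capability-index estimator from repeated samples, can be sketched in the simpler univariate case (classical Cp with known specification limits; the sample size and number of replications are arbitrary choices, and the multivariate indices studied in the paper are not reproduced):

```python
import random, statistics

def cp_hat_relative_bias(n=10, reps=20_000, seed=11):
    """Monte Carlo relative bias of the classical Cp estimator
    (USL - LSL) / (6 s) when the true Cp is exactly 1."""
    rng = random.Random(seed)
    spec_width = 6.0                 # sigma = 1, so true Cp = 6 / 6 = 1
    total = 0.0
    for _ in range(reps):
        s = statistics.stdev(rng.gauss(0.0, 1.0) for _ in range(n))
        total += spec_width / (6.0 * s)
    return total / reps - 1.0        # positive: 1/s overestimates 1/sigma

bias = cp_hat_relative_bias()        # roughly +0.09 for n = 10
```

Even in this one-dimensional case, small samples systematically overstate capability, which is the kind of property the paper's study quantifies for the multivariate indices.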

A synthetic chart and a runs-rules chart that are each combined with a basic X̄ chart are called a synthetic-X̄ chart and an improved runs-rules X̄ chart, respectively. This paper gives the zero-state and steady-state theoretical results of the synthetic-X̄ and improved runs-rules X̄ monitoring schemes. The synthetic-X̄ and improved runs-rules schemes can each be classified into four different categories, that is, (i) non-side-sensitive, (ii) standard side-sensitive, (iii) revised side-sensitive, and (iv) modified side-sensitive. In this paper, we first give the operation and, secondly, the general form of the transition probability matrices for each of the categories. Thirdly, in steady state, we show that for each of the categories, the three methods that are widely used in the literature to compute the initial probability vectors result in different probability expressions (or values). Fourthly, we derive closed-form expressions of the average run-length (*ARL*) vectors for each of the categories, so that multiplying each of these *ARL* vectors by the zero-state and steady-state initial probability vectors yields the zero-state and steady-state *ARL* expressions. Finally, we formulate closed-form expressions of the extra quadratic loss function for each of the categories. Copyright © 2016 John Wiley & Sons, Ltd.
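
The ARL machinery referred to above rests on standard absorbing-Markov-chain algebra: with Q the transition matrix among the transient (non-signalling) states, the ARL vector is (I − Q)⁻¹1. A one-state sketch reproducing the familiar Shewhart in-control ARL (the 0.0027 signal probability is the usual 3-sigma value, not a matrix taken from this paper):

```python
import numpy as np

def zero_state_arl(Q, start=0):
    """Zero-state ARL of a chart whose in-control run is an absorbing
    Markov chain with transient-state transition matrix Q:
    ARL vector = (I - Q)^(-1) 1, evaluated at the starting state."""
    Q = np.asarray(Q, dtype=float)
    arl = np.linalg.solve(np.eye(Q.shape[0]) - Q, np.ones(Q.shape[0]))
    return arl[start]

# One transient state with signal probability 0.0027 recovers the
# classical in-control ARL of about 370.4.
arl = zero_state_arl([[1 - 0.0027]])
```

The runs-rules and synthetic categories in the paper differ only in the size and structure of Q; the steady-state ARL replaces the unit starting vector by an initial probability vector.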

The prevalence of large observational databases offers potential for identifying predictive relationships among variables of interest, although observational data are generally far less informative and less reliable than experimental data. We consider the problem of selecting a subset of records from a large observational database, for the purpose of designing a small but powerful experiment involving the selected records. It is assumed that the database contains the predictor variables but is missing the response variable, and that the purpose is to fit a logistic regression model after the response is obtained via the experiment. Active learning methods, which treat a similar problem, usually select records sequentially and focus on the single objective of classification accuracy. In contrast, many emerging applications require batch sample designs and have a variety of objectives that may include classification accuracy or accuracy of the estimated parameters, the latter being more in line with the optimal design of experiments (DOE) paradigm. The aim of this paper is to explore batch sampling from databases from a DOE perspective, particularly regarding the configuration, performance, and robustness of the designs that result from the different criteria. Through extensive simulation, we show that DOE-based batch sampling methods can substantially outperform random sampling and the entropy method that is popular in active learning. We also provide insight and guidelines for selecting appropriate design criteria and modeling assumptions. Copyright © 2016 John Wiley & Sons, Ltd.
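
A DOE-flavoured batch selection of the kind discussed can be sketched as a greedy D-optimality search: rows are added one at a time to maximize the log-determinant of the logistic information matrix. The assumed coefficients, ridge term, and data below are illustrative assumptions, not the paper's criteria or databases:

```python
import numpy as np

def greedy_d_optimal(X, beta, batch):
    """Greedily pick `batch` rows of X maximizing det(X_s' W X_s), the
    D-optimality criterion for a logistic model with assumed coefficients
    beta, where w_i = p_i * (1 - p_i)."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    w = p * (1.0 - p)
    chosen, M = [], 1e-8 * np.eye(X.shape[1])   # small ridge keeps M invertible
    for _ in range(batch):
        gains = [
            -np.inf if i in chosen
            else np.linalg.slogdet(M + w[i] * np.outer(X[i], X[i]))[1]
            for i in range(len(X))
        ]
        best = int(np.argmax(gains))
        chosen.append(best)
        M = M + w[best] * np.outer(X[best], X[best])
    return chosen

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])  # intercept + 1 predictor
picks = greedy_d_optimal(X, beta=np.array([0.0, 1.0]), batch=10)
```

Because the true coefficients are unknown before the experiment, beta here plays the role of a prior guess; the paper's simulations are exactly about how such choices affect design performance and robustness.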

Statistical profile monitoring has received much attention in recent years. While numerous contributions and applications have been demonstrated in the literature, the control statistics across many of the proposed methodologies have largely remained unchanged, which hinders further improvement of the monitoring schemes. In this paper, we propose a novel approach that leverages the information in the area formed between the sampled and in-control profiles to improve monitoring performance. Specifically, we develop a control statistic based on the convolution of the observed and in-control profiles to monitor shifts in the slope and intercept parameters. We also extend the mean square statistic to an area-weighted total sum of squares, to more effectively monitor shifts in the standard deviation. Extensive simulation studies are conducted to demonstrate the performance of the proposed methodology in comparison with some existing approaches. Copyright © 2016 John Wiley & Sons, Ltd.

In this article, a new bivariate semiparametric Shewhart-type control chart is presented. The proposed chart is based on the bivariate statistic (*X*_{(r)}, *Y*_{(s)}), where *X*_{(r)} and *Y*_{(s)} are the order statistics of the respective *X* and *Y* test samples. It is created by considering a straightforward generalization of the well-known univariate median control chart and can be easily applied because it calls for the computation of two single order statistics. The false alarm rate and the in-control run length are not affected by the marginal distributions of the monitored characteristics. However, its performance is typically affected by the dependence structure of the bivariate observations under study; therefore, the suggested chart may be characterized as a semiparametric control chart.

An explicit expression for the operating characteristic function of the new control chart is obtained. Moreover, exact formulae are provided for the calculation of the alarm rate given that the characteristics under study follow specific bivariate distributions. In addition, tables and graphs are given for the implementation of the chart for some typical average run length values and false alarm rates. The performance of the suggested chart is compared with that of the traditional *χ*^{2} chart as well as to the nonparametric *SN*^{2} and *SR*^{2} charts that are based on the multivariate form of the sign test and the Wilcoxon signed-rank test, respectively. Finally, in order to demonstrate the applicability of our chart, a case study regarding a real-world problem related to winery production is presented. Copyright © 2016 John Wiley & Sons, Ltd.

Quality has become a key determinant of success in all aspects of industry. The exponentially weighted moving average (EWMA) control chart is an important statistical process control tool used to monitor and improve the quality of industrial processes. There are many strategies for enhancing the performance of control charts, including the choice of an efficient plotting statistic, the choice of an efficient sampling design, the application of runs rules, and the use of auxiliary information, among others. In this study, we propose nine different signaling schemes, based on the exploitation of auxiliary information, to enhance the performance of an EWMA control chart for the location parameter. Performance evaluation of the proposed schemes is carried out in terms of average run length. The proposals are compared with the classical as well as the auxiliary-information-based EWMA and cumulative sum charts, and the comparisons indicate that the proposed schemes perform better than the counterparts under discussion. Copyright © 2016 John Wiley & Sons, Ltd.
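As a minimal illustration of the plotting statistic underlying such charts (a standard textbook construction, not the authors' auxiliary-information schemes; the smoothing constant `lam` and limit width `L` below are illustrative assumptions), the following sketch computes the classical EWMA statistic and its exact time-varying control limits for an in-control N(0, 1) process:

```python
import math

def ewma_chart(data, mu0=0.0, sigma=1.0, lam=0.2, L=3.0):
    """Return (EWMA statistic, lower limit, upper limit) per observation."""
    z = mu0  # the EWMA statistic starts at the in-control mean
    out = []
    for i, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        # exact (time-varying) variance of the EWMA statistic at step i
        var = sigma**2 * (lam / (2 - lam)) * (1 - (1 - lam)**(2 * i))
        half_width = L * math.sqrt(var)
        out.append((z, mu0 - half_width, mu0 + half_width))
    return out

points = ewma_chart([1.0, 2.0, 3.0])
```

A point plotting outside its limits signals an out-of-control condition; because early limits are narrower, the chart reacts quickly even at start-up.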

Nonparametric control charts can be a useful alternative in practice when knowledge about the underlying distribution is lacking. In this study, a nonparametric cumulative sum (CUSUM) sign control chart based on ranked set sampling is proposed for monitoring and detecting possible deviations from the process mean. Ranked set sampling is an effective method when ranking observations is inexpensive but measurement is costly or perhaps destructive. The average run length is used as the performance measure for the proposed nonparametric CUSUM sign chart. A simulation study shows that the proposed CUSUM sign chart using ranked set sampling generally outperforms both the nonparametric CUSUM sign chart and the parametric CUSUM control chart based on simple random sampling. An illustrative example is also provided for practical consideration. Copyright © 2016 John Wiley & Sons, Ltd.
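A minimal sketch of a one-sided CUSUM of sign statistics (not the authors' exact ranked-set formulation; the target `theta0` and reference value `k` are illustrative assumptions): each sample contributes the count of observations above the target minus those below it, and the CUSUM accumulates excesses over `k`.

```python
def cusum_sign(samples, theta0=0.0, k=0.5):
    """One-sided upper CUSUM of sign statistics.

    For each sample, the sign statistic counts observations above the
    target theta0 minus those below it (ties with theta0 are dropped);
    the CUSUM accumulates excesses over the reference value k."""
    c_plus = 0.0
    path = []
    for sample in samples:
        t = sum(1 if x > theta0 else -1 for x in sample if x != theta0)
        c_plus = max(0.0, c_plus + t - k)
        path.append(c_plus)
    return path
```

A signal would be raised once the path exceeds a decision limit *h* chosen to give the desired in-control average run length; because only signs are used, the in-control behavior is distribution-free.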

The variable sample size (VSS) X̄ chart, devoted to the detection of moderate mean shifts, has been widely investigated under the average run-length criterion. Because the shape of the run-length distribution changes with the magnitude of the mean shift, the average run length can be a misleading measure, and the use of percentiles of the run-length distribution is considered more intuitive. This paper develops two optimal designs of the VSS X̄ chart, by minimizing (i) the median run length for deterministic shift sizes and (ii) the expected median run length for unknown shift sizes. The 5th and 95th percentiles are also provided in order to measure the variation in the run-length distribution. Two VSS schemes are considered in this paper, namely, schemes in which either the small sample size (*n _{S}*) or the large sample size (*n _{L}*) is used for the first subgroup.
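The percentile-based criteria above can be estimated by simulation. The sketch below is illustrative only: a fixed-sample-size Shewhart X̄ chart with known in-control parameters and 3-sigma limits stands in for the VSS chart, whose adaptive sample-size rules are not reproduced here. It returns the 5th percentile, median, and 95th percentile of the simulated run-length distribution.

```python
import random
import statistics

def simulate_run_lengths(mu_shift=0.0, n=5, L=3.0, reps=500, seed=7):
    """Simulate run lengths of an X-bar chart with in-control mean 0,
    standard deviation 1, and L-sigma limits for the sample mean."""
    rng = random.Random(seed)
    limit = L / n**0.5  # control limit for the sample mean
    run_lengths = []
    for _ in range(reps):
        t = 0
        while True:
            t += 1
            xbar = statistics.fmean(rng.gauss(mu_shift, 1.0) for _ in range(n))
            if abs(xbar) > limit:
                run_lengths.append(t)
                break
    run_lengths.sort()
    return (run_lengths[len(run_lengths) // 20],       # ~5th percentile
            run_lengths[len(run_lengths) // 2],        # median run length
            run_lengths[19 * len(run_lengths) // 20])  # ~95th percentile
```

For a geometric-like in-control run length, the median is roughly 0.7 times the ARL, which is one reason the two criteria can lead to different chart designs.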

In the software reliability engineering literature, few attempts have been made to study the fault debugging environment using discrete-time modelling. Most endeavours assume that a detected fault is either immediately removed or perfectly debugged. Such discrete-time models may be used for any debugging environment and may be termed black-box models, because they are used without prior knowledge of the nature of the fault being debugged. However, to develop a white-box model, one needs to be cognizant of the debugging environment. During debugging, numerous factors affect the debugging process. These include internal factors, such as fault density and fault debugging complexity, and external factors originating in the debugging environment itself, such as the skills of the debugging team and the debugging effort expenditures. Hence, fault removal may take considerable time after a fault has been detected. It is therefore imperative to understand the testing and debugging environment clearly, and hence the need for a model that accounts for fault debugging complexity and incorporates the learning phenomenon of the debugger under an imperfect debugging environment. To this end, we develop a framework through an integrated modelling approach based on a nonhomogeneous Poisson process that incorporates these realistic factors during the fault debugging process. Actual software reliability data are used to demonstrate the applicability of the proposed integrated framework. Copyright © 2016 John Wiley & Sons, Ltd.

Multivariate count data are common in the quality monitoring of manufacturing and service industries. However, little effort has been devoted to high-dimensional Poisson data and to two-sided mean shifts. In this article, a hybrid control chart for independent multivariate Poisson data is proposed. The new chart is constructed from a goodness-of-fit test, and its monitoring procedure is described. The performance of the proposed chart is evaluated using Monte Carlo simulation. Numerical experiments show that the new chart is powerful and sensitive at detecting both positive and negative mean shifts. It is also more robust than other existing multiple Poisson charts for both independent and correlated variables. In addition, a new standardization method for Poisson data is developed in this article. A real example illustrates the detailed steps of the new chart. Copyright © 2016 John Wiley & Sons, Ltd.

No abstract is available for this article.

No abstract is available for this article.

No abstract is available for this article.

Shewhart control charts are not very sensitive to small and moderate process shifts, which is why they are less likely to be effective in Phase II. To monitor small or moderate process shifts in Phase II, cumulative sum (CUSUM) and exponentially weighted moving average (EWMA) control charts are therefore considered as alternatives to Shewhart control charts. In this paper, a Shewhart-type control chart based on the difference-in-difference estimator is proposed in order to detect moderate shifts in the process mean in Phase II. The performance of the proposed control chart is studied for the known and unknown parameter cases separately through a detailed simulation study. For the unknown case, instead of using reference samples of small sizes, large reference samples are used, as in some nonparametric control chart articles. In an illustrative example, the proposed control charts are constructed for both the known and unknown cases along with the Shewhart X̄-chart and the classical EWMA and CUSUM control charts. In this application, the proposed chart is found to be comprehensively better than not only the Shewhart X̄-chart but also the EWMA and CUSUM control charts. In terms of average run length, the proposed control chart is always better than the Shewhart X̄-chart and, when the correlation coefficients are relatively high and detection of moderate shifts in the process mean is of concern, generally better than the classical EWMA and CUSUM control charts. Copyright © 2015 John Wiley & Sons, Ltd.

Statistical design of experiments (DOE) is widely used today for process and product characterization and optimization. Owing to cost and time considerations, sometimes only a minimum number of experimental runs can be conducted, with added challenges in analysis when the experimental outcomes cannot be measured on a continuous scale and are expressed only in qualitative terms such as ‘excellent’, ‘satisfactory’ and ‘poor’: such outcomes are variously described as ‘categorical’, ‘attribute’, ‘qualitative’, ‘discrete’ or ‘counted’ in nature. This paper offers practical techniques for handling small experiments with such non-standard DOE response data, which are otherwise impossible to analyze by standard statistical software. The suggested procedures, built upon what is called a Likelihood Transfer Function (LTF), do not require complex data analysis but yield results consistent with the constraints of the experimental conditions as well as the objectives of stakeholders. Copyright © 2015 John Wiley & Sons, Ltd.

In this paper, a new quality control technique is discussed for processes in which the quality characteristic shifts with time. A trends semi-circle control chart is proposed to control this type of process effectively. An optimization model is suggested to determine the optimal adjustment interval. We also discuss the average run length of the proposed control chart and its extension to the EWMA chart. An example illustrates its application in a production process. Copyright © 2015 John Wiley & Sons, Ltd.

Exponentially weighted moving average (EWMA) control charts can be designed to detect shifts in the underlying process parameters quickly while remaining robust to non-normality. Past studies have shown that the performance of various EWMA control charts can be adversely affected when parameters are estimated or when observations do not follow a normal distribution. To the best of our knowledge, the simultaneous effect of parameter estimation and non-normality has not been studied so far. In this paper, a Markov chain approach is used to model and evaluate the performance of EWMA control charts when parameters are estimated under non-normality, using skewed and heavy-tailed symmetric distributions. Using the standard deviation of the run length (SDRL), the average run length (ARL), and percentiles of the run length for various Phase I sample sizes, we show that larger Phase I sample sizes do not necessarily lead to better performance for non-normal observations. Copyright © 2015 John Wiley & Sons, Ltd.
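The Markov chain approach can be illustrated with a standard textbook construction (sketched here for a two-sided EWMA of individual normal observations with known parameters; the number of states and the chart constants `lam` and `L` are assumptions): the in-control region (−h, h) is discretized into states, the substochastic transition matrix Q between in-control states is built from the normal CDF, and the zero-state ARL is obtained by solving (I − Q) a = 1.

```python
import math
import numpy as np

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ewma_arl(mu=0.0, lam=0.1, L=2.7, n_states=101):
    """Zero-state ARL of a two-sided EWMA chart via the Markov chain method.

    The EWMA Z_i = (1 - lam) * Z_{i-1} + lam * X_i, X_i ~ N(mu, 1), with
    asymptotic limits +/- h = L * sqrt(lam / (2 - lam)), is approximated
    by a finite Markov chain over n_states subintervals of (-h, h)."""
    h = L * math.sqrt(lam / (2.0 - lam))
    edges = np.linspace(-h, h, n_states + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    Q = np.empty((n_states, n_states))
    for i, m in enumerate(mids):
        # Z lands in (a, b) iff X lies in ((a-(1-lam)m)/lam, (b-(1-lam)m)/lam)
        lo = (edges[:-1] - (1.0 - lam) * m) / lam
        hi = (edges[1:] - (1.0 - lam) * m) / lam
        Q[i] = [norm_cdf(b - mu) - norm_cdf(a - mu) for a, b in zip(lo, hi)]
    arl = np.linalg.solve(np.eye(n_states) - Q, np.ones(n_states))
    return float(arl[n_states // 2])  # start at the state containing Z_0 = 0
```

Increasing `n_states` refines the discretization; the paper's setting additionally randomizes the estimated in-control mean and standard deviation over Phase I samples, which this known-parameter sketch omits.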

A profile is a relationship between a response variable and one or more independent variables that can describe the quality of a process or product. On the other hand, for an in-control process, capability indices are the criteria for process quality improvement that allow customer expectations to be met. Recently, evaluating the process capability of profiles has been investigated by some researchers. In all of these efforts, the response variable in the profile follows a normal distribution. Sometimes, however, this assumption is violated, and the response variable may follow a binary or binomial distribution. In this paper, we propose two methods to measure process capability when the quality of a process is characterized by a logistic regression profile. The performance of the proposed indices is evaluated through simulation studies. Finally, the application of the proposed methods is illustrated through a real case. Copyright © 2015 John Wiley & Sons, Ltd.

Cold standby systems subject to periodic inspections are widely applied in industry. However, establishing system reliability, expected time to failure, and an appropriate time interval between inspections in a form accessible to industrial and maintenance engineers remains challenging. This paper develops equations that solve this problem based on an analysis of the expected exposure time of active and redundant components. A table and a general analytic expression, along with graphs, are provided to allow the appropriate time interval between inspections to be established, given the required level of reliability and the number of standbys available. The main advantage of the results presented in this paper is the ability to conduct the reliability evaluation without complex formulations such as Markov processes or Laplace transforms, which are usually beyond the skills of industrial and maintenance staff. A comparison with the exact solution using probability theory is also presented, showing that the method developed in this study provides a good approximation for practical applications. Copyright © 2015 John Wiley & Sons, Ltd.
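For context, a standard result (not the paper's approximation, which additionally accounts for inspection intervals): with one active unit, *m* identical cold standbys, perfect switching, and exponential failures at rate λ, the system lifetime is a sum of *m* + 1 exponentials, so its reliability is the Erlang survival function.

```python
import math

def cold_standby_reliability(t, lam, m):
    """Reliability at time t of one active unit plus m cold standbys,
    assuming perfect switching and i.i.d. exponential(lam) lifetimes:
    R(t) = sum_{k=0}^{m} (lam*t)^k * exp(-lam*t) / k!   (Erlang survival)."""
    return math.exp(-lam * t) * sum((lam * t) ** k / math.factorial(k)
                                    for k in range(m + 1))
```

With m = 0 this reduces to the single-unit exponential reliability e^(−λt); each added standby raises R(t) by the next Poisson term.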

In the present work, starting from well-known methodologies, a new reliability allocation method [critical flow method (CFM)] has been proposed.

We focus on the most important conventional methods and discuss their limitations in order to motivate the current research. The analysis reveals the main problem common to most conventional reliability allocation methods: they are developed for complex systems with series configurations, not for series–parallel ones. The consequence is an increase in the required unit reliabilities (as in a series configuration) in order to guarantee the system reliability target.

In practice, designing and manufacturing a subsystem with an extremely low failure rate would consume a considerable amount of economic resources. The proposed method overcomes the shortcomings of the conventional methods with a new reliability allocation approach suited to series–parallel configurations, yielding important cost savings. The CFM has been applied to a liquid nitrogen cooling installation in a thermonuclear system, with many series–parallel configurations, in order to guarantee the whole safety system. The proposed technique can be applied to working complex systems and, in general, in the design phase of new installations. By comparing the CFM application results with real parameters, the new technique has been validated. The computational results clearly demonstrate the advantages of the proposed method. In particular, when applied to series–parallel configurations, it allocates higher failure rates than conventional methods, with a corresponding reduction in component cost. Copyright © 2015 John Wiley & Sons, Ltd.

In most quality control applications, the errors generated by the measurement system can adversely affect the ability of control charts to detect out-of-control conditions. In this paper, the effect of measurement error with linearly increasing variance on the performance of the maximum exponentially weighted moving average and mean-squared deviation (MAX-EWMAMS) control chart is studied. Different out-of-control scenarios are considered, including mean shifts, variance shifts, and simultaneous shifts in both, and the detection performance is investigated through a simulation study. The results, in terms of three criteria (average run length, standard deviation of run lengths, and the empirical distribution of run lengths), show that measurement error with linearly increasing variance can adversely affect the performance of the MAX-EWMAMS control chart. Copyright © 2015 John Wiley & Sons, Ltd.

Many industrial experiments involve random factors. The random blocks model defines a covariance structure in the data; thus, generalized least squares estimators of the parameters are used, and their covariance matrix is usually computed using the inverse of the generalized least squares information matrix. Many optimality criteria are based on this approximation of the covariance matrix. However, this approach underestimates the true covariance matrix of the parameters, and the optimality criteria should therefore be corrected to reflect the actual covariance. The bias in the estimation of the covariance matrix is negligible (or even null) for many models, in which case there is no point in using the corrected criteria, given the complexity of the calculations involved. For some models, however, the correction does matter, and the modified criteria should be considered when designing; otherwise, the practitioner risks obtaining poor designs. Some analytical results are presented for simpler models, and optimal designs that take the corrected variance into account are computed and compared with those obtained by the traditional approach for more complex models, showing that the loss in efficiency may be very important when the correction for the covariance matrix is ignored. Copyright © 2015 John Wiley & Sons, Ltd.

The vulnerability of networks is associated not only with the ability to resist disturbances but also with the stable development of the networks in the long run. In this paper, a new vulnerability evaluation based on fuzzy logic is proposed, with fuzzy logic used to model the uncertain environment. The evaluation proceeds in two steps. The first is to represent the network as a graph and analyze its main properties, including average path length, edge betweenness, degree, and clustering coefficient. The second is to apply fuzzy logic to these properties, that is, to calculate deviations, design the rule database, and obtain the vulnerability. Two examples are given at the end to show the efficiency and practicability of the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.

Control charts are important statistical process control tools used to monitor shifts in the process mean and variance. This paper proposes a control chart for monitoring the process mean using the Downton estimator and provides a table of constant factors for computing the control limits for sample sizes *n* ≤ 10. The derived control limits for the process mean were compared with control limits based on the range statistic. The performance of the proposed control chart was evaluated using the average run length for normal and non-normal process situations. The results show that the control chart using the Downton statistic performs better than the Shewhart X̄ chart using the range statistic for detecting small shifts in the process mean when the process is non-normal, and compares favourably with the Shewhart X̄ chart when the process is normally distributed. Copyright © 2015 John Wiley & Sons, Ltd.
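Downton's estimator of the process standard deviation, on which the proposed chart is based, is a linear combination of the order statistics. A minimal sketch of the standard formula (the helper name is ours):

```python
import math

def downton_estimator(sample):
    """Downton's estimator of the standard deviation:
    D = (2*sqrt(pi) / (n*(n-1))) * sum_{i=1}^{n} (i - (n+1)/2) * x_(i),
    where x_(1) <= ... <= x_(n) are the ordered observations.
    D is unbiased for sigma when the data are normal."""
    x = sorted(sample)
    n = len(x)
    s = sum((i - (n + 1) / 2) * xi for i, xi in enumerate(x, start=1))
    return 2.0 * math.sqrt(math.pi) / (n * (n - 1)) * s
```

Because the weights grow linearly rather than depending only on the extremes, D is less sensitive to outliers and non-normality than the range, which is the intuition behind the chart's reported robustness.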

This article analyzes the simultaneous control of several correlated Poisson variables using the Variable Dimension Linear Combination of Poisson Variables (VDLCP) control chart, a variable-dimension version of the LCP chart. This control chart uses as its test statistic a linear combination of the correlated Poisson variables in an adaptive way; that is, it monitors either *p*_{1} or *p* variables (*p*_{1} < *p*), depending on the last value of the statistic. To analyze the performance of this chart, we have developed software that finds the best parameters, optimizing the out-of-control average run length (ARL) for a shift that the practitioner wishes to detect as quickly as possible, subject to a fixed value of the in-control ARL. Markov chains and genetic algorithms were used in developing this software. The results show improved performance compared with the LCP chart. Copyright © 2015 John Wiley & Sons, Ltd.

Recently, the exponentially weighted moving average (EWMA) statistic has been applied to acceptance sampling plans. The advantage of the EWMA statistic is that it considers the quality of both the current lot and the preceding lots. When the smoothing parameter equals one, the sampling plan based on the EWMA statistic reduces to a single sampling plan. In this study, we propose a sampling plan based on the EWMA yield index for lot sentencing for autocorrelation within linear profiles. The plan parameters are determined by considering the acceptable quality level at the producer's risk and the lot tolerance percent defective at the consumer's risk. The plan parameters are tabulated for various combinations of the smoothing constant of the EWMA statistic and the acceptable quality level and lot tolerance percent defective at the two risks. An example is provided to illustrate the proposed plan. Copyright © 2015 John Wiley & Sons, Ltd.

This paper discusses an application of confidence intervals to the threshold decision value used in logistic regression and its effect on the quantification of false positive and false negative errors. In doing so, a grey area is developed in which observations are classified not as success (1) or failure (0) but as ‘uncertain’. The size of this grey area is related to the level of confidence chosen to create the interval around the threshold as well as to the quality of the logistic regression model fit. This method shows that potential errors may be mitigated. Monte Carlo simulation and an experimental design approach are used to study the relationship between a number of responses relating to the classification of observations and the following factors: threshold level, confidence level, noise in the data, and number of observations collected. Copyright © 2015 John Wiley & Sons, Ltd.
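The grey-area idea can be sketched as follows (illustrative only: the endpoints `t_lo` and `t_hi` stand for a confidence interval around the decision threshold, however it is computed):

```python
def classify_with_grey_area(probs, t_lo, t_hi):
    """Classify predicted probabilities against an interval (t_lo, t_hi)
    around the decision threshold: below the interval -> 0 (failure),
    above it -> 1 (success), inside it -> 'uncertain'."""
    labels = []
    for p in probs:
        if p < t_lo:
            labels.append(0)
        elif p > t_hi:
            labels.append(1)
        else:
            labels.append("uncertain")
    return labels
```

Widening the interval (a higher confidence level or a poorer model fit) enlarges the grey area, trading classification coverage for fewer hard misclassifications.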

Residual control charts are acknowledged to be effective tools for the statistical process control of multistage processes. In these monitoring procedures, models of the stage-wise correlation must be derived before the control charts are implemented, so the monitoring performance is inevitably affected by the model fitting scheme. Most previous works assume that the derived models represent the process behavior perfectly; far less is known about the effects of model inaccuracy on monitoring performance. To investigate these effects, residual control charts based on two different modeling schemes are compared in this paper. The results indicate that charting performance depends on the model fitting scheme: a more accurate model significantly increases the detection power and decreases the false alarm rate as well. Copyright © 2015 John Wiley & Sons, Ltd.

The exponentially weighted moving average (EWMA) model has been successfully used in acceptance sampling plans, as it provides quality information on both the current lot and the preceding lots. In addition, a multiple dependent state (MDS) sampling plan considers the quality information of the preceding lots. In this study, we present two new sampling plans for linear profiles: one based on the EWMA model with the yield index using a single sampling plan, and the other based on the EWMA model with the yield index using MDS sampling. The plan parameters are determined by a nonlinear optimization approach. When the smoothing parameter equals one, the first proposed plan reduces to the traditional single sampling plan. We compare the proposed plans with the traditional single sampling plan. The results indicate that the MDS sampling plan based on the EWMA model with the yield index, with a smaller value of the smoothing parameter, requires a smaller sample size than both the traditional single sampling plan and the single sampling plan based on the EWMA model with the yield index. A real example is used to illustrate the proposed plans. Copyright © 2015 John Wiley & Sons, Ltd.

Multivariate control charts are usually implemented in statistical process control to monitor several correlated quality characteristics. Process dispersion charts are used to determine the stability of the process variation (typically before monitoring the process location/mean), and a Phase-I study is generally used when the population parameters are unknown. This article develops Phase-I |*S*| and |*G*| control charts to monitor the dispersion of a bivariate normal process. The charting constants are determined to achieve the required nominal false alarm probability (*FAP*_{0}). The performance of the proposed charts is evaluated in terms of (i) the attained false alarm probability and (ii) the probability of signaling for out-of-control situations. The analysis shows that the proposed Phase-I bivariate charts correctly control the *FAP* and detect a shift in the bivariate dispersion matrix with adequate probability. An example is given to explain the practical implementation of these charts. Copyright © 2015 John Wiley & Sons, Ltd.

We evaluate the performance of Crosier's cumulative sum (C-CUSUM) control chart when the distribution parameters of the underlying quality characteristic are estimated from Phase I data. Because the average run length (ARL) under estimated parameters is a random variable, we study the estimation effect on chart performance in terms of the expected value of the average run length (AARL) and the standard deviation of the average run length (SDARL); previous evaluations of this control chart assumed known process parameters. Using Markov chain and simulation approaches, we evaluate the in-control performance of the chart and provide quantiles of its in-control ARL distribution under estimated parameters. We also compare the performance of the C-CUSUM chart with that of the ordinary CUSUM (O-CUSUM) chart when the process parameters are unknown. Our results show that a large number of Phase I samples is required to achieve reasonable performance; the performance of the C-CUSUM chart is nevertheless superior to that of the O-CUSUM chart. Finally, we recommend the use of a recently proposed bootstrap procedure in designing the C-CUSUM chart to guarantee, with a specified probability, that the in-control ARL is at least the desired value for the available amount of Phase I data. Copyright © 2015 John Wiley & Sons, Ltd.
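For reference, Crosier's univariate two-sided recursion in its standard known-parameter form (the reference value `k`, decision limit `h`, and data below are illustrative; the paper's study concerns what happens when `mu0` and the scale must be estimated):

```python
def crosier_cusum(data, mu0=0.0, k=0.5, h=4.0):
    """Crosier's CUSUM: instead of resetting the cumulative sum at zero,
    shrink it toward zero by the reference value k at each step. Returns
    the statistic path and the index of the first signal (None if none)."""
    s = 0.0
    path = []
    signal = None
    for i, x in enumerate(data):
        c = abs(s + x - mu0)
        s = 0.0 if c <= k else (s + x - mu0) * (1.0 - k / c)
        path.append(s)
        if signal is None and abs(s) > h:
            signal = i
    return path, signal
```

A single recursion handles both shift directions, which is what distinguishes the C-CUSUM from running two one-sided ordinary CUSUMs.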

In this work, we propose and study general inflated probability distributions that can be used for modelling and monitoring unusual count data. The considered models extend the well-known zero-inflated Poisson distribution because they allow an excess of values other than zero. Four simple upper-sided control schemes are considered for monitoring count data based on the proposed general inflated Poisson distributions, and their performance is evaluated under various out-of-control situations. The usefulness of the considered models and techniques is illustrated via two real-data examples, and practical guidelines are provided as well. Copyright © 2015 John Wiley & Sons, Ltd.
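A minimal sketch of such a distribution (our formulation for illustration, not necessarily the authors' exact parameterization): probability mass `p` is placed at an arbitrary inflation point `k0` rather than only at zero, with the remainder following a Poisson law.

```python
import math

def inflated_poisson_pmf(k, lam, p, k0=0):
    """P(X = k) for a Poisson(lam) inflated at k0 with mixing weight p:
    with probability p the value is exactly k0, otherwise X ~ Poisson(lam).
    (The direct factorial formula is fine for moderate k.)"""
    pois = math.exp(-lam) * lam ** k / math.factorial(k)
    return p * (k == k0) + (1.0 - p) * pois
```

Setting `p = 0` recovers the plain Poisson pmf, and `k0 = 0` recovers the usual zero-inflated Poisson.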

In many fields, there is a need to monitor quality characteristics defined as the ratio of two random variables. The continuous surveillance of such characteristics requires the design and implementation of control charts that directly monitor the stability of the ratio. In this paper, we propose two one-sided exponentially weighted moving average (EWMA) charts with subgroups of sample size *n* > 1 to monitor the ratio of two normal random variables. The optimal EWMA smoothing constants, control limits, and *ARL*s have been computed for different values of the in-control ratio and of the correlation between the variables, and are shown in several figures and tables to discuss the statistical performance of the proposed one-sided EWMA charts. Both deterministic and random shift sizes have been considered to test the sensitivity of the two one-sided EWMA charts. The results show that the proposed one-sided EWMA control charts are more sensitive to process shifts than other charts already proposed in the literature. The practical application of the proposed control schemes is discussed with an illustrative example. Copyright © 2015 John Wiley & Sons, Ltd.

This paper investigates detecting significant increases in communication patterns and levels between small groups of individuals within a moderate-size targeted group. Potential applications range from trying to establish emerging thought leaders within an organisation to the detection of the planning stages of a crime. The scan statistic is a popular choice for monitoring and detecting spatio-temporal outbreaks, but it is difficult to apply to large-scale target groups because of the computational effort required. When monitoring communication levels between thousands of people, the number of combinations of people whose communication may have increased is very high, and to scan through all of these to find which combinations have increased communications significantly is an enormous task. A successful surveillance plan will have early communication outbreak detection properties and good diagnostic capabilities for identifying individuals contributing to this outbreak. This paper proposes a new computationally feasible approach for detecting communication outbreaks based on exponentially weighted moving average smoothed communication counts between individuals within the network. We apply a cumulative sum of ordered signal-to-noise (SN) ratios for communication counts to flag significant departures from their respective median values. This plan is demonstrated to be efficient at detecting changes in communication levels for a small part of the network and diagnosing who is involved in the outbreak. Copyright © 2015 John Wiley & Sons, Ltd.

The aim of this paper is to investigate degradation modeling and reliability assessment for products under irregular time-varying stresses. Conventional degradation models have been used extensively in the literature to characterize degradation processes under deterministic stresses; however, time-varying stresses that affect degradation processes are widespread in field conditions. This paper extends the general degradation-path model by considering the effects of time-varying stresses, capturing the influence of varying stresses on performance characteristics. A nonlinear least squares method is used to estimate the unknown parameters of the proposed model, and a bootstrap algorithm is adopted for computing confidence intervals for the mean time to failure and percentiles of the failure-time distribution. Finally, a case study of lithium-ion cells is presented to validate the proposed method. Copyright © 2015 John Wiley & Sons, Ltd.
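The bootstrap step can be sketched generically (a plain percentile bootstrap, not the authors' exact algorithm, which resamples fitted degradation paths): resample the data with replacement, recompute the statistic of interest, and take empirical quantiles of the replicates.

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.fmean, reps=2000, alpha=0.05, seed=3):
    """Percentile-bootstrap confidence interval for a statistic: resample
    the data with replacement, recompute the statistic, and take the
    alpha/2 and 1 - alpha/2 empirical quantiles of the replicates."""
    rng = random.Random(seed)
    n = len(data)
    replicates = sorted(stat([rng.choice(data) for _ in range(n)])
                        for _ in range(reps))
    lo = replicates[int(reps * alpha / 2)]
    hi = replicates[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi
```

Substituting an estimated mean time to failure (or a failure-time percentile) for `stat` gives the kind of interval the paper reports for the lithium-ion cell case study.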

The gas transmission pipeline network is of great importance to any country using natural gas in its various technological processes. However, its usefulness cannot overshadow the threat posed to people and property by grid failures. In order to quantify the reliability of the grid, several widely recognized pipeline incident databases have been established. However, each database contains data about pipelines operated in different geographical regions with varying soil types, collected under different incident registration criteria, and even within a single database these criteria vary over longer time periods. Therefore, analysis of an entire sample without regard to changes in the incident criteria raises suspicions about the validity of the resulting inferences.

The authors move beyond qualitative comparison of pipeline incident databases and provide a methodology for the quantitative integration of all available statistical information to improve gas pipeline network reliability evaluation. We develop a new model, the Criteria-dependent Poisson model, which takes into account the various incident data collection criteria, and extend it to the hierarchical (Bayesian) case, in which databases with differing incident registration criteria can be joined in the same analysis. With real-data examples, we demonstrate the applicability of the method, which proves to be of great usefulness in reliability prediction. An assessment of the Lithuanian pipeline network failure rate shows the advantages of the hierarchical structuring of the Criteria-dependent Poisson model in small-sample problems. Copyright © 2015 John Wiley & Sons, Ltd.

Many quality characteristics have means and standard deviations that are not independent. Instead, the standard deviations of these quality characteristics are proportional to their corresponding means. Thus, monitoring the coefficient of variation (CV) of these quality characteristics using a control chart has gained remarkable attention in recent years. This paper presents a side sensitive group runs chart for the CV (called the SSGR CV chart). The implementation and optimization procedures of the proposed chart are presented. Two optimization procedures are developed, i.e., (i) minimizing the average run length (ARL) when the shift size is deterministic and (ii) minimizing the expected average run length (EARL) when the shift size is unknown. An application of the SSGR CV chart using a real dataset is also demonstrated. Additionally, the SSGR CV chart is compared with the Shewhart CV, runs rules CV, synthetic CV and exponentially weighted moving average CV charts by means of ARLs and the standard deviations of the run lengths. The performance comparison is also conducted using EARLs when the shift size is unknown. In general, the SSGR CV chart surpasses the other charts under comparison, for most upward and downward CV shifts. Copyright © 2015 John Wiley & Sons, Ltd.
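As a rough sketch of what CV monitoring involves, the snippet below computes the sample CV per subgroup and checks it against approximate Shewhart-style limits. Only the basic CV statistic is shown; the side sensitive group runs logic of the SSGR chart is omitted, and the in-control CV, subgroup size, and data are all illustrative:

```python
import statistics

def sample_cv(subgroup):
    """Sample coefficient of variation: s / x-bar."""
    return statistics.stdev(subgroup) / statistics.mean(subgroup)

def cv_limits(gamma0, n, k=3.0):
    """Approximate limits for the sample CV, using the asymptotic
    standard deviation under normality:
    sd(cv_hat) ~= gamma0 * sqrt((0.5 + gamma0**2) / n)."""
    sd = gamma0 * ((0.5 + gamma0 ** 2) / n) ** 0.5
    return max(gamma0 - k * sd, 0.0), gamma0 + k * sd

gamma0, n = 0.10, 5          # illustrative in-control CV and subgroup size
lcl, ucl = cv_limits(gamma0, n)

subgroups = [
    [10.1, 9.8, 10.3, 9.9, 10.0],   # spread proportional to mean: in control
    [10.0, 12.5, 8.1, 11.9, 7.6],   # inflated spread: should signal
]
for i, sg in enumerate(subgroups, 1):
    cv = sample_cv(sg)
    print(f"subgroup {i}: cv={cv:.3f}, signal={not (lcl <= cv <= ucl)}")
```

A group runs scheme would additionally track the spacing between such signals before declaring the process out of control.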

In robust parameter design (RPD), the ultimate goal is to identify the settings of control factors that lead to an optimal mean with minimum process variation. In order to achieve this goal, usually two objective functions corresponding to the mean and variance of the desired quality characteristic are considered. Next, settings for the control variables (factors) are determined such that the values achieved for the two objective functions are as close to their ideal values as possible. This article highlights the impact of the misspecification of noise variables as fixed factors in RPDs. The misspecification or error in factor levels causes inappropriate estimates of the response model, which consequently affects the optimal settings of the control variables. The results are illustrated through an experimental example. Moreover, three different formulations are applied to determine the optimal settings for the case of *Larger The Better* (LTB). The performance of the formulations is also evaluated. Copyright © 2015 John Wiley & Sons, Ltd.
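The dual mean/variance objective behind RPD can be sketched as follows, with entirely made-up model coefficients for one control factor x and one noise factor z. A setting that looks optimal here would shift if the noise-factor variance were misspecified, which is the sensitivity the article examines:

```python
# Illustrative fitted response:  y = b0 + b1*x + b2*x**2 + g*z + d*x*z
# With E[z] = 0 and Var(z) = sz2, the mean and variance models are
#   mean(x) = b0 + b1*x + b2*x**2
#   var(x)  = (g + d*x)**2 * sz2 + s2
b0, b1, b2, g, d = 50.0, 8.0, -2.0, 3.0, -2.5
sz2, s2 = 1.0, 0.5          # noise variance and residual variance (assumed)

def mean_resp(x):
    return b0 + b1 * x + b2 * x ** 2

def var_resp(x):
    return (g + d * x) ** 2 * sz2 + s2

# Larger-the-better: trade off a high mean against low variance by
# maximizing mean(x) - w*var(x) over a grid of candidate settings.
w = 1.0
grid = [i / 100 for i in range(-200, 201)]
x_star = max(grid, key=lambda x: mean_resp(x) - w * var_resp(x))
print(f"x* = {x_star:.2f}, mean = {mean_resp(x_star):.2f}, "
      f"var = {var_resp(x_star):.2f}")
```

Rerunning the search with a different `sz2` shows directly how an error in the assumed noise variance moves the recommended control setting.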

Because of cost and time limitations, reliability experiments frequently contain subsampling, which is a restriction on randomization. A two-stage approach can analyze right censored Weibull distributed reliability data with subsampling. However, in implementing such a method, we found that it did not address the problems of how to construct confidence intervals for low percentiles and how to reduce the bias of estimates. In this paper, we present a two-stage bootstrapping approach and an unbiasing factor approach to solve the aforementioned problems. An example is provided to illustrate the proposed method. In addition, the proposed method is compared with existing methods through simulation. The simulation results show that the proposed method performs well for low percentiles. Copyright © 2015 John Wiley & Sons, Ltd.
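A minimal sketch of the Weibull fitting that underlies such an analysis is given below, for complete (uncensored) data only; the paper's setting with right censoring and subsampling requires a richer likelihood, and the failure times here are illustrative:

```python
import math

def weibull_mle(x, lo=0.01, hi=100.0, iters=200):
    """MLE of Weibull shape k and scale lam for complete data, solving
    the profile score equation for k by bisection:
        sum(x^k ln x)/sum(x^k) - mean(ln x) - 1/k = 0
    (the left-hand side is strictly increasing in k)."""
    logs = [math.log(v) for v in x]
    mean_log = sum(logs) / len(logs)

    def score(k):
        xk = [v ** k for v in x]
        return sum(v * lg for v, lg in zip(xk, logs)) / sum(xk) - mean_log - 1.0 / k

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    k = 0.5 * (lo + hi)
    lam = (sum(v ** k for v in x) / len(x)) ** (1.0 / k)
    return k, lam

def weibull_percentile(k, lam, p):
    """p-th quantile: lam * (-ln(1 - p))**(1/k)."""
    return lam * (-math.log(1.0 - p)) ** (1.0 / k)

# Illustrative failure times (hours)
data = [105, 152, 187, 210, 240, 262, 299, 331, 380, 451]
k_hat, lam_hat = weibull_mle(data)
b10 = weibull_percentile(k_hat, lam_hat, 0.10)
print(f"shape={k_hat:.2f}, scale={lam_hat:.1f}, B10={b10:.1f} h")
```

A bootstrap confidence interval for the low percentile B10 would wrap this fit in a resampling loop, refitting on each resample and taking percentile limits of the resulting B10 values.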

Reliability is a measure of how well a product will perform under a certain set of conditions for a specified amount of time, especially in field environments. In this paper, a reliability study of a computer numerical control (CNC) system is described. For this analysis, field failure data collected over the course of a year on approximately 20 CNC machine tools operating in a manufacturing shop were analyzed. Based on these field failure data and the chi-squared test, the two-parameter exponential distribution was found to describe the time between failures of the CNC system best among several candidate distributions, including the Weibull, gamma, normal, and logistic.

In this paper, we discuss the reliability estimation of the CNC system based on the collected field failure data using the maximum likelihood estimate (MLE) and uniformly minimum variance unbiased estimate (UMVUE) methods. We also discuss confidence intervals for the mean residual lifetime and the reliability function. The results show that the UMVUE method provides considerably better and more accurate estimates of the reliability of the CNC system than the MLE. On the one hand, this finding seems obvious, because the UMVUE is an unbiased estimator, based on a sufficient statistic, with the smallest variance among unbiased estimators; on the other hand, it is not straightforward to obtain the UMVUE of a complex function, which the reliability function is in this case. The finding is nevertheless encouraging, because it indicates that the extra effort required by the UMVUE's more complex derivation is more than compensated by its ability to better evaluate and predict the reliability of the CNC system. Hence, we believe that deriving these parameter functions using the UMVUE method is worth the effort. Copyright © 2015 John Wiley & Sons, Ltd.
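For the two-parameter exponential distribution, the contrast between MLE and unbiased estimates can be sketched directly from the standard closed forms. The times-between-failures below are illustrative, and the plug-in reliability shown is not the exact UMVUE of R(t), which, as noted above, is a more involved expression:

```python
import math
import statistics

def two_param_exp_estimates(x):
    """MLE and unbiased (UMVUE) estimates of the location mu and
    scale theta of a two-parameter exponential distribution."""
    n = len(x)
    x1 = min(x)                       # smallest observation
    xbar = statistics.mean(x)
    mu_mle, theta_mle = x1, xbar - x1             # MLEs (theta_mle is biased)
    theta_umvue = n * (xbar - x1) / (n - 1)       # unbiased scale
    mu_umvue = x1 - theta_umvue / n               # unbiased location
    return (mu_mle, theta_mle), (mu_umvue, theta_umvue)

def reliability(t, mu, theta):
    """Plug-in reliability R(t) = exp(-(t - mu)/theta) for t >= mu."""
    return math.exp(-(t - mu) / theta) if t >= mu else 1.0

# Illustrative times between failures (hours) for a CNC system
tbf = [61, 74, 90, 113, 140, 178, 205, 260, 330, 445]
mle, umvue = two_param_exp_estimates(tbf)
print("MLE   mu=%.1f theta=%.1f" % mle)
print("UMVUE mu=%.1f theta=%.1f" % umvue)
print("R(200) with unbiased parameters: %.3f" % reliability(200, *umvue))
```

The MLE of the scale understates theta (and the MLE of the location overstates mu) in small samples, which is exactly the bias the unbiased estimates correct.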

The development of clean, sustainable alternative energy sources is increasingly important. One promising alternative to depleting fuel reserves is algae-based biodiesel fuel, which is both non-toxic and renewable. Despite the tremendous potential of algae-based biodiesel fuel, it has not yet been profitable because of the high cost per unit area of large-scale cultivation. We present a novel application of Orthogonal Array Composite Designs (OACDs) to optimize lipid production of a cell-free system for algae. An OACD consists of a two-level fractional factorial design and a three-level orthogonal array.

We start with an initial screening experiment on six chemicals using an OACD with 50 runs. Based on this experiment, two chemical compounds were removed, and a follow-up 25-run OACD with four chemicals was performed. Our analysis shows that only three chemicals – nitrogen, magnesium, and phosphate – are essential for lipid accumulation, and a range of optimum combinations of these three chemicals is identified. The lipid accumulation for these combinations is substantially higher than with the commercial medium, which contains 16 chemicals and soil water. This leads to a reduced cost of the chemical medium and increased efficiency of biodiesel production from the algae-based cell-free system, which can significantly expand the use of biodiesel as a viable alternative to fossil fuels. Copyright © 2015 John Wiley & Sons, Ltd.

The growing power of computers has enabled techniques created for the design and analysis of simulations to be applied to a large spectrum of problems and to reach a high level of acceptance among practitioners. Generally, when simulations are time consuming, a surrogate model replaces the computer code in further studies (e.g., optimization, sensitivity analysis). The first step toward successful surrogate modeling and statistical analysis is planning the input configuration used to exercise the simulation code. Among the strategies devised for computer experiments, Latin hypercube designs have become particularly popular. This paper provides a tutorial on Latin hypercube design of experiments, highlighting potential reasons for its widespread use. The discussion starts with early developments in optimizing the point selection and goes all the way to the pitfalls of indiscriminate use of Latin hypercube designs. Final thoughts are given on opportunities for future research. Copyright © 2015 John Wiley & Sons, Ltd.
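A basic (unoptimized) Latin hypercube sample can be sketched in a few lines: each dimension is cut into n equal strata, and each stratum in every dimension receives exactly one point:

```python
import random

def latin_hypercube(n, d, seed=0):
    """n-point Latin hypercube sample in [0, 1)^d: in each dimension,
    a random permutation assigns the n points to the n strata, and each
    point is placed uniformly at random within its stratum."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    # Transpose columns into points
    return [[cols[j][i] for j in range(d)] for i in range(n)]

points = latin_hypercube(5, 2)
for p in points:
    print([round(v, 3) for v in p])
```

This is the plain construction; the optimized variants discussed in the tutorial additionally select among many such hypercubes (e.g., by maximin distance or correlation criteria) to improve space-filling.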
