The quality of a process or product can be characterized by a functional relationship between a response variable and one or more explanatory variables. In this work, we develop a novel hybrid nonparametric–parametric procedure for the monitoring of nonlinear profiles, that is, realizations of a noisy nonlinear functional relationship between variables. In particular, we focus on the ‘shape’ property of profiles as a way of measuring their quality. Starting from a nonparametric reference curve, we select our model from a universe of parametric deformations of that curve which preserve certain important shape characteristics. To this end, we design a metric based on the solution of a related optimization problem. In addition, we show that the problem is well posed from a theoretical point of view. Finally, we illustrate the performance of the proposal with numerical examples from simulated and real environments. Copyright © 2014 John Wiley & Sons, Ltd.

The design of single sampling plans in which the lot acceptance decision is based on both variables and attribute measurements of quality is discussed. A new plan, called the combined attributes–variables plan, is proposed, incorporating an acceptance number into the regular variables plan for consumer protection. A design approach for the new plan is also developed for food manufacturing applications in which the sample size cannot be predetermined because of short production lengths and other analytical testing issues. Copyright © 2014 John Wiley & Sons, Ltd.

Stochastic inventory control theory has focused on the order and/or pricing policy when the length of the selling period is known. In contrast, we examine the optimal length of the selling period—which we refer to as *market exit time*—in the context of a novel inventory replenishment problem faced by a supplier of a new, trendy, and relatively expensive product with a short life cycle. An important characteristic of the problem is that the supplier applies a price-skimming strategy over time and the demand is modeled as a nonhomogeneous Poisson process with a time-dependent intensity. The supplier's problem of finding the optimal order quantity and market exit time, with the objective of maximizing expected profit, is studied. Procedures are proposed for joint optimization of the objective function with respect to the order quantity and the market exit time. Then, the effects of the order quantity and market exit time on the supplier's profitability are explored on the basis of a quantitative investigation. Copyright © 2014 John Wiley & Sons, Ltd.
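As a minimal illustration of the demand model (not the authors' optimization procedure), the following sketch simulates sales from a nonhomogeneous Poisson process by Lewis–Shedler thinning and estimates expected profit for a given order quantity and exit time; the intensity, price path, and cost figures are invented for the example.

```python
import math
import random

def simulate_nhpp_sales(intensity, t_max, lam_max, rng):
    """Arrival times of a nonhomogeneous Poisson process on [0, t_max] via
    Lewis-Shedler thinning; lam_max must bound the intensity from above."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > t_max:
            return times
        if rng.random() <= intensity(t) / lam_max:
            times.append(t)

def expected_profit(q, t_exit, price, intensity, lam_max,
                    unit_cost, salvage, n_rep=2000, seed=1):
    """Monte Carlo profit for order quantity q and exit time t_exit: revenue
    from at most q units sold at the (declining) skimming price, salvage value
    for leftovers, minus procurement cost."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_rep):
        sales = simulate_nhpp_sales(intensity, t_exit, lam_max, rng)[:q]
        total += (sum(price(t) for t in sales)
                  + salvage * (q - len(sales)) - unit_cost * q)
    return total / n_rep

# Invented inputs: demand intensity and skimming price both decay over time.
profit = expected_profit(q=8, t_exit=3.0,
                         price=lambda t: 10.0 * math.exp(-0.2 * t),
                         intensity=lambda t: 5.0 * math.exp(-0.5 * t),
                         lam_max=5.0, unit_cost=4.0, salvage=1.0)
```

Joint optimization would then amount to searching over (q, t_exit) with this estimator as the objective.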

Anomaly detection within non-numerical sequence data has developed into an important topic of data mining, but comparatively little research has been done regarding anomaly detection without training data (unsupervised anomaly detection). One application found in computer security is the detection of a so-called masquerade attack, which consists of an attacker abusing a regular account. This leaves only the session input, which is basically a string of non-numerical commands, for analysis. Our previous approach to this problem introduced the use of the so-called average index difference function for mapping the non-numerical symbol data to a numerical space. In the present paper, we examine the theoretical properties of the average index difference function, present an enhanced unsupervised anomaly detection algorithm based on the average index difference function, show the parameters to be theoretically inferable, and evaluate the performance using real-world data. Copyright © 2014 John Wiley & Sons, Ltd.
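The abstract does not give the exact definition of the average index difference function, so the sketch below uses one hypothetical reading: each command symbol is mapped to the average gap between consecutive positions at which it occurs, turning a non-numerical session into numerical features.

```python
from collections import defaultdict

def average_index_difference(session):
    """Map each command symbol to the average gap between consecutive indices
    at which it occurs; symbols seen only once get the session length as a
    default gap. NOTE: a hypothetical reading of the 'average index
    difference' function, not necessarily the authors' exact definition."""
    positions = defaultdict(list)
    for i, sym in enumerate(session):
        positions[sym].append(i)
    aid = {}
    for sym, idx in positions.items():
        gaps = [b - a for a, b in zip(idx, idx[1:])]
        aid[sym] = sum(gaps) / len(gaps) if gaps else float(len(session))
    return aid

aid = average_index_difference(["ls", "cd", "ls", "cat", "ls", "cd"])
# "ls" occurs at indices 0, 2, 4 -> gaps (2, 2) -> average 2.0
```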

One of the most traditional methods for information security can be as simple as sequence matching, such as the signature-based methods for virus detection. However, it is now well accepted that signature-based methods are no longer satisfactory solutions for many security problems. The signature is usually too rigid, resulting in detection that is hard to adjust and easy to bypass. Statistical learning approaches can complete the puzzle to form an integrated defense system. Numerous statistical learning methods have been proposed in the last couple of decades for various applications. To solve information security problems statistically, we need to carefully choose appropriate statistical learning methods and evaluation procedures so that what seems to be a meaningful and effective method in terms of the statistical analysis is also beneficial when the method is deployed in the real world. This paper aims to give an introductory, and as self-contained as possible, overview of how to correctly and effectively apply statistical methods to information security problems. We also demonstrate a couple of applications of statistical learning methods to the problems of botnet detection and account security. Copyright © 2014 John Wiley & Sons, Ltd.

With a view to the pricing of commodity derivatives in the Libor Market Model (LMM) setting, we first analyze the set of basic rates needed to formulate the model by using the spanning tree concept from graph theory. Next, we present an efficient procedure for Monte Carlo simulation of the dynamics of the rates associated with the LMM, avoiding the presence of rate-dependent drifts (drift-free simulation) and of negative deflated bond prices and negative forward rates. The method is based upon a new parameterization of the martingales introduced by Glasserman and Zhao, and it is extended to a Cross-Market Model for commodities. Finally, a particular example of a commodity derivative (spread option) pricing problem is considered. Copyright © 2014 John Wiley & Sons, Ltd.

The signature of a coherent system with independent and identically distributed component lifetimes is a useful tool in the study and comparison of lifetimes of systems. The signature of a coherent system with *n* components is a vector whose

Firms are increasingly looking to provide a satisfactory prediction of customer lifetime value (CLV), a determining metric to target future profitable customers and to optimize marketing resources. One of the major challenges associated with the measurement of CLV is the choice of the appropriate model for predicting customer value, because of the large number of models proposed in the literature. Earlier models for forecasting CLV have been relatively unsuccessful, and simple models often provide results that are equivalent to, or even better than, sophisticated ones. To predict CLV, Rust *et al.* (2011) proposed a framework model that performs better than simple managerial heuristic models, but its implementation excludes cases where the customer's profit is negative and does not handle lost-for-good situations. In this paper, we propose a modified model that handles both negative and positive profits based on a Markov chain model (MCM), hence offering greater flexibility by covering both always-a-share and lost-for-good situations. The proposed model is compared with the Pareto/Negative Binomial Distribution (Pareto/NBD), the Beta Geometric/Negative Binomial Distribution (BG/NBD), the MCM, and the Rust *et al.* (2011) models. Based on customer credit card transactions provided by a North African retail bank, an empirical study shows that the proposed model has better forecasting performance than the competing models. Copyright © 2014 John Wiley & Sons, Ltd.
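A minimal sketch of the Markov chain CLV idea: states describe the customer relationship (an absorbing 'lost' state covers lost-for-good situations), per-period profits may be negative, and CLV is the discounted sum of expected rewards. The transition matrix and rewards below are invented for illustration, not taken from the paper.

```python
def step(p, P):
    """One transition: state-distribution row vector p times row-stochastic P."""
    n = len(P)
    return [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]

def clv_markov(p0, P, rewards, discount, horizon):
    """CLV as the discounted sum of expected per-period profits under a Markov
    chain over relationship states; rewards may be negative, and an absorbing
    'lost' state (last row of P) captures lost-for-good customers."""
    p, value = list(p0), 0.0
    for t in range(horizon):
        value += (discount ** t) * sum(pi * r for pi, r in zip(p, rewards))
        p = step(p, P)
    return value

# Illustrative states: active / dormant (negative profit) / lost (absorbing).
P = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.0, 0.0, 1.0]]
clv = clv_markov([1.0, 0.0, 0.0], P, rewards=[100.0, -10.0, 0.0],
                 discount=0.9, horizon=3)
```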

This paper formulates a periodic inspection/upgrade model under a free non-renewable warranty policy with a fixed-length warranty period for a second-hand product, and derives the optimal number of inspections and the optimal improvement level that minimize the expected total warranty cost from the perspective of the dealer during the warranty period. At each inspection, an upgrade action is taken on the product in order to reduce its failure rate, an effect characterized by an improvement level. In general, a higher improvement level and more frequent inspections increase the warranty cost but lower the chance of product failures. In this paper, we propose an optimal inspection/upgrade strategy for the dealer to lower the total warranty cost. First, we define an inspection/upgrade action that reduces the product's failure rate proportionally to the improvement level at each inspection, and then evaluate the expected total warranty cost. Second, the optimal solutions for the decision variables and their uniqueness are discussed in general. Numerical results are presented for the case where the failure distribution follows a Weibull distribution under the proposed model. Copyright © 2014 John Wiley & Sons, Ltd.

In this work, we suggest a novel quadratic programming-based algorithm to generate an arbitrage-free call option surface. The empirical performance of the proposed method is evaluated using S&P 500 Index call options. Our results indicate that the proposed method provides a more precise fit to observed option prices than other alternative methodologies. Copyright © 2014 John Wiley & Sons, Ltd.

The concept of modular decomposition plays a crucial role in the theoretical and practical evaluation of the reliability performance of systems, because real systems are usually composed of modules, and the components of the systems are arranged to form these modules. In this paper, we give stochastic bounds for multi-state coherent systems by utilizing modular decompositions, where the state spaces are assumed to be totally ordered sets. The presented bounds are shown to be better than those derived directly from the total systems and are extensions of the well-known bounds for binary-state systems to multi-state cases. The results of this paper allow systems engineers or designers to obtain better stochastic approximations for the performance of a system by simply stacking stochastic bounds along the hierarchy formed by the modules of the system. Copyright © 2014 John Wiley & Sons, Ltd.

In view of the fact that start-up results of products are usually multi-state, and based on the traditional two-state start-up demonstration tests, such as consecutive successes total failures (CSTF), total successes consecutive failures (TSCF), consecutive successes consecutive failures (CSCF) and total successes total failures (TSTF), four multi-state start-up demonstration tests are proposed in this paper. By using the finite Markov chain imbedding approach, the acceptance and rejection probabilities, the probability mass function, the distribution function and the expected number of start-ups until the test terminates are given. We also provide a procedure to select the optimal parameter values. In addition, maximum likelihood estimates of possibly unknown probabilities are given. Finally, a numerical example that contains two tables is given to illustrate the advantages of multi-state start-up demonstration tests. The first table illustrates that the multi-state start-up demonstration tests are superior to two-state start-up demonstration tests. The second illustrates that the four-state models (proposed in this paper) are superior to CSTF and CS^{(1)}CS^{(1,2)}TF (proposed by Smith and Griffith) with the same values of *α* and
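The acceptance probability of the two-state CSTF test (accept at k consecutive successes, reject at f total failures) can be computed with a small state-space recursion in the spirit of Markov chain imbedding; this is a sketch of the classical two-state case, not of the four-state tests proposed in the paper.

```python
from functools import lru_cache

def cstf_acceptance_prob(p, k, f):
    """P(accept) in a CSTF start-up demonstration test: accept once k
    consecutive successful start-ups occur, reject once f total failures
    occur; each start-up succeeds independently with probability p."""
    @lru_cache(maxsize=None)
    def go(run, fails):
        if run == k:
            return 1.0       # k consecutive successes reached: accept
        if fails == f:
            return 0.0       # f total failures reached: reject
        return p * go(run + 1, fails) + (1 - p) * go(0, fails + 1)
    return go(0, 0)

# With f = 1 the unit must open with k straight successes: P(accept) = p^k.
prob = cstf_acceptance_prob(0.5, 2, 1)  # -> 0.25
```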

The analysis of multivariate time series is a common problem in areas like finance and economics. The classical tools for this purpose are vector autoregressive models. These, however, are limited to the modeling of linear and symmetric dependence. We propose a novel copula-based model that allows for the non-linear and non-symmetric modeling of serial as well as between-series dependencies. The model exploits the flexibility of vine copulas, which are built up from bivariate copulas only. We describe statistical inference techniques for the new model and discuss how it can be used for testing Granger causality. Finally, we use the model to investigate inflation effects on industrial production, stock returns and interest rates. In addition, the out-of-sample predictive ability is compared with relevant benchmark models. Copyright © 2014 John Wiley & Sons, Ltd.

To use a control chart, the quality engineer must specify three decision variables, namely the sample size, the sampling interval and the critical region of the chart. A significant part of recent research has relaxed the constraint of using fixed design parameters, opening the way to a new type of control chart, called adaptive, in which at least one of the decision variables may change in real time based on the latest data information. These adaptive schemes have proven their effectiveness from both economic and statistical points of view. In this paper, the economic design of an attribute *np* control chart using a variable sampling interval (*VSI*) is treated. A sensitivity analysis is conducted to search for the optimal design parameters minimizing the expected total cost per hour and to reveal the impact of the process and cost parameters on the behavior of optimal solutions. An economic comparison between the classical *np* chart, the variable sample size (*VSS*) *np* control chart and the *VSI* chart is conducted. It is found that switching from the classical attribute chart to the *VSI* sampling strategy results in notable cost savings and in a reduction of the average time to signal and the average number of false alarms. In most cases of the sensitivity analysis, the *VSI* *np* chart outperforms the *VSS* *np* chart on economic and statistical considerations. Copyright © 2014 John Wiley & Sons, Ltd.
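A minimal sketch of the VSI logic (the economic design itself is an optimization over these parameters): after each in-control sample, the next sampling interval is short when the count falls in a warning region and long otherwise. The limits follow the usual np-chart form; the warning width w and the two interval lengths are illustrative choices, not values from the paper.

```python
import math

def vsi_np_chart(counts, n, p0, k=3.0, w=1.0, h_short=0.25, h_long=2.0):
    """VSI np-chart sketch: signal when a count exceeds the upper control
    limit; otherwise pick the short sampling interval in the warning region
    and the long one in the central region. Limits use the usual np form
    n*p0 + k*sqrt(n*p0*(1-p0)); w, h_short, h_long are illustrative."""
    center = n * p0
    sd = math.sqrt(n * p0 * (1 - p0))
    ucl = center + k * sd
    warn = center + w * sd
    decisions = []
    for c in counts:
        if c > ucl:
            decisions.append(("signal", None))
        elif c > warn:
            decisions.append(("continue", h_short))  # warning region: sample soon
        else:
            decisions.append(("continue", h_long))   # central region: relax
    return decisions

d = vsi_np_chart([4, 8, 13], n=100, p0=0.05)
```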

Network surveillance methods are becoming increasingly important as the ability to monitor a wide variety of data is rapidly expanding. Network traffic metrics are usually correlated count data that display a nonstationary pattern in their mean structures. We propose to model traffic counts using a generalized linear mixed model to capture these features. We then develop three tracking statistics for anomaly detection. Two of the statistics are derived as variants of a Bartlett-type likelihood ratio, which itself is not computationally tractable. The first of these variants is based on an approximation to the integrated likelihood, while the second is based on the concept of *h*-likelihood. We also consider a tracking statistic that is an exponentially weighted moving average. We compare the properties of the three tracking statistics in terms of false alarm rate (FAR) and detection power, and contrast the proposed tracking statistics with the current literature. Our comparisons show that the two generalized likelihood ratio variants are the preferred choices as statistical process control tools for network surveillance. Computational aspects of the three procedures are also discussed. While our application focus is network surveillance, our proposed methods apply to other applications with similar data characteristics. Copyright © 2014 John Wiley & Sons, Ltd.
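The third tracking statistic, the exponentially weighted moving average, can be sketched directly (the two likelihood ratio variants depend on the fitted mixed model and are not reproduced here):

```python
def ewma_track(xs, lam=0.2, target=None):
    """EWMA tracking statistic z_t = lam * x_t + (1 - lam) * z_{t-1},
    started at a target value (or at the first observation)."""
    z = target if target is not None else xs[0]
    out = []
    for x in xs:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# A level shift in the counts pulls the statistic away from the target.
z = ewma_track([10, 10, 20], lam=0.2, target=10.0)  # -> [10.0, 10.0, 12.0]
```

An alarm would be raised when z_t exceeds a control limit calibrated to the desired FAR.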

Bivariate nonstrict Archimedean copulas form a subclass of Archimedean copulas and are able to model the dependence structure of random variables that do not take on low quantiles simultaneously; i.e. their domain includes a set, the so-called zero set, with positive Lebesgue measure but zero probability mass. Standard methods to fit a parametric Archimedean copula, e.g. classical maximum likelihood estimation, either become computationally more involved or fail outright when dealing with this subclass. We propose an alternative method for estimating the parameter of a nonstrict Archimedean copula that is based on the zero set and the functional form of its boundary curve. This estimator is fast to compute and applies to absolutely continuous copulas but also allows for singular components. In a simulation study, we compare its performance to that of the standard estimators. Finally, the estimator is applied in modeling the dependence structure of quantities describing the quality of transmission in a quantum network, and it is shown how this model can be used effectively to detect potential intruders in the network. Copyright © 2014 John Wiley & Sons, Ltd.

In this paper, we develop a conditional-likelihood-based approach for estimating the equilibrium price and shares in markets with differentiated products and oligopoly supply. We model market demand using a discrete choice model with random coefficients and random utility. For most applications, the likelihood function of equilibrium prices and shares is intractable and cannot be directly analyzed. To overcome this, we develop a Markov Chain Monte Carlo simulation strategy to estimate parameters and distributions. To illustrate our methodology, we generate a dataset of prices and quantities simulated from a differentiated goods oligopoly across a number of markets. We apply our methodology to this dataset to demonstrate its attractive features as well as its accuracy and validity. Copyright © 2014 John Wiley & Sons, Ltd.

Our paper presents an empirical analysis of the association between firm attributes in electronic retailing and the adoption of information initiatives in mobile retailing. In our attempt to analyze the collected data, we find that the count of information initiatives exhibits underdispersion. Also, zero-truncation arises from our study design. To tackle the two issues, we test four zero-truncated (ZT) count data models—binomial, Poisson, Conway–Maxwell–Poisson, and Consul's generalized Poisson. We observe that the ZT Poisson model has a much inferior fit when compared with the other three models. Interestingly, even though the ZT binomial distribution is the only model that explicitly takes into account the finite range of our count variable, it is still outperformed by the other two Poisson mixtures that turn out to be good approximations. Further, despite the rising popularity of the Conway–Maxwell–Poisson distribution in recent literature, the ZT Consul's generalized Poisson distribution shows the best fit among all candidate models and suggests support for one hypothesis. Because underdispersion is rarely addressed in IT and electronic commerce research, our study aims to encourage empirical researchers to adopt a flexible regression model in order to make a robust assessment on the impact of explanatory variables. Copyright © 2014 John Wiley & Sons, Ltd.

This study offers an exploratory statistical analysis of the persistence of annual abnormal returns across a sample of firms from different European Union countries. To this end, a hierarchical Bayesian dynamic model has been used that enables the annual behaviour of those profits to be broken down into a permanent structural component and a transitory component, while also distinguishing between general effects affecting the industry as a whole and specific effects impacting on each firm in particular. This breakdown of the behaviour of profits allows for a more accurate assessment of the relative importance of these fundamental components by country and sector. Furthermore, through the Bayesian approach, it is possible to test different hypotheses about the homogeneity of the dynamic behaviour of the aforementioned components with respect to the sector and the country where the firm develops its activity.

We find that although both the industry and firm effects are significant, the latter are more important to explain the dynamic evolution of abnormal returns. Specifically, firm effects account for 68% of total variation of the abnormal returns and display a lower degree of persistence with adjustment speeds oscillating at around 34%, while industry effects only account for 9% and have adjustment speeds oscillating between 7% and 8%. However, this pattern is not homogeneous and depends on the sector and country in which the firm carries out its activity. These results highlight the need to take into account both aspects simultaneously in order to analyse the dynamic behaviour of abnormal returns. Copyright © 2014 John Wiley & Sons, Ltd.

In this paper we present an exact method for computing the Weibull renewal function and its derivative for application in maintenance optimization. The computational method provides a solid extension to previous work in which an approximation to the renewal function was used in a Bayesian approach to determine optimal replacement times. In the maintenance scenario, under the assumption that an item is replaced by a new one upon failure, the underlying process between planned replacement times is a renewal process. The Bayesian approach takes into account failure and survival information at each planned replacement stage to update the optimal time until the next planned replacement. To provide a simple approach to carry out in practice, we limit the decision process to a one-step optimization problem in the sequential decision problem. We make the Weibull assumption for the lifetime distribution of an item and accurately calculate the renewal function and its derivative. A method for finding zeros of a function is adapted to the maintenance optimization problem, making use of the availability of the derivative of the renewal function. Furthermore, we develop the maximum likelihood estimate version of the Bayesian approach and illustrate it with simulated examples. The maintenance algorithm retains the adaptive concept of the Bayesian methodology but reduces the computational burden. Copyright © 2014 John Wiley & Sons, Ltd.
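The paper computes the Weibull renewal function exactly; as a generic numerical stand-in, the renewal equation M(t) = F(t) + ∫₀ᵗ M(t−u) dF(u) can be solved on a grid by a simple Riemann–Stieltjes discretization:

```python
import math

def weibull_cdf(t, shape, scale):
    """Weibull distribution function F(t) = 1 - exp(-(t/scale)^shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape)) if t > 0 else 0.0

def renewal_function(t, shape, scale, n=1000):
    """Solve M(t) = F(t) + int_0^t M(t - u) dF(u) on an n-point grid by a
    simple Riemann-Stieltjes discretization (a generic sketch, not the
    paper's exact method). For shape = 1 the renewals form a Poisson
    process and M(t) = t / scale exactly."""
    h = t / n
    F = [weibull_cdf(i * h, shape, scale) for i in range(n + 1)]
    M = [0.0] * (n + 1)
    for i in range(1, n + 1):
        s = F[i]
        for j in range(1, i + 1):
            s += M[i - j] * (F[j] - F[j - 1])  # M(t_i - u) dF(u) increment
        M[i] = s
    return M[n]

m = renewal_function(1.0, 1.0, 1.0)  # exponential case: exact value is 1.0
```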

This paper quantifies the asymptotic behavior of sample arc lengths in a multivariate time series. Arc length is a natural measure of the fluctuations in a data series and can be used to quantify volatility. The idea is that processes with larger sample arc lengths exhibit larger fluctuations and hence suggest greater volatility. Here, a Gaussian functional central limit theorem for sample arc lengths is proven under finite second moment conditions. With equally spaced observations, the theory is shown to apply when the first differences of the series obey many of the popular stationary time series models in today's literature, including autoregressive moving-average, generalized autoregressive conditional heteroscedastic, and stochastic volatility model classes. A cumulative sum statistic is introduced to identify series regimes of differing volatilities. Our applications consider log prices of asset series. Specifically, the results are used to detect nonstationary periods of stock prices. Copyright © 2014 John Wiley & Sons, Ltd.
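Sample arc length itself is elementary to compute; for unit-spaced observations it is the sum of segment lengths of the polygonal path through the points:

```python
import math

def sample_arc_length(xs):
    """Arc length of the polygonal path through unit-spaced observations:
    sum of sqrt(1 + (x_{t+1} - x_t)^2). Larger values mean larger
    fluctuations and hence suggest greater volatility."""
    return sum(math.sqrt(1.0 + (b - a) ** 2) for a, b in zip(xs, xs[1:]))

flat = sample_arc_length([1.0, 1.0, 1.0, 1.0])   # constant series: length 3
rough = sample_arc_length([0.0, 1.0, 0.0, 1.0])  # alternating series: 3*sqrt(2)
```

For asset data one would apply this to log prices and compare arc lengths across candidate regimes.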

This paper presents a queue-length analysis of ** Geo** ∕

We propose a numerical method to evaluate the performance of the emerging Generalized Shiryaev–Roberts (GSR) change-point detection procedure in a ‘minimax-ish’ multi-cyclic setup where the procedure of choice is applied repetitively (cyclically) and the change is assumed to take place at an unknown time moment in a distant-future stationary regime. Specifically, the proposed method is based on the integral-equations approach and uses the collocation technique with the basis functions chosen so as to exploit a certain change-of-measure identity and the GSR detection statistic's unique martingale property. As a result, the method's accuracy and robustness improve, as does its efficiency, since using the change-of-measure ploy the Average Run Length (ARL) to false alarm and the Stationary Average Detection Delay (STADD) are computed simultaneously. We show that the method's rate of convergence is quadratic and supply a tight upper bound on its error. We conclude with a case study and confirm experimentally that the proposed method's accuracy and rate of convergence are robust with respect to three factors: (a) partition fineness (coarse vs. fine), (b) change magnitude (faint vs. contrast), and (c) the level of the Average Run Length to false alarm (low vs. high). Because the method is not restricted to a particular data distribution or to a specific value of the GSR detection statistic's head start, this work may help gain greater insight into the characteristics of the GSR procedure and aid a practitioner in designing the GSR procedure as needed while fully utilizing its potential. Copyright © 2014 John Wiley & Sons, Ltd.
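For intuition (the paper's method is deterministic, based on integral equations and collocation, not simulation), the GSR statistic obeys the recursion R_n = (1 + R_{n−1})·LR_n, and the ARL to false alarm can be roughly estimated by Monte Carlo; the Gaussian mean-shift likelihood ratio below is a standard illustrative choice.

```python
import math
import random

def gsr_run_length(A, mu, r=0.0, rng=None):
    """One pre-change run of the GSR procedure with head start r:
    R_n = (1 + R_{n-1}) * LR_n, alarm at the first n with R_n >= A.
    Observations are N(0, 1) pre-change; LR_n = exp(mu*x_n - mu^2/2) is the
    likelihood ratio for a post-change mean shift of size mu."""
    rng = rng or random.Random()
    R, n = r, 0
    while R < A:
        n += 1
        x = rng.gauss(0.0, 1.0)
        R = (1.0 + R) * math.exp(mu * x - 0.5 * mu * mu)
    return n

def arl_to_false_alarm(A, mu, r=0.0, n_rep=200, seed=7):
    """Crude Monte Carlo estimate of the ARL to false alarm; by the
    martingale property of R_n - n - r it should come out at least A."""
    rng = random.Random(seed)
    return sum(gsr_run_length(A, mu, r, rng) for _ in range(n_rep)) / n_rep

arl = arl_to_false_alarm(A=50.0, mu=1.0)
```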

This paper proposes a condition-based maintenance policy for deteriorating units, which are considered failed when their wear exceeds a critical threshold level, even if this exceedance is not associated with a sudden breakdown of the unit but only with degraded performance. Thus, the failure occurrence can be detected only at periodic inspections. In such a framework, given the unit age and state at inspection, a decision-making rule is proposed to choose the maintenance action to be taken, in order to extend the useful life of the unit without significantly increasing its failure probability, so as to reduce the life cycle cost. Both the case when all inspection times are planned and the case when an additional maintenance can be scheduled before the next planned inspection are considered. An application to a real case study referring to the wearing process of cylinder liners of some marine diesel engines is illustrated, with the further aim of highlighting the need to correctly model the degradation process. Copyright © 2014 John Wiley & Sons, Ltd.

Most of the methods developed for hydrothermal power system planning are based on scenario-based stochastic programming and therefore represent the stochastic hydro variable (water inflows) as a finite set of hydrological scenarios. As the level of detail in the models grows and the associated optimization problems become more complex, the need arises to reduce the number of scenarios without distorting the nature of the stochastic variable. In this paper, we propose a scenario reduction method for discrete multivariate distributions based on transforming the moment-matching technique into a combinatorial optimization problem. The method is applied to hydro inflow data from the Chilean Central Interconnected System and is benchmarked against results for the optimal operation of the Chilean Central Interconnected System determined with the selected subsets and the complete set of historical hydrological scenarios. Simulation results show that the proposed scenario-reduction method adequately approximates the probability distribution of the objective function of the operational planning problem. Copyright © 2014 John Wiley & Sons, Ltd.
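A greedy heuristic conveys the moment-matching idea (the paper instead formulates the subset choice as a combinatorial optimization problem): grow a subset whose raw moments track those of the full scenario set.

```python
def moment_mismatch(subset, scenarios, orders=(1, 2, 3)):
    """Squared distance between raw moments of an equally weighted subset
    and of the full scenario set, for the given moment orders."""
    def moments(data):
        return [sum(x ** k for x in data) / len(data) for k in orders]
    return sum((a - b) ** 2
               for a, b in zip(moments(scenarios), moments(subset)))

def reduce_scenarios(scenarios, m):
    """Greedy moment-matching reduction: repeatedly add the scenario that
    most reduces the moment mismatch. A heuristic stand-in for the exact
    combinatorial optimization, shown for univariate scenarios."""
    subset, remaining = [], list(scenarios)
    for _ in range(m):
        best = min(remaining,
                   key=lambda s: moment_mismatch(subset + [s], scenarios))
        subset.append(best)
        remaining.remove(best)
    return subset

chosen = reduce_scenarios([1.0, 2.0, 3.0, 4.0, 5.0], m=3)
```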

This paper studies the criterion of local risk-minimization for life insurance contracts in a financial market which includes longevity bonds. A longevity bond is a bond whose payments are linked to the current number of survivors in a given portfolio of insured lives. The number of survivors is modeled via a doubly stochastic process, where the mortality intensity is driven by a time-inhomogeneous Cox–Ingersoll–Ross model. In addition to the longevity bond, the financial market is assumed to consist of a traditional bond and a savings account. We define the price process of the longevity bond by introducing a pricing measure. The paper extends previous work in the literature to the case where the traded assets are not martingales under the measure used for determining the optimal strategies. We compare our results under the real measure with the former results on globally risk-minimizing strategies, obtained using an equivalent martingale measure. Copyright © 2014 John Wiley & Sons, Ltd.

We consider a finite-buffer queue where arrivals occur according to a batch Markovian arrival process (*BMAP*) and two servers are present in the system. At the beginning of a busy period, the low-performance server serves until the queue length reaches a critical level *b*; when the queue length is greater than or equal to *b*, the high-performance server starts working. The high-performance server serves until the queue length drops to a satisfactory level *a* (< *b*), at which point the low-performance server begins to serve again, and the process continues in this manner. The analysis has been carried out using a combination of the embedded Markov chain and supplementary variable methods. We obtain queue length distributions at pre-arrival, arbitrary and post-departure epochs, and some important performance measures, such as the probability of loss for the first, an arbitrary and the last customer of a batch, the mean queue length and the mean waiting time. The total expected cost function per unit time is derived in order to determine locally optimal values of *N*, *a* and *b* at minimum cost. Both partial- and total-batch rejection strategies have been analyzed. Also, we investigate the corresponding *BMAP*∕*G*−*G*∕1∕∞ queue using the matrix-analytic and supplementary variable methods, and calculate the previously described probabilities and performance measures for the infinite-buffer model as well. Finally, some numerical results are presented to show the effect of the model parameters on the performance measures. Copyright © 2014 John Wiley & Sons, Ltd.

In this paper, we describe the usefulness and the applications of multivariate conditional hazard rate functions. First, we define these functions, as well as the accumulated hazard functions, and then give some of their properties. Using these definitions and properties, we describe the total hazard construction and its main traits. Using the technical tools described previously, we define and discuss various stochastic orders, positive dependence concepts, and aging notions for nonnegative multivariate random vectors. Copyright © 2014 John Wiley & Sons, Ltd.

In this paper, we design a supply chain finance system with a manufacturer, a retailer and a commercial bank, where both the retailer and the manufacturer are capital constrained under demand uncertainties. We formulate a bi-level Stackelberg game for the supply chain finance system in which the bank acts as the leader and the manufacturer as the subleader. Considering the bankruptcy risks of the manufacturer and the retailer, we analyze the optimal financing interest rate for the commercial bank, the optimal order for the retailer and the optimal wholesale price for the manufacturer, respectively. We compare our model with two benchmark cases, that is, no financing scheme and an infinite-credit-line financing scheme, to identify the important interactions between the operational and financial decisions in the supply chain finance system. We conclude that different interest rates and credit lines affect the supply chain operations. Finally, we demonstrate the impacts of different capital levels and interest rates on the optimal decisions through numerical studies that validate our theoretical analysis. Copyright © 2014 John Wiley & Sons, Ltd.

For mission-critical or safety-critical systems, redundancy techniques are often applied to satisfy the stringent reliability requirements of the system design. Warm standby sparing is a common redundancy technique, which trades off the high energy consumption of hot standby techniques against the long recovery time of cold standby techniques. This paper considers a more general model for warm standby systems, that is, the demand-based warm standby system, where each component bears a nominal capacity and the system fails if the total capacity of the working components cannot meet the system demand. Moreover, fault level coverage is considered to model the imperfect coverage effect in the standby system. A multivalued decision diagram based approach is proposed to evaluate the reliability of the demand-based warm standby system subject to fault level coverage. Examples are given to illustrate the proposed method. Copyright © 2014 John Wiley & Sons, Ltd.
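The demand-based failure criterion can be illustrated by brute-force state enumeration for independent components (the paper's multivalued decision diagram approach and fault level coverage are not reproduced here):

```python
from itertools import product

def demand_based_reliability(caps, probs, demand):
    """Reliability of a demand-based standby system by enumerating component
    states: component i works with probability probs[i] and contributes
    capacity caps[i]; the system works when the total working capacity meets
    the demand. Components are assumed independent; imperfect coverage is
    not modeled in this sketch."""
    rel = 0.0
    for states in product((0, 1), repeat=len(caps)):
        p = 1.0
        for s, q in zip(states, probs):
            p *= q if s else (1.0 - q)
        if sum(c for s, c in zip(states, caps) if s) >= demand:
            rel += p
    return rel

# Invented example: two 5-unit components and one 3-unit component, demand 8.
r = demand_based_reliability(caps=[5, 5, 3], probs=[0.9, 0.9, 0.8], demand=8)
```

Decision diagram methods reach the same numbers without enumerating all 2^n states, which is why they scale to realistic systems.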

A two-component system is considered, which is subject to accumulative deterioration. Because of common stress, the components are dependent. Their joint deterioration is modelled with a bivariate nondecreasing Lévy process. The deterioration level of both components is known only through perfect and periodic inspections. By an inspection, components with deterioration level beyond a specific threshold are instantaneously replaced by new ones (corrective or preventive replacements). Otherwise, they are left as they are. Between inspections, failures remain unrevealed. This replacement policy is classical in a univariate setting, with deterioration modelled by a Gamma process. In the bivariate case, it leads to imperfect repairs at the system level, which highly complicates the study. The replacement policy is assessed through cost functions on both finite and infinite horizons, which take into account some economical dependence between components. Markov renewal theory is used to study the behaviour of the system, in a continuous and bivariate setting. Numerical experiments illustrate the study, considering a specific Lévy process with univariate Gamma processes as margins. Although technical details are not provided here for the numerical computations, the paper shows that there is a technical gap between the traditional one-dimensional studies and the present two-dimensional one, especially for the computation of the asymptotic distribution of the underlying Markov chain. Hence, there is a need for further development in the bivariate (or multivariate) setting. Copyright © 2014 John Wiley & Sons, Ltd.

This paper presents an autoregressive model for a finite sequence of random variables observed at points equally spaced on the unit circle. The proposed model is an extension of the well-known autoregressive model of time series. We demonstrate that this model amounts to a linear transformation of a vector of independent and identically distributed random variables. The second-order properties of the multivariate distribution are examined, and the least squares estimators of the model parameters are obtained. The connection between the proposed first-order model and a second-order, stationary, mean-square-continuous, real-valued random process on the unit circle is also considered. We use the proposed model to describe the fluctuations of hoop residual stresses in the rims of new railroad wheels. The stress measurement was performed using an ultrasonic method. The stress fluctuation model allowed us to determine the number of measurement points required to assess residual stress levels in the wheels. Copyright © 2014 John Wiley & Sons, Ltd.
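
The "linear transformation of an i.i.d. vector" claim can be made concrete for a first-order circular autoregression: writing X_j = ρ X_{(j−1) mod n} + ε_j as (I − ρP)X = ε, with P the cyclic shift matrix, gives X = (I − ρP)⁻¹ε. The sketch below is an assumed, illustrative form of such a model, not necessarily the paper's exact parametrization.

```python
import numpy as np

def circular_ar1(n, rho, sigma=1.0, seed=0):
    """Simulate a first-order circular autoregression on n equally
    spaced points of the unit circle: X_j = rho * X_{(j-1) mod n} + eps_j.
    The solve step makes explicit that X is a linear transformation of
    the i.i.d. Gaussian vector eps."""
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, sigma, n)
    P = np.roll(np.eye(n), -1, axis=1)     # (P @ x)[j] = x[(j-1) % n]
    x = np.linalg.solve(np.eye(n) - rho * P, eps)
    return x, eps

x, eps = circular_ar1(12, 0.5)
```

I − ρP is invertible whenever ρⁿ ≠ 1, so the construction is well defined for |ρ| < 1, and the recovered residuals x_j − ρ x_{(j−1) mod n} reproduce ε exactly.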

Based on a new multiscale hybrid structure for the volatility of the underlying asset price, we study the pricing of a European option in such a way that the resultant option price has a desirable correction to the Black–Scholes formula. The correction effects are obtained by asymptotic analysis based upon an Ornstein–Uhlenbeck diffusion that decorrelates rapidly while fluctuating on a fast time scale. The resulting implied volatilities exhibit a smile effect (the right geometry), which overcomes the major drawback of the Black–Scholes model as well as of local volatility models, and move in the right direction as the underlying asset price increases (the right dynamics), which fits observed market behavior and removes the possible hedging instability from which local volatility models may suffer. Further, we demonstrate through a calibration exercise that our correction brings significant improvement in the fit to the implied volatility surface. Copyright © 2014 John Wiley & Sons, Ltd.

The primary aim of this paper is to demonstrate the use and value of spatial statistical analysis in business, and especially in designing economic policies for rural areas. Specifically, we aim to present, under a unified framework, the use of both point-based and area-based methods in order to analyze economic data in depth, as well as to draw conclusions by interpreting the analysis results. The motivating problem is related to the establishment of women-run enterprises in a rural area of Greece. In this article, the spatial scan statistic is successfully applied to the spatial economic data at hand in order to detect possible clusters of small women-run enterprises in a rural, mountainous, and disadvantaged region of Greece. It is then combined with a Geographical Information System-based Local Indicator of Spatial Autocorrelation analysis for further exploring and interpreting the spatial patterns. The rejection of the hypothesis of random establishment of women-run enterprises and the interpretation of the clustering patterns are necessary steps in assisting the government in designing policies for rural development. Copyright © 2014 John Wiley & Sons, Ltd.

The variable annuity product has many desirable features for retirement saving purposes, such as stock-linked growth potential, protection against investment losses, and a guaranteed minimum payout amount at annuitization. It is therefore of great interest to study this product when designing next-generation retirement solutions. Policyholder behavior is one of the most important profit-or-loss factors for the variable annuity product, yet insurance companies generally do not have sophisticated models for it at the current time. This paper discusses a new approach that uses modern statistical learning techniques to model policyholder withdrawal behavior, with promising results. Copyright © 2014 John Wiley & Sons, Ltd.
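
To give the modeling task a concrete shape, the sketch below fits a logistic withdrawal model to synthetic data. Everything here is assumed for illustration — the features (age and the guarantee's "moneyness"), the coefficient values, and the plain gradient-ascent fit are stand-ins; the paper's statistical learning techniques and real policyholder data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic, purely illustrative policyholder data: age and moneyness
# (guarantee value relative to account value) drive withdrawal.
n = 2000
age = rng.uniform(50, 80, n)
moneyness = rng.uniform(0.5, 1.5, n)
true_logit = -8.0 + 0.08 * age + 2.0 * moneyness      # assumed true model
withdraw = rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))

# Fit a logistic model by gradient ascent on the log-likelihood --
# a minimal stand-in for a modern statistical learning pipeline.
feats = np.column_stack([age, moneyness])
feats = (feats - feats.mean(0)) / feats.std(0)        # standardize
X = np.column_stack([np.ones(n), feats])
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (withdraw - p) / n               # score step
```

With both effects positive in the generating model, the fitted coefficients for age and moneyness come out positive — the kind of behavioral signal an insurer would feed into lapse and withdrawal assumptions.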

In this paper, we establish closed-form formulas for key probabilistic properties of the cone-constrained optimal mean-variance strategy, in a continuous market model driven by a multidimensional Brownian motion with deterministic coefficients. In particular, we compute the probability of reaching a point, during the investment horizon, at which the accumulated wealth is large enough to be fully reinvested in the money market and grow there safely to meet the investor's financial goal at the terminal time. We conclude that the result of Li and Zhou [*Ann. Appl. Prob.*, v.16, pp.1751–1763, (2006)] in the unconstrained case carries over when conic constraints are present: the former probability is bounded below by 80% regardless of the market coefficients, trading constraints, and investment goal. We also compute the expected terminal wealth given that the investor's goal is underachieved, for both the mean-variance strategy and the aforementioned hybrid strategy, in which the transfer to the money market occurs if it allows the investor to safely achieve the goal. These probabilities and expectations are also provided for the case where all risky assets held are liquidated if financial distress is encountered. The results provide investors with novel practical tools to support portfolio decision-making and analysis. Copyright © 2013 John Wiley & Sons, Ltd.

This article considers the modeling of count data time series with a finite range exhibiting extra-binomial variation. We propose a beta-binomial autoregressive model based on the concept of random coefficient thinning. We discuss the stationarity conditions, derive the moments and the autocovariance function, and consider approaches to parameter estimation. Furthermore, we develop two new tests for detecting extra-binomial variation and derive the asymptotic distributions of the test statistics under the null hypothesis of a binomial autoregressive model. The size and power performance of the two tests are analyzed under various alternatives taken from a beta-binomial autoregressive model, using Monte Carlo experiments. The article ends with a real-data example concerning the Harmonised Index of Consumer Prices of the European Union. Copyright © 2013 John Wiley & Sons, Ltd.
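
Random coefficient thinning can be illustrated with a short simulation. In the sketch below the survival and recruitment probabilities of a binomial AR(1) recursion are redrawn from beta distributions at every step, which inflates the marginal variance beyond the binomial benchmark; this is one plausible construction, and the paper's exact parametrization may differ.

```python
import random

def beta_binomial_ar1(T, n, a1, b1, a2, b2, seed=0):
    """Illustrative path of a binomial AR(1)-type model on {0,...,n}
    with random coefficient thinning: at each step the 'survival'
    probability alpha_t ~ Beta(a1, b1) and 'recruitment' probability
    beta_t ~ Beta(a2, b2) are drawn afresh, producing extra-binomial
    variation in the marginals."""
    rng = random.Random(seed)
    binom = lambda m, p: sum(rng.random() < p for _ in range(m))
    x = n // 2
    path = [x]
    for _ in range(T):
        alpha = rng.betavariate(a1, b1)   # random thinning probability
        beta = rng.betavariate(a2, b2)    # random recruitment probability
        x = binom(x, alpha) + binom(n - x, beta)
        path.append(x)
    return path

path = beta_binomial_ar1(500, 20, 2, 2, 1, 3)
```

Fixing alpha and beta at their means recovers the ordinary binomial AR(1); comparing the sample variance of the two versions is essentially what the paper's extra-binomial-variation tests formalize.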

Cure models represent an appealing tool for analyzing default time data in which two groups of companies are supposed to coexist: those which could eventually experience a default (uncured) and those which could never reach that endpoint (cured). One of their most interesting properties is the possibility of distinguishing covariates that influence the probability of belonging to the population's uncured fraction from those affecting the default time distribution. This feature allows a separate analysis of the two dimensions of default risk: *whether* a default can occur and *when* it will occur, given that it can. Basing our analysis on a large sample of Italian firms, we estimate the probability of being uncured with a binary logit regression, whereas a discrete-time version of Cox's proportional hazards approach is used to model the time distribution of defaults. The extension of the cure model to a forecasting framework is then accomplished by replacing the discrete-time baseline function with an appropriate time-varying, system-level covariate able to capture the underlying macroeconomic cycle. We propose a holdout sample procedure to test the classification power of the cure model. When compared with a single-period logit regression and a standard duration analysis approach, the cure model proves more reliable in terms of overall predictive performance. Copyright © 2013 John Wiley & Sons, Ltd.

The hybrid censoring scheme is a combination of the Type-I and Type-II censoring schemes. Determining the optimum hybrid censoring scheme is an important practical issue in designing life-testing experiments to enhance the information on product reliability. In this work, we consider the determination of optimum life-testing plans under the hybrid censoring scheme by minimizing the total cost associated with the experiment. It is shown that the proposed cost function is scale invariant for some selected distributions. Because the optimum solution cannot be obtained analytically, we propose a method for obtaining it and use the Weibull distribution for illustration. We also study the sensitivity of the optimal solution to misspecification of parameter values and cost components through a well-designed sensitivity analysis. Copyright © 2013 John Wiley & Sons, Ltd.

Many service industries require a high level of system availability to be competitive. An appropriate system unavailability metric is important for business decisions and for minimizing operational risks. In practice, a system can be unavailable for service because of multiple types of events, and the durations of these events can also vary. In addition, the data that record the system operating history often have a complicated structure. In this paper, we develop a framework for estimating a system unavailability metric based on historical data from a fleet of heavy-duty industrial equipment, which we call System A. During the useful life of System A, repairs and maintenance actions are performed; however, not all of them were recorded. Specifically, the information on event times, types, and durations is available only for certain time intervals (i.e., observation windows) instead of the entire useful life span of the system. Thus, the data structure is window-observed recurrent events with multiple event types. We use a nonhomogeneous Poisson process model with a bathtub intensity function to describe the recurrent events, and a truncated lognormal distribution to describe the event durations. We then define a conservative metric for system unavailability, obtain an estimate of this metric, and quantify the statistical uncertainty. Copyright © 2013 John Wiley & Sons, Ltd.
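
The two modeling ingredients — an NHPP with bathtub intensity and truncated lognormal durations — are easy to prototype by simulation. The sketch below uses an assumed quadratic bathtub intensity, Ogata-style thinning for the event times, and a 48-hour truncation point; none of these values come from the paper, which estimates its model from fleet data rather than simulating it.

```python
import math
import random

def simulate_unavailability(T=100.0, seed=3):
    """Illustrative sketch: recurrent outages from an NHPP with a
    bathtub-shaped intensity (high near both ends of life), generated
    by thinning against a constant envelope, with lognormal outage
    durations truncated at 48 hours.  Time unit: days.  Returns the
    simulated fraction of time the system is unavailable."""
    rng = random.Random(seed)
    lam = lambda t: 0.05 + 0.4 * (2 * t / T - 1) ** 2   # bathtub intensity
    lam_max = 0.45                                       # envelope rate
    t, down = 0.0, 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t >= T:
            break
        if rng.random() < lam(t) / lam_max:              # thinning step
            dur_h = math.exp(rng.gauss(0.0, 1.0))        # lognormal hours
            down += min(dur_h, 48.0) / 24.0              # truncate, to days
    return down / T

u = simulate_unavailability()
```

Truncating the duration distribution caps the influence of rare extreme outages, which is one way to keep the resulting unavailability metric conservative and stable.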

Start-up demonstration tests were first discussed in the quality/reliability literature about three decades ago. Since then, many variations of these tests have been introduced, and the corresponding distributional characteristics and inferential methods have also been studied. All these developments, based on independent and identically distributed binary trials, have been further generalized to some other forms of trials such as Markov-dependent trials, exchangeable trials and multistate trials. In this paper, we provide a comprehensive review of all these results and highlight some unifications of the results. We also describe a general estimation method and then present several numerical examples to illustrate some of the models and methods described here. Finally, a number of open issues in this area of research are pointed out. Copyright © 2014 John Wiley & Sons, Ltd.
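
For the simplest of the tests reviewed — accept the unit after k consecutive successful start-ups of i.i.d. Bernoulli trials — the expected test length has a short first-step derivation. The sketch below (an illustration, not taken from the paper) writes the expected remaining trials after i consecutive successes as E_i = a_i + b_i·E_0 and unwinds the recursion E_i = 1 + p·E_{i+1} + (1 − p)·E_0 with E_k = 0.

```python
def expected_startups(p, k):
    """Expected number of i.i.d. start-up attempts (success prob. p)
    until k consecutive successful start-ups are observed.
    Solves E_i = 1 + p*E_{i+1} + (1-p)*E_0, E_k = 0, by carrying the
    affine representation E_i = a_i + b_i * E_0 backwards from i = k."""
    a, b = 0.0, 0.0                      # a_k = b_k = 0: test passed
    for _ in range(k):
        a, b = 1.0 + p * a, p * b + (1.0 - p)
    return a / (1.0 - b)                 # solve E_0 = a_0 + b_0 * E_0
```

The result agrees with the classical closed form (1 − pᵏ)/((1 − p)pᵏ) for the waiting time of a success run, e.g. about 3.72 attempts for p = 0.9 and k = 3.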

This paper is concerned with the optimal design of queueing systems. The main decisions in the design of such systems are the number of servers, the appropriate control of the arrival rates, and the appropriate service rate the servers should possess. In formulating the objective function for this problem, most publications use only linear cost rates. Linear rates, especially for the waiting cost, do not accurately reflect reality. Although there are papers involving nonlinear cost functions, none has considered polynomial cost functions of degree higher than two, because simple formulas for computing the higher moments have not been available in the literature. This paper is an attempt to fill that gap. The main contributions of our work are as follows: (i) the derivation of a very simple formula for the higher moments of the waiting time in the M/M/s queueing system, which requires only knowledge of the expected waiting time; (ii) a proof of their convexity with respect to the design variables; and (iii) the modeling and solution of more realistic design problems involving general polynomial cost functions. We also focus on the simultaneous optimization of the staffing level, arrival rate, and service rate. Copyright © 2013 John Wiley & Sons, Ltd.
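
The structure behind contribution (i) can be made explicit with standard M/M/s facts: the waiting time is zero with probability 1 − C (C being the Erlang C delay probability) and exponential with rate sμ − λ otherwise, so E[Wᵐ] = m!·E[W]/(sμ − λ)^{m−1} — every higher moment follows from the mean alone. The code below illustrates this textbook identity; it is not the paper's derivation or proof of convexity.

```python
import math

def erlang_c(lam, mu, s):
    """Delay probability P(W > 0) in a stable M/M/s queue (Erlang C)."""
    a = lam / mu                              # offered load
    rho = a / s
    top = a ** s / (math.factorial(s) * (1 - rho))
    bottom = sum(a ** k / math.factorial(k) for k in range(s)) + top
    return top / bottom

def waiting_moment(lam, mu, s, m):
    """m-th moment of the M/M/s waiting time.  Conditionally on
    waiting, W ~ Exp(s*mu - lam), hence
    E[W^m] = m! * E[W] / (s*mu - lam)**(m - 1):
    only the expected wait E[W] is needed."""
    ew = erlang_c(lam, mu, s) / (s * mu - lam)    # E[W]
    return math.factorial(m) * ew / (s * mu - lam) ** (m - 1)
```

Since each moment is a constant multiple of E[W], a polynomial waiting-cost rate of any degree reduces to a closed-form function of the design variables (λ, μ, s), which is what makes higher-degree cost models tractable.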

The preservation of reliability aging classes under the formation of coherent systems is a relevant topic in reliability theory. For example, it is well known that the new better than used class is preserved under the formation of coherent systems with independent components. Surprisingly, however, the increasing failure rate class is not preserved even in the independent and identically distributed case; that is, the components may have the (negative) aging increasing failure rate property while the system does not. In this paper, we study conditions for the preservation of the main reliability classes under the formation of general coherent systems. These results can be applied to systems with either independent or dependent components. We consider both the case of systems with identically distributed components and the case of systems whose components have different distributions. Copyright © 2013 John Wiley & Sons, Ltd.

Estimation of retail demand is critical to decisions about procuring, shipping, and shelving. The idea of a Poisson demand process is central to retail inventory management, and numerous studies suggest that the negative binomial (NB) distribution characterizes retail demand well. In this study, we reassess the adequacy of estimating retail demand with the NB distribution. We propose two Poisson mixtures—the Poisson–Tweedie family (PTF) and the Conway–Maxwell–Poisson distribution—as generic alternatives to the NB distribution. On the basis of the likelihood principle and information theory, we adopt out-of-sample likelihood as a metric for model selection. We test the procedure on consumer demand for 580 stock-keeping-unit store sales datasets. Overall, the PTF and the Conway–Maxwell–Poisson distribution outperform the NB distribution for 70% of the tested samples. As a general case of the NB model, the PTF performs particularly well on datasets with relatively small means and high dispersion. Our findings carry useful implications for researchers and practitioners who seek flexible alternatives to the oft-used NB distribution in characterizing retail demand. Copyright © 2013 John Wiley & Sons, Ltd.
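
The out-of-sample likelihood criterion itself is simple to demonstrate. The sketch below compares a plain Poisson model against a moment-fitted NB model on synthetic overdispersed "demand" (a gamma-mixed Poisson, which is NB by construction); it illustrates the selection metric only — the paper's PTF and Conway–Maxwell–Poisson fits require heavier machinery and are not reproduced here.

```python
import math
import random

def _poisson(rng, lam):
    # Knuth's multiplication method; adequate for the small means here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def nb_logpmf(k, r, p):
    """Log pmf of NB(r, p): C(k+r-1, k) * (1-p)^r * p^k."""
    return (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
            + r * math.log(1 - p) + k * math.log(p))

rng = random.Random(7)
# Overdispersed synthetic demand: Poisson rates mixed over a gamma law.
data = [_poisson(rng, rng.gammavariate(2.0, 1.5)) for _ in range(1000)]
train, test = data[:500], data[500:]

# Fit on train (Poisson MLE vs. moment-matched NB), score on test.
m = sum(train) / len(train)
v = sum((x - m) ** 2 for x in train) / (len(train) - 1)
ll_pois = sum(x * math.log(m) - m - math.lgamma(x + 1) for x in test)
p_hat = 1 - m / v                    # valid because v > m (overdispersion)
r_hat = m * m / (v - m)
ll_nb = sum(nb_logpmf(x, r_hat, p_hat) for x in test)
```

Because the held-out data are genuinely overdispersed, the NB model attains the higher out-of-sample log-likelihood — the same comparison the paper runs between NB and its Poisson-mixture competitors.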

Adding another fraction to an initial fractional factorial design is often required to resolve ambiguities with respect to aliasing of factorial effects from the initial experiment and/or to improve estimation precision. Multiple techniques for design follow-up exist; the choice of which is often made on the basis of the initial design and its analysis, resources available, experimental objectives, and so on. In this paper, we compare four design follow-up strategies: foldover, semifoldover, *D*-optimal, and Bayesian (*MD*-optimal) in the context of a metal-cutting case study previously utilized to compare fractional factorials of different run sizes. Follow-up designs are compared for each of a , , and Plackett–Burman initial experiments. Our empirical results suggest that a single follow-up strategy does not outperform all others in every situation. This case study serves to illustrate design augmentation possibilities for practitioners and provides some basis for the selection of a follow-up experiment. Copyright © 2013 John Wiley & Sons, Ltd.

In an earlier reference, we provided a review of the regular, sample path and strong stochastic concepts of stochastic convexity in both univariate and multivariate settings, jointly with most of their applications, and some other new results for analysing communication systems on the basis of biologically inspired models. This article provides a comprehensive discussion about the regular notion of stochastic increasing and directional convexity, denoted by *SI* − *DCX*, introduced by Meester and Shanthikumar for a general partially ordered space. We study the connection of the *SI* − *DCX* property of *X*(**θ**) for

Chance-constrained programming (CCP) is a well-known and widely used stochastic programming approach. In the CCP approach, determining the confidence levels of the constraints at the beginning of the solution process is a critical issue for optimality. On the one hand, it is possible to obtain better solutions at different confidence levels. On the other hand, decision makers prefer to simplify their choices instead of grappling with details such as determining confidence levels for all chance constraints. Reliability is an effective tool that enables the decision maker to assess the system's integrity. In this paper, the CCP is formulated as a reliability-based nonlinear multiobjective model, and a simulated annealing (SA) algorithm is developed to solve it. By exploring different confidence levels, the SA algorithm presents the decision makers with alternative solutions at different reliability degrees. Thus, the decision makers have the opportunity to make more effective decisions. Copyright © 2013 John Wiley & Sons, Ltd.
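
A toy version of "SA on a chance-constrained model" can be sketched in a few lines. Everything below is an assumption for illustration — the two-variable linear objective, the lognormal constraint coefficients, the sample-average estimate of the chance constraint, and the cooling schedule are stand-ins for the paper's actual multiobjective model and algorithm.

```python
import math
import random

def sa_ccp(alpha, iters=3000, seed=5):
    """Maximize x + y subject to P(A*x + B*y <= 10) >= alpha, with A, B
    lognormal, via simulated annealing.  The chance constraint is
    estimated on a fixed scenario sample (sample-average approximation)
    and infeasible proposals are rejected outright."""
    rng = random.Random(seed)
    scen = [(math.exp(rng.gauss(0, 0.3)), math.exp(rng.gauss(0, 0.3)))
            for _ in range(500)]
    feas = lambda x, y: sum(a * x + b * y <= 10 for a, b in scen) / 500 >= alpha
    x = y = 1.0                                   # feasible starting point
    best = (x + y, x, y)
    T = 1.0
    for _ in range(iters):
        nx, ny = x + rng.gauss(0, 0.2), y + rng.gauss(0, 0.2)
        if nx < 0 or ny < 0 or not feas(nx, ny):
            continue                              # reject infeasible moves
        if nx + ny > x + y or rng.random() < math.exp((nx + ny - x - y) / T):
            x, y = nx, ny
            if x + y > best[0]:
                best = (x + y, x, y)
        T *= 0.999                                # geometric cooling
    return best

val, bx, by = sa_ccp(0.9)
```

Re-running the search with different values of alpha produces the menu of solutions at different reliability degrees that the paper offers to the decision maker: lower alpha buys a larger objective at the price of a weaker probabilistic guarantee.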

This paper deals with pricing a contract under which a dealer buys back a car from a client for a cash amount specified in a given depreciation table. The value of the car is supposed to depreciate according to a stochastic model, with random repairs modeled by a Poisson process. Copyright © 2013 John Wiley & Sons, Ltd.
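
A Monte Carlo sketch of such a valuation is straightforward. The model below is an illustrative guess, not the paper's: the car depreciates exponentially, a Poisson stream of incidents knocks exponential amounts off its value, and the expected value at the buy-back date is what the dealer would compare against the depreciation-table price; all parameter values are invented.

```python
import math
import random

def expected_buyback_value(v0=30000.0, delta=0.2, lam=1.5,
                           mean_damage=800.0, T=3.0,
                           n_paths=20000, seed=11):
    """Expected car value at the buy-back date T (years): exponential
    depreciation at rate delta plus a Poisson(lam per year) stream of
    incidents, each reducing the value by an Exp(mean_damage) amount.
    The value is floored at zero.  Purely illustrative parameters."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        v = v0 * math.exp(-delta * T)            # deterministic depreciation
        t = rng.expovariate(lam)
        while t < T:                             # Poisson incident times
            v -= rng.expovariate(1.0 / mean_damage)
            t += rng.expovariate(lam)
        total += max(v, 0.0)
    return total / n_paths

price = expected_buyback_value()
```

A dealer would quote a buy-back amount at or below this expectation; the gap between the table entry and the simulated expectation is the margin covering the randomness of the repair history.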
