CLOUD computing



CLOUD computing (also called grid or utility computing, or computing on-demand), which was the talk of computing circles at the end of the 1990s, has once again become a relevant computational topic. CLOUD computing, often considered the fifth utility after water, electric power, gas, and telephony, is based on hosting services on clusters of computers housed in server farms. This article reviews the fundamentals of CLOUD computing, its operational modeling, and the quantitative (statistical) risk assessment of its much-neglected service-quality issues. As an example of a CLOUD, a set of distributed parallel computers is considered, working independently or dependently but additively to serve the cumulative needs of a large number of customers requiring service. Quantitative methods of statistical inference on quality of service (QoS), or conversely loss of service (LoS), as commonly used customer-satisfaction metrics of system reliability and security performance, are reviewed. The goal of these methods is to determine how to improve the quality of a CLOUD operation and which countermeasures to take. A discrete event simulation is also reviewed that estimates the risk indices in a large CLOUD computing environment, comparing favorably with the intractable and lengthy theoretical Markov solutions. WIREs Comp Stat 2011 3 47–68 DOI: 10.1002/wics.139
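The abstract's notion of estimating LoS by discrete event simulation, rather than by an intractable Markov solution, can be illustrated with a minimal sketch. The model below is an assumption for illustration only (it is not the article's simulator): a CLOUD of identical servers, each alternating exponentially distributed up- and down-times, where LoS is taken as the fraction of time that fewer servers are up than the cumulative customer demand requires. All parameter values and names (`simulate_los`, `mean_up`, `mean_down`, `demand`) are hypothetical.

```python
import random

def simulate_los(n_servers=5, demand=3, mean_up=100.0, mean_down=5.0,
                 horizon=10_000.0, seed=42):
    """Estimate the loss-of-service (LoS) probability of a CLOUD of
    n_servers identical servers by discrete event simulation.

    Each server alternates exponential up-times (mean_up) and
    down-times (mean_down). LoS is the fraction of the horizon during
    which fewer than `demand` servers are up, i.e. the cumulative
    customer demand cannot be met. Illustrative parameters only.
    """
    rng = random.Random(seed)
    up = [True] * n_servers                     # all servers start up
    next_event = [rng.expovariate(1.0 / mean_up) for _ in range(n_servers)]
    now, loss_time = 0.0, 0.0
    while now < horizon:
        # advance to the earliest state change (failure or repair)
        i = min(range(n_servers), key=lambda j: next_event[j])
        t = min(next_event[i], horizon)
        if sum(up) < demand:                    # time spent in a loss state
            loss_time += t - now
        now = t
        if now >= horizon:
            break
        up[i] = not up[i]                       # toggle failed/repaired server
        mean = mean_up if up[i] else mean_down
        next_event[i] = now + rng.expovariate(1.0 / mean)
    return loss_time / horizon
```

With independent servers this sketch has a closed-form check: each server's steady-state availability is `mean_up / (mean_up + mean_down)`, so the exact LoS is a binomial tail probability, which the simulated estimate should approach as the horizon grows. The appeal of simulation, as the abstract notes, is that it still works when server dependence or realistic repair policies make such Markov-style formulas intractable.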
