Workload aware autonomic resource management scheme using grey wolf optimization in cloud environment

Autonomic resource management in the cloud is a challenging task because of its huge, heterogeneous, and distributed environment. Several service providers in the cloud offer different sets of cloud services. These services are delivered to clients through a cloud network and need to satisfy the Quality-of-Service (QoS) requirements of users without violating the Service Level Agreements (SLAs). This can only be managed through autonomic cloud resource management frameworks. However, most of the existing frameworks are not efficient at managing cloud resources because of the varied applications and environments of the cloud. To overcome these problems, this paper proposes the Workload-Aware Autonomic Resource Management Scheme (WARMS) for the cloud environment. Initially, the cloud workloads are clustered by a Modified Density Peak Clustering Algorithm. The workload scheduling process is then carried out using fuzzy logic based on cloud resource availability. The autonomic system uses Grey Wolf Optimization for virtual machine deployment to achieve optimal resource provisioning. The WARMS system focuses on reducing SLA violations, cost, energy usage, and time while providing better QoS. The simulation results show that WARMS delivers cloud services more efficiently, with a minimized violation rate and enhanced QoS.

parameters but fail to address the type of the task and the category of workload [9,10]. Allocating additional resources does not assure performance when the workload type is unknown. CPU-intensive applications only require an additional processing unit to accomplish their tasks, but present resource allocation schemes may also add a set of other resources that are not required to run the specific applications. Such over-allocation of resources reduces resource utilization and affects the scalability of resources.
Moreover, due to the dynamic behaviour of the cloud environment, the resources demanded by the cloud users and the number of resources allocated for processing change frequently over time [11,12]. Hence, the autonomic cloud management system must adapt to the dynamic behaviour of tasks and control the trade-off between task types and QoS parameters. To mitigate the above issues, an intelligent autonomic resource management scheme must be developed for data centres to accomplish frequently changing task demands with a limited number of available resources.
The distributed cloud environment consists of heterogeneous computing resources and different resource-intensive workloads. Clustering of workloads therefore becomes significant, as it can efficiently reduce the problems related to scheduling and ease the process of scheduling multiple workloads in distributed computing. Clustering is basically used to group objects exhibiting similar features. Conventional studies employ the K-means clustering technique because of its high grouping performance, but the dynamic nature and the large number of workloads make traditional K-means inefficient. Moreover, the initial centroid in K-means is chosen randomly, and because the results depend strongly on this initial centroid, the solutions may be inappropriate. The typical first-in-first-out (FIFO) and Fair-share scheduling techniques also fail to effectively match tasks to hosting nodes. The proposed WARMS system uses a Modified Density Peak Clustering Algorithm (MDPCA), which finds cluster centres quickly with the help of a kernel function; fuzzy logic is then employed to schedule the workload. Grey Wolf Optimization (GWO) is utilized to select the optimal VM for executing the workload; during this process, faulty or malicious VMs in the network are detected and removed.
The remainder of this paper is organized as follows. Related works on autonomic resource management are discussed in Section 2. A detailed description of our proposed autonomic resource management scheme is given in Section 3. The proposed scheme's simulation results are demonstrated and discussed in Section 4. Section 5 concludes the paper.

RELATED WORKS
Some recent works related to autonomic resource management are listed below. Suresh et al. [13] presented a resource allocation method combining several techniques. The KFCM algorithm was utilized to cluster the available resources, and tasks were allotted to the available resources with a Modified Cloud Resource Provisioning (MCRP) algorithm. In the MCRP framework, the optimal cost was first selected using the PSO algorithm; the result of this step was then used to allot the resources. The authors noted that most traditional k-modes clustering techniques might produce multiple varieties of destination clusters within the created cluster mode range.
Domanal et al. [14] presented a resource scheduling method with a HYBRID bio-inspired algorithm. The researchers focused on cataloguing the jobs, assigning the tasks, and managing the cloud resources using the MPSO, MCSO, and HYBRID bio-inspired procedures. The time intervals of the jobs remain steady even if multiple jobs arrive at the same time. The resources needed for processing were allotted after correctly assigning the jobs to the corresponding VMs. The MPSO algorithm performs a crucial function in allotting the incoming jobs to the VMs as efficiently as possible. Clustering depends on the number of VMs taken for experimentation, and the VMs must have sufficient resources, controlled by MPSO, to execute the customers' requests.
Ghobaei-Arani et al. [15] presented a resource provisioning technique for cloud applications that provide various services. The authors integrated the concepts of two techniques, Reinforcement Learning (RL) and autonomic computing, and utilized IBM's MAPE control loop to achieve autonomic computing. The MAPE agent perceives its environment using sensors; based on these perceptions, it continuously establishes the activities to be performed within the environment and controls the VMs allotted to every service provider at distinct time durations.
Marinescu et al. [16] suggested a self-managing framework for scaling the management schemes of cloud resources. The framework includes a market-based tool that acts as an agent to accomplish the principles of cloud resource management systems and determines how to expand the system so that resources are managed automatically. Clear communication between the cloud consumer and provider is assumed in this approach. At run time, applications are expressed as workloads in which the nodes characterize the service steps and the edges represent the structure of the applications. The authors state that this process can greatly reduce cost.
Sotiriadis et al. [17] presented a self-managing method for VM scheduling in a cloud environment. The researchers used the OpenStack platform for public and private cloud establishment. In the standard OpenStack structure, the host for VM placement is chosen based on the offered memory until the VM limit is exceeded; these operations may overload the utilized PMs and leave them with low RAM. The presented method is effective and adaptable owing to its utilization of the historical data of PMs and VMs. Additionally, a machine learning technique is utilized in this self-managing scheduling method for pattern extraction, performed by examining the data with continuous inspection.
Filelis-Papadopoulos et al. [18] extended a cloud simulation framework to support self-managing and self-organizing abilities for cloud resources. In the presented framework, the simulation dynamically alters the state of the system at every time step, using a time-advance looping technique that combines the components of the system. Compared to the classical management approach, the proposed hierarchical VM placement approach delivered much more accurate decisions when placing VMs.
Zuo et al. [19] introduced a Self-adaptive threshold based Dynamic Weighted load evaluation Method (SDWM). For estimating the state of the cloud resources accurately, they first presented dynamic evaluation indicators. SDWM separates the load of resources into three different states: normal, idle, and overloaded. Finally, the researchers used their method as an energy evaluation model to describe the energy level utilized for migrating resources based on user requests. The method attains excellent adaptability even when resources leave or enter dynamically.
For efficient resource management, Zahoor et al. [36] portrayed a cloud/fog based Smart Grid (SG) model. Fog computing helps to improve the system's response time and to reduce delay; hence, in the SG domain, cloud-fog computing is used for resource management. Load balancing algorithms such as ACO, PSO, and ABC were used alongside the proposed HABACO algorithm for optimal resource management in the cloud and fog based environment. In total, five load balancing algorithms are employed between SG users' requests and service providers. The work mainly defines a hierarchical cloud-fog computing structure to offer several computing services for SG resource management.
Xiong et al. [37] developed cloud/fog computing resource management and pricing for blockchain networks. Price-based computing resource management was investigated to support offloading of mining tasks to the cloud/fog in proof-of-work based public blockchain networks. A two-stage Stackelberg game model was adopted to investigate the profit maximization of the cloud/fog provider and the utility maximization of the miners. The ideal resource management scheme with discriminatory and uniform pricing for the cloud/fog provider was also applied and studied. The proposed analytical model was validated by a real experiment, and the network performance was evaluated using simulations, which aid the cloud/fog provider in gaining the highest profit and achieving optimal resource management.
Gai et al. [38] presented a heterogeneous cloud computing based resource management scheme for sustainable cyber-physical systems (CPS). The sustainability of the system was increased by combining CPS with a heterogeneous cloud computing method, and task assignment is treated as an NP-hard problem. Tasks are assigned to the heterogeneous clouds in the Smart Cloud-based Optimizing Workload (SCOW) model using predictive cloud capacities. The competitiveness of enterprises is hampered by unstable service demands. The Workload Resource Minimization (WRM), Smart Task Assignment (STA), and Task Mapping (TMA) algorithms were proposed to reach the optimization objective. Table 1 summarizes the recent autonomic resource provisioning works, discussing their techniques, objectives, performance metrics, advantages, and disadvantages.
The significant capability of a VM deployment algorithm is to balance the workloads across the resources for optimal performance, so varied task parameters are considered for VM deployment. Available resources should be effectively utilized to achieve fair resource utilization. As the number of users increases, the number of tasks to be scheduled also increases, so better algorithms are needed to deploy VMs. VM deployment algorithms are service-oriented and vary across environments. Meta-heuristic optimization algorithms are employed to improve the effectiveness or reduce the cost of the scheduling process, and most scheduling problems use various optimization techniques to reduce computational cost. Meta-heuristic algorithms such as PSO and Simulated Annealing are powerful methods for solving many optimization problems. In addition, the GWO algorithm is a well-recognized optimization technique for various non-linear problems in numerous research areas, owing to its suitable parameter settings and its ability to reach the optimal solution in a minimum number of iterations.

DESIGN OF WARMS ARCHITECTURE
The architectural design of WARMS comprises three phases: workload submission, optimal resource provisioning, and service monitoring. In the workload submission phase, cloud users submit their work based on their demand. The optimal resource provisioning phase manages the cloud resources and handles the workload coming from the cloud users efficiently with the help of clustering and VM deployment. The service monitoring phase maintains the QoS parameters and the self-management properties of the Service Level Agreement (SLA). The proposed WARMS architecture is shown in Figure 1.
a. Cloud Users: The users are the consumers under the provider who request specific resources based on their requirements. The requests collected from the users are considered as the workload and used for further processing.
b. Cloud Provider: The cloud provider offers a set of services and manages the resources stored in its database. These resources are distributed based on the requirements of the users.
c. QoS Management: It manages the QoS requirements needed to access the workloads; each workload is managed based on its QoS needs.


Workload clustering
The clustering process helps to analyse the workload and to describe the important features and patterns available in the cloud workload. In our work, MDPCA is used to cluster the workload [20]. MDPCA works based on three elements: the data subset, centre selection, and similarity measurement. Initially, a centroid is selected from the workload at random. The distance between data points is then calculated using a kernel-based similarity measure. After calculating the distances, the local density points are grouped to create the clusters. A Gaussian kernel is used instead of the cut-off kernel to calculate the local density of the data points.
In MDPCA, the Gaussian kernel used to measure the similarity between two data points x⃗_i and x⃗_j can be expressed as

K(x⃗_i, x⃗_j) = exp(−‖x⃗_i − x⃗_j‖² / 2σ²).     (1)

In Equation (1), x⃗_i and x⃗_j represent the two data points, σ is the kernel bandwidth, and K(x⃗_i, x⃗_j) denotes the kernel function of the two data points. The mean value for every cluster is estimated, and based on this mean value the centroid is moved along the graph.
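As a minimal sketch of this step, assuming the standard Gaussian kernel form above with an illustrative bandwidth σ (the exact parameterization of Equation (1) is not fully recoverable from the text), the local density of each workload point can be computed as the sum of its kernel similarities to all other points:

```python
import math

def gaussian_kernel(xi, xj, sigma=1.0):
    """Gaussian kernel similarity between two data points, cf. Equation (1)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(xi, xj))
    return math.exp(-sq_dist / (2 * sigma ** 2))

def local_density(points, sigma=1.0):
    """Local density of each point: the sum of its Gaussian-kernel
    similarities to all other points (the smooth replacement for the
    cut-off kernel in density peak clustering)."""
    return [
        sum(gaussian_kernel(p, q, sigma) for j, q in enumerate(points) if i != j)
        for i, p in enumerate(points)
    ]
```

Points lying in a dense group receive a high density value and are therefore candidates for cluster centres, while isolated points receive a value near zero.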

ALGORITHM 1 Workload clustering
The number of clusters needed is decided by K.
Step 1: Randomly separate the objects into K non-empty subclasses.
Step 2: Take the seed points separating the clusters as the current cluster centroids.
Step 3: Assign each object to the cluster whose seed point is nearest.
Step 4: Repeat from Step 2 until the assignment no longer changes.
An important issue in clustering is how to determine the similarity between two objects so that clusters can be formed from objects with high mutual similarity. Euclidean distance computes the absolute differences between the coordinates of a pair of objects, and a distance function yields a higher value for pairs of objects that are less similar to one another. The kernel-function-based distance between two items is computed from the differences of their corresponding components.
The maximum number of centre points is chosen depending upon the minimum value of the objective function, and the clusters are then formed around those centre points. Members close to a single centre point surely belong to that cluster, whereas members that cannot be assigned uniquely to a single centre may belong to two or more clusters. Algorithm 1 describes the workload clustering process.
The final set of clustered workloads is given in Table 2.

Workload scheduling
The workload scheduling process is conducted based on fuzzy logic [25]. Here, the resource for executing a task is served according to the general expectation vector of the task, which is defined during task classification. Task classification is performed based on the QoS requirements and the SLA agreements of the users. The general expectation vector of task H can be written as

H_i = (Vw_i1, Vw_i2, Vw_i3).     (3)

In Equation (3), Vw_i1, Vw_i2, and Vw_i3 represent the weights of the number of CPUs, the memory, and the bandwidth. Another function on a task, called the Justice Evaluation Function (JEF), helps to detect and prevent resource shortage before starting the task. It is calculated as the ratio of the actually allotted resources to the expected resources; a justification value lower than 1 indicates that the task needs more resources to be processed.
The JEF for task h is estimated by

JEF_h = AR / ER,

where AR and ER represent the allotted resources and the resources expected by the cloud user. The actual time of execution (ATE) of the task is estimated by

ATE = FZ / MIPS,

where FZ denotes the size of the resource or file and MIPS denotes the processing speed in Million Instructions Per Second.
To process the workload, the eligible VMs are selected by comparing the ATE with the time expected for the execution of the task. Additionally, the memory needed for execution is compared with the VM memory size, and the bandwidth needed for execution is compared with the availability of every VM. The justification of every class type of the task is estimated by JfCt_i and JfBw_i (JfCt_i for completion time and JfBw_i for bandwidth; both values are assumed to be 0.9).
The JEF parameters are fuzzified to obtain better mathematical associations, which are passed through the QoS rule base. Fuzzification is performed using the Mamdani fuzzy inference system, and defuzzification is performed by the centroid method, which provides an output in the range [0, 1]. The defuzzified result, described as F_res, is used to update the classified task. Algorithm 2 describes the task classification process.
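To make this step concrete, the sketch below computes the JEF and ATE values and runs a JEF value through an illustrative fuzzification and centroid-defuzzification pass. The triangular membership functions and output centres are assumptions for illustration only; the paper uses a Mamdani system whose rule base and membership shapes are not fully specified here.

```python
def jef(allotted, expected):
    """Justice Evaluation Function: ratio of actually allotted resources
    to the resources the user expected; values below 1 signal shortage."""
    return allotted / expected

def ate(file_size_mi, mips):
    """Actual time of execution: task size (million instructions)
    divided by VM processing speed in MIPS."""
    return file_size_mi / mips

def fuzzify(x):
    """Assumed triangular memberships for a JEF value on {low, adequate, high}."""
    low = max(0.0, min(1.0, 1.0 - x))        # peaks at x = 0 (severe shortage)
    adequate = max(0.0, 1.0 - abs(x - 1.0))  # peaks at x = 1 (fair allocation)
    high = max(0.0, min(1.0, x - 1.0))       # peaks at x = 2 (over-allocation)
    return {"low": low, "adequate": adequate, "high": high}

def defuzzify_centroid(memberships):
    """Centroid defuzzification onto an F_res score in [0, 1]."""
    centers = {"low": 0.0, "adequate": 0.5, "high": 1.0}  # assumed output centres
    num = sum(m * centers[k] for k, m in memberships.items())
    den = sum(memberships.values())
    return num / den if den else 0.0
```

For example, a task allotted 9 of its 10 expected resource units has JEF = 0.9, and a perfectly fair allocation (JEF = 1) defuzzifies to the midpoint score 0.5 under these assumed centres.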

Begin
Step 1: Initialize the tasks to be executed.
Step 2: Classify the tasks based on QoS.
Step 3: Estimate the time needed for execution by (23).
Step 4: Choose the VMs eligible to execute task h using the GWO algorithm.
Step 5: Estimate the Euclidean distance of each VM.

VM deployment and selection
This section presents the deployment and the selection of VMs for processing the workload with the cloud resources.

Configuration VM
In this work, VMs are deployed randomly; the configuration of the VMs is shown in Section 4. Once deployed, a VM enters a sleep state; it wakes and starts processing after receiving a workload from the users. These states are managed by the self-management system under the SLA, and the optimal VM to process the workload is chosen using an optimization algorithm.

VM selection
Here, we use the GWO algorithm to select the optimal VM based on the workload information. GWO follows a dominance hierarchy to describe the positions of the grey wolves. The hierarchy is built from four levels of search agents: Alpha (α), Beta (β), Delta (δ), and Omega (ω) [22]. Decisions such as hunting (processing), sleeping, and time to walk (awake) are made by the top level, Alpha (α). The next level, Beta (β), contains subordinate wolves that help α make decisions. The Delta (δ) level helps α and β, and Omega (ω) is the lowest level. The first three levels (α, β, δ) are considered the best candidate solutions; they choose the best VM based on the energy, memory, and QoS needed to process the workload. The encircling process is defined as

D⃗ = | F⃗ · W⃗_p(ite) − W⃗(ite) |,     (6)
W⃗(ite + 1) = W⃗_p(ite) − E⃗ · D⃗,     (7)

where W⃗ indicates the location of the wolf, W⃗_p represents the location of the prey, D⃗ is the distance between them, "ite" is the current iteration, and E⃗ and F⃗ are the coefficient vectors of every iteration.
The coefficient vectors E⃗ and F⃗ are calculated by

E⃗ = 2 m⃗ · r⃗_1 − m⃗,     (8)
F⃗ = 2 · r⃗_2,     (9)

where m⃗ denotes the component linearly reduced over the range 2 to 0, and r⃗_1 and r⃗_2 are random vectors in the range 0 to 1.
The travelling or hunting procedure of the wolves is given by

D⃗_α = | F⃗_1 · W⃗_α − W⃗ |,  D⃗_β = | F⃗_2 · W⃗_β − W⃗ |,  D⃗_δ = | F⃗_3 · W⃗_δ − W⃗ |,     (10)
W⃗_1 = W⃗_α − E⃗_1 · D⃗_α,  W⃗_2 = W⃗_β − E⃗_2 · D⃗_β,  W⃗_3 = W⃗_δ − E⃗_3 · D⃗_δ,
W⃗(ite + 1) = (W⃗_1 + W⃗_2 + W⃗_3) / 3.     (11)

GWO updates the parameter m⃗ in every iteration, linearly decreasing it from 2 to 0:

m⃗ = 2 − 2 · ite / Max_ite.

The GWO selects the VM mainly based on the QoS requirements, the power needed for processing, and the memory, as detailed in Algorithm 3.
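The update loop above can be sketched compactly as follows, using the paper's notation (position W⃗, coefficient vectors E⃗ and F⃗, and m⃗ decaying from 2 to 0). The fitness function, search bounds, and population size are illustrative assumptions, not the paper's exact settings.

```python
import random

def gwo_minimize(fitness, dim, n_wolves=10, max_ite=60, lo=0.0, hi=1.0, seed=1):
    """Grey Wolf Optimization sketch: the three best wolves (alpha, beta,
    delta) guide every wolf's next position; m decays linearly from 2 to 0."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for ite in range(max_ite):
        wolves.sort(key=fitness)
        leaders = [list(w) for w in wolves[:3]]   # snapshot alpha, beta, delta
        m = 2.0 - 2.0 * ite / max_ite             # m decays 2 -> 0 over iterations
        for w in wolves:
            for d in range(dim):
                acc = 0.0
                for leader in leaders:
                    e = 2.0 * m * rng.random() - m    # coefficient vector E, Eq. (8)
                    f = 2.0 * rng.random()            # coefficient vector F, Eq. (9)
                    dist = abs(f * leader[d] - w[d])  # encircling distance, Eq. (10)
                    acc += leader[d] - e * dist       # candidate position W1/W2/W3
                w[d] = min(hi, max(lo, acc / 3.0))    # average of W1, W2, W3, Eq. (11)
    return min(wolves, key=fitness)
```

For VM selection, `fitness` would score a candidate VM assignment on energy, memory, and QoS; here any objective over a bounded box works, e.g. minimizing the squared distance to a target point.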

QoS management
ALGORITHM 3 VM selection using GWO

While the stopping criterion is not met do
For all VM_i do
Calculate the fitness function using (12)
End for
Update the location of the current VMs using (7)
Update the search agents
End while

The QoS data provide information about the metrics used to estimate the weights for the clustered workloads. The QoS metrics used in this work are explained below:
a. Availability: It is the ratio of the mean time of failure (mTF) to the sum of the mean time of failure and the mean time taken to repair (mTR).
The mTF is estimated by mTF = Total uptime / No. of breakdowns.
The mTR is estimated by mTR = Total downtime / No. of breakdowns.
b. Reliability: Before scheduling the resources, the reliability of every resource is checked for fault tolerance. The reliability of a resource can be estimated by R(t) = e^(−λt), where λ indicates the failure rate over the given time and t represents the time spent on the resource to address its request for the execution of any workload.
c. Resource utilization: The utilization of a resource is estimated as the ratio of the time taken to execute the workload by the specific resource to the overall uptime of that resource:

RU_i = Execution time of ith resource / Overall uptime of ith resource,

where 'i' indicates the specific resource and 'n' indicates the number of workloads.
d. Energy consumption: The overall energy utilization is estimated from the energy utilized in the processor (p), transceiver (t), memory (m), and other hardware (oh):

E = Σ_{i=1}^{n} (E_p + E_t + E_m + E_oh),

where 'n' is the number of workloads.
f. SLA violation rate: It is estimated by

SLAV = Σ_i w_i · FR,

where w_i indicates the weight of every SLA and FR indicates the failure rate. The FR is calculated by

FR = Rate of workload failure / No. of workloads.
g. SLA cost: The overall cost of the SLA is estimated as the sum of the costs C_i over the resources used,

SLA cost = Σ_{i=1}^{n} C_i,

where 'i' indicates a specific resource and 'n' indicates the number of workloads.
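Under the definitions above, these metrics reduce to simple ratios, as sketched below. The weighted form of the SLA violation rate is an assumption, since the exact equation is not fully recoverable from the text.

```python
def availability(total_uptime, total_downtime, breakdowns):
    """Availability = mTF / (mTF + mTR), with mTF and mTR computed from
    total uptime, total downtime, and the number of breakdowns."""
    mtf = total_uptime / breakdowns    # mean time of failure
    mtr = total_downtime / breakdowns  # mean time to repair
    return mtf / (mtf + mtr)

def resource_utilization(execution_time, overall_uptime):
    """Fraction of the ith resource's uptime spent executing workloads."""
    return execution_time / overall_uptime

def failure_rate(failed_workloads, total_workloads):
    """FR: rate of workload failure over the number of workloads."""
    return failed_workloads / total_workloads

def sla_violation_rate(weights, fr):
    """Assumed weighted form: sum of per-SLA weights times the failure rate."""
    return sum(w * fr for w in weights)
```

For instance, a resource with 90 h uptime, 10 h downtime, and 5 breakdowns has mTF = 18 h, mTR = 2 h, and therefore availability 0.9.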

SLA management
The SLA management is responsible for preparing SLA documents based on the information agreed between the cloud client and the provider [26,27]. These documents include SLA violation information such as deviation values and penalties, from which the QoS deviation is predicted. The penalty is executed when the deviation value exceeds what is permitted for the workload [21]. The penalty is provided as free cloud service for several seconds or minutes based on the delay time. The rate of the penalty and the minimum penalty per second/minute are decided by the service provider.

Fault or malicious detection

The SLA management system finds any malicious attacks or faulty VMs present in the network during the optimization.
a. Malicious detection: The malicious node detection process is shown in Algorithm 4. This system extracts various malicious features described in [23,28].
The number of collected features (Z) is n; s is the number of servers (SR), and v denotes the VMs in each SR. The collected features are grouped as Z_ij and sent to a machine learning module (mo), which includes pre-trained models to identify attacks based on malicious programs.
b. Fault detection: Anomalous VMs are detected by a fault detection scheme in the SLA based on self-organizing maps (SOM) [24]. Faults are detected by comparing the performance of the VMs with the weight vectors of the neurons present in the network, with the similarity between them measured using Euclidean distance. A threshold "th" is assumed at the initial state; if the distance for a VM is lower than th, that VM is considered normal, and otherwise it is decided to be abnormal. The faulty VMs are detected by the following condition:

‖ Pv(VM_n) − Wt_ij ‖ > th,     (20)

where VM_n represents the VMs in the network, Wt_ij describes each neuron's weight, and Pv indicates the performance value of every VM.
If the fault in VM is detected during the execution of any task, the task in that VM will be migrated to another non-faulty VM.
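This distance-versus-threshold check can be sketched as follows; the set of trained SOM neuron weight vectors and the threshold value are illustrative assumptions, since the paper does not give the trained map itself.

```python
import math

def is_faulty(perf_vector, neuron_weights, th):
    """Flag a VM as faulty when the Euclidean distance from its performance
    vector to the nearest SOM neuron weight vector exceeds threshold th."""
    nearest = min(math.dist(perf_vector, wt) for wt in neuron_weights)
    return nearest > th
```

A VM whose performance vector sits close to some learned neuron is treated as normal; one that is far from every neuron is marked abnormal and its tasks become candidates for migration.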
Algorithm 5 details the procedure of task migration for reducing migration time. Initially, all virtual machines are grouped based on upper and lower thresholds: virtual machines with an SLA rate greater than the upper threshold are added to the faulty group, and virtual machines with an SLA rate lower than the lower threshold are added to the non-faulty group. The tasks in the faulty group are then selected for migration to the non-faulty group. To minimize migration time, the tasks in a faulty virtual machine are sorted in descending order. Finally, each task is migrated to the virtual machine with the least SLA rate.
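The grouping and ordering steps of Algorithm 5 can be sketched as below; the VM record layout (id, SLA rate, task list) is an assumed representation for illustration.

```python
def plan_migrations(vms, upper_th, lower_th):
    """Group VMs by SLA rate, then plan task moves from faulty VMs
    (rate > upper_th) to the lowest-rate non-faulty VM, largest task first."""
    faulty = [v for v in vms if v["sla_rate"] > upper_th]
    # Non-faulty VMs sorted so the least-SLA-rate VM is the migration target.
    non_faulty = sorted((v for v in vms if v["sla_rate"] < lower_th),
                        key=lambda v: v["sla_rate"])
    target = non_faulty[0]["id"] if non_faulty else None
    plan = []
    for vm in faulty:
        for task in sorted(vm["tasks"], reverse=True):  # descending task size
            plan.append((task, vm["id"], target))
    return plan
```

Each plan entry records the task, its faulty source VM, and the chosen non-faulty destination.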

SIMULATION SETUP AND RESULTS
The architecture of WARMS is shown in Figure 1. The simulation of the WARMS technique is implemented in Java. The workload (client requests) used for the experimentation includes about 500-3000 requests. The results are compared based on QoS parameters such as availability, reliability, resource utilization, energy consumption, execution time, SLA violation rate, and SLA cost. A detailed description of the QoS parameters is given in Section 3.4. The configuration of the host and VMs is shown in Tables 3 and 4.

Simulation results
The proposed WARMS has been simulated with the configuration shown in Tables 3 and 4. Initially, the performance of the workload clustering and scheduling carried out by MDPC with fuzzy logic is evaluated and compared with FIFO, Fair, and K-means clustering-based task classification and scheduling, using the makespan and load standard deviation metrics. The achieved performance of the proposed WARMS is then compared with TARNN [35], RADAR [21], and LARPA [32]. Figure 2 shows that the MDPC-Fuzzy technique outperforms the others at 3000 workloads, and similarly in the other situations. Figure 5 shows the reliability rate with respect to the number of workloads; our WARMS technique achieved up to 94% with 3000 workloads, which is much better than the existing systems, which earned up to 87%, 80%, and 79% with 3000 workloads.

Resource utilization
Resource utilization while processing the workload is shown in Figure 6. The WARMS technique achieved a maximum resource utilization rate of 92% with 3000 workloads, while the existing systems earned 86%, 83%, and 76% at the higher workloads. With a smaller number of workloads, both the proposed and existing systems achieved a lower utilization rate, because with a minimal amount of workload the available resources cannot be fully utilized. As the workloads increase, the resources available in the cloud are mostly utilized completely.

Execution time
The execution time comparison of the different methods and workloads is shown in Figure 8. The execution time is measured in seconds. The WARMS system used about 5600 to 6200 s to execute 500-3000 workloads, which is better than the existing techniques.

SLA violation rate

Figure 9 illustrates the SLA violation rate based on the number of workloads. Our system earned lower SLA violation rates than the existing systems: the existing techniques had violation rates of 3% to 16%, while the proposed technique had a minimum of 2% at 500 workloads and a maximum of 9% at 3000 workloads.

SLA cost
The SLA cost comparison of the different methods and workloads is shown in Figure 10. The maximum cost utilized by WARMS for 3000 workloads is up to $190, but the existing systems utilized $155-240 for executing 2000 workloads. Metrics such as availability, reliability, resource utilization, energy consumption, execution time, and SLA violation rate were evaluated. The ability of the GWO-based VM deployment algorithm is analysed with many PMs and VMs throughout the simulation, and its performance is compared with three VM deployment algorithms: ACO, PSO, and the Whale optimization algorithm. Table 5 presents the results obtained for VM allocation and migration time with different numbers of iterations for the various techniques.
The comparative analysis is evidence that the proposed technique performed well on its respective parameters and improved the cloud system. Ultimately, the proposed metaheuristic-based cloud system with optimal VM deployment and fault tolerance is a suitable option for better cloud service. The experiment was repeated 10 times, and the performance was evaluated in terms of availability, reliability, resource utilization, energy consumption, execution time, SLA violation rate, and SLA cost, achieving better results than the other schemes.

ANOVA test for statistical evaluation
The strength of the proposed WARMS scheme is evaluated based on two significant performance metrics (resource utilization and reliability), which are particularly important for cloud resource management. To evaluate the statistical significance of the proposed scheme, we use a one-way analysis of variance (ANOVA) test, conducted on the selected parameters, resource utilization and reliability. The outcome of the hypothesis test is compared with the existing schemes, namely TARNN, RADAR, and LARPA.
In the ANOVA test, the mean values of the different schemes are compared for similarity. If the sample means of two or more schemes are similar, the null hypothesis is accepted; if they are dissimilar, the alternate hypothesis is accepted. The ANOVA test reveals the results in the form of an F-statistic. The following two conditions must be satisfied to reject the null hypothesis:
(i) The p-value should be less than the significance level.
(ii) The value of F-statistic must be higher than the F-critical value.
Also, the alternate hypothesis H_alt can be defined, as in Equation (32), to counter the null hypothesis H_null: the null hypothesis states that the mean values of all schemes are equal, while the alternate hypothesis states that at least one scheme's mean differs.
To perform the ANOVA test, the number of trials taken for evaluation is 5 for all schemes, with significance level α = 0.05 and confidence interval (CI) = 95%. The results of the ANOVA for the schemes are shown in Tables 6 and 7. It can be observed that the F-statistic value is greater than the F-critical value, and simultaneously the chosen significance level (0.05) is greater than the p-value. Hence, the null hypothesis (H_null) can be rejected, and it can be claimed that there are statistically significant differences in the means of the resource utilization and reliability metrics across the four schemes.
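The F-statistic used in this test can be computed directly as the between-group mean square divided by the within-group mean square; the sample values in the usage note are illustrative, not the paper's measurements.

```python
def one_way_anova_f(groups):
    """One-way ANOVA F-statistic: between-group variance (MSB) divided by
    within-group variance (MSW), over k groups with n total samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares.
    ssb = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ssb / (k - 1)) / (ssw / (n - k))
```

Groups with well-separated means yield a large F (e.g. samples [1, 2, 3] versus [10, 11, 12] give F = 121.5), whereas overlapping groups yield F near or below 1, so the null hypothesis cannot be rejected.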

CONCLUSION
In this paper, an autonomic resource handling system called WARMS has been proposed for managing cloud resources. The WARMS technique efficiently schedules the cloud resources automatically through a self-management system controlled under the SLA, based on the QoS parameters of the cloud users. Different algorithms, namely MDPCA, GWO, and fuzzy logic, are utilized to realize the self-management based system. During the experimentation of the WARMS technique, we analysed the effect of different QoS parameters, including availability, reliability, resource utilization, execution time, energy consumption, SLA violation rate, and SLA cost, with varying amounts of workload. We compared the efficiency of the WARMS system with existing cloud resource management systems in terms of the QoS parameters. The experimental outcome proved that the WARMS technique is better in terms of energy, cost, and time than the existing systems. The proposed WARMS framework successfully schedules the cloud resources automatically and maintains the QoS and SLA during the execution of workloads, which provides greater client satisfaction.