Stochastic modeling and scalable predictive control for automated demand response

Automated demand response (ADR) is a utility program that is designed to achieve electricity conservation. An ADR program is regarded as the problem of controlling the power consumption of a set of consumers. In this article, we propose a control-theoretic approach for an ADR program. First, a mathematical model of the power consumption is proposed. This model can express complex behavior by switching among multiple Markov chains. Its effectiveness is illustrated by modeling the power consumption of an air-conditioner. Next, a new method of model predictive control for a set of consumers is developed using the proposed model. The control strategy at each time is chosen from a given finite set by solving a mixed integer linear programming (MILP) problem. The advantage of the proposed method is that the MILP problem is scalable with respect to the number of consumers. To show its effectiveness, we present a numerical example.

Figure 1: Our target system in an energy management system, where the negawatt power is power saved by increasing efficiency or reducing consumption.

However, the proposed model has not been validated sufficiently by datasets. In addition, the control problem is reduced to a mixed integer linear programming (MILP) problem. The computation time for solving an MILP problem grows exponentially with the number of decision variables. In Reference 17, a Gaussian process model, which is accurate and highly flexible, has been used. The control problem is reduced to a nonconvex optimization problem, and solving it is generally hard in the case of large-scale systems. In References 18 and 19, a simple feedback controller has been proposed based on a utility function of the benefit and the power consumption. The dynamics of the power consumption have not been considered. In Reference 20, the equivalent thermal model of an air-conditioner, which is a kind of first-principles model, has been proposed, together with a control method. Several methods have been proposed so far. However, a unified method for modeling the complex behavior of the power consumption and controlling a set of many consumers has not been developed. Thus, we propose a new framework of modeling and control for realizing a system in which an aggregator controls a set of consumers. We consider the control system consisting of an aggregator and a set of consumers (see Figure 1). An electric power company calculates the target power consumption based on an imbalance affected by renewable energies and so on (see, e.g., Reference 21 for further details). Instead of an electric power company, an ISO may calculate it. An aggregator receives the target power consumption from an electric power company, and decides a control signal for each consumer by solving the control problem.
After the obtained control signal is sent to a local controller in each consumer, heaters, air-conditioners, and so on, are controlled based on the control signal.
First, we propose a mathematical model of the power consumption. In ADR, it is necessary to forecast the power consumption of electrical equipment. It is appropriate to consider a stochastic model, because there are several influences such as sun irradiance and temperature, which are not always measured. In addition, since the behavior of the power consumption is complicated, it is not appropriate to express the power consumption as a simple linear system. The mathematical model proposed in this article is based on discrete-time Markov chains. Markov chains are frequently used in energy management systems. Motivated by these existing results, we propose a Markov chain-based model. The transition probability matrix of a discrete-time Markov chain can be easily derived from an experimental dataset through discretization of the power consumption. In the proposed stochastic model, at each discrete time, one discrete-time Markov chain is chosen from a set of candidate discrete-time Markov chains under certain constraints. Its effectiveness is shown by using experimental data of an air-conditioner, where a given dataset is partitioned into a learning dataset used for identification of the model and a test dataset. Next, using the proposed model, we consider the problem of finding a control strategy based on the target power consumption for a set of consumers. In the existing methods, scalability with respect to the number of consumers (or pieces of electrical equipment) has not been a focus. In this article, we propose a scalable control method based on optimization. In the proposed method, the set of consumers is partitioned into several groups. Each group is in one-to-one correspondence with a discretized value of the power consumption at the current time. In other words, the number of consumers in each group changes over time. For each group, the level of electricity conservation is assigned by solving the control problem.
In the proposed method, the level of electricity conservation cannot be assigned to each consumer. However, if we consider controlling the average behavior of a set of consumers, the proposed method is sufficient. The control problem is formulated as a finite-time optimal control problem. In this problem, the power consumption is predicted by using the model, and the optimal control strategy, which is chosen from a given finite set, is derived. This problem can be equivalently rewritten as an MILP problem. In the obtained MILP problem, the dimensions of binary and continuous variables do not depend on the number of consumers. In this sense, the proposed method is scalable. Furthermore, an online algorithm based on model predictive control (MPC) is presented (see, e.g., References 25 and 26 for details of MPC). The effectiveness of the proposed method is presented by a numerical example.
The main contributions of this article are listed as follows.
1. A stochastic model expressing the power consumption of a power-intensive household appliance is proposed.
2. Using experimental data of an air-conditioner, the effectiveness of the proposed stochastic model is shown.
3. Using the proposed stochastic model, a scalable method of MPC for a set of consumers is proposed.
This article is organized as follows. In Section 2, a stochastic model proposed in this article is defined, and its effectiveness is shown by using experimental data of an air-conditioner. In Section 3, the finite-time optimal control problem is formulated, and is reduced to an MILP problem. An online algorithm and a numerical simulation are also presented. In Section 4, we conclude this article.
Notation: Let  denote the set of real numbers. Let ∅ denote the empty set. Let 1 m × n denote the m × n matrix whose elements are all one. Let 0 m × n denote the m × n matrix whose elements are all zero. For the matrix M, let M ⊤ denote the transpose matrix of M. For two events A, B, let E(A|B) denote the conditional expected value of A given B.

STOCHASTIC MODEL OF POWER CONSUMPTION
In this section, we propose a stochastic model for expressing the power consumption of a power-intensive household appliance such as an air-conditioner. The power consumption is discretized, and its behavior is modeled by a set of discrete-time Markov chains. The discrete-time Markov chain is switched at appropriate times. First, we explain experimental data as a motivating example. Next, we define the proposed stochastic model, and explain the procedure for deriving it. Third, we derive this model from experimental data. Finally, we introduce switching rules to control the power consumption.

Motivation: Experimental data of air-conditioners
First, we explain experimental data of air-conditioners as a motivating example. Needless to say, control of the power consumption of an air-conditioner is very important in the design of building/home energy management systems (see, e.g., Reference 27). Here, experimental data in Reference 28 is used. The experiment was done in one room in Uji-shi, Kyoto, Japan. The experiment period was from February 24, 2014 to March 3, 2014, and the experiments were performed at any time during the day or night. One experiment lasts 2 hours, and the target temperature is set according to a prescribed schedule. Thus, 50 samples of the power consumption trajectory shown in Figure 2 were obtained, where the sampling interval is 1 minute. Figure 3 shows the average trajectory of the 50 samples. From this figure, we see that when the target temperature is changed from 24 °C to 20 °C at the 60th minute, the power consumption changes almost stepwise. We also see that when the target temperature is changed from 20 °C to 24 °C at the 60th minute, the power consumption quickly increases, and after that, it decreases.
Such behavior should be taken into account when deriving a mathematical model. In addition, from Figure 2, we see that each behavior is different due to several influences such as sun irradiance and outside temperature, which are not always measured. Hence, it is appropriate to consider a stochastic model.

Definition
Let x(k) ∈ {λ_1, λ_2, …, λ_d} denote the discretized value of the power consumption of an appliance such as an air-conditioner at time k, and let π(k) denote the discrete probability distribution of x(k), where the ith element of π(k) is the probability that x(k) = λ_i holds. In order to model the time evolution of π(k), the following stochastic model is defined:

π(k + 1) = A_{I(k)} π(k) + a_{I(k)}, (1)

where I(k) ∈ {1, 2, …, M} is called the mode, and A_{I(k)} ∈ {A_1, A_2, …, A_M} is a given transition probability matrix. The vector a_{I(k)} ∈ {a_1, a_2, …, a_M} is called an affine term, and expresses a certain discrete probability distribution. The mode corresponds to the label for each pair of the transition probability matrix and the affine term. In a_{I(k)}, each element is included in the interval [0, 1], and the sum of all elements is equal to 1. The vector a_{I(k)} is used in the case where the power consumption changes almost stepwise. If the vector a_{I(k)} is not a zero vector, then the corresponding A_{I(k)} must be a zero matrix. Conversely, if A_{I(k)} is not a zero matrix, then the corresponding vector a_{I(k)} must be a zero vector. In addition, using π(k), the expected value of x(k) can be obtained by

E(x(k) | x(0) = x_0) = Λ π(k), Λ := [λ_1 λ_2 ⋯ λ_d], (2)

where x_0 is the initial value given in advance. The procedure for deriving the proposed stochastic model (1) is summarized as follows.
Procedure for deriving the stochastic model (1): Step 1: Collect a set of finite time series data with a certain sampling time. Discretize each value based on observation of the dataset.
Step 2: Divide the dataset into a learning dataset and a test dataset.
Step 3: Divide a time interval into multiple time intervals based on trends of the learning dataset.
Step 4: For each time interval obtained by dividing, derive a discrete-time Markov chain.
Step 5: Validate the obtained model using the test dataset.
In Section 2.3, the above procedure will be explained by using experimental data on an air-conditioner. We may utilize a method for identifying a switched linear regression model (see, e.g., References 29 and 30).
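As a concrete sketch of Step 4 and of the update rule (1), the following Python fragment estimates a transition probability matrix by frequency counting and propagates a distribution under a fixed mode. The function names and the toy data are ours, not from the article; this is a minimal illustration, not the identification code used for the experiments.

```python
import numpy as np

def estimate_transition_matrix(sequences, d):
    """Estimate a column-stochastic transition probability matrix by
    counting transitions x(k) -> x(k+1) over all sample trajectories
    (Step 4). States are integers 1..d; entry (i, j) approximates
    P(x(k+1) = i | x(k) = j)."""
    counts = np.zeros((d, d))
    for seq in sequences:
        for cur, nxt in zip(seq[:-1], seq[1:]):
            counts[nxt - 1, cur - 1] += 1
    col_sums = counts.sum(axis=0)
    col_sums[col_sums == 0] = 1.0  # leave never-visited states as zero columns
    return counts / col_sums

def propagate(pi0, A, a, steps):
    """Propagate the distribution by pi(k+1) = A pi(k) + a (model (1))
    for a fixed mode, returning pi(steps)."""
    pi = np.array(pi0, dtype=float)
    for _ in range(steps):
        pi = A @ pi + a
    return pi
```

The expected consumption at each step is then obtained as in (2) by multiplying the propagated distribution with the row vector of discretization levels.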
Remark 1. We utilize the affine term in (1). However, instead of the affine term a_{I(k)}, the transition probability matrix [a_{I(k)} a_{I(k)} ⋯ a_{I(k)}], whose columns are all identical, may be utilized. Then, all affine terms can be deleted. To stress the case where the probability distribution is reset, affine terms are introduced in this article.
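Remark 1 admits a quick numerical check (a sketch with illustrative numbers of our own choosing): replacing a reset affine term by a transition matrix with identical columns yields the same update, because the elements of π(k) sum to one.

```python
import numpy as np

# Remark 1: when A_I(k) is the zero matrix and the affine term a_I(k)
# resets the distribution, pi(k+1) = a_I(k) coincides with
# pi(k+1) = [a_I(k) a_I(k) ... a_I(k)] pi(k), since pi(k) sums to one.
a = np.array([0.2, 0.5, 0.3])                # example reset distribution
pi = np.array([0.1, 0.6, 0.3])               # arbitrary probability vector
A_equiv = np.tile(a.reshape(-1, 1), (1, 3))  # matrix with identical columns
assert np.allclose(A_equiv @ pi, a)
```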

Validation with experimental data of air-conditioners
Consider deriving the stochastic model (1) of an air-conditioner from experimental data. To derive the stochastic model (1), the experimental data explained in Section 2.1 is used. According to the derivation procedure in Section 2.2, we derive the stochastic model (1). First, Step 1 is explained. The dataset has already been explained. From observation of the trajectories in Figure 2, we set d = 5. If the power consumption is included in the interval [0, 200), then x = 10 (= λ_1). In Table 1, we show the obtained λ_i, i = 2, 3, 4, 5.
In Step 2, 40 samples among the 50 samples are set as a learning dataset. The remaining 10 samples are set as a test dataset. In Step 3, from observation of the average trajectory in Figure 3, the time interval is divided into multiple intervals. In Step 4, based on frequency analysis of transitions between x(k) and x(k + 1) using the learning dataset, we can obtain the transition probability matrices A_1, A_2, A_3, A_4, where all affine terms a_1, a_2, a_3, a_4 are given by a zero vector. As an example, we focus on the first column of the matrix A_1. At k = 0, 1, …, 5, the number of samples satisfying both x(k) = λ_1 and x(k + 1) = λ_1 is 18. In a similar way, the numbers of samples satisfying x(k) = λ_1 and x(k + 1) = λ_j for the other values of j are counted, and the first column of A_1 is obtained by normalizing these frequencies. In the proposed stochastic model, the initial probability distribution π(0) is given by π(0) = [1 0 0 0 0]⊤. Figure 4 shows the trajectories of the learning dataset and the model. From this figure, we see that the stochastic model (1) can represent the learning dataset. In this example, the mean absolute error was 16 W.
Finally, Step 5 is explained. We consider applying the obtained stochastic model to the test dataset. Also in this case, the initial probability distribution π(0) is given by π(0) = [1 0 0 0 0]⊤. Figure 5 shows the trajectories of the test dataset and the model. From Figure 5, we see that although the accuracy decreases, the power consumption is still expressed adequately. In this example, the mean absolute error was 32 W. Thus, we can derive the stochastic model (1) based on the derivation procedure in Section 2.2. At the end of this subsection, we discuss the effectiveness of an affine term. Instead of A_3 of (3) and A_4 of (4), we use the corresponding affine terms a_3 and a_4. Comparing the resulting errors, we see that in this example, the difference between the proposed stochastic model with and without the affine term is small. As seen from the above, the stochastic model (1) is useful for representing experimental data of the power consumption. In order to obtain a simpler model, it is appropriate to utilize an affine term depending on the behavior of the power consumption.

Switching rules using a directed graph
In this subsection, in order to develop a control method using the proposed model, we consider introducing switching rules for the mode. As was explained in the previous subsection, to express the behavior of the power consumption, it is appropriate that the transition probability matrix is switched. Furthermore, in the case of an air-conditioner, it is appropriate that the transition matrix is switched depending also on the target temperature. Then, switching rules are useful from the viewpoints of both modeling and control. First, sequential constraints for transitions of modes are imposed. In the previous subsection, the stochastic model (1) with four modes is derived. We remark here that after mode 1 is chosen, mode 2 must be chosen. It is not appropriate to choose a mode other than mode 2. In a similar way, after mode 3 is chosen, mode 4 must be chosen. Next, as shown in Figure 1, consider controlling a set of consumers by an aggregator. Then, the level of electricity conservation should be set. Here, based on experimental results (see, e.g., Reference 13), we suppose that the level is chosen from a given finite set. For each level, a set of discrete-time Markov chains is assigned.
Based on the above discussion, we present the directed graph in Figure 6 as a simple example. In this graph, we suppose that the number of levels is four. The pair of mode 1 and mode 2 models the normal case. The transient response is modeled by using two modes. Modes 3, 4, and 5 model the cases where electricity conservation is needed. We suppose that mode 5 corresponds to the tightest electricity conservation. Depending on the situation, we may use a more complicated graph. One simple method to set a directed graph is to put each mode in one-to-one correspondence with a set temperature of the air-conditioner.
Thus, the stochastic model (1) with switching rules given by a directed graph is useful as a mathematical model for control of a set of consumers. The control problem, which will be explained in the next section, is solved by the aggregator (see Figure 1). In other words, the mode is determined by the aggregator. We remark that the mode is not uniquely determined from switching rules given by a directed graph. For example, when the current mode is 2 in the directed graph in Figure 6, the candidates for the next mode are 2, 3, 4, and 5. By solving the control problem, the optimal mode presented to each consumer is obtained as the control variable. In the ADR device of each consumer, a certain adjustment (e.g., switching of the target temperature of an air-conditioner) is performed based on the mode.
Remark 2. In Section 2.3, the temperature change is not directly reflected in the obtained model. We can consider the temperature change by improving the switching rules as follows. First, the range of temperature is decomposed into several intervals. Next, for each temperature interval, the stochastic model (1) with switching rules is derived. By collecting the temperature at each sampling time, we can choose the appropriate model. We may also use the forecast temperature.

MODEL PREDICTIVE CONTROL OF A SET OF CONSUMERS
In this section, using the proposed stochastic model, we consider the control problem of a set of consumers. When our target system is regarded as a control system (see Figure 1), the aggregator and the set of consumers play the roles of the controller and the plant, respectively. The aggregator solves the control problem, and sends control signals to consumers. After that, the aggregator collects the power consumption of each consumer and solves the control problem again. By repeating this procedure, the aggregator controls the power consumption. The control purpose is that the power consumption of all consumers is close to a given target value in the sense of the expected value. Here, we assume that the power consumption of each consumer is represented by the same stochastic model. In other words, we assume that the stochastic model is obtained based on the dataset of all consumers. The obtained stochastic model provides an average and smoothed behavior of each consumer. For example, in Reference 31, an approximation of the dynamics of the power consumption for heterogeneous consumers has been proposed. Such an approach is appropriate for controlling a complex system. Under this assumption, we propose a scalable algorithm of MPC for a set of consumers, where the control variable is the mode in the directed graph expressing the switching rules. First, an overview of the proposed method is presented. Next, after the control problem is formulated, details of the proposed solution method are presented. Finally, a numerical simulation is presented to show the effectiveness of the proposed method.

Overview
Consider n consumers. In the proposed model (1), the power consumption is discretized, and is chosen from the finite set with d elements, that is, x_i(k) ∈ {λ_1, λ_2, …, λ_d}. Since consumers with the same discretized value are governed by the same model, it is not necessary to assign the level of electricity conservation (i.e., the mode) to each consumer. Then, the set of consumers {1, 2, …, n} can be partitioned into d groups 𝒢_j, j ∈ {1, 2, …, d} (see also Figure 7). For each group, consider setting the level of electricity conservation from the finite set ℳ := {1, 2, …, M}. Applying the level obtained by solving the control problem, the power consumption of each consumer is changed (details of the control problem will be explained in the next subsection). Based on the changed power consumption, the groups are reorganized (see also Figure 7). By repeating the above procedure, the power consumption of a set of consumers is controlled. Since the level of electricity conservation is calculated for each group (not each consumer), the dimension of the decision variables in the control problem does not depend on the number of consumers n. In this sense, the proposed method is scalable. However, this dimension depends on d, M, and the prediction horizon N.
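The grouping step above can be sketched in a few lines: group j collects the consumers whose current discretized consumption equals λ_j, and the level of electricity conservation is then assigned per group, not per consumer. The λ values below are illustrative assumptions, not the values of Table 1.

```python
# Illustrative discretization levels lambda_1 < ... < lambda_5 (assumed).
lambdas = [10.0, 300.0, 500.0, 700.0, 900.0]

def partition_consumers(x):
    """x[i] is the current discretized consumption of consumer i;
    returns {j: list of consumer indices in group j}."""
    groups = {j: [] for j in range(1, len(lambdas) + 1)}
    for i, xi in enumerate(x):
        groups[lambdas.index(xi) + 1].append(i)
    return groups

groups = partition_consumers([10.0, 500.0, 10.0, 900.0])
# groups are reorganized at every step as the consumption changes
```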

Problem formulation
We formulate the finite-time optimal control problem. For consumer i ∈ {1, 2, …, n}, let x_i(k) ∈ {λ_1, λ_2, …, λ_d} denote the discretized power consumption at time k, where λ_i ∈ ℝ and λ_1 < λ_2 < ⋯ < λ_d. The time evolution of x_i(k) is expressed by the following stochastic model:

π_i(k + 1) = A_{I_i(k)} π_i(k) + a_{I_i(k)}, (5)

where π_i(k), A_{I_i(k)} ∈ {A_1, A_2, …, A_M}, and a_{I_i(k)} ∈ {a_1, a_2, …, a_M} are the discrete probability distribution of x_i(k), the transition probability matrix, and the affine term, respectively. Switching rules for the mode I_i(k) ∈ ℳ are given by a directed graph.
Based on the initial state x_i(0), the set of consumers {1, 2, …, n} is partitioned into d sets 𝒢_j, j ∈ {1, 2, …, d} satisfying the following conditions: 𝒢_j = {i : x_i(0) = λ_j}, 𝒢_1 ∪ 𝒢_2 ∪ ⋯ ∪ 𝒢_d = {1, 2, …, n}, and 𝒢_j ∩ 𝒢_{j′} = ∅ for j ≠ j′. The mode (i.e., the level of electricity conservation) is set for each set 𝒢_j; let I_j(k) denote the mode applied to 𝒢_j. Then, from (5), we can obtain the following stochastic model for 𝒢_j:

∑_{i∈𝒢_j} π_i(k + 1) = A_{I_j(k)} ∑_{i∈𝒢_j} π_i(k) + |𝒢_j| a_{I_j(k)}. (6)

Under these preparations, consider the following finite-time optimal control problem.

Problem 1.
Suppose that the initial state x_i(0) = x_{i0}, the target power consumption x*, the prediction horizon N, and the initial mode I_j(0) are given. Then, find a mode sequence I_j(1), I_j(2), …, I_j(N − 1), j ∈ {1, 2, …, d} minimizing the following cost function J:

J = ∑_{k=1}^{N} | E( ∑_{i=1}^{n} x_i(k) | x_i(0) = x_{i0} ) − x* |, (7)

subject to the stochastic model (6).
Problem 1 is one of the typical problems in demand response. See, for example, References 6 and 32 for further details. We can impose a linear constraint with respect to Λ ∑_{i∈𝒢_j} π_i(k) (the expected value of the power consumption of group j). For simplicity of discussion, we consider the case of no constraints. Since the behavior of the power consumption is changed by switching the mode, the power consumption can be controlled by solving the above problem. We remark here that in the above problem, reorganization of the groups is not considered. In an online algorithm, the above problem is solved at each discrete time, and the reorganization of the groups is performed when the problem is set up at each discrete time. See also the online algorithm explained in the next subsection.
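For intuition, Problem 1 can be sketched by brute force for a single group: enumerate the mode sequences that respect the directed graph and pick the one minimizing the cost (7). This toy stand-in (the function names and data are ours) enumerates M^(N−1) sequences and does not scale; the MILP reformulation of the next subsection is the practical route.

```python
import itertools
import numpy as np

def optimal_mode_sequence(pi0, A, a, succ, I0, N, lam, x_star, n):
    """Brute-force sketch of Problem 1 for one group of n consumers:
    enumerate mode sequences I(1),...,I(N-1) allowed by the successor
    map `succ` and minimize sum_{k=1}^{N} |E[total consumption] - x_star|."""
    best_cost, best_seq = float("inf"), None
    for seq in itertools.product(list(A), repeat=N - 1):
        prev, ok = I0, True
        for m in seq:
            if m not in succ[prev]:
                ok = False
                break
            prev = m
        if not ok:
            continue
        pi, cost = np.array(pi0, dtype=float), 0.0
        for m in (I0,) + seq:
            pi = A[m] @ pi + a[m]          # per-consumer model (1)
            cost += abs(n * (lam @ pi) - x_star)  # total = n * E[x(k)]
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq, best_cost
```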

Transformation of Problem 1 into an MILP problem
Consider transforming Problem 1 into an MILP problem. Hereafter, the condition x_i(0) = x_{i0} in the expected value in the cost function (7) is omitted.
The expected value E( ∑_{i=1}^{n} x_i(k) ) in the cost function (7) can be rewritten as

E( ∑_{i=1}^{n} x_i(k) ) = Λ ∑_{j=1}^{d} ∑_{i∈𝒢_j} π_i(k). (8)

See (2) for the definition of Λ. From (8), we see that the time evolution of E( ∑_{i=1}^{n} x_i(k) ) can be calculated from the time evolution of ∑_{i∈𝒢_j} π_i(k), that is, (6). Next, consider the stochastic model (6) for 𝒢_j. The binary variable δ_{j,l}(k) ∈ {0, 1}, j ∈ {1, 2, …, d}, l ∈ {1, 2, …, M} is defined as δ_{j,l}(k) = 1 if the mode at time k for the set 𝒢_j is l, and 0 otherwise.
Using δ_{j,l}(k), (6) can be rewritten as

∑_{i∈𝒢_j} π_i(k + 1) = ∑_{l=1}^{M} δ_{j,l}(k) ( A_l ∑_{i∈𝒢_j} π_i(k) + |𝒢_j| a_l ). (9)

For the binary variable δ_{j,l}(k), two constraints are imposed. First, the following equality constraint is imposed:

∑_{l=1}^{M} δ_{j,l}(k) = 1, (10)

which implies that for the set 𝒢_j, only one mode is chosen at time k. Next, the change of the mode is constrained by a given directed graph. Using binary variables, such a graph-based constraint can be described by

δ_j(k + 1) ≤ Φ⊤ δ_j(k), (11)

where δ_j(k) := [δ_{j,1}(k) δ_{j,2}(k) ⋯ δ_{j,M}(k)]⊤ and Φ is the adjacency matrix of the given directed graph. See Reference 33 for further details. Since the initial mode I_j(0) is given in advance, δ_j(0) is also given.
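The graph-based constraint (11) can be checked mechanically. In the sketch below, the outgoing edges of modes 1, 2, and 3 follow the description in Section 2.4, while the edges from modes 4 and 5 are assumptions of ours, since Figure 6 is not reproduced here.

```python
import numpy as np

# Adjacency matrix of an assumed 5-mode graph: Phi[p, q] = 1 if the
# transition from mode p+1 to mode q+1 is allowed. Rows for modes 4
# and 5 are assumptions.
Phi = np.array([
    [0, 1, 0, 0, 0],   # mode 1 -> mode 2
    [0, 1, 1, 1, 1],   # mode 2 -> modes 2, 3, 4, 5
    [0, 0, 0, 1, 0],   # mode 3 -> mode 4
    [0, 1, 0, 1, 0],   # mode 4 -> modes 2, 4 (assumed)
    [0, 1, 0, 0, 1],   # mode 5 -> modes 2, 5 (assumed)
], dtype=float)

def one_hot(mode, M=5):
    v = np.zeros(M)
    v[mode - 1] = 1.0
    return v

def transition_allowed(delta_now, delta_next):
    """Check the constraint delta(k+1) <= Phi^T delta(k) of (11),
    where delta(k) is the one-hot mode indicator vector."""
    return bool(np.all(delta_next <= Phi.T @ delta_now))
```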
As an example of (11), consider the directed graph in Figure 6. Its adjacency matrix Φ encodes the allowed transitions among the five modes. By linearizing the products of binary and continuous variables in (9) with additional continuous variables, Problem 1 can be equivalently rewritten as the following problem (Problem 2): find the binary variables δ_{j,l}(k) and continuous variables minimizing the cost function J′ of (14) subject to the constraints (10)-(13).
We remark here that ∑_{i∈𝒢_j} π_i(k), k ∈ {1, 2, …, N} in (12) can be eliminated under the assumption that π_i(0), i ∈ {1, 2, …, n} are given. The above problem has both binary and continuous variables as decision variables. The objective function and the constraints are linear with respect to these decision variables. Hence, Problem 2 is an MILP problem, which can be solved by using a suitable solver. In addition, the dimensions of the binary and continuous variables in Problem 2 are given by dM(N − 1) and (d²M + 1)N + 1, respectively. From these dimensions, we see that the dimension of the decision variables does not depend on the number of consumers n. In this sense, the proposed method is scalable. On the other hand, the dimension of the binary variables depends on d (the cardinality of the finite set expressing the discretized power consumption), M (the number of nodes in the given directed graph), and N (the prediction horizon). These parameters must be chosen appropriately depending on not only the control specification but also the computing environment.
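The variable counts above can be transcribed into a two-line helper. The values d = 5, M = 5, and N = 6 used in the comment are those of the later numerical example.

```python
def milp_dimensions(d, M, N):
    """Dimensions of the decision variables in Problem 2, which are
    independent of the number of consumers n."""
    n_binary = d * M * (N - 1)
    n_continuous = (d * d * M + 1) * N + 1
    return n_binary, n_continuous

# e.g., with d = 5 discretization levels, M = 5 modes, horizon N = 6:
# milp_dimensions(5, 5, 6) -> (125, 757)
```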
Finally, based on the conventional MPC method (see, e.g., References 25 and 26), an online algorithm for MPC of a set of consumers is proposed as follows.
Online algorithm of the aggregator for controlling a set of consumers: Step 1: Set t := 0, and give x_i(0) (i.e., π_i(0)) and δ_{j,l}(0).
Step 2: Partition the set of consumers into the groups 𝒢_j based on x_i(t).
Step 3: Solve Problem 2, and obtain the levels of electricity conservation for the groups over the prediction horizon.
Step 4: Apply only the level at time t to consumers.
Step 5: Collect the power consumption of each consumer, and determine x i (t + 1). Set t := t + 1, and return to Step 2.
Applying the online algorithm, the level of electricity conservation can be obtained based on the current power consumption and the forecast derived from it.
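The receding-horizon loop above can be condensed into a closed-loop sketch. Here a greedy one-step rule stands in for solving Problem 2 with an MILP solver at each step, and the two-level model, its matrices, and the target are illustrative assumptions, not the identified model of Section 2.3.

```python
import numpy as np

lam = np.array([100.0, 500.0])                       # lambda_1, lambda_2 (assumed)
A = {1: np.eye(2),                                   # mode 1: keep the state
     2: np.array([[1.0, 1.0], [0.0, 0.0]])}          # mode 2: reset to level 1
a = {1: np.zeros(2), 2: np.zeros(2)}
x_star = 100.0                                       # target per consumer
pi = np.array([0.0, 1.0])                            # all consumers start at 500 W
history = []
for t in range(4):
    # Stand-in controller: pick the mode whose one-step prediction is
    # closest to the target (Problem 2 would optimize over N steps).
    mode = min(A, key=lambda m: abs(lam @ (A[m] @ pi + a[m]) - x_star))
    pi = A[mode] @ pi + a[mode]                      # apply only the first level
    history.append(float(lam @ pi))                  # expected consumption
```

In the full algorithm, the propagation step is replaced by measuring the actual consumption of each consumer and reorganizing the groups before the problem is solved again.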
Remark 3. We comment on implementation. For each consumer, both a smart meter and a unit for ADR are necessary. A smart meter is an electronic device that records information such as the power consumption and communicates it to the aggregator, such as a retailer. A unit for ADR controls the electric supply to each electric device (see, e.g., Reference 35). We suppose that in this unit, the levels of electricity conservation are set in advance; for each level, the electric supply of each device is determined in advance. Depending on the level given by the aggregator, this unit controls the devices. The aggregator collects the information from all smart meters. Based on the power consumption, Problem 2 is solved. After that, the level of electricity conservation is provided to each consumer. Hence, the aggregator requires communication and computation functions that realize the above operations. At present, the above environment has not been realized completely. For consumers with home energy management systems (HEMS), there is a possibility that the proposed method can be implemented.

Numerical example
To show the effectiveness of the proposed method, we present a numerical example. Consider two cases in which the number of consumers is given by n = 100 and n = 10, respectively. We suppose that the directed graph is given by that in Figure 6. The matrices A_1 and A_2 in Section 2.3 are used, but their fifth powers A_1^5 and A_2^5 are newly adopted as A_1 and A_2, respectively. In other words, the sampling interval in the control problem is 5 minutes. In addition, d and λ_j, j ∈ {1, 2, …, d} are given by d = 5 and Table 1, respectively. The matrices A_3, A_4, and A_5 are given by A_l = [b_l b_l ⋯ b_l], l ∈ {3, 4, 5}, respectively. In this numerical example, all affine terms are given by a zero vector, but the vectors b_3, b_4, and b_5 play the role of affine terms (see Remark 1). The pair of modes 1 and 2 corresponds to the lowest level of electricity conservation. Mode 5 corresponds to the highest level of electricity conservation. Next, the parameters in Problem 1 are given as follows: the target power consumption x* is 250n if t > 20, and N = 6 (= 30 minutes). Finally, instead of measuring the power consumption, x_i(t + 1) is randomly generated based on the obtained discrete probability distribution π_i(t + 1).
We present the computation result. For each case, 100 samples are randomly generated in this numerical simulation. First, we present the computation result in the case of n = 100. Figure 8 shows the average trajectory of 100 samples of the entire power consumption. From this figure, we see that the expected value of the entire power consumption approaches the target value. Figure 9 shows the average trajectory of 100 samples of the mode, where the average of the modes over the five groups (i.e., ∑_{j=1}^{5} ( ∑_{l=1}^{5} l δ_{j,l}(k) ) / 5) is calculated in each sample. From this figure, we see that depending on the target power consumption, the mode is appropriately calculated. We also present some samples. Figures 10 and 11 show 10 samples of the power consumption and the mode. From these figures, we see that the power consumption is controlled. It is future work to consider decreasing the variance of the power consumption. Figure 12 shows the change of the number of consumers in each group in one sample. From this figure, we see that depending on the target power consumption, the groups are reorganized dynamically. Next, we present the computation result in the case of n = 10. Comparing n = 10 with n = 100, only the scale of the number of consumers is different. Here, we present only the change of the number of consumers in each group in one sample. Table 2 shows the computation result in the time interval [0, 10]. From this table, we see that the number of consumers in group 4 is eight at t = 1. This is because the power consumption is increased by the dynamics at t = 0 (in this simulation, π_i(0) = [1 0 0 0 0]⊤, that is, A_1^5 is applied at t = 0). After that, the power consumption is controlled toward the target power consumption.
Finally, we comment on the computation time for solving Problem 1 (i.e., the MILP problem). In this numerical simulation, the MILP problem was solved 4000 times. In the case of n = 100, the worst computation time was 41.50 seconds, and the mean computation time was 3.79 seconds, where we used IBM ILOG CPLEX 12.7.1 as an MILP solver on a computer with an Intel Core i7-6700K 4.0 GHz processor and 16 GB of memory. In the case of n = 10, the worst computation time was 37.17 seconds, and the mean computation time was 1.84 seconds. Moreover, in the case of n = 1000, the computation time was almost the same as in these cases. Since the sampling interval in this simulation is 5 minutes, we see that Problem 1 can be solved online. Thus, the proposed method is scalable with respect to the number of consumers.

CONCLUSION
In this article, we proposed a control-theoretic method for demand response. First, we proposed a new mathematical model for expressing the dynamics of the power consumption. The proposed model consists of multiple discrete-time Markov chains. Switching rules given by a directed graph may be imposed. The effectiveness of this model was shown by the example of an air-conditioner. Next, based on this stochastic model, we proposed a scalable online algorithm for MPC of a set of consumers. The number of decision variables in the finite-time optimal control problem does not depend on the number of consumers. The effectiveness of the proposed algorithm was shown by a numerical example. The proposed method provides one of the foundations for the design of demand response. Future work is as follows. In Section 2.3, the parameters d and M were manually derived based on observation of a given dataset. Using a clustering method, these parameters may be derived automatically (but, in determining M, the control specification must also be considered); the details are one of our future efforts. In addition, we assumed that the power consumption of each consumer is represented by the same stochastic model. In Reference 20, a method for categorizing consumers into groups using a clustering method has been proposed. Combining the proposed method with the method in Reference 20 is future work. It is also important to validate the proposed model using several datasets from practical systems. To solve the finite-time optimal control problem faster, it is also important to develop a parallel and distributed algorithm for MPC. In the proposed method, the level of electricity conservation cannot be assigned to each individual consumer.
Since the past achievements of electricity conservation of each consumer are known, it is also significant to introduce a heuristic method for giving incentives from the viewpoint of fairness and comfort. Furthermore, privacy and security are also important topics. We will consider utilizing the privacy masking method of Reference 36.