Towards multiobjective optimization and control of smart grids

The rapid uptake of renewable energy sources in the electricity grid creates a demand for load shaping and flexibility. Energy storage devices such as batteries are a key element in providing solutions to these tasks. However, a trade-off can typically be observed between the performance-related goal of load shaping and the objective of keeping flexibility in store for auxiliary services, which is, for example, linked to the robustness and resilience of the grid. We propose to make use of the concept of Pareto optimality in order to resolve this issue in a multiobjective framework. In particular, we analyse the Pareto frontier and quantify the trade-off between the non-aligned objectives to properly balance them.


INTRODUCTION
The energy transition has triggered a rapid uptake of renewable energies in the electricity grid, which may lead to severe problems in the energy supply, see, eg, the introductory paragraphs in the works of Ratnam et al 1 and Worthmann et al 2 and the references therein. In this paper, we focus on the integration of energy storage devices, 3 eg, batteries, in microgrids, 4 see also the work of Parhizi et al 5 for a review article and the work of Lotfi and Khodaei 6 for questions concerning AC/DC. Based on forecasts, 7,8 consumption and production peaks can be anticipated, such that a receding horizon strategy, typically realized via a model predictive control (MPC) scheme, 9,10,11 is ideally suited to tackle this task, see also the works of Grüne and Pannek 12 and Rawlings et al 13 for an introduction to (nonlinear) MPC. An aspect of particular interest is whether the optimization-based control algorithm is realized in a decentralized, distributed, or centralized fashion, see, eg, the works of Worthmann et al 2 and Hidalgo-Rodríguez and Myrzik. 14 A major concern from a grid operator's perspective is the volatile power demand due to residential energy generation. In recent papers, the authors exploit the inherent flexibility provided by energy storage devices to smooth the aggregated power demand profile by approximating a given reference value. 2 Here, a key issue was the realization of the global optimum by using distributed control, both in a cooperative 15 and a noncooperative 16 setting. Another, potentially conflicting, objective is to stay within time-varying tubes as introduced in the work of Braun et al, 17 which may not contain the reference value enforced by the first optimization goal. The motivation behind this approach is to provide flexibility to the grid operator, which may be used to counteract other shortcomings within the network. 18
The contribution of this paper is to embed this problem into a multiobjective framework, link it to the scalarized objective function, and investigate the connection to the Pareto frontier. Furthermore, we introduce a more restrictive optimality concept, namely proper optimality in the sense of Geoffrion. By doing so, it becomes possible to quantify the trade-off between the different objectives, which is an essential prerequisite for a structured decision-making process.
Moreover, we provide numerical simulations to investigate the MPC closed loop within our multiobjective framework. To this end, we use data from an Australian distribution network, see the work of Ratnam et al 19 for further details.
Notation. For two integers m and n with m ≤ n, we use the notation [m ∶ n] ∶= {m, m + 1, … , n}.

PROBLEM FORMULATION
In this section, we firstly introduce a mathematical model of a prosumer, ie, a residential energy system equipped with both a device for energy generation, eg, a solar-photovoltaic panel, and an energy storage device such as a battery. Secondly, we consider a (potentially) large network of ℐ ∈ ℕ such subsystems. The individual subsystems are connected via a common point of coupling, the grid operator (simply termed Central Entity [CE]), see Figure 1 (left). So far, we essentially follow the modeling approach presented in the works of Ratnam et al 1 and Worthmann et al 2 and further developed in the work of Braun et al. 17 An alternative would be to also model the underlying network structure as a graph and then to incorporate the physical connections within the microgrid, see, eg, the work of Le Floch et al. 20 The CE may pursue different objectives, which leads to a multiobjective optimization problem (MOP) since we assume that the subsystems are cooperative. Here, we exemplarily present two criteria and numerically investigate the network performance w.r.t. each of the two objectives.

Modeling of the system dynamics and the constraints
The ith subsystem, i ∈ [1 ∶ ℐ], is given by

x_i(k + 1) = α_i x_i(k) + T (β_i u_i^+(k) + u_i^-(k)),    (1a)
z_i(k) = w_i(k) + u_i^+(k) + γ_i u_i^-(k),    (1b)

where x_i(k) describes the state of charge (SOC) of the battery and z_i(k) the power demand at time instant k ∈ ℕ_0. The parameter T > 0 is the length of one time step in hours, eg, T = 0.5 corresponds to a half-hour window. The variables u_i^+(k) and u_i^-(k) represent the charging and discharging rate, respectively, while w_i(k) is the net consumption (load minus generation). The constants α_i, β_i, γ_i ∈ (0, 1] model efficiencies w.r.t. self-discharge and energy conversion, respectively. The initial SOC at k = 0 is denoted by x_i^0, ie, x_i(0) = x_i^0 holds. The system variables are subject to the constraints

0 ≤ x_i(k) ≤ C_i,    (2a)
u̲_i ≤ u_i^-(k) ≤ 0,    (2b)
0 ≤ u_i^+(k) ≤ ū_i,    (2c)
u_i^+(k)/ū_i + u_i^-(k)/u̲_i ≤ 1,    (2d)

where C_i ≥ 0 denotes the battery capacity and u̲_i < 0 and ū_i > 0 bound the discharging and charging rates, respectively. Constraint (2d) ensures that the bounds on charging (2c) and discharging (2b) hold even if charging and discharging occur during the same time interval, see Figure 1 (right). For a concise notation, we define the state constraint set 𝒳_i ∶= [0, C_i] and the set of feasible control values 𝒰_i ∶= {(u_i^-, u_i^+)^⊤ ∈ ℝ² | (2b)-(2d) hold}. Furthermore, we use a bar to denote the average value over all subsystems, eg, w̄(k) = (1/ℐ) Σ_{i=1}^{ℐ} w_i(k).
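As an illustration, the prosumer model and the constraints (2b)-(2d) can be sketched in a few lines. The affine SOC update, the placement of the efficiencies (self-discharge vs. energy conversion), and all parameter values below are assumptions in the spirit of the model family cited in the text, not definitive values from the paper.

```python
# Hedged sketch of one prosumer's battery model; parameter values are
# illustrative assumptions (T = 0.5 h, symmetric charging bounds).
def step(x, u_minus, u_plus, w, T=0.5, alpha=0.96, beta=0.95, gamma=0.95):
    """Advance the SOC x by one step and return the resulting power demand z."""
    x_next = alpha * x + T * (beta * u_plus + u_minus)  # SOC dynamics
    z = w + u_plus + gamma * u_minus                    # power demand output
    return x_next, z

def feasible(x, u_minus, u_plus, C=2.0, u_bar=0.5):
    """Check the SOC bound and the (dis)charging constraints, including the
    combined constraint coupling simultaneous charging and discharging."""
    return (0.0 <= x <= C
            and -u_bar <= u_minus <= 0.0 <= u_plus <= u_bar
            and u_plus / u_bar + u_minus / (-u_bar) <= 1.0)
```

Note how the combined constraint rules out charging and discharging at full rate within the same interval, which is exactly the role attributed to (2d).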

First objective: peak shaving
The CE is located at the common point of coupling, where the aggregated sum Σ_{i=1}^{ℐ} z_i(k) is collected. A positive sign corresponds to a power demand, whereas a negative sign indicates a surplus, which reflects that the aggregated local generation exceeds the aggregated local load at time instant k. Hence, the CE requires balancing energy to match the aggregated power demand. While this is comparatively easy for a constant power demand/supply, it is significantly more demanding if the power demand exhibits large fluctuations. Hence, a typical goal is to flatten the energy demand of the network under consideration.
For a mathematical formulation, we first compose an overall system by stacking the subsystem dynamics (1) and imposing the constraints x(k) ∈ 𝒳 ∶= 𝒳_1 × · · · × 𝒳_ℐ ⊂ ℝ^ℐ and u(k) ∈ 𝒰 ∶= 𝒰_1 × · · · × 𝒰_ℐ ⊂ ℝ^{2ℐ}. Hence, for given data, the power demand at time k ≥ 0 results from the charging behavior (u(n))_{n=0}^{k} = ((u^-(n), u^+(n)))_{n=0}^{k} ∈ 𝒰^{k+1} and the initial state. Here, we emphasize that the output depends on the time-varying load and generation data given by (3), which has to be predicted for load shaping, see, eg, the work of Baliyan et al. 21 Note that the reference value ζ̄(n) is the average power demand over the previous N time steps if the data history is sufficiently long, which is reflected by taking the minimum of the history length and N. Then, for given data w(n), n = max{0, k − (N − 1)}, … , k, … , k + N − 1, and state x(k) ∈ ℝ^ℐ, the CE wants to minimize the objective function J_1 ∶ 𝒰^N → ℝ_{≥0},

J_1(u) = Σ_{n=k}^{k+N−1} ( z̄(n) − ζ̄(n) )²,

subject to the system dynamics (1) and the state constraints x_i(n) ∈ 𝒳_i for all n ∈ [k ∶ k + N] and i ∈ [1 ∶ ℐ], which penalizes the deviation of the average power demand from the sequence of reference values given by the vector ζ̄ = (ζ̄(k), … , ζ̄(k + N − 1))^⊤. Here, we call N the optimization horizon, on which it is assumed that the load and generation data can be reliably predicted. The optimal value of this optimization problem is attained and the minimizer is unique, as shown in the work of Braun et al. 15 Hence, we can define a state feedback law by using the first element u^⋆(k) of an optimal control sequence u^⋆ = (u^⋆(n))_{n=k}^{k+N−1}, which is then implemented at the plant. Then, based on the new state x(k + 1) and the new input data (in particular, w(k + N) and ζ̄(k + N)), the procedure can be repeated ad infinitum to generate a closed loop in a receding horizon fashion, as explicated in Algorithm 1, solely based on J_1 (and thus ignoring the, potentially conflicting, second objective J_2); see, eg, the work of Worthmann et al 2 or of Grüne and Pannek 12 for a more detailed introduction to (distributed) MPC.
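The peak-shaving cost and the trailing-average reference described above can be sketched as follows. The quadratic penalty and the exact reference window are assumptions consistent with the text, not a verbatim reproduction of the paper's J_1.

```python
import numpy as np

# Hedged sketch of the peak-shaving objective: track a trailing-average
# reference with the average power demand (window handling is an assumption).
def reference(z_bar_hist, n, N):
    """Average of the recorded average demand over at most the N most recent
    time steps, ie, indices max(0, n-(N-1)), ..., n."""
    lo = max(0, n - (N - 1))
    return float(np.mean(z_bar_hist[lo:n + 1]))

def J1(z_bar, zeta_bar):
    """Sum of squared deviations of the average demand from the reference."""
    z_bar = np.asarray(z_bar, float)
    zeta_bar = np.asarray(zeta_bar, float)
    return float(np.sum((z_bar - zeta_bar) ** 2))
```

In a receding horizon implementation, `J1` would be minimized over the horizon, the first control applied, and the reference recomputed once new data arrives.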
Numerical results illustrating the outcome of such an MPC scheme are depicted in Figure 2. We point out that the reference values (4) cannot be traced due to the constraints (2).

Second objective: flexibility via tube constraints
Besides flattening the power demand profile, another goal of the grid operator may be to provide flexibility to the lower/higher layers of the overall system. The idea is to introduce time-varying tube constraints on the average power demand, ie, we would like to achieve

c̲(n) ≤ z̄(n) ≤ c̄(n)    (5)

for the lower and upper bounds c̲(n) and c̄(n), n ∈ [k ∶ k + N − 1]. However, since the inclusion of the tube constraints (5) may render the problem infeasible, we minimize the violation of the tube constraints by using the objective function J_2 ∶ 𝒰^N → ℝ_{≥0},

J_2(u) = Σ_{n=k}^{k+N−1} ( max{c̲(n) − z̄(n), 0}² + max{z̄(n) − c̄(n), 0}² ),

subject to the system dynamics (1) and the state constraint x_i(n) ∈ 𝒳_i for all n ∈ [k ∶ k + N] and i ∈ [1 ∶ ℐ]. According to this construction, a positive value of J_2(u) reflects a violation of (5), which causes a loss of flexibility in the system. Note that this cost function exhibits some complementarity: for a given time index n, strict positivity of one term implies that the other term is equal to zero, which reflects that a violation of Condition (5) can only occur on one side. Moreover, for a given time index n, the objective function is convex. Numerical solutions are displayed in Figure 3 to illustrate our modeling approach.
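The tube-violation cost can be sketched directly; the squared penalty is an assumption consistent with the text, while the complementarity property (only one side of the tube can be violated at a given time index) holds by construction of the max terms.

```python
import numpy as np

# Sketch of the flexibility objective: penalize one-sided violations of the
# time-varying tube c_lo(n) <= z_bar(n) <= c_hi(n).
def J2(z_bar, c_lo, c_hi):
    z_bar, c_lo, c_hi = (np.asarray(v, float) for v in (z_bar, c_lo, c_hi))
    below = np.maximum(c_lo - z_bar, 0.0)  # lower-bound violation
    above = np.maximum(z_bar - c_hi, 0.0)  # upper-bound violation
    assert np.all(below * above == 0.0)    # complementarity per time step
    return float(np.sum(below ** 2 + above ** 2))
```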

Multiobjective optimization problem and MPC scheme
Combining the nonaligned (conflicting) objectives introduced in the preceding subsections leads to the following MOP:

min_u ( J_1(u), J_2(u) )^⊤ subject to the system dynamics (1) and x_i(n) ∈ 𝒳_i for all n ∈ [k ∶ k + N] and i ∈ [1 ∶ ℐ].

For MOPs of this kind, see, eg, the work of Jahn 27 ; the following optimality concept is typically used.

Definition 1 (Efficiency). A feasible point u⋆ is called efficient if there is no feasible u with J_i(u) ≤ J_i(u⋆) for all i ∈ {1, 2} and J_j(u) < J_j(u⋆) for at least one j ∈ {1, 2}. It is called weakly efficient if there is no feasible u with J_i(u) < J_i(u⋆) for all i ∈ {1, 2}.
Before we proceed, we reformulate the MOP described above in order to facilitate its solution in Section 3. To this end, we begin with the objective function J_2. Here, we introduce the stacked auxiliary variable s = (s̲^⊤, s̄^⊤)^⊤ ∈ ℝ^{2N}_{≥0}, which was already used in the work of Braun et al. 17 Then, we may introduce the relaxed constraints

c̲(n) − s̲(n) ≤ z̄(n) ≤ c̄(n) + s̄(n), n ∈ [k ∶ k + N − 1],    (7)

and minimize the Euclidean norm of the vector s. To this end, we introduce the cost function h ∶ ℝ^{2N}_{≥0} → ℝ_{≥0}, h(s) = ‖s‖². In summary, the additional (slack) variables, which are stacked in the vector s, allow us to represent the optimization objective J_2 by the strictly convex function h at the expense of incorporating the constraints given by (7). Doing so ensures feasibility of the optimization problem, eg, by not using the storage devices at all. If desired from an implementational point of view, the complementarity constraint s̲(n)s̄(n) = 0, n ∈ [k ∶ k + N − 1], may be included while both feasibility and the value of the objective function are maintained. Furthermore, we introduce the auxiliary variable z̄ = (z̄(k), … , z̄(k + N − 1))^⊤ ∈ ℝ^N and the additional equality constraints

z̄(n) = (1/ℐ) Σ_{i=1}^{ℐ} z_i(n), n ∈ [k ∶ k + N − 1].    (8)

Using z̄ and the constraint (8) allows us to replace J_1 by the function g ∶ ℝ^N → ℝ_{≥0}, g(z̄) = ‖z̄ − ζ̄‖², where ζ̄ denotes the vector of reference values from the first objective. Next, we want to rephrase the constraint set such that the output variables can be used as optimization variables instead of u. To this end, we define the set 𝒵 = 𝒵_1 × · · · × 𝒵_ℐ with 𝒵_i denoting the set of output trajectories z_i ∈ ℝ^N attainable under the dynamics (1) and the constraints (2). Note that, for all i ∈ [1 ∶ ℐ], the set 𝒵_i and thus the set 𝒵 are convex and compact in view of the linearity of the constraints (1) and (2). Then, plugging z̄ into the constraint (7) yields the following MOP:

min ( g(z̄), h(s) )^⊤ subject to z ∈ 𝒵, (7), and (8),    (MOP)

where the set 𝒵 is convex. In (MOP), the optimization variables and thus the objectives are coupled via the constraint (8). If, in addition, also individual objectives of the residential energy systems have to be taken into account, the vectors z_1, … , z_ℐ have to be used as optimization variables.
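The slack reformulation can be illustrated as follows: for a fixed average demand profile, the smallest nonnegative slacks that render the relaxed tube constraints feasible are exactly the one-sided violations, so minimizing the squared norm of the slack vector reproduces the tube-violation cost. Variable names and the closed-form computation are illustrative assumptions.

```python
import numpy as np

# Sketch of the slack reformulation: minimal slacks for the relaxed
# constraints c_lo(n) - s_lo(n) <= z_bar(n) <= c_hi(n) + s_hi(n).
def min_slacks(z_bar, c_lo, c_hi):
    z_bar, c_lo, c_hi = (np.asarray(v, float) for v in (z_bar, c_lo, c_hi))
    s_lo = np.maximum(c_lo - z_bar, 0.0)
    s_hi = np.maximum(z_bar - c_hi, 0.0)
    return s_lo, s_hi

def h(s_lo, s_hi):
    """Squared Euclidean norm of the stacked slack vector."""
    return float(np.sum(s_lo ** 2) + np.sum(s_hi ** 2))
```

Note that these minimal slacks automatically satisfy the complementarity condition mentioned in the text, so adding it as an explicit constraint changes neither feasibility nor the optimal value.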
Finally, we explicate the MPC scheme to be applied.
In general, it is quite difficult to provide a meaningful analysis of the MPC closed loop resulting from a multiobjective problem formulation. However, adding a few assumptions allows one to provide some guarantees if a stabilization task is considered, see, eg, the work of Grüne and Stieler. 28 Here, however, we restrict ourselves to the static MOP to be solved in Step 2 of Algorithm 1 and provide numerical simulations to investigate the MPC closed loop.

SCALARIZATION AND NUMERICAL SOLUTION VIA ADMM
In the following, we consider the scalarized MOP

min α g(z̄) + (1 − α) h(s) subject to z ∈ 𝒵, (7), and (8),    (SMOP)

using the scalarization parameter α ∈ [0, 1]. For details on the scalarization of MOPs, we refer to the work of Eichfelder. 29
In this section, we firstly present a numerical case study. Then, we briefly recall the alternating direction method of multipliers (ADMM) before we apply this optimization technique to solve (SMOP). ADMM is an algorithm for distributed optimization of convex, large-scale systems, which allows us to solve (MOP) even for a very large number ℐ of subsystems. To this end, we exploit the reformulation of the original (MOP). We show that the proposed algorithm yields, for each α ∈ [0, 1], not only the optimal value of the scalarized problem but also that the corresponding values of the components given by g and h are unique. Then, based on the results derived in this section, we investigate the connection of the parameter α to the Pareto frontier in the subsequent section.

Numerical simulations
Increasing the value α ∈ [0, 1] shifts the focus from penalizing the violation of the tube constraints to flattening the aggregated power demand profile. In Figure 4, the corresponding closed-loop performance at time instant k = 0 is visualized. Here, we set ℐ = 10, T = 0.5 (time step of 30 minutes), N = 48 (one-day prediction horizon), x_i(0) = 0.5, C_i = 2 (battery capacity), and −u̲_i = ū_i = 0.5 (maximal discharging/charging rate) and vary the tube constraints from (c̲_1, c̄_1) = (0.2, 0.4) to (c̲_2, c̄_2) = (0.6, 0.8) on a time window of three days. Note that the choice of the tube constraints seems inappropriate since it is not aligned with the uncontrolled power demand profile (dotted black line in Figure 4). However, doing so allows us to ensure that the objectives are potentially conflicting goals. Since weather forecasting is not a topic of this paper, we use real-world data on load and generation provided by an Australian distribution network for the prediction of the future net consumption w_i, i ∈ [1 ∶ ℐ]. The choice of T corresponds to this data. Braun et al 17 chose a different cost function g and simply summed the objectives, which complies with a weighting parameter α = 0.5.

Alternating direction method of multipliers
There are two natural approaches to solve the optimization problem (SMOP). The first is to calculate a centralized solution by optimizing the overall system at once. Here, a high rate of communication within the network is needed, and the CE has to know all data of every prosumer in detail, including possible charging rates, battery capacity, and current SOC. The second forgoes communication by optimizing the behavior of each prosumer separately. In this case, the overall optimum cannot be guaranteed, see, eg, the work of Worthmann et al. 2 A remedy is given by distributed optimization algorithms like dual decomposition or ADMM, see, eg, the work of Braun et al 30 for further details.
ADMM is an algorithm used to solve large-scale optimization problems of the form

min F(x) + G(z) subject to Ax + Bz = c

with convex functions F and G. Next, we outline the ADMM algorithm used in the following.
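To make the iteration structure concrete, here is a toy ADMM instance for min F(x) + G(z) subject to x = z with the scalar quadratics F(x) = (x − a)² and G(z) = (z − b)². The closed-form x-, z-, and dual updates mirror the prosumer/CE split described later only structurally; the actual updates for (SMOP), which involve the tube slacks, are not reproduced here.

```python
# Toy scaled-dual ADMM for min (x-a)^2 + (z-b)^2 s.t. x = z.
def admm(a, b, rho=1.0, iters=200):
    x = z = u = 0.0  # primal iterates and scaled dual variable
    for _ in range(iters):
        x = (2 * a + rho * (z - u)) / (2 + rho)  # local ("prosumer") step
        z = (2 * b + rho * (x + u)) / (2 + rho)  # coordinating ("CE") step
        u = u + x - z                            # dual (price) update
    return x, z
```

For this strongly convex instance, the iterates converge linearly to the consensus optimum x = z = (a + b)/2, illustrating the convergence behavior recalled in Theorem 1.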
The great advantages of distributed optimization algorithms such as ADMM are scalability and plug-and-play capability. Since Step 1 in Algorithm 2 can be parallelized, the number ℐ of subsystems does not affect the performance. Moreover, the specific data of the individual systems mentioned above is not known to the CE, which ensures plug-and-play capability.

ADMM yields the solution of the scalarized problem
To apply ADMM in our framework to find efficient points of (MOP), we first scalarize the problem in order to obtain (SMOP). As a second preliminary step, we introduce the auxiliary variables a_i = (a_i(k), a_i(k + 1), … , a_i(k + N − 1))^⊤ ∈ ℝ^N, i ∈ [1 ∶ ℐ], and we use the notation a = (a_1^⊤ · · · a_ℐ^⊤)^⊤ ∈ ℝ^{ℐN} to rewrite (SMOP) as (SMOP2). In ADMM, each prosumer first minimizes the augmented Lagrangian ℒ_ρ w.r.t. its own power demand for some penalty parameter ρ > 0. Then, the CE minimizes ℒ_ρ w.r.t. the aggregated power demand and the auxiliary variable s based on the aggregated optimal power demand provided by the prosumers.
The following theorem recalls convergence properties of ADMM, see theorem 3.1 and subsection 3.2 in the work of Braun et al. 17

Theorem 1 (Convergence of ADMM). Let the augmented Lagrangian ℒ_ρ be defined by (9), and suppose that the unaugmented Lagrangian ℒ_0 admits a saddle point (z⋆, a⋆, s⋆, Π⋆), ie,

ℒ_0(z⋆, a⋆, s⋆, Π) ≤ ℒ_0(z⋆, a⋆, s⋆, Π⋆) ≤ ℒ_0(z, a, s, Π⋆)

∀(z, a, s) and ∀Π. Then, ADMM converges for all z^0 ∈ ℝ^{ℐN} and Π^0 ∈ ℝ^N in the following sense: (i) the residuals converge to zero, (ii) the objective values converge to the optimal value, and (iii) the dual iterates converge to Π⋆.
The optimal values of (SMOP), subject to the relaxed constraints on (ā, s), are equal to those in (SMOP2).
Proof. Theorem 1 (i) and (iii) imply that the (optimal) penalty term is constant, from which the assertion follows.
Next, we work out that the optimal value of (SMOP2) also uniquely determines the optimal values of the functions g and h.
Proof. First consider α ∈ (0, 1) and the modified problem (SMOP2). We show that f is strictly convex. Since f is twice differentiable, it suffices to show that its Hessian H is positive definite, see, eg, subsection 6.4 in the work of Luenberger. 31 It holds that

H = blockdiag( 2α I_N, 2(1 − α) I_{2N} )

and, therefore, the eigenvalues of H are given by 2α and 2(1 − α). Furthermore, since α ∈ (0, 1), the Hessian H is positive definite, and thus, f is strictly convex. Hence, there is a unique minimizer (ā⋆, s⋆) of (SMOP2). Now, consider α ∈ {0, 1}. Since f is still convex in this case, there exists a unique optimal value α g(ā⋆) or (1 − α) h(s⋆), and hence, g(ā⋆) or h(s⋆) is unique, respectively. Furthermore, strict convexity of h (g) yields uniqueness of s⋆ for α = 0 (ā⋆ for α = 1).
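The convexity argument can be checked numerically: with quadratic g and h, the Hessian of the weighted sum is block diagonal with blocks scaled by the two weights, and it is positive definite exactly for weights in the open interval. The dimension N = 3 below is illustrative.

```python
import numpy as np

# Numeric illustration: Hessian of f = alpha*g + (1-alpha)*h for quadratic
# g (tracking cost on R^N) and h(s) = ||s||^2 on R^{2N}.
def hessian(alpha, N=3):
    Hg = 2.0 * np.eye(N)        # Hessian of the quadratic tracking cost g
    Hh = 2.0 * np.eye(2 * N)    # Hessian of h(s) = ||s||^2
    H = np.zeros((3 * N, 3 * N))
    H[:N, :N] = alpha * Hg
    H[N:, N:] = (1.0 - alpha) * Hh
    return H

def is_positive_definite(H):
    return bool(np.all(np.linalg.eigvalsh(H) > 0.0))
```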

Due to Proposition 1, we can reformulate (SMOP2) as

min α g + (1 − α) h,    (SMOP3)

where the optimization is w.r.t. the set of feasible function value pairs (g, h) = (g(ā), h(s)). A fundamental observation on existence and uniqueness of solutions of (SMOP3) can be directly deduced from Proposition 1.

PARETO FRONTIER
This section is dedicated to the Pareto frontier of the optimization problem (MOP). Firstly, we characterize the Pareto frontier and provide a brief sensitivity analysis w.r.t. the initial SOC and the size of the tube constraints. Then, we quantify the trade-off between the two objectives using the concept of proper optimality.

Determination of the Pareto frontier
We study the Pareto frontier of (MOP) for the open-loop optimal control problem to be solved at an arbitrary but fixed time instant k ∈ ℕ 0 within the MPC scheme presented in Algorithm 1. We illustrate our results for k = 0 emphasizing that the numerical findings can be directly transferred to other time instants.
Since we scalarize (MOP) using a weighted sum, we are able to use the following characterization of efficiency, given as propositions 3.9 and 3.10 in the work of Ehrgott. 32

Proposition 2. Consider the MOP (6) with feasible set 𝒮 and objectives f_i, i ∈ [1 ∶ m], and the weighted-sum problem (11). Then, the following statements hold true.

(i) If x⋆ is an optimal solution of (11) with weights λ_i ≥ 0, i ∈ [1 ∶ m], then x⋆ is weakly efficient for (6).
(ii) If x⋆ is a unique optimal solution of (11), then x⋆ is efficient for (6).
(iii) Let, in addition, 𝒮 be a convex set and let f_i, i ∈ [1 ∶ m], be convex functions. If x⋆ is weakly efficient for (6), then there exist λ_i ≥ 0, i ∈ [1 ∶ m], such that x⋆ is an optimal solution of (11).
The main result of this section, a characterization of the Pareto frontier of (MOP), is stated in the following proposition. Note that the optimal vector s⋆ and the optimal vector z̄⋆ are unique for α = 0 and for α = 1, respectively (see Proposition 1). To find efficient points for α = 0 and α = 1, we consider the auxiliary problems

min g(z̄)    (12)

and

min_s h(s),    (13)

respectively, where the minimization is over the respective remaining feasible variables.

Proposition 3.
Let us consider the MOP. Then, the following statements hold true.

(i) The set of weakly efficient points of (MOP) is compact and connected.
(ii) Consider α ∈ (0, 1) and the unique optimal solution (g⋆, h⋆) of (SMOP3). Then, there exists a unique weakly efficient point of (MOP) attaining the objective values (g⋆, h⋆).
(iii) Consider α ∈ {0, 1} and an optimal solution (g⋆, h⋆) of (SMOP3). Then, there exists a unique weakly efficient point (z̄⋆, s⋆) of (MOP), where s⋆ is the unique optimal vector such that (z̄, s⋆) is a solution of (SMOP) for some feasible z̄ and z̄⋆ is the unique optimal solution of (12), if α = 0; and z̄⋆ is the unique optimal vector such that (z̄⋆, s) is a solution of (SMOP) for some feasible s and s⋆ is the unique optimal solution of (13), if α = 1.
(iv) The map P ∶ [0, 1] → ℝ²_{≥0}, P(α) ∶= (P_1(α), P_2(α)) = (g(z̄⋆), h(s⋆)), is well defined, and the component P_1 strictly decreases, whereas P_2 strictly increases. Furthermore, P is injective and the image of P coincides with the Pareto frontier of (MOP).
Proof. We show the single claims of the proposition consecutively.
(i) As a subset of the compact feasible set of (MOP), the set of weakly efficient points is bounded. Moreover, theorem 4.6 in the work of Luc 33 provides closedness and connectedness of the set of weakly efficient points of (MOP) and, hence, compactness, which shows the assertion. (ii) Due to construction of (SMOP3), Statement (ii) follows directly from Proposition 2 (ii). (iii) Similar to the second claim, the first part of Statement (iii) can be deduced from Proposition 2 (i). To prove the second part, assume without loss of generality = 0. Due to Proposition 1, there exists a minimizer (ā, s ⋆ ) of (SMOP2) and hence a minimizer (z, s ⋆ ) of (SMOP) where s ⋆ is unique. Moreover, since g is strictly convex, the optimal solutionz ⋆ of Problem (12) is unique. By construction, (z ⋆ , s ⋆ ) is efficient for (MOP). (iv) Well-definedness of P is directly inherited from g and h. The monotonicity properties can be seen as follows.
Since the components P_1 and P_2 are strictly monotone, P is injective. Furthermore, Proposition 2 (iii) yields that each efficient point (z̄⋆, s⋆) of (MOP) can be found by varying α ∈ [0, 1] and solving (SMOP). Hence, by construction, the image of P coincides with the Pareto frontier, which completes the proof.
To determine the Pareto frontier numerically, we run Algorithm 2 for different α ∈ [0, 1]. Here, we set c̲_i and c̄_i, i ∈ [1 ∶ ℐ], according to the values depicted in Figure 6 (top right). Let z_i(k) ∈ 𝒵_i, i ∈ [1 ∶ ℐ], and Π^0 ∈ ℝ^N be given. For given α ∈ [0, 1], the unique optimal values P_1(α) and P_2(α) are denoted by g_α and h_α, respectively. The results can be found in Figure 5. Note that, for α = 0.6, the maximum of f is obtained.
Next, we display the values in the table in Figure 5 in a (g, h)-plane to get an approximation of the Pareto frontier, which is visualized in Figure 6 (top left). Note that, for α → 0 and α → 1, the arcs asymptotically approach the minimal values of the optimization problems min_{(ā,s)} h(s) and min_{(ā,s)} g(ā), respectively.
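The sweep over the scalarization parameter can be mimicked with a toy biobjective problem in place of the paper's g and h: for the illustrative objectives g(x) = x² and h(x) = (x − 1)² over x ∈ ℝ, the scalarized minimizer is x⋆(α) = 1 − α, so the map P(α) = ((1 − α)², α²) can be traced in closed form. This reproduces, on a toy example, the monotonicity stated for P_1 and P_2.

```python
import numpy as np

# Toy Pareto frontier sweep: P_1 strictly decreases and P_2 strictly
# increases along alpha, as in the characterization of the frontier.
def frontier(num=11):
    alphas = np.linspace(0.0, 1.0, num)
    x_star = 1.0 - alphas            # closed-form minimizer of the scalarization
    return x_star ** 2, alphas ** 2  # (P_1, P_2) along the frontier
```

In the paper's setting, each evaluation of P(α) requires one run of the ADMM scheme instead of a closed-form minimizer, but the resulting (g_α, h_α) pairs are plotted in exactly this fashion.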
The convergence of the Pareto frontier in dependence on the number of iteration steps within Algorithm 2 (ADMM) is depicted in Figure 6 (bottom left). Here, we consider the functions g ∶ ℝ^N → ℝ and h ∶ ℝ^{2N} → ℝ as defined above and, in addition, introduce the functions ĝ, ĥ ∶ ℝ^N → ℝ, which evaluate the two objectives along the aggregated iterates. Due to Theorem 1, the sequence (z^ℓ, a^ℓ, s^ℓ)_{ℓ∈ℕ_0} converges to the optimal solution (z⋆, a⋆, s⋆) of (SMOP2) fulfilling g(ā⋆) = ĝ(z̄⋆) and h(s⋆) = ĥ(z̄⋆). Regarding Figure 6 (bottom left), however, it can be observed that the convergence rate of z̄^ℓ → z̄⋆ is much higher than the convergence rate of (ā^ℓ, s^ℓ) → (ā⋆, s⋆). Note that the points generated by g and h (before convergence) are not feasible, while those generated by ĝ and ĥ are not optimal.
The remaining graphic of Figure

Sensitivity analysis
In Figure 6, we set the average initial SOC x̄(0) = 1.0 and the tube size c̄ − c̲ = 0.2. Next, we explore how the Pareto frontier changes if we modify these parameters.
In Figure 7 (left), Pareto frontiers for different tube sizes are plotted. Increasing the size of the tube amounts to a relaxation of the subproblem min h(s). The larger the difference c̄ − c̲, the easier it gets to keep h small. Small tube sizes do not affect the performance of g drastically. If the tube becomes large enough to keep h(s) small without effort, however, the focus of the optimization shifts to g(ā).
The impact of the initial SOC x̄(0) on the Pareto frontier is visualized in Figure 7 (right). At first sight, the performance w.r.t. objective g seems to improve with increasing x̄(0), while the performance w.r.t. h deteriorates, and vice versa. To interpret this behavior, one has to take the time series w̄(n), n ∈ [k ∶ k + N − 1], into account, see Figure 6 (top right, dotted black line). If the batteries are completely discharged at the beginning of the time interval, there is no possibility to reduce the energy demand by discharging the batteries. Hence, the tube constraints are violated due to w̄(n) > c̄(n) for small n. To compensate Σ_{n=k}^{k+15} ζ̄(n) > Σ_{n=k}^{k+15} w̄(n), on the other hand, one needs to charge the batteries. Hence, the higher the SOC at the beginning, the harder it gets to trace the desired trajectory.

Proper optimality
Considering an efficient point, improving the performance w.r.t. one objective is only possible at the expense of the performance w.r.t. the other. We investigate the trade-off between (potentially) conflicting objectives using the concept of proper optimality. Note that there are different definitions given by several authors. 32,34,35 In this paper, however, we focus on proper optimality in the sense of Geoffrion, which extends efficiency by introducing an upper bound on the trade-off between two objectives.

Definition 2 (Proper efficiency). A point x⋆ ∈ 𝒮 is called properly efficient (in the sense of Geoffrion) for the MOP (6) if x⋆ is efficient for (6) and there exists some L > 0 such that, for all i ∈ {1, … , m} and x ∈ 𝒮 with f_i(x) < f_i(x⋆), there exists some index j ∈ {1, … , m} with f_j(x⋆) < f_j(x) and

( f_i(x⋆) − f_i(x) ) / ( f_j(x) − f_j(x⋆) ) ≤ L.    (15)

Remark 4. In Figure 8 (top right), we consider the optimal solution (g_0, h_0) of (SMOP3) corresponding to a given α_0 ∈ [0, 1]. If we want to improve the performance w.r.t. objective g by the amount of g_0 − g_1, the performance w.r.t. objective h worsens by the amount of h_1 − h_0. Using the function H as defined in Remark 3 and the relation g_α = P_1(α), α ∈ [0, 1], Inequality (15) can be written as

( H(g_1) − H(g_0) ) / ( g_0 − g_1 ) ≤ L.

For the particular case g_1 → g_0, this reads as −H′(g_0) ≤ L, where the left-hand side is the negative of the derivative of H w.r.t. g at g_0, if existent. The latter indicates the cost of an infinitesimal improvement w.r.t. the performance of g. Hence, proper optimality provides an upper bound to this cost, which is an essential piece of information for a decision maker.
The following theorem provides a characterization of the properly efficient points of (6) using a weighted sum scalarization as in Proposition 2.
Theorem 2 (See theorem 3.15 in the work of Ehrgott 32 ). Consider the MOP (6) with a convex feasible set 𝒮 and convex component functions f_i, i ∈ [1 ∶ m]. Then, the following holds true: A point x⋆ ∈ 𝒮 is properly efficient for (6) if and only if x⋆ is an optimal solution of (11) with positive weighting parameters λ_i > 0, i ∈ [1 ∶ m].
This result can be directly applied to (MOP).
We conclude this section with an example of calculating the bound L for α = 0.2 numerically.

Example 1. Consider α_0 = 0.2. We provide an intuitive way to determine an upper bound L at the corresponding optimal value pair (g_0, h_0) = (1.452, 0.442). Since g_α and h_α result from an optimization routine, for which there is no explicit formula, we cannot compute L explicitly.
Instead, we calculate

L_{α_0}(α_1) ∶= ( h_{α_1} − h_{α_0} ) / ( g_{α_0} − g_{α_1} )

for all α_1 in the table in Figure 5 (left). Additionally, we compute L_{α_0}(0.01) and L_{α_0}(0.99) to illustrate the behavior near the extrema. Then, we choose max{L_{α_0}(α_1) | α_1 ∈ K}, where K = 0.05 · [0 ∶ 20] ∪ {0.01, 0.99}, as an approximation of the upper bound L_{α_0} in (15). The results can be found in the table in Figure 8. One can observe that L_{0.2} ≈ 4.74 holds, ie, to gain one quantity in g direction, one has to spend at most 4.74 quantities in h direction, and vice versa. In our case, ie, α_0 = 0.2, the bound on the second relation is active, while the first one can be replaced by 1/L_{0.2} = 0.211, ie, to gain one quantity in h direction, one has to spend at most 0.211 units in g direction.
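The finite-difference estimate from Example 1 can be sketched on an illustrative frontier. The frontier below, h = (1 − √g)², stands in for the table in Figure 5; all values are illustrative and not the paper's data.

```python
import numpy as np

# Sketch of the trade-off bound estimate: compare a reference point
# (g0, h0) against competitor points (g1, h1) with g1 < g0 via the ratio
# (h1 - h0) / (g0 - g1) and take the maximum over the sampled frontier.
def trade_off_bound(g0, h0, g_vals, h_vals):
    ratios = [(h1 - h0) / (g0 - g1)
              for g1, h1 in zip(g_vals, h_vals) if g1 < g0]
    return max(ratios)
```

On the toy frontier parameterized by α, with g = (1 − α)² and h = α², the reference point at α_0 = 0.2 yields the bound (1 + α_0)/(1 − α_0) = 1.5, attained at α_1 = 1.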
In Figure 8 (bottom right), the evolution of the bound L_α is illustrated for α ∈ [0.05, 0.99].

CONCLUSIONS AND OUTLOOK
In this paper, we have analyzed the Pareto frontier of an MOP in which a trade-off between peak shaving and providing flexibility has to be made. To this end, we first considered the corresponding scalarization, linked it to ADMM in order to (numerically) compute its solution for a given scalarization parameter, and rigorously showed that, by doing so, we obtain the whole Pareto frontier. Moreover, by using the concept of proper optimality introduced by Geoffrion, we further investigated the Pareto frontier, which allowed us to quantify the trade-off between the conflicting objectives. There are several ways to extend our model. One could easily consider the impact of controllable loads as introduced in the work of Braun et al 36 or varying lower/upper bounds on (dis)charging rates. Another possible modification of our optimization problem is to additionally penalize the use of the batteries, eg, in the form of ‖u_i‖. The presented approach, in particular ADMM, still works as long as the sets of feasible solutions z_i, i ∈ [1 ∶ ℐ], stay polyhedral.
Throughout this paper, we assumed the net consumption to be given. Finding a suitable prediction method is an interesting topic for future research. For the goal of load shaping, the robustness of MPC schemes w.r.t. inaccurate forecasts was numerically investigated in the work of Worthmann et al. 2 Moreover, there are interesting approaches to estimate the energy generation within the prediction window more thoroughly using artificial neural networks as pointed out in the works of Mabel and Fernandez 37 and Chow et al. 38 A good starting point would be the work of Faulwasser et al 39 and the references therein.