Tracing spinning reserve inadequacy risk via hybrid importance sampling with an optimised partially collapsed Gibbs sampler

From the simulation perspective, spinning reserve risk evaluation of power systems is commonly a rare-event assessment issue, for which importance sampling is an appealing solution technique. This paper proposes a hybrid importance sampling method for tracing the spinning reserve inadequacy risk of power systems integrated with renewables. The proposed method is oriented towards a generic high-dimensional integral calculus, which allows continuous and discrete random variables to be mixed together for casting various kinds of risk indices. To improve robustness in high-dimensional applications, a partially collapsed Gibbs sampler is devised to help explore adaptive training samples which feature typical rare outage events. Interestingly, the originators engendering extreme spinning reserve inadequacy events can be traced using the sampling information of the proposed method. The performance of the proposed method is tested on an updated RTS-79 system vis-à-vis several existing methods. Spinning reserve inadequacy risk tracing is explained by uncovering a scenario frequency distribution concerning the short-term expected energy not supplied.

Profiting from the rapid development of advanced computation techniques and equipment, contemporary power systems can be safeguarded with operating reserve of coordinated response abilities against a broad spectrum of contingencies, which is precisely why hazardous blackouts have become much rarer. However, outage events occasionally reported by both developed and developing countries over recent years indicate that there is still a long way for operating reserve planning to go with penetrating renewables; they also point to the current call for forward-looking, comprehensive risk evaluation to advise how operating reserve planning should be reinforced to adapt to the new operating environment.
There are various ways to label the operating reserve; amongst them, spinning reserve is a crucial member of the operating reserve family, also known as secondary frequency regulation. As a requisite ancillary service, it is supposed to respond within 15 min or so to eliminate the frequency deviation remaining after primary frequency regulation; this secondary regulation is typically realised through automatic generation control (AGC). The high cost of sustaining spinning reserve makes its planning a fundamental balancing issue between reliability and economics. Thus, it is impossible to achieve the optimum unless all the random influencing factors can be precisely captured. Spinning reserve risk evaluation is valuable for providing information on imperceptible vulnerability overlooked by optimal generation preplanning, thanks to its prominent capacity to accommodate as many random factors as possible. Many methods to quantify spinning reserve risk have been studied. Liang et al. [9] evaluated the operational reliability of wind power integrated power systems by considering spinning reserve response and frequency control processes. Min et al. [10] described a risk evaluation method for ramping capability shortage, wherein two major uncertainties in power systems are considered: failure of generating units and net-load forecast error. A dynamic sizing method proposed in [11] determines the required reserve capacity on a daily basis, using system imbalance risk estimation based on historical observations of system conditions by means of machine learning algorithms. Obando et al. [12] proposed to use extreme values through Monte-Carlo simulations to quantify operating reserves. Note that, besides the widely known random factors of unit faults and generation/demand fluctuation, specific factors may also influence the operating reserve capacity of a power plant considerably [13].
It can also be noted that the random factors taken into account in spinning reserve planning have been diversifying over recent decades, not only because of the presence of the foregoing renewables but also because of responsive demand [14,15], energy storage [16], and electric vehicles [17].
In [14], a hierarchical structure of users providing operating reserves to the power system through load aggregators is proposed. The methodology emphasises the uncertainty of users' beliefs about the other users' true willingness to provide the operating reserve. A method to size operating reserve by calling for flexible demand is proposed in [15], where transmission line failures and uncertainties in the reserve capacities of contracted providers are analysed, taking into account customers' behaviour, random failures of the information and communication system, and different load types. Reference [16] focuses on the optimal provision of the flexible ramping product by a battery energy storage aggregator in the day-ahead joint energy and ancillary service markets. Two sources of uncertainty are considered: the uncertainty of ramping product offers being accepted in the day-ahead market and that of the accepted offers being deployed in the real-time market. A model proposed in [17] serves to optimise the day-ahead spinning reserve requirement considering plug-in electric vehicles' contribution to providing operating reserve; the uncertainty of traveller behaviour is simulated following Gaussian distributions, yet events of two or more unit failures are neglected in terms of their rarity.
As can be noted, the uncertainty factors considered in spinning reserve planning research vary across the literature for the sake of reducing computational complexity to various degrees; as a result, a conclusion drawn with reduced uncertainty factors is questionable to scale from system to system. A comprehensive risk evaluation considering the major uncertainties is imperative to validate a plausible reserve planning scheme. Note that spinning reserve risk evaluation falls into the scope of rare-event evaluation of a high-dimensional system, a speciality that makes importance sampling techniques appealing for conducting the evaluation task. Indeed, adaptive importance sampling methods accounting for generating unit forced outage and statistical fluctuation of wind power and load demand have been proposed to deal with spinning reserve risk evaluation [18][19][20][21][22]. However, these methods are developed without considering the implicit kinds of random variable correlation in a spinning reserve planning issue. Copulas, due to their high flexibility, are widely employed to model random variable correlation in power system research [23,24], and many importance sampling methods dealing with copulas have been proposed in similar disciplines, for instance, subset simulation, the cross-entropy method integrated with Gaussian kernels [25], and sequential Monte Carlo [26]. However, almost all these methods count on classical Monte Carlo methods in their first steps to gather enough failure event samples to ensure a successful launch of the algorithms. Note that the special rarity of spinning reserve inadequacy events is mainly attributable to the low failure probability of generating units in a short-term horizon. Consequently, short-term outage events formulated by unit failures are too rare to be sampled with classical Monte Carlo methods.
In sum, there is still a lack of an efficient spinning reserve evaluation method which accounts for both kinds of random variable correlation and the extremely low failure probability of generating units.
To fill this gap, based on the authors' previous work [20] and [21], this paper contributes a new importance sampling method to renewable energy system spinning reserve risk evaluation. The salient features of this method are three-fold:
• The integrand in the generic expression of risk indices proposed in [21] is reformulated to adapt to categorical distributions and Gaussian densities in the standard space, thus facilitating the training of dedicated proposal densities for simulating discrete and correlated continuous random variables, respectively.
• The robustness in high-dimensional applications is further enhanced by an optimised partially collapsed Gibbs sampler for the reformulated generic expression of risk indices.
• Critical scenarios leading to extremely rare outages can be traced as per a scenario frequency distribution with respect to a reliability index. The scenario frequency distribution is uncovered by mining the sampling records of the proposed method.
The remaining part of this paper is organised as follows: Section 2 reviews the concepts and rareness features of spinning reserve risk evaluation and formulates the risk index assessment problem as an integral over stochastic rare events. For the generic integral expression, the proposed hybrid importance sampling method is presented in Section 3. Specific performance analysis is made in Section 4 with several simulation methods involved in the comparison, and a typical application is exemplified at the end of that section. Finally, Section 5 concludes this paper.

Spinning reserve risk evaluation
The concept of spinning reserve varies from country to country and is subject to diversity in the presence of responsive demand, dispersed renewable energies, and energy storage blending into the generation array. Roughly, spinning reserve is generally defined to be a bulk of synchronised capacity of a power supplying system, mandatorily kept spared on tap at all times to offset a supply-consumption imbalance stemming from load demand fluctuations accompanied by stochastic failures of supporting facilities [21]. Thus, spinning reserve risk evaluation concerns the system outage risk induced by an unexpected spinning reserve insufficiency in a lead time, Δt, as short as 5-15 min.
To quantify the risk, all the stochastic behaviours of the influencing factors in Δt should be modelled. In particular, the forced outage rate U is commonly used to approximate the component short-term failure probability, as expressed by (1):

U = 1 − e^(−λΔt) ≈ λΔt, (1)

where λ is the component failure rate. For the sake of accuracy, multi-state transition rates [18] or dynamic failure rates [27,28] can also be derived for specific components so long as the database at hand is sufficient. Besides, other uncertain factors in the same Δt, for example power demand and renewable resources, should be described with probabilistic distributions as required.
For regular operation, λ ought to be maintained small enough, for example 1 occ./y; consequently, U ≤ 2.85 × 10⁻⁵ holds for a lead time below 15 min, and hence the probability of an outage event resulting from a common-mode or higher-order outage of multiple such components comprising a minimal cut set would be extremely small. Thus, from the perspective of simulation, the spinning reserve risk evaluation of renewable energy systems falls into the scope of high-dimensional rare-event evaluation in terms of a myriad of dispersed small generating sources.
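The rareness argument above can be reproduced numerically. The following sketch (in Python for illustration, whereas the paper's experiments use Julia) evaluates the short-term unavailability U = 1 − e^(−λΔt) for a unit with λ = 1 occ./y over a 15-min lead time, and the probability of a second-order outage of a hypothetical two-unit minimal cut set:

```python
import math

def forced_outage_rate(lam_per_year: float, lead_time_min: float) -> float:
    """Short-term unavailability U = 1 - exp(-lambda * dt) ~ lambda * dt."""
    dt_years = lead_time_min / (60 * 24 * 365)
    return 1.0 - math.exp(-lam_per_year * dt_years)

# A unit failing once per year, observed over a 15-min lead time:
U = forced_outage_rate(1.0, 15.0)   # ~2.85e-5, matching the bound in the text

# A second-order outage of a hypothetical two-unit minimal cut set is rarer still:
p_double = U ** 2                   # on the order of 1e-9
```

Events at this probability level are effectively never hit by crude Monte Carlo with realistic sample budgets, which is precisely what motivates importance sampling here.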

Generic importance sampling framework
For power system rare-event evaluation, any risk/reliability index (RI) can be generically cast as a recourse probability measurement multiplied by an exponential constant [21], that is, formally,

RI = P(Ω_γ) e^γ, (2)

where S(X) is a real-valued function quantifying the outcome of the n-dimensional random X (an all-too-familiar exemplar of S(X) to the reliability community may be the power not supplied in response to X), and the recourse probability measurement is defined by

P(Ω_γ) = ∫ I{H(X, u, γ) ≤ 0} f(Z) dZ, (3)

where I(condition) is the 0-1 indicator function which takes the value 1 if the condition is satisfied. Also, by definition,

H(X, u, γ) = γ + ln u − ln S(X), (4)

which can also be referred to as the limit state function in the structural safety subject. Note, γ is a predefined positive constant satisfying γ ≥ sup_X ln S(X), and Z = [X, u] is an (n+1)-dimensional random variable vector with u being a standard uniformly distributed variable independent of X. f(Z) denotes the joint probability density function of Z.
In the context of power system spinning reserve research, discrete random variables, for example those representing stochastic operating states of generators and transmission lines, can rationally be regarded as independent of continuous variables which could represent generating/demand power. As for the grouped continuous variables, copulas can be employed to govern kinds of inter-correlation [23,24]. In this regard, this paper assumes that the two categories of variables do not correlate with each other, and the integrand of (3) can readily be further decomposed as

I{H[T⁻¹(Z_U), Z_D, u, γ] ≤ 0} P_D(Z_D) φ(Z_U) f(u), (5)

where Z is decomposed into two groups of discrete Z_D and continuous Z_U. Without loss of generality, each element of Z_D is assumed to follow an independent categorical distribution, with the joint probability mass function denoted by P_D, and Z_U consists of continuous variables conforming to the standard multivariate normal density φ. T stands for a bijective probabilistic transformation, for example the Nataf transformation, from the physical space X into the standard space Z_U; thus, T plays an important role in abstracting all the inter-correlation of X into I(condition).
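The recourse-probability recast can be made concrete on a toy problem. The sketch below (Python; the severity function S and the value of γ are illustrative choices, not from the paper) takes the limit state as H = γ + ln u − ln S(X), consistent with the stated requirement γ ≥ sup ln S(X), and verifies by crude Monte Carlo that e^γ multiplied by the recourse probability recovers the expectation E[S(X)]:

```python
import math, random

random.seed(7)

def S(x):                  # toy severity function with bounded ln S
    return math.exp(x)     # X ~ Uniform(0, 1)  =>  E[S(X)] = e - 1

gamma = 1.0                # gamma >= sup ln S(X) = 1 on the support of X

N = 200_000
hits = 0
for _ in range(N):
    x, u = random.random(), random.random()
    if gamma + math.log(u) - math.log(S(x)) <= 0:   # limit state H <= 0
        hits += 1

ri_hat = math.exp(gamma) * hits / N   # recourse probability times e^gamma
exact = math.e - 1                    # analytic E[S(X)] ~ 1.7183
```

The identity holds because P(u ≤ S(X)e^(−γ)) = E[S(X)e^(−γ)] whenever S(X)e^(−γ) ≤ 1, which is exactly what the choice of γ guarantees.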

Features of the generic importance sampling framework
From the perspective of importance sampling simulation, the foremost advantage of (5) lies in that it spares the trouble of considering the inter-correlation of each original continuous random variable in the decision process of parameterising the proposal density. Specifically, no matter how RI is defined, the best proposal density can always be assumed proportional to a product of truncated univariate Gaussian densities and categorical distributions. Moreover, the parameters of the proposal density, more precisely the univariate Gaussian locations (means) and scales (standard deviations) and the category probabilities, can unexceptionally be resolved analytically [19]. In this regard, such a strategy for parametrising the proposal density, as compared to other schemes like kernel density [29], copula density [30] or non-parametric density [31], is not only straightforward but also conducive, as shown later, to cheaply generating apt training samples that fit an elegant proposal density well.
The following section first reviews some methodologies for estimating (5); then, a new mechanism to collect training samples for constructing the proposal density indispensable to importance sampling is inspired by pros-and-cons analyses of the reviewed methods.

Adaptive importance sampling based on cross-entropy
The cross-entropy method is a well-documented adaptive importance sampling method and has been extensively used in power system reliability studies [18][19][20][21][22],[32]. It succeeds by calibrating the proposal density from an assumptive prototype in a data-driven manner [33]. For estimating (5) via the cross-entropy method, let us assume a parametric proposal density structured as

g_IS(Z_D, u, Z_U; Θ) = P_P(Z_D; Θ) B_P(u; Θ) p_P(Z_U; Θ), (6)

where B_P: [0,1] → R⁺ stands for a Beta density, P_P and p_P represent the joint probability mass and density functions coming from the same families as P_D and φ respectively, and all the parameters pending determination are abstracted into a vector Θ. Then, the optimal proposal density can be approached from this parametric structure with the optimal Θ*, that is, g_IS(Z_D, u, Z_U; Θ*). Intuitively, g_IS(Z_D, u, Z_U; Θ*) can never be known prior to P(Ω_γ). Fortunately, this paradox can be tackled by iterating: initialise Θ first to simulate samples, which are in turn fitted to update Θ, and so on. Thus, many adaptive importance sampling methods, like the cross-entropy method, count on crude Monte Carlo to gather enough effective samples necessarily satisfying I{H[T⁻¹(Z_U), Z_D, u, γ] ≤ 0} = 1. Due to this mechanism, one of the reasons for its failure in estimating (5) is the deficiency of this setup in collecting such requested training samples of extremely rare events [20,34].
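The adaptive calibration loop described above can be sketched on a textbook rare event, P(X > 4) for standard normal X, with a univariate Gaussian proposal family standing in for the parametric structure (6). This is a Python sketch under stated simplifications; the thresholds, elite fraction, and sample budgets are illustrative rather than taken from the paper:

```python
import math, random
from statistics import NormalDist

random.seed(1)
phi = NormalDist()                    # nominal density N(0, 1)
target = 4.0                          # rare event {X > 4}, P ~ 3.17e-5
N, rho = 3000, 0.1                    # samples per level, elite fraction

mu, sig = 0.0, 1.0                    # parametric proposal N(mu, sig)
for _ in range(20):                   # adaptive levels toward the target
    xs = [random.gauss(mu, sig) for _ in range(N)]
    xs.sort(reverse=True)
    level = min(xs[int(rho * N)], target)     # raise threshold gradually
    elite = [x for x in xs if x >= level]
    # closed-form weighted MLE update, weights = nominal/proposal ratio
    w = [phi.pdf(x) / NormalDist(mu, sig).pdf(x) for x in elite]
    mu = sum(wi * x for wi, x in zip(w, elite)) / sum(w)
    sig = math.sqrt(sum(wi * (x - mu) ** 2 for wi, x in zip(w, elite)) / sum(w))
    if level >= target:
        break

# final importance sampling run with the calibrated proposal
xs = [random.gauss(mu, sig) for _ in range(N)]
p_hat = sum(phi.pdf(x) / NormalDist(mu, sig).pdf(x)
            for x in xs if x > target) / N
exact = 1 - phi.cdf(target)           # ~3.167e-5 for reference
```

Here the elite-quantile levels let each crude draw supply effective samples; the failure mode the text points out arises when the requested effective samples, formed by discrete unit outages of probability ~10⁻⁵ each, never appear in the initial crude draws at all.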
A specific partially collapsed Gibbs sampler was recently proposed in [20] to integrate with the cross-entropy theory for improving the efficiency of gathering rare training samples; it proves its efficacy in overcoming the outage event rareness and, moreover, requires less effort for regulating hyperparameters. The core idea of this method is to gather the requested training samples from proper marginal densities via Markov chain Monte Carlo. The advantages include that not only is there no need to initialise Θ, but outage events can always be simulated no matter how rare they are. Generally speaking, to conduct such a method, we need to select one or more variables, for example the unit states or load demands, to be marginalised from the joint density to get a reduced joint density; then the other variables, all or some, are sampled from this reduced density via any Gibbs or Metropolis-Hastings method.
Note, to ensure the Markov chains faithfully come from the target density, the selected marginalised-out variable(s) must always be sampled after those variables following the reduced marginal distribution and, meanwhile, before any variables conditioned on it. Although the mechanism proves able to accelerate the Markov chain mixing, performance decay is still observed in high-dimensional applications; convincing evidence can be found in [35], for instance. In a nutshell, the variety of a limited number of high-dimensional training vector samples is bound to lock locally, or, equivalently, the obtained Markov chains look non-irreducible within the limited sample pool. In this regard, we need a way to help the marginalised-out variable escape the blockade, so as to increase the ergodicity of a limited number of high-dimensional training vector samples. Not only that, the new sampler should also be applicable to the case of the generic proposal density defined by categorical distributions compounded with standard multivariate Gaussian densities.
To achieve these goals, the following section elaborates a hybrid importance sampling method as the main contribution of this paper.

Proposed partially collapsed Gibbs sampler
The sampler deficiency discussed above motivates the following devised partially collapsed Gibbs sampler. Some concepts following [36] need to be clarified prior to the presentation of the detailed method. Given a joint density f(X), the reduced univariate marginal (categorical/Gaussian) distribution of x_i is defined as

f(x_i | x̃_∖{i,j}) = ∫ f(x_i, x_j | x̃_∖{i,j}) dx_j, (7)

where x_j is said to be a (categorical/Gaussian) variable marginalised out of f(X), and the notation '~' is introduced to highlight a given sample of the corresponding random variable on which x_i is conditioned. Then, the hybrid importance sampling method is put forth as follows:
Algorithm Preparation:
1. Given λ and the probability mass or density functions of all the random variables, build the specific (4) and (5) for an index. Initialise a large value of γ satisfying γ ≥ sup_X ln S(X), select the variable to be marginalised from among the Gaussian variables, and then find the reduced univariate marginal distributions for the categorical variables. Appendix A lists the detailed formulas of the reduced univariate marginal categorical distributions resulting from marginalising a Gaussian variable or the recourse variable u, respectively, for easy reference.
2. Collect training samples featuring rare outage events via the devised partially collapsed Gibbs sampler; with these samples, univariate categorical distributions and/or Gaussian densities are fitted via the maximum likelihood method. As aforementioned, closed-form solutions can be found rather than inefficient numerical gauging. The fitted mass or density functions are multiplied together to formulate the intact proposal density.
3. Conduct the importance sampling based on the proposal density to estimate (5), until a prescribed ceasing criterion is satisfied. Commonly employed criteria are, for instance, the total number N_IS of simulated likelihood-correcting samples and the coefficient of variation.
4. At last, estimate the desired index as P̂(Ω_γ) e^γ.
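The closed-form maximum-likelihood fits in the fitting step reduce, for each univariate factor of the proposal density, to weighted counting and weighted moments. A minimal Python sketch (the sample values and weights below are hypothetical, not drawn from the paper's data):

```python
import math

def fit_categorical(samples, weights, n_states):
    """Weighted MLE of category probabilities: p_k = sum(w | z == k) / sum(w)."""
    tot = sum(weights)
    return [sum(w for z, w in zip(samples, weights) if z == k) / tot
            for k in range(n_states)]

def fit_gaussian(samples, weights):
    """Weighted MLE of the location and scale of a univariate Gaussian."""
    tot = sum(weights)
    mu = sum(w * z for z, w in zip(samples, weights)) / tot
    var = sum(w * (z - mu) ** 2 for z, w in zip(samples, weights)) / tot
    return mu, math.sqrt(var)

# e.g. hypothetical unit-state samples from rare-outage training records
p = fit_categorical([0, 1, 1, 1], [0.5, 1.0, 1.0, 0.5], 2)   # -> [1/6, 5/6]
```

No numerical optimisation is involved, which is why the fitting step stays cheap even when repeated over many univariate factors.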
Remark: the strategy of selecting a Gaussian variable to be marginalised, together with the principle of defining O_i, is very different from [20] and [21]. For suitability under more general conditions, reduced univariate marginal categorical distributions are capitalised on because they can always be deduced without much effort. However, such a marginalisation strategy does not rule out other possible strategies that may be more efficient in special applications. Case in point: the recourse variable u is also a candidate for the marginalisation for discrete variables (see (A2) in the Appendix); however, as demonstrated by the later case study, marginalising u is not necessarily better than marginalising a Gaussian variable. Nevertheless, a trick still exists for the extreme case in which X is completely defined by categorical distributions; then, u comes into play.
Finally, to recapitulate briefly, the focal marginalising manipulation and conditional simulation of the proposed partially collapsed Gibbs sampler supporting the hybrid importance sampling method are generic to implement under the unified framework expressed by (5): every time a reduced marginal distribution is determined or a univariate sample is drawn, it suffices to refresh the boundary solutions of the corresponding truncated univariate Gaussian or categorical distribution by substituting the latest vector sample into H[T⁻¹(Z_U), Z_D, u, γ] ≤ 0.
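The boundary-refreshing mechanics can be illustrated on a stylised two-variable case. The Python sketch below shows a plain Gibbs loop (the marginalisation step of the partially collapsed sampler is omitted for brevity) over standard Gaussians constrained to a linear rare-event region z1 + z2 ≥ β: each univariate update is an inverse-CDF draw from a truncated Gaussian whose lower bound is refreshed from the latest sample of the other variable. The value of β and the chain length are illustrative:

```python
import random
from statistics import NormalDist

random.seed(3)
nd = NormalDist()
beta = 3.5                      # limit state: H <= 0  <=>  z1 + z2 >= beta

def draw_truncated(lower):
    """Inverse-CDF draw from N(0, 1) truncated to [lower, +inf)."""
    lo = nd.cdf(lower)
    return nd.inv_cdf(lo + random.random() * (1.0 - lo))

z1, z2 = 0.0, beta              # feasible starting point
chain = []
for _ in range(2000):
    z1 = draw_truncated(beta - z2)   # refresh bound from the latest z2
    z2 = draw_truncated(beta - z1)   # refresh bound from the latest z1
    chain.append((z1, z2))

# every state of the chain satisfies the rare-event constraint by construction
assert all(a + b >= beta for a, b in chain)
```

In the actual method the bounds come from the generally non-linear inequality H[T⁻¹(Z_U), Z_D, u, γ] ≤ 0, but the per-update work is the same: recompute one truncation boundary, then draw one univariate sample.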

Test environment setup
An updated RTS-79 system proposed in [37] is adopted as the testbed to analyse the algorithm performance. The generating array is composed of 96 conventional units, 57 solar and four wind power units, plus one storage system. Except for the three synchronous generators, the 93 conventional units of total capacity 9076 MW are used with their specifications of forced outage rate, initial/nominal active power, and ramp rate unmodified. It is envisaged that the storage system has been fully charged at the present time, which is aligned with the timestamp of 'period 108' on Jan-01-2020, and the tested lead time is 'period 109'. For the given lead time, the power output of a wind or solar generating unit (excluding the 'CSP' type) is forecasted by a fitted four-state categorical distribution and regarded as unadjustable, while the three power demands are forecasted from fitted Gaussian distributions. All the foregoing models are derived from the given power records on the same timestamp (period) from day to day in 2020. In summary, the number of random variables involved in the evaluation task at the lead time is 157, and the total active generating power is available over a wide range from 3745 to 14,550 MW, to cover an average load demand of 4235 MW. To simplify the illustration, any possible power correlation and power transmission constraints are omitted. Nevertheless, such simplifications are not necessary for applying the algorithms in practice in terms of the generic framework expressed by (5). The performance of the proposed method will be compared with the algorithm (abbreviated here as 'EX-MICEM') proposed in [21], as well as with several self-variants of the proposed method, which differ in the decided marginalised variables.
All the pilot algorithms are coded in the Julia language (ver. 1.5.3), and the following simulation results are yielded from a laptop featuring an Intel® Core™ i7-7700HQ and 16 GB RAM.

Performance comparison
For an impartial comparison, all the algorithms are tested on the same spinning reserve risk evaluation task of estimating the expected energy not supplied (EENS) for the lead time labelled 'period 109', where all the 154 generating sources have been committed. Given such a hypothesis, outage events would be extremely rare because of both the inordinate generation capacity and the low failure rates of conventional units in the short-term horizon. The following results are yielded from 50 independent trials of each simulation method. A graphical depiction of the dispersion of the EENS estimations delivered by the proposed method is first shown in Figure 1; those delivered by the EX-MICEM are shown in Figure 2. As is widely known, the error of a simulation method can be indicated by the coefficient of variation of its claimed estimation. For an unbiased importance sampling method, the simulation error, or the coefficient of variation, can be reduced by increasing the budgets of training samples and correcting samples, but at the cost of an aggravated simulation burden. By comparing Figures 1 and 2, the EENS estimations in this case manifest to be rather small, leading to failure of traditional simulation methods including the Monte Carlo and cross-entropy methods. It should be emphasised that the small estimations do not mean they can trivially be discarded: within the scope of extreme outage evaluation, it is not legitimate to explain an extreme short-term EENS estimation with reference to an annual value while being unaware of the wide disparity of outage scenario probabilities between the two very different time scales.
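For reference, the coefficient of variation quoted throughout can be computed directly from the likelihood-correcting samples y_i = I{H ≤ 0}·(f/g): the estimator is their mean, and its relative standard error is std(y)/(mean(y)·√N). A small Python sketch with hypothetical sample values:

```python
import math, statistics

def cov_of_estimate(samples):
    """samples: y_i = I(H <= 0) * (nominal/proposal) likelihood-corrected
    indicator values. Returns (mean estimate, coefficient of variation)."""
    n = len(samples)
    m = statistics.fmean(samples)
    s = statistics.pstdev(samples)
    return m, s / (m * math.sqrt(n))

# hypothetical likelihood-corrected samples from a tiny run
est, cov = cov_of_estimate([0.0, 2e-9, 0.0, 1e-9, 3e-9, 0.0])
```

Because the relative error shrinks only as 1/√N, halving the coefficient of variation costs four times the likelihood-correcting samples, which is the trade-off the text refers to.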
By referring to Figures 1 and 2, the superiority of the proposed method is evident: it consumes the same amount of training and likelihood-correcting samples to achieve a much lower coefficient of variation, thus delivering estimations concentrated near each other. In contrast, the EX-MICEM outputs much smaller EENS estimations and is thereby subject to the curse of dimensionality, the budgeted 1000 Markov chain training samples being far from irreducible. The superiority of the proposed method also corroborates its much better mixed high-dimensional, extremely rare outage samples, a small amount of which proves adequate to derive an elegant proposal density. Larger N_T and N_IS can help increase the accuracy of the two algorithms, but do not overturn the conclusion.
Based on the above simulation results, it can clearly be understood that the huge efficiency gains in the case of high-dimensional extreme rareness benefit greatly from marginalising a Gaussian variable rather than the categorical variables. Throughout extensive experiments in this work, there is no clear evidence to assert which Gaussian variable is best for the marginalisation, since only nuances of efficiency gain exist among the different Gaussian variable candidates.
Note that the recourse variable u is also a potential candidate for the marginalisation. Figure 3 reports the corresponding simulation results for comparison; it turns out that u is not a wise option in this application, although it still behaves slightly better than the EX-MICEM.

Efficiency sensitivity to the distribution models of Z
To analyse the performance sensitivity of the proposed method concerning the model structure of f(Z), we slightly alter the model hypothesis: the array of power generation from the 56 photovoltaic generation units is assumed to follow a Gaussian copula or a Student's t copula with 0.5 degrees of freedom, with each marginal density being a special Weibull density fitted with the historical daily records of the corresponding photovoltaic generation; for clarity, all the 56 solar power variables are assumed to be correlated by sharing a common correlation coefficient. Appertaining to these model hypotheses, the specific T can be implemented as shown in Appendix B. The performances of the proposed method under the different copula hypotheses are summarised in Table 1.
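Under the Gaussian copula hypothesis with a common correlation coefficient, the transformation from standard space to physical PV power can be sketched with a one-factor construction, which yields exactly the equicorrelated latent normals assumed here. This is a Python sketch; the Weibull shape/scale and ρ below are hypothetical placeholders for the fitted values referred to in Appendix B:

```python
import math, random
from statistics import NormalDist

random.seed(11)
nd = NormalDist()

def sample_pv(n_units, rho, shape, scale):
    """One-factor Gaussian copula: z_i = sqrt(rho)*w + sqrt(1-rho)*e_i,
    mapped through Phi and the inverse Weibull CDF to physical PV power."""
    w = random.gauss(0, 1)                     # common factor shared by all units
    out = []
    for _ in range(n_units):
        z = math.sqrt(rho) * w + math.sqrt(1 - rho) * random.gauss(0, 1)
        p = nd.cdf(z)                          # uniform margin of the copula
        out.append(scale * (-math.log(1 - p)) ** (1 / shape))  # Weibull inverse CDF
    return out

pv = sample_pv(56, rho=0.6, shape=2.0, scale=1.5)   # hypothetical parameters
```

Any pair of latent normals then shares correlation ρ, so one scalar factor reproduces the common-correlation assumption without building a full 56 × 56 covariance matrix.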
It can be noted that the difference in the EENS estimation results is not obvious in this case study, implying that correlation features of small PV generation sources do not affect the system spinning reserve risk much. In addition, the performance of the proposed method slightly decays compared to the foregoing case of independent PV generation, because a numerical solution must be found for the boundary of Z_{U,i} from the non-linear inequality H[T⁻¹(Z_U), Z_D, u, γ] ≤ 0 given the latest Z_{U,≠i}. Nevertheless, the performance of the proposed method demonstrates to be quite stable under the two tested copulas, reaching a coefficient of variation of 0.03 with approximately identical CPU time consumptions.

Performance under increasing dimension
In this subsection, the above test system is enlarged as follows: each generating resource is copied three times, as is each lumped power demand. As a result, the total number of random variables involved in the simulation soars to (154 + 3) × 3 = 471. Such a modification does not aim at emulating practical systems, but mainly serves for inspecting the scalability of the hybrid importance sampling applied to other spinning reserve risk evaluation tasks of similarly high dimensionality and rarity. The simulation results are reported in Figures 4 and 5, with different parameter settings, respectively. These figures reveal that the rareness of outage events in the enlarged system gets even exacerbated, with the EENS dramatically declining to the 10⁻¹²⁵ order of magnitude. As expected, the proposed method must budget larger N_T and N_IS to yield a more accurate estimation. How to balance N_T and N_IS to achieve the best performance is problem-specific and could be answered through off-line parameter adjustment analogous to [20]. A rule of thumb can be directly generalised here: on the premise that N_T is preset large enough to guarantee the inclusion of typical outage samples, the estimation accuracy requirement can be fulfilled by increasing N_IS.
In summary, the simulation results demonstrate the potential advantages of the hybrid importance sampling employed for quantitative risk analysis of a high-dimensional spinning reserve evaluation task. Moreover, since the method integrates readily with parallel or GPU acceleration, it can consider as many influencing factors neglected by the original spinning reserve scheme as possible to release real-time advisory risk indices.

Renewable energy system spinning reserve risk evaluation
For spinning reserve management of a renewable energy system, a major concern is the influence of emerging renewables on the system's ability to respond to real-time power disturbances. This section discusses some spinning reserve risk evaluation results based on the updated RTS-79, via the proposed hybrid importance sampling method. Figure 6 shows the curves of the EENS estimations in the lead time, varying with increasing load levels, where the curve marked '+' denotes the outcome from the original updated RTS-79 system with all generating sources committed, while the curve marked '○' signifies operation without any wind or solar energy. The straight line marked '※' represents a notional prescribed warning cordon of system spinning reserve reliability in terms of adequacy or security or both. Advised by such a specific warning cordon, the wind and solar energy can work together to carry an additional load capacity up to about 1.95 − 1.6 = 0.35 multiples of the original average demand, without wreaking havoc on the spinning reserve reliability. Notice that, should the warning cordon be set much lower in practice, small EENS magnitudes would need to be accurately estimated so as to be comparable with the warning cordon.
For easy reference, the EENS estimations under several multiples of the average load are reported in Table 2. A common parameter setting of the hybrid importance sampling method adopted to yield these results reads: N_T = 1000, N_IS = 500,000. As such, each coefficient of variation of these estimations is found to be below 0.005 within 14 s.
In summary of the above-studied case, it can be noted that fast and accurate overall analyses for real-time decision-making count on an efficient, computation-light evaluation method, like the proposed hybrid importance sampling method, to tell the possibly marginal differences between small values of the indices accounting for plentiful random factors.

Tracing extreme spinning reserve inadequacy scenarios

Another interesting aspect is to gain insight into what kinds of extremely rare outage scenarios lead to the extreme outage quantified by the EENS indicator. To answer this question, the budgeted 10⁵ likelihood-correcting samples used to summarise P(Ω_γ) via the proposed method, applied to the updated RTS-79 with the mean load increased by a multiple of 2.2, are employed to produce a sketch of the scenario frequency distribution shown in Figure 7.

FIGURE 7 Frequency distribution of scenarios simulated via the proposed method. The first 94 generating resources are traditional generators. State IDs #1 to #4 represent gradually increasing available state capacity from zero to the maximum accounting for spinning reserve.

It can clearly be observed that all the traditional units with spinning reserve are sampled to output their maximal power, while the extremely rare outage events are mainly ascribed to the possible derated output states of the wind and solar power. Amongst these, wind units #95 to #98 contribute their derated states the most considerably to the formulated outage scenarios under the given load condition.
It is now suitable to summarise the significant advantage of the employed frequency distribution for tracing risky scenarios. Compared with the commonly used distribution of energy-not-supplied samples, there is no need to weight the probability of an outage event by its loss degree in order to recognise a scenario's importance to the EENS; the scenario frequency information summarising P( ) is one-dimensional data that advises such an importance distinction without resorting to the severity dimension of e, since the latter is a common quantity across all the inadequate scenarios that finally summarise the EENS. Moreover, the frequency distribution also differs from the well-known distribution of loss-of-load probability in its dependence on a predefined parameter that captures the upper limit of the outage loss degree. Thus, P( ) can adapt to the definition of the indices, and so does the importance of extreme scenarios acknowledged by the scenario frequency distribution.

CONCLUSION
This paper contributes a hybrid importance sampling method for accelerating spinning reserve risk evaluation of a renewable energy system. The proposed method features a partially collapsed Gibbs sampler, with a special marginalisation mechanism and sampling order, fused into the cross-entropy method. Performance analyses are made against existing methods on the updated RTS-79 system penetrated with many small renewable sources. Independent simulation results obtained on a personal laptop show that the algorithm accomplishes the rare-outage evaluation task involving 157 random variables within 10 s, bringing the coefficient of variation down to circa 0.02, and, with 471 random variables involved, reaches the same coefficient of variation within 6 min. The EX-MICEM included in the comparison remains outclassed in terms of computation efficiency and accuracy. Finally, spinning reserve risk management is exemplified by surveilling EENS estimations and tracing the risky scenario contributions revealed by the proposed method.

APPENDIX A
Given (5), the ith reduced univariate categorical marginal distribution, obtained by marginalising a Gaussian variable Z_{U,j} out of (5), can be found as follows, where P_{G_k}(Z_i) represents the reduced univariate categorical marginal probability mass of the kth category of the ith discrete random variable with a finite discrete support. On the other hand, given (4) and (5), the reduced univariate marginal probability mass P_{u_k}(Z_i) of the kth category of the ith discrete random variable, resulting from marginalising the recourse variable u out of (5), can be found likewise.
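The marginalisation idea can be checked numerically with a minimal sketch. The mixed joint below, p(k, z) = w_k N(z; mu_k, sigma_k^2), is a hypothetical stand-in for the paper's distribution (5): integrating the Gaussian variable z out of the joint recovers the reduced categorical marginal P(k) = w_k.

```python
import math

# Hypothetical mixed joint p(k, z) = w_k * N(z; mu_k, sigma_k^2);
# all numbers are assumptions, not the paper's distribution (5).
w = [0.2, 0.5, 0.3]        # categorical weights (assumed)
mu = [-1.0, 0.0, 2.0]      # conditional Gaussian means (assumed)
sigma = [0.5, 1.0, 1.5]    # conditional Gaussian std devs (assumed)

def joint(k, z):
    d = (z - mu[k]) / sigma[k]
    return w[k] * math.exp(-0.5 * d * d) / (sigma[k] * math.sqrt(2.0 * math.pi))

# marginalise z out on a grid wide enough to cover all the Gaussian mass
h = 0.01
zs = [-12.0 + i * h for i in range(2401)]
P = [h * sum(joint(k, z) for z in zs) for k in range(3)]
print(P)   # each P[k] recovers w[k]
```

In the Gaussian-copula setting of the paper the reduced marginal is available in closed form; the grid integration here only illustrates that dropping the continuous coordinate leaves a valid categorical marginal.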

APPENDIX B
Recalling (5), let us slightly extend the assumption so that Z can follow any kind of elliptical copula C(F_1(Z_1), …, F_i(Z_i), …; R), with F_i denoting the marginal cumulative distribution of component Z_i and R the correlation matrix. Then, given a standard-space sample of Z_u, we can obtain each element of the corresponding physical-space sample through a specific implementation of T^{-1} as in (8), derived as per the generalised Nataf transformation, where L is the lower triangular matrix of the Cholesky decomposition of R, namely LL^T = R, and Φ is the cumulative distribution function of the standard univariate elliptical distribution.