Robust stock assortment and cutting under defects in automotive glass production

We address an assortment-and-cutting problem arising in the glass industry. The objective is to provide minimum-waste solutions that are robust against raw material imperfections such as those that can occur with float glass production technology. The stochastic realization of defects is modeled as a spatial Poisson point process. A mixed integer program in the classical vein of robust optimization is presented and tested on data taken from a real plant application. Defective final products must in any case be discarded as waste but, if a recourse strategy is adopted, faults in glass sheets can sometimes be recovered. Closed forms for the computation of faulty item probabilities are provided in simple cases, and obtained via Monte Carlo simulation in more complex ones. The computational results demonstrate the benefits of the robust approach in terms of the reduction of back-orders and overproduction, thereby showing that recourse strategies can enable nonnegligible improvements. Encouraged by this result, the management is presently evaluating the possibility of adopting the proposed model in plant operation.

a real production plant. The industrial process in the title, in fact a standard in the production of automotive glass parts, makes use of so-called float technology, which consists of two main phases (see Figure 1). In the first phase, large rectangular glass sheets of various sizes and types are manufactured by widening/narrowing and, after cooling, cutting a ribbon of molten glass; the sheets produced are then sent to a warehouse. We will refer to them as the large items, with heights roughly ranging between 2 and 3 m and lengths between 4 and 6 m. In a second phase, small rectangular sheets (the small items) are cut from large ones taken from the warehouse and sent to downstream departments for bending and refinishing.
An example of the optimization of such a process, reported by Arbib and Marinelli (2007, 2009), refers to a plant operated by Pilkington Italia and now owned by Nippon Sheet Glass Co. Ltd. The purpose is the simultaneous control of the assortment of large items produced in the first phase and of the trim loss generated in the second. The word assortment refers to the number of distinct large sizes stored in the warehouse at any time: this number must be limited to a certain amount p to reduce holding costs and setups, and to ease handling operations. To give an idea of the problem scale, the production data set includes about 50 distinct small sizes and over 6500 distinct large sizes varying in the min-max height and length ranges with 1 cm pitch: as the desired assortment level p ranges around 20, over 10^57 different assortment combinations can in principle be considered.
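The order of magnitude quoted above can be checked directly with integer arithmetic (the 6500 candidate sizes and p = 20 are the figures given in the text):

```python
import math

# Number of ways to choose an assortment of p = 20 large sizes
# out of ~6500 candidate sizes.
combos = math.comb(6500, 20)
print(f"~10^{len(str(combos)) - 1} assortment combinations")  # ~10^57
```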
In Arbib and Marinelli (2009), the described stock assortment-and-cutting problem is formulated as a 0-1 linear program and solved by column generation or by an asymptotically exact heuristic. In this model, perfect knowledge of data is assumed. However, the glass ribbon may now and then come out of the furnace with defects (e.g., bubbles). The defective area of a large item cannot be recovered and is instead recycled, with a resulting reduction in yield. The quoted research, however, did not tackle this issue.

Handling defect occurrence
Different reactions to the presence of defects (or, more generally, of forbidden areas) can be implemented depending on available technology, production line equipment, and work organization. In the literature (see Section 2 below), most papers on cutting optimization assume that defects are known in advance and given as input data. Consequently, the main focus is on the design of defect-free cutting patterns in a deterministic setting. In this paper, instead, we regard defect occurrence as a random process and approach uncertainty with a robust optimization model. The model aims at fulfilling a given small item demand with a minimum volume of large items, and defect occurrence is modeled via a suitable uncertainty set, which impacts the solution by reducing the yield of the cutting phase.
Uncertainty is managed by mixing the worst-case analysis typical of adversarial models with some stochasticity assumptions: the total number of defects in the planning horizon is a model parameter, and how many defects are placed in each large item is an adversarial decision; however, the adversary does not control where defects are placed within each large item. Yield reduction is therefore modeled, as an expected value, via the probability that the defects present in a large item cannot be recovered.
This setting finds its justification in both problem-specific and practical reasons. On the one hand, uncertainty management is required because large sizes are not given beforehand, being part of the assortment decision: the optimal cutting patterns are in fact defined before the production of large items and hence before defects are detected. On the other hand, online reaction to defects, once these are spotted in a large item, is not practical either: although defect-free cutting patterns can in principle be computed just before loading large items into the cutting machines, the yield gain achieved would be completely absorbed by the additional costs of material handling, machine setups, and the scheduling of downstream department operations.
To date, the plant management does not implement any reaction, not even a simple one such as setting defective large items aside and treating them separately, or changing the cutting patterns to reduce defect incidence: although defects are already detected at the end of phase 1, they are not considered before the cutting stage, and small items affected by faults are simply discarded before the bending and refinishing stage. However, production numbers would definitely justify a policy against defectiveness. Glass imperfections in fact cause a loss of material currently estimated at around 8%-9%, a considerable magnitude given that the management deemed worthy of optimization a trim loss estimated at around 4%-5%. It is important to stress that discarded items have an impact not only on the efficiency with which raw material is used but also, and more importantly, on the completion of order batches.

Production and Operations Management
A question then arises whether an optimization model able to handle this type of uncertainty, perhaps coupled with a suitable recourse strategy, can reduce back-orders effectively while preserving efficiency in raw material usage. The results we obtained on a set of real-world instances provided by the plant show that (i) by protecting against defects, robust optimization substantially reduces back-orders; and (ii) this result can be achieved without implementing any particular (and possibly expensive) recourse strategy.

Justification of the approach
In this section, we briefly discuss the advantages and costs of possible alternatives to our approach. Indeed, the problem we address is operational and does not consider investments in hardware. However, the specific plant operation should not limit the set of options one can consider. Hence, an analysis of technological and/or operational alternatives is necessary to demonstrate our methodological contribution. This analysis must recognize that the productivity potentially achievable via technological updates (e.g., defect detection systems, new cutting machines) must, on the one hand, justify important strategic investments and, on the other hand, be assessed broadly in terms of its impact on the whole organization.
• The main technological, but also operational, specificity of the plant considered in this work is that the production process is organized in two phases that are physically separate and time decoupled. In some plants, see, for example, Durak and Tüzün (2017), the float line is equipped with defect sensors and bridges, perpendicular to the direction of the glass flow, which can perform cuts directly on the glass ribbon. Defect-free cutting patterns can therefore be designed and applied online, avoiding in this way assortment and overproduction issues on the large sizes. However, being integrated with the float line, the cutting system has only a few cutting wheels (≤ 5 in the cited work) that must be positioned at fixed coordinates during the whole production run. The result is a very limited number of feasible cutting patterns and, consequently, potentially high waste levels. A technological update of this type, then, would not imply a reduction of process costs.
• Another specificity of the plant is the way small items are cut from large ones. Downstream cutting machines allow only one size per large sheet. This configuration features a very short operation time and a single out-buffer, meaning high speed, elementary management of the output, and thereby a large plant throughput. Nevertheless, more complex cutting patterns could in principle reduce trim loss. However, complex patterns would make recourse strategies very hard to compute (and even to implement), and the robust model would become very complex and certainly impractical to solve. Moreover, should one attempt to address this complexity, the trim loss reduction would most likely be negligible. In fact, we find that the solutions obtained with the actual configuration are clearly feasible for machines able to perform more complex cuts, and that they represent a very good benchmark (trim loss below 1%).
• Checking ex post the demand not met because of defects, and recovering the requirements with an adequate extra production, on the fly, of the necessary large items, is not viable in float glass production plants. The float production campaign is long, has a costly setup, and is not synchronized with the cutting of small sizes. In other words, the small items cut at a given time are obtained from large sheets of a glass type (i.e., color and other technical features) that generally differs from the type the furnace produces at that time, and a float switch to that glass type is materially impossible. This is why any solution approach should take into account the uncertainty of defect realization.
• Finally, any operational reorganization of a plant like the one considered here cannot neglect setup issues both at the furnace and in the cutting stage.

Content
The paper is organized as follows. The literature on cutting problems with defects is briefly discussed in Section 2. Then, for the convenience of the reader, the assortment-and-cutting problem described in Arbib and Marinelli (2009) is reported in Section 3 along with the relevant integer program. In Section 4, we introduce the uncertainty model that, in Section 5, is implemented in terms of robust optimization. Section 6 refines the description of the robust model, and an extensive computational experience performed on real-world instances is described in Section 7. The paper is completed with Supporting Information EC.2, where we describe the linearization of the robust model used in numerical tests and the closed formulas devised to compute fault probabilities for large items with ≤ 2 defects.

LITERATURE REVIEW
Due to both combinatorial richness and relevance in a variety of industrial contexts, a very large number of theoretical and application-oriented papers addressing cutting optimization problems have appeared in the scientific literature during the last six decades, that is, since the seminal papers by Gilmore and Gomory (1961); see, for example, the surveys by Wäscher et al. (2007) and Delorme et al. (2016). Generally speaking, raw material is always potentially affected by imperfections: knotholes in wooden trunks and boards; bubbles in glass; contaminated areas in steel coils; holes, stains, or streaks in paper and leather sheets. This means we find the topic addressed in such industrial processes as lumber and furniture, see Ghodsi and Sassani (2005) and Wenshu et al. (2015); shoes and textile, see Özdamar (2000) and Sarker (1988); and paper, see Aboudi and Barcia (1998). However, relatively few contributions in the cutting and packing literature focus on the problems that arise when cutting defective items.
One of the earliest publications on cutting problems with defects dates back to 1968 (Hahn, 1968). Several variants concerning the presence of one (Carnieri et al., 1993; Neidlein et al., 2009) or more (Afsharian et al., 2014; Wenshu et al., 2015) defects per stock item have been studied since then, both in one and two dimensions. Defect kinds range from simple points to rectangular or convex areas, and from faults causing immediate scrapping of the item to defects that only reduce (to various degrees) production quality. A related issue, defect handling, is addressed in several ways: from the optimal use of clear zones (i.e., areas unaffected by faults) to cut optimization that assumes product values varying with fault content and/or placement in the stock items (see Sarker, 1988). These problems are almost always modeled as mixed integer linear programs and solved by dynamic programming (Durak & Tüzün, 2017; Rönnqvist & Åstrand, 1998) or by Lagrangean relaxation and subgradient optimization (Rönnqvist, 1995).
In most contributions we are aware of, including those referenced above, the defect list, sizes, and locations are given and known in advance. In some cases, the cutting pattern is also known, and the problem calls for finding a new layout that avoids as many defects as possible. This problem is hard even in the one-dimensional case, which is known as the Minimum Defective Subset Sum; see Aboudi and Barcia (1998). To our knowledge, only the relatively old paper by Sculli (1981) handles defects with a stochastic approach: here, the size of a one-dimensional roll with fringe defects caused by winding is treated as a normal random variable. Expected waste is expressed as a given cumulative density function of the unusable parts at the edges of the roll, and then the optimal position of the knives is obtained analytically. An online optimization setting in glass manufacturing was introduced by Durak and Tüzün (2017); here, cutting bridges operate directly on the glass ribbon, and cuts are performed a few seconds after defects are detected. Therefore, at any given time, defect sizes and positions are known only for a limited number of stock items, precisely those between the sensor detectors and the cutting bridge. Nevertheless, that partial information allows the authors to address glass defectiveness by solving a sequence of deterministic cutting problems over a rolling horizon. In summary, Sculli (1981) assumes random defectiveness that only affects the edges of one-dimensional roll items, while the application described in Durak and Tüzün (2017) is very similar to our setting, but the authors treat defect positions as known data.
Rather than defect handling, papers on cutting problems deal with uncertainty especially to address the stochastic behavior of costs and demand. For example, robust optimization is adopted in Alem and Morabito (2012) to address a combined lot-sizing and cutting-stock problem, and in Ide et al. (2015) with an application in the wood cutting industry. Uncertainty in customer demand is addressed by two-stage stochastic optimization in Alem et al. (2010) and Beraldi et al. (2009). A cutting stock problem with random customer demand and random cutting times is investigated in Krichagina et al. (1998) and approximated by linear programming and a dynamic control problem involving Brownian motion. Finally, a branch-and-price method for robust optimization is proposed in Schepler et al. (2022) to address one-dimensional bin packing with items of uncertain sizes.

THE STOCK ASSORTMENT-AND-CUTTING PROBLEM
The glass cutting process described in Arbib and Marinelli (2007, 2009) involves a first phase that produces large rectangular items, which are to be cut, during a second phase, into desired amounts of small rectangular items of given sizes. The 0-1 linear programming model proposed in Arbib and Marinelli (2009) to optimize the cutting process can be summarized as follows.
Let S and L denote, respectively, the set of small sizes required and the set of all feasible large sizes that can in principle be produced in the first phase. Let also d_i, i ∈ S, indicate the number of small items of the i-th size that are demanded, and therefore have to be produced, in a given planning horizon. We define an optimal assortment as a set of no more than p large sizes that, among all possible choices, allows the cutting of all the demanded small items with minimum trim loss.
The rules adopted in the plant and the technological constraints of the downstream cutting machines greatly simplify the cutting patterns, which are limited to those obtaining small items of the same size from each single large item. In particular:
• the small items of a pattern all have the same orientation and, of the two possible orientations, we always select the most productive one;
• the first guillotine cut is always performed horizontally, that is, along the width of the large item, and trim loss is always located on the bottom right side of the large item.
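Under these rules, the yield of a single-size pattern is simply the more productive of the two orientations; a minimal sketch, with made-up dimensions:

```python
def max_items(W, H, w, h):
    """Largest number of w-by-h small items obtainable from a W-by-H
    large item when all items share one orientation and the more
    productive of the two orientations is chosen."""
    upright = (W // w) * (H // h)   # items laid out as w-by-h
    rotated = (W // h) * (H // w)   # items rotated by 90 degrees
    return max(upright, rotated)

# Hypothetical sizes (cm): a 300 x 200 large item, 70 x 45 small items.
print(max_items(300, 200, 70, 45))   # 16
```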
Therefore, referring to a subset P ⊆ L of large sizes, the minimum glass area c_i^P used to cut the required quantity of items of the i-th small size from large items of sizes k ∈ P can be computed via the following integer knapsack problem:

c_i^P = min { Σ_{k∈P} V_k y_{ik} : Σ_{k∈P} a_{ik} y_{ik} ≥ d_i, y_{ik} ≥ 0 and integer },   (1)

where V_k is the area of the k-th large item, a_{ik} is the maximum (known) number of small items of the i-th size that can be obtained by cutting one large part of the k-th size, and y_{ik} is the number of large items of the k-th size used to get the required number of items of the i-th small size.
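A toy-scale sketch of this pricing step, solved by a covering-knapsack dynamic program (all names and numbers below are illustrative, not plant data):

```python
from math import inf

def min_area(d, sizes):
    """Minimum glass area needed to cut d small items of one size.
    `sizes` is a list of (V_k, a_ik) pairs: area of large size k and
    items per large item of size k.  Covering-knapsack DP: best[r] is
    the cheapest way to produce at least r items."""
    best = [inf] * (d + 1)
    best[0] = 0
    for r in range(1, d + 1):
        for V, a in sizes:
            best[r] = min(best[r], V + best[max(0, r - a)])
    return best[d]

# Hypothetical data: demand 100, two candidate large sizes.
sizes = [(60_000, 16), (45_000, 11)]   # (area cm^2, items per large item)
print(min_area(100, sizes))            # 390000: 5 of the first, 2 of the second
```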
Once the c_i^P are known for all small sizes i ∈ S and for all sets P ⊆ L with no more than p elements, the problem of finding an optimal assortment reads

min Σ_{i∈S} Σ_{P⊆L} c_i^P x_i^P
s.t. Σ_{P⊆L} x_i^P ≥ 1,  i ∈ S,
x_i^P ≤ u_k,  i ∈ S, P ⊆ L, k ∈ P,
Σ_{k∈L} u_k ≤ p.   (2)

In problem (2), variables are implicitly bound to take values in {0, 1}. x_i^P = 1 means that the whole requirement of items of small size i is produced by cutting large items of sizes in the set P, and u_k = 1 indicates that at least one large item of the k-th size in L is used. The goal is to minimize the total glass area used for the whole production. The first set of constraints ensures that the requirement of each small size is covered by some set P of large sizes. The subsequent inequalities require that if a set P containing the k-th large size is used for production, then that size is part of the assortment chosen. The last inequality limits this assortment to p distinct sizes.
Since the number of x_i^P variables grows rapidly, being O(|L|^p), problem (2) is solved in Arbib and Marinelli (2009) by column generation, pricing variables via (1). In the same paper, an efficient heuristic is also proposed. The idea is to solve integer program (2) only with the variables x_i^P = x_{ik} that correspond to singleton sets P = {k}. In this way, each small item is produced by large items of a single size (while a given large size can still be used to produce different small sizes). The problem boils down to a p-median problem (see (13) in Section 5), since it reduces to clustering the small sizes into at most p subsets, and one can prove that its optimum asymptotically approaches the true one as the demand of the least required small item increases.
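For very small instances, the singleton simplification can be solved by brute force over all size-p assortments; the sketch below (with made-up data; `best_assortment` and all figures are illustrative, not the paper's algorithm) shows the clustering structure of the heuristic:

```python
from itertools import combinations
from math import ceil

def best_assortment(demand, area, items, p):
    """Brute-force version of the p-median simplification: each small
    size i is cut from a single large size k, at cost
    area[k] * ceil(demand[i] / items[i][k]); choose at most p large
    sizes minimizing the total area.  Toy scale only: real instances
    need the column-generation or heuristic approaches above."""
    L = range(len(area))
    best = (float("inf"), None)
    for P in combinations(L, p):
        cost = sum(min(area[k] * ceil(demand[i] / items[i][k]) for k in P)
                   for i in range(len(demand)))
        best = min(best, (cost, P))
    return best

# Hypothetical toy instance: 3 small sizes, 4 candidate large sizes, p = 2.
demand = [100, 60, 80]
area = [60_000, 45_000, 50_000, 52_000]
items = [[16, 11, 13, 14],      # items[i][k]: small i per large k
         [10,  8,  9,  9],
         [12,  9, 10, 11]]
print(best_assortment(demand, area, items, 2))
```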
In practice, the p-median heuristic is very effective and permitted a reduction of the plant trim loss to one third of the initial amount. In the sequel, we will then refer to this simplified model, that is, to program (2) written with x_{ik} for x_i^P, and call it [P] for short. To limit the number of indexes in the models described in the following sections, we define the set of all possible production lots of items in S via items of L:

J ≜ {j = (i, k) : i ∈ S, k ∈ L},   (3)

therefore setting x_j ≜ x_{ik}, a_j ≜ a_{ik}. Then, we introduce

J_k ≜ {j ∈ J : j = (i, k), i ∈ S},  J^i ≜ {j ∈ J : j = (i, k), k ∈ L}   (4)

to denote all the lots using large size k and all those returning small size i, respectively. Finally, we indicate as

ȳ_j ≜ ⌈d_i / a_j⌉   (5)

the optimal solution of (1) for P = {k}. That is, for j = (i, k), ȳ_j denotes the number of large items of size k that are used when fulfilling the whole demand of items of size i.

UNCERTAINTY MODEL AND RECOURSE STRATEGIES
Problem (1), and indirectly problem (2), assumes the parameters a_{ik} to be known integer constants. Due to the nature of the process, however, those parameters are in fact uncertain.
The most widely known approaches to deal with uncertain data are stochastic and robust optimization, see Ben-Tal et al. (2009) and Birge and Louveaux (2011).
The former requires that probability distributions and possible scenarios be known in order to describe the random variables, and can lead to very large models that are hard to solve. For this reason, it will not be considered here. In the latter, the range of uncertainty can be derived from historical realizations of the random variables. Robustness can be addressed in various ways. The strict concept proposed early by Soyster (1973), where an optimal solution is always feasible for every realization of the random parameters, was later mitigated by Bertsimas and Brown (2009) and Bertsimas and Sim (2004), who proposed an approach where optimal solutions are protected against the change of a given number of uncertain coefficients, generally related to each other in some mathematical way. Compared to other proposals in the literature, and according to Bertsimas and Sim (2004), one of the benefits of a robust approach is that the robust counterpart remains computationally practical independently of the number of uncertain parameters, meaning that it at least remains in the realm of integer linear programming. This is the main reason that led us to adopt this approach.

Let V_F be an estimate of the total float volume necessary to produce the required small items (a simple lower bound being Σ_{i∈S} d_i v_i, where v_i = w_i h_i is the area of the i-th small size). A stricter lower bound can be computed by solving problem (2). According to the indications provided by the plant management, defects can be considered point shaped, statistically independent, and uniformly distributed in V_F. Their location can then be usefully modeled as a spatial point process X, see Daley and Vere-Jones (1988). In particular, we assume that the process is simple, that is, no two points ever coincide, and stationary, that is, for any fixed point with coordinates x, the distribution of the point process obtained by shifting each point of X by x is identical to the distribution of X.
More precisely, let λ > 0 be the expected number of defects per unit area and T(k) be a random variable giving the number of defects within the k-th large item, whose area is V_k; we also assume that defect realizations in disjoint regions (and hence in distinct large items) occur independently. Hence, the random process can be described as a spatial Poisson process with uniform intensity λ, that is, a point process in ℝ² such that the number of defects falling in any region of area V has a Poisson distribution with mean λV, and the numbers of defects in disjoint regions are independent. Due to the conditional property of a Poisson point process, the distribution of T(⋅), conditioned on a total number f of defects in a float campaign, is binomial:

Pr[T(k) = t | f] = C(f, t) (V_k/V_F)^t (1 − V_k/V_F)^{f−t}.   (6)

Using (6) we can easily evaluate, for each large item, the number t of defects that one can reasonably expect for a given f. As we will see in Section 7, practical cases are essentially covered by values of t between 0 and 6. Accordingly, the resulting estimate (7) will be used for the faulty fraction of large items of size k.
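The conditional property can be illustrated by a small Monte Carlo sketch (all numerical values below are made up): conditioned on f defects in the campaign, each defect falls in the k-th large item independently with probability V_k/V_F, so T(k) follows the binomial distribution (6):

```python
import random

random.seed(7)

V_F, V_k = 1000.0, 120.0     # total float area and one large item's area
f = 10                        # defects per campaign
trials = 100_000

# Scatter f defects; count how many land in the k-th item per trial.
counts = [sum(random.random() < V_k / V_F for _ in range(f))
          for _ in range(trials)]
mean_T = sum(counts) / trials
print(round(mean_T, 2), f * V_k / V_F)   # empirical vs. binomial mean f*V_k/V_F
```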
The random variable T(⋅) is, however, not sufficient to describe yield reduction, which also depends on (i) the position of defects in the large sheet, (ii) the small item produced with that sheet, and (iii) what kind of response to the presence of defects is implemented.
Concerning the third issue, depending on cut feasibility, we consider three degrees of pattern reconfigurability (see Figure 2):
• No reconfiguration, NR. Patterns remain unchanged: a defect always causes a yield loss, unless it falls into the scrap area. This is the policy presently adopted in the plant.
• Constrained reconfiguration, CR. Patterns can be reconfigured provided that no more cuts are employed than the minimum used in normal operation: that is, strips of scrap can be moved from the bottom or right edge of the large item, but cannot be split up. At present, the plant management is evaluating this option.
• Unconstrained reconfiguration, UR. Patterns can also be reconfigured by splitting scrap strips with additional cuts. This recourse strategy is more general and effective (but more complex).
A maximum of 187 small items (laid out on 17 strips with 11 items each) can be obtained from a large item in the instances provided by the plant. A CR that avoids as many faults as possible can easily be obtained by checking all the possible positions of the horizontal and vertical scrap strips. If the cutting pattern consists of n horizontal and m vertical strips of small items, the horizontal and vertical scrap strips can be placed in n + 1 and m + 1 positions, respectively. Consequently, there are no more than (n + 1)(m + 1) possible reconfigurations, among which one chooses the one with the least number of small items damaged by the defects occurring in the large item.
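The enumeration described above can be sketched as follows, assuming point defects with known coordinates and hypothetical sheet and item sizes (`best_cr` and `hits` are illustrative names, not plant code):

```python
def hits(coord, count, size, scrap, gap):
    """True if a 1-D coordinate falls inside one of `count` item strips
    of width `size`, with a scrap strip of width `scrap` inserted just
    before strip index `gap` (gap = count puts the scrap at the far end)."""
    for i in range(count):
        start = i * size + (scrap if i >= gap else 0)
        if start <= coord < start + size:
            return True
    return False

def best_cr(W, H, w, h, m, n, defects):
    """Constrained reconfiguration: try all (m+1)*(n+1) placements of
    the vertical and horizontal scrap strips; return the least number
    of point defects landing on a small item."""
    sx, sy = W - m * w, H - n * h          # scrap strip widths
    return min(sum(hits(x, m, w, sx, gc) and hits(y, n, h, sy, gr)
                   for x, y in defects)
               for gc in range(m + 1) for gr in range(n + 1))

# Hypothetical 310 x 230 sheet, 100 x 70 items (3 columns, 3 rows),
# two point defects: one unavoidable, one dodged by moving the scrap.
print(best_cr(310, 230, 100, 70, 3, 3, [(50, 35), (305, 10)]))   # 1
```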
By contrast, computing the best UR requires a tailored approach, which is beyond the scope of this work.
A defect still causing a yield loss after reconfiguration is called critical. Let Q denote a random variable giving the number of critical defects in a large item. For any production lot j = (i, k) ∈ J and any integers 0 ≤ q ≤ t, let

π_{jq}^t = Pr[Q = q | T(k) = t]   (8)

be the probability that a single large item of size k used for the production of small items of size i has q critical defects out of t. We use (8) to parameterize the uncertainty set U, which the yield parameter a_j belongs to, with respect to the expected number f of defects in a float campaign, see Section 5. In particular, for a pair j = (i, k) with zero trim loss (i.e., W_k and H_k integer multiples of w_i and h_i) under the NR strategy we have

π_{jq}^t = C(a_j, q) q! { t q } / a_j^t,   (9)

where a_j and p_j are, respectively, the largest number of small items of type i contained in a large item of the k-th size and the probability that one of those small items is hit by a random fault in the large item (p_j = 1 in the zero trim loss case). The factor { t q } denotes the Stirling number of the second kind, recursively computed as

{ s q } = q { s−1 q } + { s−1 q−1 },

with initial conditions { s 1 } = { s s } = 1 for all s. In fact, { s q } counts the different ways of partitioning s defects into q nonempty subsets, which in turn can be chosen in C(a_j, q) different ways and associated to the defect sets in q! distinct ways.
For positive trim loss, we use formula (9) for s = t, t − 1, … , q, introducing a factor C(t, s) p_j^s (1 − p_j)^{t−s} to take into account the probability that exactly s out of the t defects hit a small item. We obtain in this way

π_{jq}^t = Σ_{s=q}^{t} C(t, s) p_j^s (1 − p_j)^{t−s} C(a_j, q) q! { s q } / a_j^s.   (10)

For more sophisticated recourse strategies the expressions become harder to derive in closed form, and the fault probabilities are obtained via Monte Carlo simulation.
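For the NR policy, the closed form can be cross-checked by simulation; the sketch below implements the Stirling-number expression and compares it with a Monte Carlo estimate (all parameter values are made up):

```python
import math, random

def stirling2(s, q):
    """Stirling number of the second kind, { s q }."""
    if q == 0:
        return 1 if s == 0 else 0
    if q > s:
        return 0
    return q * stirling2(s - 1, q) + stirling2(s - 1, q - 1)

def pi_nr(a, p, t, q):
    """Probability that t independent point defects damage exactly q of
    the a small items in a pattern, each defect hitting some item with
    probability p (the mixture over s of the occupancy formula)."""
    return sum(math.comb(t, s) * p**s * (1 - p)**(t - s)
               * math.comb(a, q) * math.factorial(q) * stirling2(s, q) / a**s
               for s in range(q, t + 1))

# Monte Carlo cross-check with made-up values a = 6, p = 0.8, t = 3, q = 2.
random.seed(1)
a, p, t, q = 6, 0.8, 3, 2
trials = 200_000
mc = sum(len({random.randrange(a) for _ in range(t) if random.random() < p}) == q
         for _ in range(trials)) / trials
print(round(pi_nr(a, p, t, q), 4), round(mc, 4))   # the two values should agree
```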

ROBUST MODEL
Formulation (2) with separate pricing (1) is not suitable for modeling uncertainty in the vein of robust optimization, because delayed column generation cannot manage the bundle of uncertain coefficients that cuts across the pricing problems. However, the good features of the p-median heuristic described in Arbib and Marinelli (2009) can help in two ways. On the one hand, we can directly modify it to try and address the losses due to defects. On the other hand, we can use it as a starting point to define the (more sophisticated) robust counterpart.
Let us first focus on the first approach. According to the notation in Section 3, for any lot j = (i, k) ∈ J, let x_j ∈ {0, 1} be a binary variable indicating that the whole demand of items of small size i ∈ S is fulfilled by cutting large size k ∈ L. One can then correct the objective function of the p-median model to take into account the expected loss due to defects. The model reads

[P] min Σ_{j∈J} c_j x_j,

subject to the p-median constraints, where c_j = V_k ⌈d_i / (a_j − ā_j)⌉ is the total expected area used by lot j.
In particular, by overestimating (resp., underestimating) by t_k = ⌈λV_k⌉ (resp., t_k = ⌊λV_k⌋) the expected number of defects in large item k, one obtains the expected number ā_j of defective items,

ā_j = Σ_{q=1}^{t_k} q π_{jq}^{t_k},

and then the expected number of nondefective small items of the i-th size obtained from one large part of the k-th size, (a_j − ā_j), to be plugged into the definition of c_j.
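Under the NR policy, ā_j can equivalently be obtained by linearity of expectation, since each of the a_j items survives t independent defects with probability (1 − p_j/a_j)^t; a sketch with hypothetical values:

```python
from math import ceil

def expected_defective(a, p, t):
    """Expected number of small items lost in a pattern of a items when
    t point defects occur, each landing on any given item with
    probability p/a (NR policy).  By linearity of expectation this is
    a * (1 - (1 - p/a)**t), the sum q * pi_q over the distribution."""
    return a * (1 - (1 - p / a) ** t)

def corrected_area(V, a, p, t, d):
    """Expected area V * ceil(d / (a - abar)) used by a lot, as in the
    defect-corrected p-median objective (toy values below are made up)."""
    return V * ceil(d / (a - expected_defective(a, p, t)))

print(round(expected_defective(6, 0.8, 3), 3))   # ≈ 2.094 lost items
print(corrected_area(60_000, 16, 0.9, 2, 100))
```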

Generally speaking, the extra small items required to compensate for those lost to defects can be produced by keeping a pool of additional large items of a generic size. Observe, however, that such a simple strategy is largely dominated, in terms of loss minimization, by the solutions of [P] that already provide a suitable amount of large items of a specific size for each small size. As a reference figure, the trim loss of instance I_15 obtained by [P] is 1.09% (see column V_T of Table 2 in Section 7). An alternative solution computed by the deterministic p-median model, that is, with ā_j = 0 ∀j, and then using the large items in the pool to recover the underproduction (of size 610 × 320 for instance I_15, that is, the largest size in L), has a total trim loss of 2.56%, more than twice that of [P].
With the second, robust approach, let ρ_j be a nonnegative integer variable denoting the number of extra large items of size k ∈ L that must be cut to compensate for the items of size i ∈ S lost due to manufacturing defects. Let also z_j^t be nonnegative integer parameters counting the number of those large items of size k with t defects that are involved in the production of the items of small size i (independently of the chances that the glass imperfection has to reduce yield, i.e., that a defect hits a small item). We can parameterize the total number f of faults that occur in the production period and, inspired by budgeted uncertainty à la Bertsimas and Sim (2004), we define the uncertainty sets U_z^i, in which the budget controlling robustness, f_i, an upper bound to the number of possible defects in the production of the items of small size i, is set proportional to the area required by i (in (15) and in the following, for simplicity of notation, we assume z_j^t = 0 for t > f_i).
Let now ℓ_{jq} be the number of large items, out of the Σ_{t=1}^{f} z_j^t defective ones, in which faults reduce yield by q units. The relevant uncertainty set U_ℓ depends on z and reads as in (16), where, by definition (8), π_{jq}^t is the probability that a generic large item in the j-th production lot outputs q faulty items when affected by t ≥ q faults. The robust counterpart of problem (2) then reads as program (17)-(23), whose last constraints require x_j, u_k ∈ {0, 1} and ρ_j ≥ 0 and integer (23). The goal is to minimize the total glass volume used for production. For any given lot j = (i, k), the terms V_k and ȳ_j in the objective function (17) indicate, respectively, the area of the k-th large size and, as per (5), the number of items of large size k required to fulfill the demand of small size i. Inequalities (18) ensure the fulfillment of small item requirements, discounting the losses due to large item imperfections. In these constraints, ℓ_{jq} is an uncertain coefficient varying in U_ℓ. Inequalities (19) state that the defective large items of size k cannot be more than those actually produced. Condition (20) limits the production of each small size to a single large size. The use of large size k is triggered by inequalities (21). Finally, inequality (22) limits the assortment of distinct large sizes used.
Unlike Bertsimas and Sim's, the uncertainty sets U_z^i and U_ℓ are not polyhedral, as the parameters z and ℓ are integer valued. To overcome this difficulty while preserving the degree of robustness, that is, all the integer points of U_z^i and U_ℓ, we first relax the integrality of ℓ_{jq} and, according to the definition of U_ℓ, replace its occurrences in (18) by Σ_{t=q}^{f} π_{jq}^t z_j^t. Furthermore, indicating by δ_i ≥ 0 the maximum loss due to defects in the production of small size i, constraints (18) are rewritten accordingly. The worst-case scenario occurs when, for i ∈ S, the number (26) of defective small items of size i is maximized over the nonpolyhedral (discrete) uncertainty set (27), thereby removing (19) from (17)-(23). The first inequality of (27) limits to f_i the number of defects in the production of small parts of size i. The second limits the number of defects in a generic production lot j that uses the k-th large item to the portion estimated by (7). Maximizing (26) subject to (27) gives, for each i ∈ S, an integer optimization problem (as it is defined on a discrete uncertainty set) and, to our knowledge, general methods for discrete uncertainty sets only work for very few scenarios. However, an upper bound to the maximum δ_i can be obtained by the continuous relaxation of (26) and (27), assuming x_j and ρ_j momentarily fixed. Since such problems are feasible and bounded for any x_j and ρ_j, by strong duality the bounds exist for any i ∈ S and can be computed by solving the LP dual (28) of (26) and (27). Now, any solution of (28) provides a dual bound to the loss in the worst case. Then, the max term in (18) can be replaced by the dual objective function, which results in a conservative approximation, denoted [R], of the robust model (17)-(24). The resulting model has bilinear terms, products of x_j and ρ_j with the dual variables, which can however be linearized by standard techniques, see Supporting Information EC.2.3.
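The standard device for the product of a binary variable and a bounded continuous variable is the big-M (McCormick) substitution; a generic sketch, where w, θ, and the upper bound M are illustrative symbols not taken from the paper:

```latex
% Exact linearization of w = x\theta, with x \in \{0,1\} and 0 \le \theta \le M:
w \le M x, \qquad
w \le \theta, \qquad
w \ge \theta - M(1 - x), \qquad
w \ge 0.
% When x = 0 these constraints force w = 0; when x = 1 they force w = \theta.
```

For the product of a bounded integer variable and a continuous one, the integer factor can first be expanded in binary and the same device applied bit by bit.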
While it is not among the purposes of this paper to undertake a stochastic programming approach, a two-stage SP formulation is sketched below in the interest of discussion. This is stated as a minimization subject to the constraints of [P] and the binary nature of the x_k and u_k variables, where the recourse function is defined subject to constraints (18) and (19). Here, one has to deal with a two-stage model involving integer variables in both stages, in addition to approximating (if one is unable to compute it accurately) an expected value over the random variables z^t (the number of large items with t defects) and ζ_q (the number of large items where faults reduce yield by q units). Without entering into the details of working with such a formulation, it is debatable whether it would lead to a more practical result or to superior performance in both computational and economical terms. There are several authoritative references on stochastic programming dealing with complexity issues and approximation procedures, for example, Shapiro et al. (2009). In terms of complexity, our approximate (robust) model remains in the realm of integer linear programming. It can be argued that a similar statement would hold for an SP model. However, it is not our place to develop this point further here. Nevertheless, we hope that the present paper inspires others to undertake further work along these lines.

TECHNICAL DETAILS OF THE ROBUST FORMULATION
The complexity of model [R] rapidly grows with the number |L| of distinct large sizes and of alternative patterns considered. To make resolution viable by commercial solvers, it is crucial to control the size of those sets. The latter set is in fact controlled, as model [R] adopts only one pattern per lot. Pattern features are summarized by parameter a_j, j = (i, k), which counts the largest number of size i small items one can obtain from a single large item of size k. In this sense, each cutting pattern is a maximal one.
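The definition of a_j can be made concrete with a small sketch. The function name is ours, and we assume the non-rotated strip layout used in the proof of Proposition 1 below (n horizontal strips of m items each):

```python
def pattern_productivity(W, H, w, h):
    """Items per strip (m), number of strips (n), and productivity a = m*n
    for a maximal strip pattern of w x h small items in a W x H large item."""
    m, n = W // w, H // h
    return m, n, m * n
```

For instance, with the sizes of the example in this section (in cm: 270 × 500 large, 90 × 250 small), the function returns a = 6, matching the value a = 6 quoted there.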
Although the probability of a faulty item decreases when a maximal pattern is turned into a nonmaximal one by removing one or more items, the adoption of maximal patterns is justified by the following result.
Proposition 1. If an NR or CR strategy is implemented (see Section 4), model [R] always has an optimal solution that consists of maximal patterns only.
Proof. Let us focus on a pair j = (i, k) ∈ J and, for the sake of conciseness, drop the indexes of variables and parameters. Suppose that mn small items of size w × h, arranged in n horizontal strips of m items each, are obtained by a maximal

Production and Operations Management
pattern applied to a large item of size W × H. Let r = W − mw be the width of the vertical waste, and s = H − nh the height of the horizontal one. After the CR of a maximal pattern, all the potential coordinates of each item's bottom-left corner are in Z = {(αs + wi, βr + hj) : i = 0, …, m − 1, j = 0, …, n − 1, α, β ∈ {0, 1}} (see Figure 2b). Let Z′ be the analogous set for a nonmaximal pattern. Clearly, Z′ ⊆ Z, and the same trivially holds for the NR strategy. Therefore, the space left free by the items removed from a maximal pattern cannot be used to avoid defects on the remaining items. □
As for the set L used in the formulations, in the original problem by Arbib and Marinelli (2007), candidate large sizes are preprocessed, for each instance, according to the following procedure. First, define the set L_0 of ideal large sizes for which there exists at least a small item i that can be cut without trim loss. Formally, L_0 consists of the sizes (μ_i w_i, ν_i h_i), where μ_i, ν_i are positive integers such that W_min ≤ μ_i w_i ≤ W_max and H_min ≤ ν_i h_i ≤ H_max, and W_min, W_max, H_min, H_max, respectively, denote lower and upper bounds on the large sizes in L. A restricted set of large sizes sufficient to guarantee minimum trim loss solutions in the deterministic setting, that is, in the absence of defects, can be computed from L_0 in view of the following property proved in Arbib and Marinelli (2007):
Proposition 2. For any h, k ∈ L_0, let l_hk be the smallest size in L containing both h and k. Let L̄ = {l_hk : h, k ∈ L_0, h ≠ k}. Then, there is an optimal solution that uses only large sizes in L̄.
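The preprocessing just described can be sketched as follows. The function names are ours; we read "smallest size containing both h and k" as the componentwise maximum of the two rectangles, and, for brevity, the sketch omits re-checking the W/H bounds on the combined sizes:

```python
from itertools import combinations

def ideal_sizes(smalls, Wmin, Wmax, Hmin, Hmax):
    """L0: sizes (mu*w, nu*h) from which some small size w x h
    can be cut without trim loss, within the given bounds."""
    L0 = set()
    for w, h in smalls:
        for mu in range(1, Wmax // w + 1):
            for nu in range(1, Hmax // h + 1):
                W, H = mu * w, nu * h
                if Wmin <= W <= Wmax and Hmin <= H <= Hmax:
                    L0.add((W, H))
    return L0

def restricted_set(L0):
    """L-bar: for each pair in L0, the smallest rectangle
    containing both, i.e., the componentwise maximum."""
    return {(max(a[0], b[0]), max(a[1], b[1]))
            for a, b in combinations(L0, 2)}
```

With a single small size the restricted set is empty, so in practice L0 itself is kept alongside the pairwise combinations.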
In our data set, L̄ contains a number of sizes between 13 and 461, much smaller than the 6560 potential large sizes in L (see Table EC.1 in Section EC.1 of the Supporting Information), thus making viable the solution of the integer programs described in the above sections.
However, once the aleatory presence of defects is introduced, it is possible that L̄ does not contain enough distinct sizes to guarantee trim loss minimization. In fact, with more sizes available, the expected loss can be reduced at the expense of some additional waste. This consideration deserves a brief comment.
Consider the simple case of d = 36 items required in the size 0.9 × 2.5, and suppose for simplicity that no more than one defect per large item ever occurs. Suppose also that L̄ consists of the single ideal size 2.7 × 5 (hence a = 6 and ȳ = 6), and that f = λV_F = 9 defects are expected (i.e., f is sufficiently large to ensure z^1 = 1 for each large item employed). With these values, the probability of a critical defect under the CR strategy is π_1^1 = 1 (i.e., k is an ideal size), and an optimum is achieved by adding ξ = 2 large items, for a total employed area of 108 square meters. But if one adopts a large size one decimeter wider, namely, 2.8 × 5, the probability of a critical defect decreases to π_1^1 = 0.857 (see Section EC.2.1 in the Supporting Information), the additional large items needed reduce to ξ = 1, and the total area used decreases to 98 square meters (−9.26%).
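The arithmetic of this example can be checked directly. The helper below is ours, working in decimeters to keep the numbers exact:

```python
def total_area_dm2(W_dm, H_dm, ybar, xi):
    """Total float area (dm^2) when ybar + xi large items of size
    W_dm x H_dm are employed."""
    return W_dm * H_dm * (ybar + xi)

ideal = total_area_dm2(27, 50, 6, 2)    # 2.7 m x 5 m, ybar = 6, xi = 2
wider = total_area_dm2(28, 50, 6, 1)    # 2.8 m x 5 m, xi reduced to 1
saving = 100 * (wider - ideal) / ideal  # percent change in area used
```

The two totals are 10,800 and 9800 dm² (108 and 98 m²), and the relative saving is −9.26%, as stated in the text.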
Although in the example a minimum is achieved by a large size not in L̄, a preliminary numerical test shows that only 3 out of 15 instances benefit from a super-set of L̄, and the gain is always very marginal (≤ 0.004%).

COMPUTATIONAL EXPERIENCE
The models discussed above were tested on real data provided by the plant. Besides a comparison of CPU times, the test aims to evaluate the benefits, in terms of expected reduced loss and under/overproduction under random defects, obtained when the solutions of the robust model are implemented with different recourse strategies (see Section 4): NR, that is, the action currently adopted in the plant, and CR. Because UR is quite far from the current practice and since, as noted in Section 4, computing the best UR is not straightforward, we left the UR strategy out of our numerical test.
The test compares four approaches:
• the deterministic p-median model [P_0], obtained by setting ā_j = 0 in (13);
• the p-median models [P+] and [P−], derived from (13) by updating demand according to an over- and an underestimate of the expected number of defects per large item;
• the linearized version (described in Section EC.2.3 of the Supporting Information) of the robust model [R].
Solutions were tested by means of (i) a Monte Carlo simulation that generates defects according to a spatial Poisson process and (ii) an adversarial deterministic model that generates the maximum expected number of critical defects.

Setting
The test was carried out on 15 problem instances, each one representing a production campaign: 10 instances are those used in Arbib and Marinelli (2007); the remaining five, namely, I_2, I_3, I_7, I_8, and I_14 in Table EC.1 in the Supporting Information, were newly provided by the plant. The details of the test instances are reported in the Supporting Information, Section EC.1. All the integer programs were solved using Gurobi

9.1.1 with default settings on a QEMU 1.5.3 virtual machine (8 cores) at 2.26 GHz with 8 GB of RAM. The Gurobi precision parameter mipgap was set to 1e-9. A limit on the running time was set at 7200 s. Since, according to (6), seven or more defects in the same large item are relatively rare (a priori assumptions attribute to this event a probability of 4.14 ⋅ 10^−3), the probability that constraint (31) for t ≥ 7 must be satisfied by solutions that already fulfill the same constraint for t < 7 is negligible. Indeed, we already obtained good results by limiting the model to t ≤ 2. The probabilities π_jq^t for the NR strategy were computed by (12). For the CR strategy, π_j1^1 and π_jq^2 were, respectively, obtained by (EC.1) and (EC.2) in the Supporting Information, while for t = 3, …, 6 the π_jq^t were estimated by Monte Carlo simulation. As we observed a strong correlation between the number of items produced and the area used (both in terms of overproduction and expected defective small items), in the following we will comment on area results only.

Analysis of results
Let (x, 0) and (x, ξ) be the solutions computed by the three [P] models and by [R], respectively, and let Ȳ_j = ȳ_j x_j + ξ_j be the total number of large items used in lot j ∈ J. Let us then introduce the following figures:
• V_F, the total float area used: V_F = ∑_{j∈J} V_k Ȳ_j;
• V_R and V_P, the total area of small items required and produced: V_R = ∑_{i∈S} d_i v_i and V_P = ∑_{j=(i,k)∈J} a_j v_i Ȳ_j, where v_i = w_i h_i is the area of small size i;
• V_D, the expected area of defective small items: V_D = ∑_{j=(i,k)∈J} v_i Ȳ_j ∑_t ∑_{q≤t} q ρ_jq^t, where ρ_jq^t is the probability that a large item in lot j = (i, k) is affected by t defects, q ≤ t of which are critical. This is given by the compound probability ρ_jq^t = σ_k^t π_jq^t; see (6) and (8).
In the evaluation of these probabilities, we used f = λV_F and, for the sake of numerical precision, limited the binomial in (6) to t ≤ 10.
The solution quality of (x, ξ) and (x, 0) is evaluated through the percent area overproduced, V_O = 100 (V_P − V_R)/V_R, and the percent trim loss, V_T = 100 (V_F − V_P)/V_F. The impact of defects on the solutions is taken into account by evaluating, for (x, ξ) and (x, 0):
• the percent deviation E_S = 100 (V_P − V_D − V_R)/V_R of the expected area of sound items obtained versus the total area V_R required;
• the percent expected waste E_W = 100 (V_F − V_P + V_D)/V_F, including both trim loss and discarded defective items;
• the percent expected number E_B of back-orders, that is, the percent of small sizes whose requirement was not completely fulfilled.
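These figures of merit can be sketched in a few lines, assuming the area-based definitions given above (the function name is ours):

```python
def quality_metrics(VF, VP, VR, VD):
    """Percent overproduction, trim loss, sound-item deviation, and
    expected waste from the float, produced, required, and expected
    defective areas."""
    VO = 100 * (VP - VR) / VR        # overproduction vs. requirement
    VT = 100 * (VF - VP) / VF        # trim loss vs. float used
    ES = 100 * (VP - VD - VR) / VR   # sound production vs. requirement
    EW = 100 * (VF - VP + VD) / VF   # trim loss + discarded defectives
    return VO, VT, ES, EW
```

Note that E_W equals the trim loss plus the defective-area share of the float, which is consistent with the per-instance decompositions reported later (e.g., 10.05% = 2.08% + 7.97% for instance I_2).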
We computed V_D not only by the above formula, but also via Monte Carlo simulation and by an adversarial integer program that models the worst-case scenario.

Monte Carlo simulation
Computing V_D requires estimating the impact of defects on the solutions (x, 0) and (x, ξ). In a Monte Carlo simulation, each iteration reproduces the cutting process according to a realization of glass defects, where defect number and positions are regarded as input random variables sampled from the appropriate probability distributions. Since in all the instances solved the small sizes are expressed by integers, we can as well assume integer coordinates for all defect positions. Moreover, we can also assume the large items of any given solution to be sequenced in an arbitrary order. The outcome of each iteration (i.e., the defective area) is measured according to the reaction adopted, that is, by just counting how many small parts are hit by a defect in the case of the NR strategy, and by finding a reconfigured pattern that minimizes the faulty items (as briefly described in Section 4) in the case of the CR strategy.
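The NR counting step can be sketched as follows. The function name is ours, and we assume the item grid of a maximal strip pattern anchored at the bottom-left corner of the large item, as in the proof of Proposition 1:

```python
def faulty_items_nr(W, H, w, h, defects):
    """Count distinct small items hit by defects under NR: the items
    occupy an m x n grid anchored at the bottom-left of the large item;
    defects falling outside the grid land in the trim loss."""
    m, n = W // w, H // h
    hit = set()
    for x, y in defects:
        i, j = x // w, y // h
        if i < m and j < n:      # inside the item grid
            hit.add((i, j))      # several defects may hit the same item
    return len(hit)
```

Two defects in the same small item count once, which is why the hit set, rather than the defect list, determines the defective area.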
The number of defects f is sampled from a Poisson distribution with mean λV_F, the process intensity λ being set to 0.1 points per square meter; then f positions, each indicating the two coordinates of a defect, are independently and uniformly sampled over the float area V_F.
In particular, the large item k and the coordinates (ŵ, ĥ) of the sampled defect within k are obtained by first generating a random value p ∈ [0, ∑_{j∈J} W_j H_j Ȳ_j] and then mapping p into k and (ŵ, ĥ) in the following way. Let (ĵ, r̂) be the last pair of indexes such that P = ∑_{j<ĵ} W_j H_j Ȳ_j + r̂ W_ĵ H_ĵ < p, and let p′ = p − P. Then, p′ is unrolled within the (r̂ + 1)-th large item of lot ĵ, taking ŵ = p′ mod W_ĵ and ĥ = ⌊p′/W_ĵ⌋.
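The two-step sampling above can be sketched as follows. The function names, the encoding of lots as (W, H, Ȳ) triples, and the use of Knuth's Poisson sampler are our own choices, not taken from the paper:

```python
import math
import random

def poisson(mean, rng=random):
    """Knuth's method: sample a Poisson variate with the given mean."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def place_defect(lots, p):
    """Unroll p in [0, total area) over the cumulative area of the
    large items; return (lot index, item index, w, h)."""
    acc = 0.0
    for j, (W, H, Y) in enumerate(lots):
        for r in range(Y):
            if p < acc + W * H:
                res = p - acc          # residual area within item r of lot j
                return j, r, res % W, res // W
            acc += W * H
    raise ValueError("p exceeds the total float area")

def sample_defects(lots, lam, rng):
    """f ~ Poisson(lam * V_F) defects, positions uniform over the float."""
    VF = sum(W * H * Y for W, H, Y in lots)
    return [place_defect(lots, rng.uniform(0, VF))
            for _ in range(poisson(lam * VF, rng))]
```

With lam = 0.1 defects per square meter, this reproduces the sampling scheme described in the text, defect by defect.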

Once all the defects have been generated, realizations of the area V_D of defective small parts are computed and the statistic V̄_D is updated. The simulation is iterated in batches of N = 10 runs at each call, terminating when the sample mean V̄_D changes by less than 10^−6 between successive batches.

Worst-case simulation
Given a solution X = (x, ξ) and letting J̄ = {j ∈ J : x_j = 1} be the set of production lots chosen by X, the worst realization of defects with respect to X can be computed by the adversarial integer linear program (46)–(48). As in [R], z_j^t indicates how many large items in a produced lot j are hit by t defects. The objective function (46) equals the expected number of critical defects, to be maximized. By inequality (47), faults are distributed among large items respecting the maximum number f_i of defects allowed per small size. By constraint (48), the defective large items in each lot j cannot exceed, as a whole, the expected number σ_k Ȳ_j of faulty large items employed by the solution.
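For intuition, the adversarial program can be solved by exhaustive search on a toy instance. The encoding below is our own sketch, not the paper's formulation: c[j][t−1] stands for the expected critical defects contributed by one large item of lot j hit by t defects, and size_of maps each lot to its small size:

```python
from itertools import product

def adversarial_bruteforce(c, lot_cap, size_of, f_max, z_max=3):
    """Maximize sum_{j,t} c[j][t-1] * z[j][t-1] over integer z, subject
    to a per-lot cap on defective items (cf. (48)) and a per-small-size
    defect budget f_max[i] (cf. (47))."""
    J, T = len(c), len(c[0])
    best, best_z = 0.0, None
    for flat in product(range(z_max + 1), repeat=J * T):
        z = [flat[j * T:(j + 1) * T] for j in range(J)]
        if any(sum(zj) > cap for zj, cap in zip(z, lot_cap)):
            continue                      # violates the per-lot cap
        used = {}
        for j in range(J):                # defects consumed per small size
            used[size_of[j]] = used.get(size_of[j], 0) \
                + sum((t + 1) * z[j][t] for t in range(T))
        if any(used.get(i, 0) > f_max[i] for i in f_max):
            continue                      # violates a defect budget
        val = sum(c[j][t] * z[j][t] for j in range(J) for t in range(T))
        if val > best:
            best, best_z = val, z
    return best, best_z
```

Enumeration is only viable for toy data, of course; the point of the sketch is to make the two constraint families of the adversarial model concrete.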

Solution analysis for a small instance
Before giving the aggregate experimental results, let us compare two solutions of instance I_2, computed via [P_0] and [R], to illustrate the improvement obtained by the robust model. The demand of this problem, one of the smallest in our benchmark set, must be manufactured using no more than p = 4 large sizes. Given the float volume necessary for this production, we estimated f = 476 defects as a whole (see Table EC.1 in the Supporting Information). The first four columns of Table 1 detail request, size, and defect estimation for each small item i ∈ S (the latter parameter being used in [R]). The next section, [P_0], reports the large sizes selected by the p-median model to form the production lots j, with the associated cut productivity a_j and cut number ȳ_j. The rightmost section reports the same information for the solution computed by [R], and also indicates the extra large items ξ_j used to compensate losses due to defects. The large sizes selected by model [R] partly differ from those chosen by model [P_0]: in detail, to produce small size 76 × 122, [P_0] uses the ideal large size 304 × 610. We launched a Monte Carlo simulation on both solutions, obtaining:
• for [P_0], an expected waste E_W = 10.05%, of which 2.08% is trim loss; the remaining 7.97% is the discarded defective area, not at all compensated by glass area overproduction (V_O = +0.85%). In fact, the solution entails high levels of expected sound item underproduction (E_S = −7.37%) and back-orders (E_B = 83.33%);
• for [R], +8.65% additional glass area (with V_O = +9.73%), a mild expected overproduction of sound items (E_S = +0.82%), and no back-orders; the total waste E_W is reduced to 9.90%, 1.93% of which is trim loss.
In conclusion, the main result obtained with [R] is that a high back-order level is reduced to zero with no increase (actually, with a slight reduction) of total waste and trim loss.

Results for NR strategy
For each instance, Table 2 describes the solutions computed by the deterministic p-median model [P_0], by the probabilistic p-median models [P+] and [P−], and by the robust model [R] when an NR strategy is adopted. In particular, the CPU columns report the running time (s) needed to get (x, ξ): a "-" indicates that the solver reached the time limit; in that case, the subsequent column reports a nonzero optimality gap (%). The remaining columns give the overproduction V_O and the trim loss V_T found. Tables 3 and 4 describe the protection that the solutions offer against defects. This feature is evaluated through the expected values E_S, E_W, and E_B, computed by Monte Carlo simulation and worst-case analysis. Table 5 shows the computational details of the Monte Carlo simulations. For the worst-case analysis, we just observe that all instances of model (46)–(48) were solved in less than 0.125 s.
The p-median models (both deterministic and probabilistic) are much easier to solve than the robust counterpart of

[P_0]: all the [P] instances were solved in less than 4 s (0.36 on average), whereas only 10 [R] instances were solved to optimality within the time limit. For the remaining instances, we obtained solutions with an optimality gap of 3.18% on average. In the tables, we report the values averaged over I_1–I_11 only, that is, over the instances for which we could find optimal or near-optimal solutions with both models. The probabilistic and robust solutions exhibit, as expected, a better global behavior. Being unaware of defect occurrence, model [P_0] provides a quite small overproduction. It is interesting to observe that the models behave very similarly in terms of trim loss (precisely, that of [P−] is on average slightly less than that of [R]: 1.14% vs. 1.19%). Robustness is, however, achieved, besides overproduction, by employing different assortments of large items to locally mitigate the impact of critical defects (see Section 7.3). In detail, in [R] solutions the assignment of small sizes to large ones differs from [P_0] solutions by 15.25% on average, with a peak of 53% in I_7.
TABLE 2 Solutions with no reconfiguration strategy. In bold are the averages over instances I_1–I_11.
TABLE 4 Results of worst-case scenario. In bold are the averages over instances I_1–I_11.
We also observe that [R] almost fully balances the negative and positive net productions of the probabilistic p-median models. The overproduction V_O, in fact, represents a price of robustness that is completely paid back by a null mean percent deviation E_S of the expected faultless area versus the total area required. Indeed, the initial overproduction of [R] solutions is fully absorbed by the defect realization. Both simulation and worst-case analysis show that, though the expected waste E_W is in practice the same (9.20% vs. 9.18% on average), the net production obtained via [R] is just a little above demand in the simulation (mean E_S = +0.06%) and below demand in the worst case (mean E_S = −0.09%). Instead, [P_0] solutions end up with a definitely more pronounced underproduction (7.98% on average), a value that is consistent with the company estimate of losses due to imperfections.
A direct consequence, very detrimental for production costs, is that in all instances but I_2, model [P_0] does not allow the completion of any order batch (E_B = 98.48% on average and 7.71% mean underproduction per order). On the other hand, the mean percent of back-orders recorded by robust solutions is reduced to 42.7%. This value is still large, but in this case the mean per-order underproduction is as small as 0.32%, a value that greatly helps keep the whole plant operation smooth. In fact, the expected losses ā_j used in the objective functions of the probabilistic p-median models are mean values, unable to precisely capture the realization of defects. On average, the net production obtained via [P+] (resp., [P−]) increased (resp., decreased) by 2.69%.
Regarding the Monte Carlo simulation, Table 5 shows, for each model, the number of iterations of the simulation, the CPU time, the average number f of sampled faults per repetition, and the average percentage of critical defects. A few seconds of CPU time were generally spent to simulate the solutions of instances I_1–I_11. The largest deviation from this behavior is the 50.21 s recorded to simulate the [R] solution of instance I_11. The mean CPU time ranges between 5.62 s ([P−]) and 10.87 s ([R]). The average f is larger for the models ([P+] and [R]) that embed a larger estimation of faults, reaching 74,422 defects per iteration. The hit rate is similar for all models, about 94.7% on average.
We finally observe that the sample means are almost always slightly lower than the corresponding analytical values (with a maximum difference of 0.012%). However, all the analytical means fall within the 95% confidence intervals of the statistical parameters obtained by Monte Carlo simulation. The small bias is probably due to the fact that, for the sake of numerical precision, we approximated σ_k^t = 0 for t > 10, that is, a large item with more than 10 defects is regarded as an event with negligible chances.
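The confidence-interval check can be sketched as follows; this is a plain normal-approximation interval for the sample mean (our own helper, not necessarily the paper's exact procedure):

```python
import statistics as st

def mean_ci95(samples):
    """95% normal-approximation confidence interval for the sample mean."""
    m = st.mean(samples)
    half = 1.96 * st.stdev(samples) / len(samples) ** 0.5
    return m - half, m + half
```

An analytical mean is then deemed consistent with the simulation whenever it falls between the two returned bounds.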

7.5 Results for CR strategy
Tables 6, 7, and 8 give the results obtained with the CR strategy. If one reads the outcome of the Monte Carlo simulation, [P_0] turns out to produce the same percentage of back-orders as with NR. On the other hand, the mean percent underproduction and the mean total waste E_W decrease: the former from 7.98% to 7.68%, the latter from 9.18% to 8.88%.
We observed similar variations in the [R], [P+], and [P−] solutions. In particular, the mean total waste E_W of the [R] solutions decreases by 0.32%. Moreover, unlike [P_0] solutions, the mean percent back-orders E_B of the [R] solutions obtained with CR decreases by about 1%.
Overall, the implementation of the CR strategy keeps the percent deviation E_S at the same level as the NR strategy, but can reduce the total waste by 3.5% (from 9.20% to 8.88%), indeed a nonnegligible improvement given the high production volume of the plant, also considering that E_W is mainly lowered because the faulty area is in turn reduced by 4% on average. However, the CR strategy may require additional cuts and/or cutter set-ups that can considerably increase processing time, thus possibly cancelling the benefits of a better response to defects. An assessment by the plant management is therefore required in this regard.
A final note on the performance of the simulations: Table 9 shows the details of the Monte Carlo simulation for the CR strategy. Our considerations are similar to those made for Table 5, but some differences appear when comparing the simulation performance of CR and NR. With the former strategy, the CPU time required for instances I_1–I_11 is generally larger than with the latter, reaching 89.19 s with model [P+] in instance I_11. On the whole, the mean simulation time with the CR strategy is 2.42 times that with NR. This growth can be explained by the +7.83% iterations needed to converge, and also by the enumeration required to find the best pattern reconfiguration. The faults sampled on the whole in CR and NR are comparable in number, with a slight reduction (−0.3%) for CR; but the hit ratio decreased on average by 3.49%, resulting in a mean 4% reduction of defective area, due to the benefits of this reconfiguration policy. Considering the potential benefit of CR w.r.t. NR, the hit-ratio reduction could seem modest. It should be noted, however, that the hit ratios refer to solutions employing only maximal cutting patterns. These, on the one hand, ensure very small trim loss but, on the other hand, offer little room for defect avoidance.

TABLE 6 Solutions with constrained reconfiguration strategy. In bold are the averages over instances I_1–I_11.

CONCLUSIONS AND FUTURE RESEARCH
We addressed a stock assortment-and-cutting problem with the goal of constructing optimal solutions that are robust against imperfections of the raw material. The problem and the computational results presented here refer to a real application in the glass industry. Defect occurrence was modeled as a Poisson point process. On that basis, we developed approaches that protect solutions against the worst possible distribution of f defects over the glass sheets to be cut. Depending on the recourse strategy used, a defect on a glass sheet may or may not be avoided. With the simplest reaction considered in the present case, that is, NR, the recourse simply consists in doing nothing: the small items hit are just discarded with no attempt to reconfigure the patterns. In more general and effective (but also more complex) reactions, patterns can be reconfigured by moving the strips of scrap (CR), or even by splitting them with additional cuts (UR), in order to let defects fall in the trim loss.
Within each glass sheet, we then evaluate the expected loss of produced items via the conditional probability of finding a defect unrecoverable with the recourse strategy adopted. Closed forms for the computation of such probabilities were provided in simple cases, and obtained via Monte Carlo simulation in more complex ones.
Starting from a deterministic p-median model that minimizes the total glass area used, two alternative approaches were presented. In the first one (models [P+] and [P−]), the expected loss by defects is simply plugged into the objective function of the p-median model; in the second one (model [R]), the robust counterpart of the p-median model is constructed in the classical vein of robust optimization. Computational tests were performed on real data taken from the application. The tests show that:
• both approaches mitigate the pronounced net production deficit and back-order volumes found by deterministic solutions, while keeping trim loss pretty much at the same (low) level;
• nonetheless, [P+] and [P−] seem not to completely hit the target: the former nullifies back-orders at the cost of a nonnegligible extra production (which is regarded as waste), while the latter substantially reduces the net production deficit but leaves back-order percentages almost untouched;
• conversely, the robust model [R] halves the amount of back-orders with a net production that deviates from requirements by only 0.06%; the price of robustness, consisting in the initial overproduction, is then completely absorbed;
• finally, an easy-to-implement redesigned recourse strategy, CR, achieves a nonnegligible 3.5% reduction of total waste, mainly attributed to a mean 4% reduction of the defective area. This is a valuable figure in case defective items are not treated as waste but as lower value products.
In our opinion, the results open interesting perspectives and research challenges in the field of stock cutting. In fact, despite the recognized importance of defect handling in industrial applications, very few studies can be found on this subject. To the best of our knowledge, none of them address the problem of finding robust solutions. Among the questions left open we can list the following:
• How should robustness against defects be dealt with in standard cutting stock or, in general, in one- and two-dimensional knapsack/bin packing problems? Clearly, the possibility of solving model [R] algorithmically (e.g., by column generation and/or branch-and-price) is both interesting and relevant.
• It is not clear whether a stochastic programming formulation would lead to a better performance in both computational and economical terms. This topic is worth a separate investigation and is left as a potential future project.
• What is the complexity of fault recovery with the UR recourse strategy or more complex ones?
Finally, in our case study, trim loss and defective items are not distinguished but altogether treated as waste.This is not the case for industrial production in general.For example, some defect types are tolerated in wood-board cutting and the items affected are not treated as waste but as lower value production.In such cases, it would be interesting to understand how robust solutions would change, and in particular whether they would or would not make a larger use of trim loss as a means to increase protection against value loss.

ACKNOWLEDGMENTS
We gratefully acknowledge the technical support of the R&D department of Nippon Sheet Glass Co. Ltd., with special thanks to Dr. Eng. Alessandro Consorte and Marcello Romano for the help given in the analysis of defect generation. We are also very grateful to the AE and the anonymous reviewers for the in-depth reading and stimulating comments, which helped us improve the quality of this contribution. Finally, we would like to thank Prof. Simon Wigley for the kind editorial support.

REFERENCES
FIGURE 1 Scheme of an automotive glass production process.
FIGURE 2 (a) No reconfiguration, (b) constrained, and (c) unconstrained reconfiguration.

TABLE 1 Details of instance I_2, and the solutions obtained by [P_0] and [R]. In bold are the different large sizes chosen by [R].
TABLE 3 Results of Monte Carlo simulation. In bold are the averages over instances I_1–I_11.
TABLE 5 Performance of Monte Carlo simulation with NR strategy. In bold are the averages over instances I_1–I_11.
Table 6 has no [P_0] section since it is the same as Table 2: in fact, [P_0] solutions are not based on probabilities and are therefore identical. The other models behave very similarly with the CR and NR strategies.

TABLE 8 Results of worst-case scenario.
TABLE 9 Performance of Monte Carlo simulation with CR strategy.