Lagrangian duality for robust problems with decomposable functions: the case of a robust inventory problem

We consider a class of min-max robust problems in which the functions to be robustified can be decomposed as the sum of arbitrary functions. This class of problems includes many practical problems, such as the lot-sizing problem under demand uncertainty. By considering a Lagrangian relaxation of the uncertainty set, we derive a tractable approximation, called the dual Lagrangian approach, which we relate to both the classical dualization approximation approach and an exact approach. Moreover, we show that the dual Lagrangian approach coincides with the affine decision rule approximation approach. The dual Lagrangian approach is applied to a lot-sizing problem where demands are assumed to be uncertain and to belong to an uncertainty set with a budget constraint for each time period. Using the insights provided by the interpretation of the Lagrangian multipliers as penalties, two heuristic strategies, a new guided iterated local search heuristic and a subgradient optimization method, are designed to solve more complex lot-sizing problems in which additional practical aspects, such as setup costs, are considered. Computational results show the efficiency of the proposed heuristics, which provide a good compromise between the quality of the robust solutions and the running time required to compute them.


Introduction
Dealing with uncertainty is crucial when solving practical problems in which some decisions must be taken before the actual data is revealed. This is the case of inventory management problems, where decisions such as the quantities to be produced or ordered must be taken without knowing the exact demands. A recent and popular approach to deal with such uncertain optimization problems is Robust Optimization (RO). RO was first introduced by Soyster (1973), who proposed a model for linear optimization where the constraints have to be satisfied for all possible data values. Ben-Tal and Nemirovski (1999), El Ghaoui and Lebret (1997), and Bertsimas and Sim (2003, 2004) propose computationally tractable approaches to handle uncertainty and avoid excessive conservatism. For a recent paper on a less conservative variant of RO see Roos and den Hertog (2017). For general reviews on RO see Ben-Tal et al. (2009) and Bertsimas et al. (2011).
Although current research on RO has proved very useful and different approaches have been proposed, there is a large gap in the research devoted to applying those approaches to complex mixed-integer problems. This is the case of many practical production planning problems with demand uncertainty, which motivated our work. While deterministic production planning problems have been extensively studied both from a practical and a theoretical viewpoint (Pochet and Wolsey 2006), robust applications are still scarce. Two seminal works on robust inventory problems consist of i) the study of robust basestock levels by Bienstock and Özbay (2008), where a decomposition approach to solve the true min-max problem to optimality is proposed (henceforward denoted the BO approach), and ii) the dualization approach introduced by Bertsimas and Thiele (2006) (henceforward denoted the BT approach) for inventory problems, adapted from the general approach proposed by Bertsimas and Sim (2004). The two approaches have been applied to more complex problems. The decomposition approach for the min-max problem using the budget polytope is investigated by Agra et al. (2016b) for a larger class of robust optimization problems where the first-stage decisions can be represented by a permutation. The general decomposition procedure, regarded as row-and-column generation, is described for general robust optimization problems by Zeng and Zhao (2013). The BO approach is also used to solve more complex inventory problems, for example, the robust maritime inventory problem (Agra et al. 2018a) and a production and inventory problem with the option of remanufacturing (Attila et al. 2017). The dualization approach is also very popular since it often leads to tractable models. Wei et al. (2011) extended the results presented by Bertsimas and Thiele (2006) to a production and inventory problem with remanufacturing, where uncertainty is considered on returns and demands. For the application of the dualization approach to a robust inventory routing problem see Solyali et al. (2012).
Solving the true min-max problem to optimality, using for instance a decomposition algorithm, can be impractical for many inventory problems, while the dualization approach may produce solutions that are too conservative. In order to circumvent both the difficulty of solving the min-max problem and the conservativeness of the dualization approach, other approaches, such as the use of affine decision rules (Ben-Tal et al. 2004, Chen and Zhang 2009), have been proposed. Affine decision rules often lead to less conservative solutions than the ones obtained with the dualization approach, and in some cases they can lead to optimal solutions, see Bertsimas et al. (2011), Bertsimas and Goyal (2012), Kuhn et al. (2011), Iancu et al. (2013). Furthermore, for special uncertainty sets the use of affine decision rules leads to computationally tractable affinely adjustable robust counterpart (AARC) models. In particular, when the uncertainty set is a polyhedron the resulting AARC model is linear, see Ben-Tal et al. (2004). Georghiou et al. (2019) propose an approach that combines affine decision rules with the extreme point reformulation used in exact decomposition methods.
Tractable AARC models can also be obtained for lot-sizing problems when the demands are uncertain and belong to an uncertainty set with a budget constraint for each time period. However, when additional aspects are included in the lot-sizing problems, such tractable models can become computationally hard to solve even for small size instances. The results in this paper show how difficult it can be to solve the AARC model when setup costs are considered in the basic lot-sizing problem. Such results justify the need for developing simpler tractable models as well as the use of heuristic approximation schemes. For a survey on adjustable robust optimization see Yanikoglu et al. (2018). For a deeper discussion of other conservative approximations for the min-max problem obtained through relaxations of the uncertainty set we refer to Ardestani-Jaafari and Delage (2016) and Gorissen and den Hertog (2013).
For many practical production planning and inventory management problems some data, such as demands, are not known in advance, and several decisions need to be taken before the data is revealed. Frequently, such decisions are taken before the start of the planning horizon and are not adjustable to the data when it is revealed. That is the case of decisions such as the amount of each item to produce in each time period when complex aspects, such as setups and sequence-dependent changeovers, are present (Pochet and Wolsey 2006). Adjusting the production to the known demands can imply new setups, creating different sequences of products that may not be implementable. Another example in inventory management occurs in maritime transportation, where the distribution must be planned in advance and can hardly be adjusted to the demands given the long transportation times. Motivated by such applications, we focus on robust problems in which the functions to be robustified can be decomposed as the sum of arbitrary functions. This class of problems was also investigated by Delage et al. (2018). The authors proposed a new robust formulation for generic uncertainty sets where it is assumed that the functions to be robustified decompose into a sum of functions, each one involving a different nonadjustable variable, which is not the case we consider in this paper.

Contributions
In this paper, for a class of RO problems with decomposable functions, we propose a reformulation of the inner maximization subproblem occurring in a min-max model, known as the adversarial problem. This reformulation starts by creating copies of both the uncertain variables and the uncertainty set so that the uncertainty set becomes constraint-wise independent. Then, a set of additional constraints is imposed enforcing that all the copies take identical values. By relaxing those constraints in the usual Lagrangian way, we obtain a mixed integer linear model, called the Lagrangian dual model, that makes it possible to relate the min-max approach directly with the dualization approach (obtained when such constraints are ignored). With the obtained model, it is possible to derive efficient heuristic approximation schemes that use the information from the Lagrangian multipliers to obtain solutions with lower true cost.
Our main contributions are the following:
1. Exploit the Lagrangian relaxation of the uncertainty set to obtain a tractable model for a class of RO min-max problems in which the function to be robustified is decomposable into a sum of maxima of affine functions.
2. Provide a better theoretical understanding of the relations between several approaches for RO problems with decomposable functions. In particular, we show that our Lagrangian dual model coincides with the AARC model and that the classical dualization approach results from the Lagrangian dual approach with all the Lagrangian multipliers set to zero.
3. Provide computational results for the lot-sizing problem with setups showing the impact of the setup costs on the several approaches considered. In particular, when the setup costs increase, the quality of the solutions obtained by the BT approach deteriorates rapidly. This behaviour was not observed when using the proposed Lagrangian dual model. For large setup costs, the BT approach provides a bound that is up to 28% larger than the optimal value provided by the BO approach, while an optimal choice of multipliers can reduce this gap to about 6%. A similar reduction of the gap can be quickly achieved by solving the Lagrangian dual model with the multipliers fixed to their optimal values in the linear relaxation.
4. Design efficient heuristic schemes. We propose a new Guided Iterated Local Search heuristic and a Subgradient Optimization method that explicitly use the interpretation of the Lagrangian multipliers as penalties. Compared with other heuristics, for large size instances, the Subgradient Optimization method runs in a shorter time and finds solutions with true costs that are i) strictly better for 91.8% of the instances used and ii) up to 18.4% better than those obtained by the BT approach.
The paper is organized as follows. In Section 2 a dual Lagrangian approach is presented for RO problems with decomposable functions and its relation with the known approaches is established. The dual Lagrangian approach is applied to the robust inventory problem in Section 3. Heuristics based on the interpretation of the Lagrangian multipliers, including a new Guided Iterated Local Search heuristic and a Subgradient Optimization method, are also presented in Section 3. Computational tests are reported in Section 4 and final conclusions are given in Section 5.

Lagrangian duality for RO problems with decomposable functions
Consider the min-max robust model where U is a feasible set, Ω ⊆ R^n is a compact uncertainty set, T = {1, . . . , n}, and each f_t : U × Ω → R is an arbitrary continuous function. The variables u represent non-adjustable decisions. The decision maker chooses a vector u, while an adversary determines the uncertain vector ξ ∈ Ω that is most unfavorable to the decision u ∈ U. Problem R(u) is known as the adversarial problem (Bienstock and Özbay 2008) and it computes what is called the true cost of the vector u.
Problem R* can be rewritten as a two-stage robust problem by using adjustable variables θ_t(ξ), with θ_t : Ω → R, t ∈ T, as follows.
When particular functions θ_t(ξ) are considered, conservative approximations of R* are obtained. In particular, the usual non-adjustable approach examines the case where θ_t(ξ) = θ_t, t ∈ T, that is, It is known (Bienstock and Özbay 2008) that the gap between the two approaches can be large. However, there are cases with no gap between these approaches, that is, R* = C* (see Ben-Tal et al. (2004), Marandi and den Hertog (2018) and El Housni and Goyal (2018)). In particular, this equality holds when the uncertainty region Ω is the Cartesian product of sets Ξ_t (that is, Ω = Ξ_1 × · · · × Ξ_n) and each function f_t(u, ξ) in constraints (2.2) is only affected by the components of ξ which lie in Ξ_t: Here we explore this property to derive a Lagrangian relaxation of the adversarial problem. First, for each constraint t ∈ T, create a list of copies {ζ^t}_{t∈T} of the variables ξ and a list of respective uncertainty sets {Ω_t}_{t∈T}, such that Ω ⊆ Ω_t for each t and ∩_{t∈T} Ω_t = Ω (e.g., for simplicity one can use Ω_t := Ω). We further impose a set of constraints enforcing that all the copies must be equal. This leads to the following exact reformulation of R(u): Remark 1. In relation to the set of equalities (2.3), it is important to notice that one could impose additional redundant equalities ζ^t = ζ^ℓ for t ≠ ℓ, or replace them with other equivalent sets of equations. In all those cases, the process derived next still holds.
Attaching Lagrangian multipliers λ^t ∈ R^n to each constraint (2.3) and dualizing these constraints in the usual Lagrangian way, the following Lagrangian relaxation of R(u) is obtained The multipliers λ penalize the use of different uncertainty vectors for different constraints. Imposing that λ^1 := −∑_{t=2}^{n} λ^t, this model is equivalent to By using the epigraph reformulation, model LR(u, λ) can be written as follows.
For a given u and λ, the minimization problem in LR(u, λ) can be separated into n independent subproblems LR_t(u, λ^t), one for each t ∈ T, over ζ^t ∈ Ω_t, so that LR(u, λ) = g(u) + ∑_{t=1}^{n} LR_t(u, λ^t).
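To make the bounding mechanism concrete, the following minimal sketch (all data hypothetical; a small finite set stands in for the compact set Ω, and the names f, R, LR are illustrative) checks numerically that LR(u, λ) overestimates R(u) for any multipliers summing to zero, and that λ = 0 recovers the constraint-wise bound:

```python
# Toy illustration of the Lagrangian relaxation bound (hypothetical data).
# Finite stand-in for the compact uncertainty set Omega, a subset of R^2.
Omega = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
u = (0.3, 0.7)  # a fixed first-stage decision


def f(t, u, xi):
    # f_t(u, xi): an arbitrary continuous function coupling both components.
    return abs(u[t] - xi[t]) + 0.5 * xi[1 - t]


def R(u):
    # True adversarial value: a single xi must be used in every term.
    return max(sum(f(t, u, xi) for t in range(2)) for xi in Omega)


def LR(u, lam):
    # Relaxation: each term gets its own copy zeta^t, and lam[t] penalizes
    # the copies; requires sum_t lam[t] = 0 componentwise for validity.
    return sum(
        max(f(t, u, z) + sum(lam[t][j] * z[j] for j in range(2)) for z in Omega)
        for t in range(2)
    )


lam0 = [(0.0, 0.0), (0.0, 0.0)]    # lambda = 0: the constraint-wise bound
lam1 = [(0.4, -0.2), (-0.4, 0.2)]  # nonzero multipliers summing to zero
# Here R(u) = 2.0, LR(u, lam0) = 2.4 and LR(u, lam1) = 2.2: both relaxations
# overestimate the true cost, and lam1 tightens the bound over lambda = 0.
```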
Denoting by D the problem D = min_{u∈U} DLR(u), the following relation holds The Lagrangian dual model D can be written as follows: Define D(λ) = min_{u∈U} LR(u, λ). Hence, D = min_λ D(λ) and, for given multipliers λ, Noticing that C* is obtained with λ = 0 and Ω_t = Ω, we have the following relation.
Following the work of Ben-Tal et al. (2015), one can start by identifying conditions under which D becomes tractable. In particular, Section 2.3 will focus on the case where each f_t(u, ξ) is the maximum of a finite set of affine functions and Ω_t is a well-known polyhedral budgeted set.
Theorem 3. Given that g(u) is a convex function, that f_t(u, ξ) is the maximum of K functions h_{tk}(u, ξ) convex in u and concave in ξ, and that both U and each Ω_t are compact convex sets, D can be reformulated as the following finite dimensional convex program: where δ*(v | Ω_t) := sup_{ξ∈Ω_t} v⊤ξ is the support function of Ω_t, while h_{tk*}(u, v) := inf_ξ {v⊤ξ − h_{tk}(u, ξ)} is the partial concave conjugate of h_{tk}(u, ξ). Moreover, if the epigraphs of the functions g(u), δ*(v | Ω_t), and h_{tk*}(u, v) are polyhedrally representable and the set U is a polyhedron, then problem (2.4) can be reduced to a linear program.
The proof of this theorem is given in Appendix A.
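As a concrete instance of the objects appearing in Theorem 3 (standard convex-analysis facts, stated for illustration under the assumption that h_{tk} is affine in ξ), consider the box set [−1, 1]^n and h_{tk}(u, ξ) = a⊤ξ + L(u):

```latex
\delta^*\!\left(v \,\middle|\, [-1,1]^n\right)
  = \sup_{\|\xi\|_\infty \le 1} v^\top \xi
  = \|v\|_1,
\qquad
h_{tk*}(u,v)
  = \inf_{\xi}\left\{ v^\top \xi - a^\top \xi - L(u) \right\}
  = \begin{cases} -L(u), & v = a,\\ -\infty, & v \neq a. \end{cases}
```

Both epigraphs are polyhedrally representable, so in this case, with U a polyhedron, problem (2.4) indeed reduces to a linear program.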

Dualization versus affine approximation
It is known that approaches less conservative than C* for approximating R* can be obtained by assuming that θ_t(ξ) is the affine function θ_t(ξ) = ν_0^t + (ν^t)⊤ξ, with ν_0^t ∈ R and ν^t ∈ R^n. The resulting model, called the Affinely Adjustable Robust Counterpart (AARC) model (Gorissen and den Hertog 2013), is Next we establish the main result of this section, stating that the Lagrangian dual bound D obtained with Ω_t = Ω, t ∈ T, coincides with the affine approximation AARC.
Proof. When Ω_t = Ω, model D can be obtained from model AARC by replacing ν_0^t with θ_t and ν^t with λ^t, and adding the constraint ∑_{t=1}^{n} ν^t = 0; hence AARC ≤ D. To prove that AARC ≥ D, we show that, given any feasible solution (ν_0, ν) of AARC that achieves a finite objective value, it is possible to construct a feasible solution for D that achieves the same objective value. To do so, set m = max_{ξ∈Ω} ∑_{t=1}^{n} (ν^t)⊤ξ, λ^1 = −∑_{t=2}^{n} ν^t, and θ_1 = ν_0^1 + m, and for each t ∈ {2, . . . , n} define λ^t = ν^t and θ_t = ν_0^t. Clearly, the objective value for D is the same as that achieved in AARC: and the solution (θ, λ) is feasible for D, since for the constraint t = 1 (the remaining constraints are easily shown to be equivalent) and for each ξ ∈ Ω, we have where the first inequality follows from the definition of m since and the second inequality follows from the feasibility of (ν_0, ν) in AARC.

Extensions and related problems
Next we discuss three extensions of Theorem 4. Two-stage robust linear programs with box uncertainty: First, we extend the result to two-stage robust linear programs where the uncertainty set is a box. Consider a two-stage robust linear program of the form min_{u∈U} max_{ξ∈Ξ} min_y g(u) + c⊤y (2.5) where A ∈ R^{m×q}, B ∈ R^{m×p}, D ∈ R^{m×n} and Ξ = {ξ ≥ 0 | ξ ≤ d}. By dualizing model (2.5) over the second-stage variables y and then over the uncertain variables ξ, Bertsimas and de Ruiter (2016) obtain the Dual Reformulation (DR) of model (2.5) where Ω = {ζ ≥ 0 | c + B⊤ζ = 0}. Bertsimas and de Ruiter (2016) proved that when affine decision rules are applied to both the primal model (2.5) and the dual model (2.6), the optimal values of the resulting models coincide. The DR model (2.6) can easily be rewritten as an instance of the general model R* defined in Section 2 as follows: where D_{:,t} refers to the t-th column of D (see Zhen et al. (2018) for the elimination of the adaptive variables). We can therefore state the following result as a consequence of Theorem 2 in Bertsimas and de Ruiter (2016).
Quadratic decision rules: In a different direction, a result similar to the one proved in Theorem 4 can be derived for quadratic decision rules. By defining, for each with Π^t ∈ R^{n×n}, the obtained Quadratic Adjustable Robust Counterpart (QARC) model can be written as follows: Following the idea of the proposed Lagrangian approach, in addition to the set of equality constraints (2.3), let us consider in model R(u) the new set of equalities Attaching a matrix Λ^t ∈ R^{n×n} of Lagrangian multipliers to each of the constraints (2.8), the Lagrangian Dual Quadratic (DQ) model becomes We can once again establish a connection between this dual model and the use of quadratic decision rules.
The proof of this result follows the steps of the proof of Theorem 4 straightforwardly; in particular, to show that QARC ≥ DQ, one can define m as in the proof of Theorem 4.
Distributionally robust optimization: Finally, the form of problem R* is not limited to classical robust optimization but also emerges quite naturally when handling distributionally robust optimization (DRO) problems. In particular, consider the following general moment-based DRO problem: where ξ is now considered to be drawn from a distribution F that lies in an ambiguity set D defined as with each h_k(ξ) defining a moment function and U ⊂ R^K defining the set of possible moments. Exploiting a well-known reformulation for moment problems, one can, under fairly general conditions, reformulate the inner supremum as follows: where the first step assumes that strong semi-infinite conic duality holds (see Shapiro (2001) for more details), followed by an application of Sion's minimax theorem, which applies as long as U is convex and bounded. This gives rise to the following reformulation of the DRO problem: One can directly see that this DRO reformulation takes the form of R*. Theorem 4 therefore applies to the DRO problem reformulation (2.9).

Duality for the B&T budgeted set and for the maximum of affine functions
Here we consider the particular case that motivated our work, where the functions f_t(u, ζ^t) are given by the maximum of affine functions. We consider the uncertainty set used by Bertsimas and Thiele (2006): where a budget constraint is imposed for each time period; we refer to this set as the B&T budgeted set.
We assume that f_t(u, ζ^t) = max_{k∈K} f_t^k(u, ζ^t), where K is a finite set of indices and where a_j^{tk} ∈ R and L_t^k(u) : U → R is an affine function for all k ∈ K and j, t ∈ T. When Ω_t = Ω, the Lagrangian dual problem takes the form The uncertainty sets Ω_t are not sets of linear constraints due to the presence of the absolute value function in the constraints ∑_{ℓ=1}^{j} |ζ_ℓ^t| ≤ Γ_j, j ∈ T. There are several ways of converting the sets Ω_t into equivalent sets of linear constraints (see Ben-Tal et al. (2009)). Preliminary tests using different forms of conversion indicate that the best results are obtained by replacing ζ^t with For practical reasons, to reduce the size of the resulting model, we assume henceforward that ζ_j^t = 0 for j > t, which implies that constraints (2.10) and (2.11) can be disregarded for j > t. By applying the above linear transformation, one can use linear programming duality to reformulate each robust constraint and obtain the following linear program: (2.14) where the dual variables q_j^{tk} and r_j^{tk} are associated with constraints (2.10) and (2.11), respectively. In practice, when T is reasonably small, it can be interesting to rewrite D in a lower dimensional space by eliminating the variables r_j^{tk}. The resulting model is called the projected model and is given in Appendix B. Alternatively, one might improve numerical efficiency, albeit at the price of precision, by using a simpler set Ω̂_t such that Ω_t ⊆ Ω̂_t. In particular, the following form is a natural choice (see Remark 6): and leads to the following relation: where D̂ and D̂(λ) denote, respectively, D and D(λ) when Ω̂_t is considered instead of Ω_t. Observe that, since constraints (2.10) in the set Ω̂_t were disregarded for j > t, the version of the AARC approach equivalent to model D̂ consists of using the affine policy θ_t(ξ) = ν_0^t + (ν^t)⊤ξ with ν_j^t = 0 for j > t.
Remark 6. In the case of the B&T budgeted set, the set of constraints in the adversarial problem is given by where constraints (2.18) for ℓ < t are redundant in the presence of constraints (2.17). However, when constraints (2.17) are relaxed, constraints (2.18) are no longer redundant for ℓ < t and the corresponding Lagrangian relaxation may differ.
Proposition 7. Given any fixed λ such that ∑_{t∈T} λ^t = 0 and letting α^{tk} be defined accordingly, the following holds. The proof is similar to the proof of Proposition 1 in Bertsimas and Sim (2004), so it is omitted.
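Closed forms of this kind rest on the classical protection-function computation of Bertsimas and Sim (2004). The following sketch (illustrative names; single-budget case with nonnegative coefficients, not the paper's exact formulation) computes that inner maximum greedily and checks it against vertex enumeration:

```python
import math
from itertools import combinations


def protection(c, gamma):
    """Greedy optimum of max{ sum_j c_j z_j : 0 <= z_j <= 1, sum_j z_j <= gamma }
    for c_j >= 0: saturate the floor(gamma) largest coefficients and put the
    fractional remainder of the budget on the next largest one."""
    srt = sorted(c, reverse=True)
    k = min(int(math.floor(gamma)), len(srt))
    val = sum(srt[:k])
    if k < len(srt):
        val += (gamma - k) * srt[k]
    return val


def protection_bruteforce(c, gamma):
    # Enumerate the vertices of the feasible polytope: floor(gamma) entries
    # at 1 and at most one entry at the fractional part gamma - floor(gamma).
    n = len(c)
    k = min(int(math.floor(gamma)), n)
    frac = gamma - int(math.floor(gamma))
    best = 0.0
    for S in combinations(range(n), k):
        base = sum(c[i] for i in S)
        best = max(best, base)
        for i in set(range(n)) - set(S):
            best = max(best, base + frac * c[i])
    return best
```

For example, with deviations c = [3, 1, 2] and budget Γ = 1.5, both routines return 4.0 (the largest coefficient fully, plus half of the second largest).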
Remark 8. All the approximation models for R* presented in this section overestimate the cost associated with each first-stage solution u. Hence, those approaches may report poor bounds even for good solutions, and the following relation holds where u^J denotes the first-stage solution obtained with model J, for J ∈ {D, D̂, D̂(λ)}.

The case of a robust inventory problem
In this section we particularize the results of the previous section to the robust inventory problem that motivated this study and relate them with those known from the literature. We consider lot-sizing problems defined over a finite time horizon of n periods and define T = {1, . . . , n}. For each time period t ∈ T, consider the unit holding cost h_t, the unit backlogging cost b_t, and the unit production cost c_t. The demand in time period t is given by d_t. Define x_t as the inventory at the beginning of period t (x_1 is the initial inventory level); a negative x_t indicates a shortage. Variables u_t ≥ 0 indicate the quantity to produce in time period t. When the demand d_t is known and fixed we obtain a basic deterministic lot-sizing problem that can be modelled as follows: Here we consider the case where the demands d_t are defined by d_t := µ_t + δ_t z_t, for each t ∈ T, where µ_t and δ_t are the nominal demand and the maximum allowed deviation in period t, respectively, and the uncertain variables z_t belong to the B&T budgeted set: The results presented in this section can easily be extended to accommodate other practical aspects such as setup costs and/or other production constraints. In that case, the objective function is where y_t is the setup variable indicating whether there is a production setup in time period t, and S_t is the setup cost in time period t. A new set of constraints is also considered where P_t is an upper bound on the production quantity in period t. To keep the notation simple, and since all the theoretical results presented hold for both cases (with and without setups), hereafter in the derivation of the theoretical results we consider only the simplest case, where no setup costs (and no setup variables) are considered. For the computational aspects (Sections 3.3 and 4) the more general case with setups is considered.
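As an illustrative aid (a hypothetical helper, not part of the paper), the deterministic objective for a fixed production plan and demand vector can be evaluated by propagating the inventory balance, charging holding cost on positive end-of-period inventory and backlog cost on shortages:

```python
def plan_cost(u, d, c, h, b, x1=0.0):
    """Cost of plan u under demands d with unit production costs c,
    holding costs h, backlog costs b, and initial inventory x1."""
    cost, x = 0.0, x1
    for t in range(len(u)):
        cost += c[t] * u[t]          # production cost in period t
        x = x + u[t] - d[t]          # inventory balance: x_{t+1}
        cost += h[t] * max(x, 0.0)   # holding cost if inventory is positive
        cost += b[t] * max(-x, 0.0)  # backlog cost if there is a shortage
    return cost
```

For instance, producing (5, 5) against demands (4, 7) with c = h = 1 and b = 2 yields one held unit after period 1 and one backlogged unit after period 2, for a total cost of 13.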

The Bienstock and Özbay and the Bertsimas and Thiele approaches
First, we review two of the main approaches for robust inventory problems: the decomposition approach introduced by Bienstock and Özbay (2008) to solve the problem written as a min-max problem (BO approach), and the dualization approach employed by Bertsimas and Thiele (2006) (BT approach). Bienstock and Özbay (2008) consider the robust inventory problem as a min-max problem where, for a given production vector u, the demand d_t is picked by an adversary. The min-max formulation is the following: Problem (3.3) corresponds to the general adversarial problem introduced in Section 2.3 with Bienstock and Özbay (2008) solve the min-max problem using a decomposition approach where, in the master problem, a production planning problem is solved for a subset of demand scenarios, while in the subproblem (the adversarial problem) the worst-case scenario is found for the current production plan and added to the master problem. An FPTAS is proposed in Agra et al. (2016b), where a similar decomposition approach is used and the adversarial problem is solved by dynamic programming.
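The decomposition loop can be illustrated on a toy instance (all data and helper names hypothetical; scenarios are drawn from a finite pool and the master minimization is done by brute force over a small grid of plans, purely for illustration; a real implementation would solve MILPs):

```python
from itertools import product

# Finite demand pool standing in for the uncertainty set, and candidate plans.
scenarios = [(4.0, 7.0), (6.0, 5.0), (5.0, 6.0)]
plans = [tuple(p) for p in product(range(0, 13), repeat=2)]


def cost(u, d, c=(1.0, 1.0), h=(1.0, 1.0), b=(3.0, 3.0)):
    # Production + holding + backlog cost of plan u under demand scenario d.
    total, x = 0.0, 0.0
    for t in range(2):
        total += c[t] * u[t]
        x += u[t] - d[t]
        total += h[t] * max(x, 0.0) + b[t] * max(-x, 0.0)
    return total


def adversarial(u):
    # Subproblem: worst-case scenario (true cost) for a fixed plan u.
    return max(scenarios, key=lambda d: cost(u, d))


subset = [scenarios[0]]              # start with a single scenario
while True:
    # Master: best plan against the scenarios generated so far.
    u = min(plans, key=lambda p: max(cost(p, d) for d in subset))
    worst = adversarial(u)
    if worst in subset:              # no new scenario: u is min-max optimal
        break                        # over the finite pool
    subset.append(worst)

true_cost = cost(u, adversarial(u))
```

At termination the master value equals the true worst-case cost of the incumbent plan, which certifies min-max optimality over the pool; the loop needs at most as many iterations as there are scenarios.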
The dualization approach introduced by Bertsimas and Sim (2004) was adapted to the robust inventory problem by Bertsimas and Thiele (2006). The formulation is as follows where, for all t ∈ T, Notice that this approach is based on the supersets Ω̂_t, t ∈ T.

Lagrangian relaxation based approaches
To derive the Lagrangian relaxation of the adversarial problem (3.3), consider, for each time period t ∈ T, a copy v_j^t of each variable z_j with j ≤ t. That is, consider new variables v_j^t ∈ [−1, 1] which account for the deviation in period j affecting period t, t ≥ j, and impose the constraints With this set of equalities, the constraints ∑_{j=1}^{t} |z_j| ≤ Γ_t, t ∈ T, are replaced by the constraints ∑_{j=1}^{ℓ} |v_j^t| ≤ Γ_ℓ, 1 ≤ ℓ ≤ t ≤ n, and the following approximation for the problem R* is obtained.
Theorem 9. Model D, defined below as a minimization over the variables (u, λ, θ, q, p, r, s), is a tractable approximation for the problem R*. The proof of this theorem is given in Appendix C and follows directly from the application of the process described in Section 2 to the robust inventory problem. By replacing the sets Ω_t with the supersets Ω̂_t we obtain model D̂, which is used in the heuristics proposed in Section 3.3. Model D̂ corresponds to model D with the variables q_j^{tk} = 0 for all k ∈ {1, 2} and j, t ∈ T : j < t. The projected version of model D in a lower dimensional space can be written as follows: (3.17) Remark 11. The BT model can be obtained from the projected model (3.14)-(3.17) by setting λ = 0, q_j^{t1} = q_j^{t2} = 0 for j, t ∈ T : j < t, and q_t^{t1} = q_t^{t2}.
Although the number of constraints (3.15) and (3.16) in the projected model increases exponentially with the number n of time periods, most of these inequalities are redundant. In fact, for each k ∈ {1, . . . , t} such that ∑_{j=1}^{t} |π_j^t| = k, only one inequality (3.15) and one inequality (3.16) are non-dominated for each t ∈ T. The projected model can be solved using a Benders decomposition approach together with a separation algorithm for constraints (3.15) and (3.16), which can easily work by inspection. However, preliminary results reported in Section 4.1 show that many of such constraints need to be included.
The next proposition provides an efficient way to solve model D̂ when the multipliers are fixed. This result will be used in the next section to design efficient heuristics to find solutions with lower true cost.
Proposition 12. For fixed multipliers λ, D̂(λ) is given as follows The proof is a direct application of Proposition 7, so it is omitted.

Heuristic schemes to improve the quality of solutions
Among all the models considered in this paper, model D, corresponding to the AARC approach, is the one that provides bounds closest to R*. Another important concern is to obtain solutions u such that R(u) is close to R*, that is, solutions with the best possible true cost. From a practical perspective, obtaining such solutions ū is more relevant than obtaining good bounds. Taking this more practical orientation into account, in this section we develop iterative heuristic solution approaches, based on the interpretation of the Lagrangian multipliers as penalties associated with constraint violations, to obtain solutions with a lower true cost. With that purpose, for a given vector of multipliers, the value of the uncertain variables v_t^j, 1 ≤ t ≤ j ≤ n, must be computed at each iteration. Since such computation can easily be done by inspection in model D̂ but not in model D, we use model D̂ rather than model D. Besides, there are two more reasons to use model D̂ instead of model D. First, model D̂ is computationally easier to solve when the multipliers are fixed (see Proposition 12), and second, the results presented in the computational section for the instances solved to optimality suggest that there are no significant differences between the true cost of the solutions provided by models D̂ and D.
Given that model D̂ (and also model D) is a pure linear model, one would expect to be able to solve it to optimality even for large size instances. However, when other aspects are included, the model can quickly become very large and its direct use can be prohibitive. In order to take advantage of model D̂ we derive heuristic schemes that iteratively fix the value of the new variables (multipliers), leading to easier subproblems. The proposed heuristics are tested on the inventory problem with production setup costs, that is, when the objective function is given by (3.1) and the set of constraints (3.2) is added.

Guided Iterated Local Search algorithm
The first heuristic approach that we propose is called Guided Iterated Local Search (GILS). The GILS heuristic can easily be used to solve other complex problems and is inspired by the classical Iterated Local Search (ILS) heuristic based on the local branching scheme proposed by Fischetti and Lodi (2003). ILS heuristics have performed well in complex inventory problems with uncertainty, such as the Maritime Inventory Routing problem (Agra et al. 2016a, 2018a) and the Production Inventory problem (Agra et al. 2018b).
The main idea of the ILS heuristic is to restrict the search space of some integer variables (the setup variables in our case) to a neighbourhood of a given solution. For a given positive integer parameter ρ, define the neighborhood N(ȳ, ρ) of ȳ as the set of feasible solutions of model D satisfying the additional local branching constraint (see Fischetti and Lodi (2003)): (3.18) The neighborhood N(ȳ, ρ) is the set of solutions that differ from the current solution ȳ in at most ρ of the y_t variables. The linear constraint (3.18) limits to ρ the total number of binary variables y_t flipping their value with respect to the solution ȳ, either from 1 to 0 or from 0 to 1. The GILS heuristic is a modified version of the ILS heuristic and can be seen as an improved version in which the search space is further reduced through the inclusion of new constraints on the Lagrangian multipliers. Motivated by the fact that the Lagrangian multipliers are used to penalize the deviations between the copies of the uncertain variables of the adversarial problem, we impose, at each iteration, two types of constraints to guide the value of the multipliers as follows.
At each iteration, the current values of the uncertain variables v_t^j and of the Lagrangian multipliers are denoted by v̄_t^j and λ̄_t^j, for all 1 ≤ t ≤ j ≤ n, respectively. To start the GILS heuristic, an initial solution is required. Such a solution can be found by solving model D̃ with the Lagrangian multipliers fixed to their values in the linear relaxation of model D̃. The full algorithm is described in Algorithm 1.

Algorithm 1 Guided Iterated Local Search. (The full listing is omitted here; at each iteration the algorithm computes the values of the uncertainty variables v̄_t^j, adds either constraints of type I or of type II to the model according to a predefined rule, solves the restricted model, and removes all the added constraints, repeating until a time limit of β seconds or a maximum number of iterations is reached.)

Steps 5 to 7 are used to guide the values of the Lagrangian multipliers as penalties for variable deviations. By ignoring Steps 5 to 7, Algorithm 1 becomes the classical ILS heuristic, which will also be tested in the computational section. In Step 6, several specific rules can be used to choose, at each iteration, the type of constraints added to the problem; some of those rules are discussed in the computational section. It is important to notice that the purpose of Steps 5 to 7 is not to accelerate the algorithm. Moreover, we may even expect to obtain worse bounds (based on the value of model D̃) with the GILS heuristic than with the ILS heuristic, since we are restricting the search space. By penalizing the differences between the copies of the uncertain variables, we aim to force the choice of a neighbour solution based on a cost estimate closer to the true one. With this technique we expect to obtain better quality solutions (with true cost close to the cost of the optimal solution).
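The iterative structure of the GILS heuristic can be sketched as follows; the callback `solve_restricted_model`, the `Sol` record, and the k-block alternation of constraint types are illustrative assumptions rather than the paper's exact implementation.

```python
import time
from collections import namedtuple

Sol = namedtuple("Sol", "true_cost")  # hypothetical solution record

def gils(solve_restricted_model, initial_solution, rho=2, max_iters=15,
         beta=3600, k=2):
    """Skeleton of the Guided Iterated Local Search loop. The callback
    solve_restricted_model stands in for solving the model restricted by
    the local branching constraint (3.18) and the guiding constraints
    (type I or type II) on the Lagrangian multipliers."""
    best = current = initial_solution
    start = time.time()
    for it in range(max_iters):
        if time.time() - start > beta:          # overall time limit
            break
        # Alternate k iterations with type-I constraints, then k with type II
        ctype = "I" if (it // k) % 2 == 0 else "II"
        current = solve_restricted_model(current, rho, ctype)
        if current.true_cost < best.true_cost:  # keep the best true cost seen
            best = current
    return best
```

Note that the loop moves to the new solution even when its cost is worse, since the guiding constraints can deliberately degrade the bound while steering the multipliers.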

Subgradient Optimization method
Since model D̃ is based on a Lagrangian relaxation, we adapt the subgradient method, frequently used to solve the dual problem of a Lagrangian relaxation, to solve model D̃ heuristically. The Subgradient Optimization (SO) method that we propose depends on two parameters given a priori, It_Lim and φ, and uses the following additional functions: R(u), which computes the true cost of a given production policy u; and C_deviations(λ), which, given a vector λ, computes the values v̄ of the deviation variables v.
The SO method starts by solving the linear relaxation of model D̃ to obtain the initial values for the Lagrangian multipliers λ. The optimal value of the linear relaxation is used to define a lower bound for the problem. In the main loop (Steps 4 to 28 of Algorithm 2), model D̃ is solved with updated information, and the corresponding bound as well as the true cost of the production policy are computed and compared with the current best values. The values of the Lagrangian multipliers are updated in Steps 21 to 25 according to the interpretation of the multipliers as penalties associated with the violation of constraints (3.5), taking into account the values of the variables v_t^j and v_t^t. At each iteration, model D̃ is solved with the Lagrangian multipliers fixed and all the remaining variables free; however, whenever a limit number of iterations (It_Lim) is reached without obtaining a better bound or a better solution (a solution with lower true cost), the multipliers are left free and the setup variables are fixed. This strategy is used to escape from local minima and hence explore new regions of the search space. (The full listing of Algorithm 2 is omitted; in particular, Step 11 solves the integer model D̃ with the imposed constraints, Step 12 sets Bound equal to the objective function value of model D̃, and the values v̄_t^j of the deviation variables v_t^j, for all t, j ∈ T such that t ≤ j, are computed using the function C_deviations(λ).)

Step 22 of Algorithm 2 computes the subgradient s_t^j := v_t^t − v_t^j for all t, j ∈ T such that t < j.
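A minimal sketch of the multiplier update driven by this subgradient, under our own dictionary-based indexing assumptions (the exact step rule in Algorithm 2 may differ):

```python
def update_multipliers(lam, v, phi=1.0):
    """Move each multiplier along its subgradient component. lam and v are
    dictionaries keyed by (t, j) pairs, and phi is the step parameter; this
    layout is an illustrative assumption, not the paper's data structure."""
    new_lam = {}
    for (t, j), value in lam.items():
        s = v[(t, t)] - v[(t, j)]          # subgradient s_t^j = v_t^t - v_t^j
        new_lam[(t, j)] = value + phi * s  # penalize the observed deviation
    return new_lam
```

A positive deviation between the copies v_t^t and v_t^j thus increases the corresponding penalty λ_t^j, consistent with the interpretation of the multipliers as penalties for violating constraints (3.5).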

Computational experiments
This section reports the computational experiments carried out to compare the BO approach, the BT approach, the Lagrangian Dual approach based on model D̃ (denoted by LD), and the approach based on model D. Since we have proved that this last approach coincides with the affinely adjustable robust counterpart approach, it is hereafter denoted the AARC approach. A model equivalent to the AARC can be obtained by considering the dual reformulated model proposed in Bertsimas and de Ruiter (2016) solved through affine decision rules. However, preliminary results not reported here showed that such a reformulation is not beneficial in our case, since the computational times associated with this model are higher than the ones obtained with the AARC model.
In Section 4.1 we report the results for medium size lot-sizing instances with 30 time periods, for which all the optimal solutions can be obtained, while in Section 4.2 larger instances with up to 100 time periods are considered. Table 1 displays the total number of constraints and the total number of non-integer variables of model D with 30 and 100 time periods. The reported results cover the cases where the setup costs are either considered or not (column Setup), and the cases where the Lagrangian multipliers are either free or fixed (column #Multipliers). In column #Variables, the numbers in parentheses indicate the total number of integer variables in model D associated with the use of setup costs. Notice that the number of constraints of model D̃ is exactly the same as that of model D, and its number of variables is approximately 2/3 of the number of variables in model D.

The computational experiments use instances generated as follows. For each time period t ∈ T, the nominal demand µ_t and the maximum allowed deviation δ_t are randomly generated in [0, 50] and [0, 0.2µ_t], respectively. The maximum number of deviations in period t is computed using the relation Γ_t = Γ_{t−1} + τ, with τ varying in {0, 1}, and Γ_0 is assumed to be zero. The initial stock level at the producer, x_1, is randomly generated between 0 and 30, and the production capacity P_t is constant and equal to ∑_{t=1}^{n} µ_t. The production, holding and backlog costs are the same as those used by Bertsimas and Thiele (2006), i.e., c_t = 1, h_t = 4, b_t = 6, respectively, for all t ∈ T.

Throughout this section, we consider two variants of the robust inventory problem (with and without setup costs). Production setup costs occur in many practical inventory problems. However, the main goal of using instances with setup costs is to obtain harder instances, since the inclusion of integer setup variables results in a nonlinear model.
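The instance generation scheme just described can be sketched as follows; the function name and the return layout are ours.

```python
import random

def generate_instance(n, tau, seed=0):
    """Random instance in the style described above (names are ours):
    mu are nominal demands, delta the maximum deviations, Gamma the
    cumulative deviation budgets with Gamma_t = Gamma_{t-1} + tau."""
    rng = random.Random(seed)
    mu    = [rng.uniform(0, 50) for _ in range(n)]        # mu_t in [0, 50]
    delta = [rng.uniform(0, 0.2 * m) for m in mu]         # delta_t in [0, 0.2*mu_t]
    Gamma = []
    g = 0                       # Gamma_0 = 0
    for _ in range(n):
        g += tau                # tau in {0, 1}
        Gamma.append(g)
    x1 = rng.uniform(0, 30)     # initial stock level
    P  = sum(mu)                # constant production capacity
    c, h, b = 1, 4, 6           # costs as in Bertsimas and Thiele (2006)
    return mu, delta, Gamma, x1, P, (c, h, b)
```

For example, `generate_instance(30, 1)` yields a 30-period instance with budgets Γ = 1, 2, ..., 30.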
In order to compute the true cost R(u) of a given solution u, preliminary tests were conducted using four approaches: the dynamic program proposed by Bienstock and Özbay (2008), the dynamic program proposed by Agra et al. (2016b), the mixed integer formulation with big-M constraints presented by Gorissen and den Hertog (2013), and the decomposition approach proposed by Bienstock and Özbay (2008). The dynamic program proposed by Bienstock and Özbay (2008) provided, in general, the best results and solved all the adversarial problems in less than one second for instances with 100 time periods. Hereafter, for all the approaches considered in the computational experiments, the true cost of a solution is computed using the dynamic program proposed by Bienstock and Özbay (2008).
All tests were run using a computer with an Intel Core i7-4750HQ 2.00 GHz processor and 8 GB of RAM, and were conducted using the Xpress-Optimizer 28.01.04 solver with the default options.

Computational experiments for medium size instances
In this subsection all the reported results are based on instances with 30 time periods. Preliminary experiments on a set of 10 instances were conducted to compare the performance of model (3.6)-(3.13) against the projected model (3.14)-(3.17). The second model is solved through a Benders decomposition procedure, with a separation scheme for constraints (3.15) and (3.16). The average running time was 721 seconds and the average number of iterations required was 552, whereas with model (3.6)-(3.13) the average running time was lower than 1 second. Note that model D could also be solved using the decomposition procedure proposed by Ardestani-Jaafari and Delage. However, preliminary experiments indicate that its performance is similar to the one observed when Benders decomposition is used to solve the projected model (3.14)-(3.17), since a large number of iterations is needed. Therefore, henceforward, we consider only model (3.6)-(3.13).

Now we analyse the impact of the setup cost on the presented approaches. Figures 1 to 4 report average results obtained for 16 different setup costs with values in {0, 10, . . . , 150}. For each setup cost, one hundred instances were randomly generated considering different samples of the nominal demand values. Since all results are presented through their average values, Mann-Whitney hypothesis tests are applied to detect significant differences between the approaches; a significance level of 1% is used in all tests. Figure 1 displays the average cost of the solutions obtained by the BO approach (the optimal value) and the average objective function values corresponding to the LD, AARC and BT approaches (which are upper bounds on the value of the BO solution).
The points marked with squares (LD(u_BT)) represent the average cost of the solutions obtained by the LD approach for the production policy obtained by the BT approach, i.e., after obtaining the solution of the BT approach, the values of the production variables u_t, t ∈ T, are fixed and model D̃ is solved. The obtained results suggest that the BT approach is too conservative, since the quality of the upper bound provided by this approach degrades rapidly as the setup cost increases. This is not the case for the LD and AARC approaches, whose upper bounds remain close to the cost of the solution obtained by the BO approach even when the setup cost increases. In fact, for large setup costs, the BT approach provides a bound that is up to 28% larger than the true cost of the solutions provided by the BO approach, while the gaps associated with the LD and AARC approaches are at most 6% and 3%, respectively, over all the setup costs tested. When comparing the displayed lines associated with LD(u_BT) and BT we observe that, in general, there is a gap (of up to 6%) between the corresponding bounds. This means that the optimal values of the Lagrangian multipliers for the production policy obtained by the BT approach are usually different from zero (the value implicitly used in the BT approach). Hence, a better choice of the Lagrangian multipliers can be used to improve the quality of the upper bound provided by the BT approach.
A prevailing conclusion for all the setup costs tested is that the LD, the AARC and the BT approaches lead to solutions with average upper bounds significantly higher than the optimal value provided by the BO approach. Furthermore, the average upper bounds obtained by the BT approach are significantly higher than the ones obtained by both the LD and the AARC approaches for setup costs greater than 10, and significantly greater than the ones obtained by the LD(u_BT) approach for high setup costs (greater than 110).

Figure 2 reports the average computational time in seconds required by each approach to find the solution, as a function of the setup cost. The computational time of the BT approach is always lower than one second. It can be observed that the exact BO approach is on average twice as fast as the LD approach, while the computational time required by the AARC approach is approximately twice that required by the LD approach. The average time required by the BO approach to solve each master problem ranges from 0 to 12 seconds, while the computational time required to solve each adversarial problem is always lower than one second.

Figure 3 displays the average true cost of the production policies determined by the approaches LD (R(u_LD)), AARC (R(u_AARC)) and BT (R(u_BT)), and compares them with the cost of the optimal production policy obtained by the BO approach. Note that these values are not the upper bounds obtained directly by the LD, AARC and BT approaches; they are the true costs obtained by solving the adversarial problem for each solution obtained with the indicated approach. The behaviour of the true cost of the production policies obtained by the LD, AARC and BT approaches resembles the trend observed for the upper bounds. However, when the setup costs are not considered, the true cost of the production policy obtained by the BT approach is, in general, lower than the one obtained by both the LD and AARC approaches.
It is interesting to note that the true costs of the solutions determined by the LD and AARC approaches are very close. In fact, the Mann-Whitney hypothesis tests reveal that, in terms of the true cost of the production policy, the differences between the two approaches are not significant. Moreover, the average true costs of the production policies determined by the LD and AARC approaches are not significantly different from the average costs of the optimal production policies. However, the average true cost of the production policies determined by the BT approach is significantly greater than that determined by the LD and AARC approaches for setup costs greater than 30.
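As an illustration of the statistical comparison used throughout this section, the Mann-Whitney U statistic can be computed as below. The pure-Python helper and the made-up cost arrays are our own sketch; in practice a library routine such as `scipy.stats.mannwhitneyu` (two-sided, compared against the 1% significance level) would be used.

```python
def mann_whitney_u(a, b):
    """U statistic for two samples: the number of pairs (x, y) with x from a
    and y from b such that x > y, counting each tie as one half."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Made-up solution costs for two approaches over 5 matched instances; a U
# close to len(a) * len(b) suggests the first sample tends to be larger.
costs_bt = [1050.0, 1120.0, 990.0, 1075.0, 1160.0]
costs_ld = [1000.0, 1040.0, 985.0, 1015.0, 1070.0]
u = mann_whitney_u(costs_bt, costs_ld)
```

The test is distribution-free, which makes it suitable for comparing average costs over randomly generated instances without a normality assumption.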
A key conclusion from Figure 3 is that, when setup costs are not considered, the average true cost of the solutions obtained using the BT approach may give a fair approximation of the optimal value. However, when setup costs are high, the BT approach can give poor bounds and, beyond that, it can also produce bad solutions (with costs up to 16% larger than the optimal true costs). This may indicate that for more complex inventory problems the overestimation of costs in the BT approach may lead to poor decisions.

Figure 4 displays the average number of production periods associated with the production policies determined by the BO, the LD, the AARC and the BT approaches. This figure helps to explain the results displayed in Figures 1 and 3, since the average numbers of production periods in the LD, AARC and BO approaches are similar. Notice that even when the setup cost is high, the number of production periods in the BT approach remains high, which may be justified by the fact that the BT approach tends to overestimate the contribution of the inventory costs in the objective function. The differences between the average numbers of production periods in the LD, AARC and BO approaches are not significant for any of the setup costs used, while such differences between the BT and BO approaches are significant for all the setup costs tested.
We also analyse the performance of the LD, AARC and BT approaches with respect to the maximum number of deviations Γ_n in the last time period. The obtained results are reported in Appendix D.

Computational experiments for large size instances
In this section we report the computational results for large instances with up to 100 time periods. For these instances the exact BO approach cannot be solved to optimality within a reasonable time limit; preliminary results showed that even for a small number of scenarios the master problem cannot be solved within eight hours. Similar difficulties were observed for a related lot-sizing problem in Attila et al. (2017). Furthermore, as the setup costs increase, model D (the one used in the AARC approach) becomes computationally harder to solve to optimality. For the instances with 100 time periods and setup costs greater than 70 we are not able to solve model D within a time limit of eight hours.

Table 2 reports the average optimality gaps obtained with model D over a set of 10 instances with 100 time periods, considering a time limit of two hours, for four different setup costs. Table 2 shows that the instances become more difficult to solve as the setup cost increases. Results not reported here allow us to conclude that the optimal solution of the LD approach can be obtained in less than 2 hours for instances with up to 55 time periods, while for the AARC approach only instances with up to 40 time periods can be solved to optimality within 2 hours. Hence, the main goal of this section is to test heuristic approaches that can be used on large size inventory models to obtain tight upper bounds as well as good solutions (with true cost close to the optimal value).
When the Lagrangian multipliers are fixed, models D and D̃ can be solved quickly, even if setup costs are considered. In particular, model D̃ with the multipliers fixed to zero, which corresponds to the BT approach, can be solved in less than 5 seconds. An initial value for the Lagrangian multipliers can easily be obtained by solving the linear relaxation of models D and D̃, respectively. Figure 5 displays, for each setup cost in {0, 10, ..., 150}, the average upper bound values over 100 randomly generated instances with 100 time periods obtained by the LD and AARC approaches when all the multipliers are fixed to their values in the optimal linear relaxations of models D̃ and D, respectively. The average upper bound values obtained by the BT approach are also displayed. Figure 5 shows that, when the Lagrangian multipliers are fixed to their linear relaxation values, the LD and AARC approaches behave in opposite ways relative to the BT approach: while the gap between the lines associated with the AARC and BT approaches tends to decrease as the setup cost increases, the gap between the lines associated with the LD and BT approaches tends to increase. The first gap varies between 2.6% and 3.0%, while the second varies between 0.7% and 6.4%.
These results show that, as the setup cost increases, tighter upper bounds can be obtained in the LD approach by fixing the Lagrangian multipliers to their values in the linear relaxation of model D̃, instead of setting all the multipliers to zero (as in the BT approach). Furthermore, the difference between the computational times required to compute the upper bounds in the two cases corresponds to the time required to solve the linear relaxation of model D̃, which is always lower than 7 seconds for all the tested instances. This means that, in general, for large instances, a better bound than the one obtained by the BT approach can be quickly obtained by using the optimal multipliers of the linear relaxation of model D̃.
From the theoretical study we know that the upper bound corresponding to the optimal solution of the AARC approach is lower than or equal to the upper bound obtained by the LD approach. Moreover, the value of the linear relaxation is lower for the AARC approach than for the LD approach. Nevertheless, Figure 5 shows that, when the multipliers are fixed to their linear relaxation values, the upper bounds provided by the LD approach tend to become better than those obtained with the AARC approach as the setup cost increases.

Evaluation of the proposed heuristics
In this section we analyse the performance of both the GILS heuristic and the SO method presented in Section 3.3. It is important to keep in mind that these two heuristics were specifically designed to generate better solutions, not necessarily better bounds resulting from the objective function values of the considered models. As reference methods we use the ILS heuristic and the heuristic that consists of solving the full model D with a time limit of one hour; the latter will be called the Full Model heuristic (FM heuristic).

Tuning of the parameters
We consider two variants of the ILS heuristic, one based on model D̃ and the other based on model D, denoted by ILS_D̃ and ILS_D, respectively. Both heuristics correspond to Algorithm 1 described in Section 3.3.1 without Steps 5 to 8. However, instead of imposing a time limit or a maximum number of iterations, the algorithm stops when no improvement in the objective function value is observed. In both heuristics, the parameter ρ was set to 2, since with this value, for the instances with 100 time periods, almost all the problems arising in each iteration of the ILS heuristics were solved to optimality in less than 150 seconds (the time limit imposed in each iteration).
For both the GILS heuristic and the SO method, a set of 20 randomly generated instances with 30 time periods was used to tune the values of the parameters. Since the GILS heuristic imposes additional constraints on the Lagrangian multipliers, the objective function value in a given iteration can be worse than the one obtained in the previous iteration, so it does not make sense to stop the algorithm when there is no improvement in the objective function value. Hence, the stopping criterion for the GILS heuristic is defined using the number of iterations (limited to 15). Three rules were tested to choose the type of constraints added to the problem in each iteration: i) add only constraints of type I; ii) add only constraints of type II; and iii) successively add constraints of type I k times and then constraints of type II k times (with k = 1, 2, 3). Taking into account both the upper bounds and the true cost of the solutions, the best results were obtained with the third rule and k = 2, so this is the strategy used henceforward. To compare the GILS heuristic with both variants of the ILS heuristic we use the same time limit in each iteration (150 seconds) and also ρ = 2.
For the SO method, the values {0.25, 0.5, 1, 1.5, 2} of φ and the values {5, 10, 15, 20} of the parameter It_Lim were tested. The best results were obtained with φ = 1 and It_Lim = 10. The time limit imposed on the SO method is 600 seconds.

Comparing upper bounds and true costs
Here we compare the performance of the heuristics in terms of the setup cost (for instances with 100 time periods) and also in terms of the number of periods (for instances with a setup cost equal to 150). Tables 3 and 4 present the average upper bounds obtained by each heuristic tested as well as the corresponding average computational times in seconds. Each line of the tables reports average values obtained for a set of 10 instances. The best average upper bounds obtained for each set of instances are marked in bold, and the numbers in parentheses next to the bounds indicate the number of best bounds obtained by the corresponding heuristic.

The results presented in Tables 3 and 4 reveal that the best upper bounds are obtained by the FM and ILS_D heuristics. These results agree with the theoretical study, since the best upper bounds are obtained by the heuristics based on model D. All the instances with 20 time periods and almost all the instances with 40 time periods are solved to optimality by the FM heuristic, which explains why the best results for these instances are obtained with the FM heuristic. However, as the number of periods increases, the best upper bounds are in general obtained by the ILS_D heuristic.
In Tables 5 and 6 we compare the heuristics in terms of the true cost of the obtained solutions. As in the previous tables, the best average results are marked in bold and the number of best solutions (with the best true cost) appears in parenthesis.
At each iteration of the ILS heuristics, the GILS heuristic and the SO method, the true cost of the current solution is obtained and the best value found is reported. In the FM heuristic, the true cost of every integer solution found during the branch-and-bound process is computed and the best true cost is reported.

The results presented in Tables 5 and 6 clearly suggest that the best average true costs are in general obtained by the SO method. Only for 9 out of the 110 instances presented in these two tables were the best solutions not found by the SO method. Furthermore, the computational time of the SO method (600 seconds) is much lower than that required by the remaining heuristics. The SO method allows us to obtain solutions with true costs that are, on average, 1.8% lower than the ones obtained with the FM heuristic (the heuristic closest to the AARC approach). Hence, among all the heuristics tested, the SO method is the most efficient at obtaining good solutions (with low true costs).
As expected, the upper bound values obtained by both ILS heuristics are better than the ones obtained by the GILS heuristic. However, in terms of the true cost of the obtained production policies, the best results are in general obtained using the GILS heuristic. In fact, among the 60 instances with 100 time periods considered in Table 5, 45 of the best solutions were found by the GILS heuristic, while 9 and 6 were found by the ILS_D and ILS_D̃ heuristics, respectively. Among the 50 instances with a setup cost equal to 150 considered in Table 6, 38 best solutions were found by the GILS heuristic, while 9 and 3 were found by the ILS_D and ILS_D̃ heuristics, respectively.

Looking deeper into the SO method
Since the true cost of the solutions obtained with the BT approach is much higher than that obtained by all the heuristics tested, such results were not reported in Tables 5 and 6. However, in Table 7 we report gaps showing the improvements in the true cost of the solutions obtained by the SO method over the true cost of the solutions obtained by the BT approach. Columns 2 to 7 refer to the instances presented in Table 5, those with 100 time periods and setup costs varying between 25 and 150, while columns 8 to 12 refer to the instances presented in Table 6, the ones with a setup cost equal to 100 and time periods varying between 20 and 100. Recall that the SO method starts from the solution obtained with model D̃ with the multipliers fixed to their values in the linear relaxation of that model. Hence, the line Initial Solution reports the average gaps between the true cost of the initial solution used in the SO method and the true cost of the solution obtained by the BT approach, while the line Best Solution reports the average gaps between the true cost of the best solution found by the SO method and the true cost of the solution obtained by the BT approach. Table 7: Average gaps (in percentage) between the SO method and the BT approach in terms of the true cost of the solutions.
We observe in Table 7 that the gap between the SO method and the BT approach, in terms of the true cost of the solutions, increases as the setup cost increases and decreases as the number of periods increases. For the hardest instances, those with 100 time periods and setup cost equal to 150, the BT approach provides solutions with true costs that are 7.9% larger than those obtained by the SO method.
Finally, in order to compare the quality of the solutions generated by the SO method with those resulting from the AARC method solved to optimality, we report in Table 8 the average optimality gaps associated with both the SO method and the AARC approach in terms of the true cost of the solutions, for instances with n ∈ {10, 20, 30, 40} time periods (those for which the AARC method can be solved to optimality within a reasonable amount of time). For each value of n, 25 instances were used. The numbers in parentheses next to the gaps indicate the number of best solutions obtained by the corresponding method. The average gaps, in percentage, were computed according to the formula

100 × (R(u_J) − R(u*)) / R(u*),

where u* is the optimal solution (obtained by the BO approach) and u_J is the solution obtained by approach J, with J = AARC or J = SO. Table 8 suggests that, for the instances solved to optimality, the best solutions are on average obtained by the SO method, since the gaps associated with this approach are lower than those associated with the AARC approach. Furthermore, the number of best solutions found is greater for the SO method than for the AARC approach.
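The optimality gap computation described above can be sketched as follows; the exact formula is our reconstruction from the definitions of u* and u_J given in the text.

```python
def optimality_gap(true_cost_j, true_cost_opt):
    """Percentage gap of approach J's true cost over the optimal true cost,
    i.e. 100 * (R(u_J) - R(u*)) / R(u*); our reading of the paper's formula."""
    return 100.0 * (true_cost_j - true_cost_opt) / true_cost_opt
```

For example, an approach whose solution has a true cost of 103 against an optimum of 100 has a gap of 3%.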

Conclusion
In this paper we consider RO min-max problems with decomposable functions. Based on the dual Lagrangian problem resulting from a Lagrangian relaxation of the reformulation of the adversarial problem, we provide a compact formulation to approximate the true min-max problem and show that the Bertsimas and Thiele dualization approach is a particular case of this approach with the multipliers equal to zero. Additionally, we show that the new dual Lagrangian formulation coincides with an affine approximation.
The theoretical results are applied to the robust inventory problem where the demands are uncertain and the uncertain variables belong to the B&T budgeted set. Computational results have shown that, when other complicating aspects such as setup costs are present, the classical dualization approach of Bertsimas and Thiele (2006), by overestimating the costs, can provide poor bounds and poor solutions. The dual Lagrangian formulation, which coincides with an affine approximation model, leads to bounds closer to the true min-max value even for those instances where the dualization of Bertsimas and Thiele (2006) provides its worst bounds. However, although the dual Lagrangian formulation leads to tractable models, their size can be too large for them to be solved to optimality for real size instances. Taking advantage of viewing such models from the perspective of Lagrangian duality theory, we propose heuristic approaches that treat the new multipliers as penalties for the violation of the constraints of the adversarial problem; such penalties penalize the overestimation of the true cost of each feasible solution. Using this idea, we introduce a Guided Iterated Local Search heuristic and a Subgradient Optimization method to solve large size inventory models. The Subgradient Optimization method proved efficient at obtaining better solutions than those obtained using the other approximation approaches, including the dual Lagrangian formulation.
In Figures 6 and 7 the average gap between a given upper bound UB and the optimal value BO obtained by using the BO approach is displayed. The lines associated with LD, AARC and BT represent the average gaps corresponding to the upper bounds obtained by the LD, AARC and BT approaches, respectively. The lines associated with R(u_LD), R(u_AARC) and R(u_BT) represent the average gaps corresponding to the true cost of the solutions obtained by the LD, AARC and BT approaches, respectively. In Figure 6 the setup costs are not considered, while in Figure 7 a setup cost of 150 is considered. In both cases, the average gap associated with the LD and AARC approaches is always lower than 10% and 7%, respectively, while for the BT approach the gap can reach 28%. In particular, in our experiments, for the box-constrained case, there is no gap associated with the upper bounds obtained by the LD and AARC approaches. In general, the average gap associated with the true cost of the solutions determined by the LD and AARC approaches tends to decrease as the number of deviations increases, and it is zero for the box-constrained case.