Efficient lower and upper bounds for the weight-constrained minimum spanning tree problem using simple Lagrangian based algorithms

The weight-constrained minimum spanning tree problem (WMST) is a combinatorial optimization problem for which simple but effective Lagrangian based algorithms have been used to compute lower and upper bounds. In this work we present several Lagrangian based algorithms for the WMST and propose two new ones, one of which incorporates cover inequalities. A uniform framework for deriving approximate solutions to the WMST is presented. We undertake an extensive computational study comparing these Lagrangian based algorithms and show that they are fast and yield small integrality gaps. The two proposed algorithms obtain good upper bounds, and one of them obtains the best lower bounds for the WMST.


Introduction
Consider an undirected complete graph G = (V, E), with node set V = {0, 1, …, n − 1} and edge set E = {{i, j}, i, j ∈ V, i ≠ j}. Associated with each edge e = {i, j} ∈ E consider a positive integer cost c_e and a positive integer weight w_e. The Weight-constrained Minimum Spanning Tree problem (WMST) is to find a spanning tree T = (V, E_T) in G, E_T ⊂ E, of minimum cost C(T) = ∑_{e∈E_T} c_e and with total weight W(T) = ∑_{e∈E_T} w_e not exceeding a given limit W. Adding such a weight constraint (the total tree weight W(T) cannot exceed a given limit W) to the Minimum Spanning Tree problem (MST) turns the constrained problem into an NP-hard combinatorial optimization problem (Aggarwal et al. 1982; Yamada et al. 2005).
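As a minimal illustration of the two quantities involved, a minimum-cost spanning tree can be built with Kruskal's algorithm and then checked against the weight limit. This is our own sketch, not the authors' implementation; the edge tuple format (u, v, cost, weight), the toy data and the helper name kruskal_mst are assumptions made for the example.

```python
# Illustrative sketch: Kruskal's algorithm for a minimum-cost spanning tree,
# followed by the WMST feasibility check W(T) <= W. Edge format (u, v, cost,
# weight) and all names are our own choices, not the paper's code.

def kruskal_mst(n, edges, key):
    """Spanning tree on nodes 0..n-1 minimizing the sum of key(e)."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    tree = []
    for e in sorted(edges, key=key):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:              # edge joins two components: keep it
            parent[ru] = rv
            tree.append(e)
    return tree

edges = [(0, 1, 4, 9), (0, 2, 1, 8), (1, 2, 2, 1), (1, 3, 5, 3), (2, 3, 3, 7)]
Tc = kruskal_mst(4, edges, key=lambda e: e[2])   # minimize cost c_e
C = sum(e[2] for e in Tc)                        # C(T_c)
W_Tc = sum(e[3] for e in Tc)                     # W(T_c)
print(C, W_Tc, W_Tc <= 13)                       # here W(T_c) = 16 > W = 13
```

Running the same routine with key=lambda e: e[3] yields the minimum-weight tree T_w used later in the paper.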
The WMST appears in several real applications where the weight restrictions are mainly concerned with a limited budget on installation/upgrading costs. In this case, the weight w_e represents the installation/upgrading cost of the link e = {i, j} ∈ E and c_e represents the link nominal cost/length. A classical application arises in the areas of communication networks and network design, in which information is broadcast over a minimum spanning tree, and is related to the upgrade and/or design of physical systems under a pre-specified budget restriction (Henn 2007).
The WMST has received several different designations. It was first mentioned in Aggarwal et al. (1982) as the MST problem subject to a side constraint, where MST stands for Minimal Spanning Tree. Besides WMST, the most common designation is knapsack-constrained MST; some authors refer to it as a resource-constrained MST.
Exact and approximation algorithms have already been proposed for the problem. Exact algorithms that use a Lagrangian relaxation to approximate a solution, combined with a Branch and Bound strategy, were proposed by Aggarwal et al. (1982) and by Shogan (1983). Jörnsten and Migdalas (1988) propose a Lagrangian Decomposition scheme in which, through duplication of variables, two subproblems, an MST and a Knapsack problem, have to be solved. Approximation schemes were proposed by Ravi and Goemans (1996), a polynomial-time approximation scheme; by Xue (2000), a primal-dual algorithm; by Hong et al. (2004), a bicriteria scheme; and by Hassin and Levin (2004), an improvement of the algorithm proposed by Ravi and Goemans (1996). A compilation of some results and existing algorithms to solve the problem can be found in Henn (2007). Requejo et al. (2010) describe and compare, from the computational point of view, several Integer Linear Programming formulations for the WMST. Recently Agra et al. (2011, 2016) present valid inequalities for the WMST; the family of implicit cover inequalities is introduced and a lifting algorithm is discussed.
A common related approach is to include the weight of the tree as a second objective instead of a hard constraint. The resulting problem is the bicriteria/biobjective spanning tree problem (see Andersen et al. 1996; Hamacher and Ruhe 1994; Hong et al. 2004; Ramos et al. 1998; Sourd and Spanjaard 2008; Steiner and Radzik 2008, among many others). In Aggarwal et al. (1982) certain properties of an optimal solution are established considering a bicriteria spanning tree.
In this work we describe and compare, from the computational point of view, several Lagrangian based algorithms for the WMST. Similar Lagrangian based algorithms have been used in several works on constrained shortest path problems (Handler and Zang 1980; Jüttner et al. 2001; Xiao et al. 2005) and on general combinatorial optimization problems (Blokh and Gutin 1996; Mehlhorn and Ziegelmann 2001). To the best of our knowledge, a computational study on Lagrangian based algorithms for the WMST has never been published. Xue (2000) describes a primal-dual algorithm to find approximate solutions for the WMST but no computational results are reported.
We present the following Lagrangian based algorithmic approaches for deriving approximate solutions for the WMST: (1) algorithms based on approaches for constrained shortest path problems and for general combinatorial optimization problems; (2) the classical subgradient setting; and (3) algorithms that use information about the shape of the Lagrangian dual function. Two of these algorithms are new, and one of them incorporates cover inequalities, which is a novelty.
All the Lagrangian based algorithms considered solve only an MST as subproblem. This contrasts with other Lagrangian decomposition approaches (e.g. Jörnsten and Migdalas 1988) where both an MST and a Knapsack subproblem are solved.
We derive a uniform framework that standardises the several versions of the algorithms, and we undertake an extensive computational study comparing their performance. This study shows that the two proposed algorithms obtain good lower and upper bounds. Moreover, the new algorithm that uses cover inequalities obtains the best lower bounds.
We describe a general formulation for the problem in Sect. 2, discuss some properties of the problem in Sect. 3 and present the Lagrangian relaxation for the WMST in Sect. 4. In Sect. 5 we present a general framework that uses different settings, including the classical subgradient method, to obtain approximate trees, and we propose two new settings. Computational results to assess the quality of the discussed procedures are shown in Sect. 6. Finally, in Sect. 7 we present the conclusions.

A formulation for the WMST
To obtain formulations for the WMST one can adapt an MST formulation. For the MST several formulations are well known (see Magnanti and Wolsey 1995), and in Requejo et al. (2010) natural and extended formulations for the WMST are discussed. For instance, the two classical formulations for the MST, namely the formulation using cut-set inequalities and the formulation using circuit elimination inequalities, and the well-known compact extended multicommodity flow formulation using additional flow variables, are easily adapted for the WMST through the inclusion of a weight constraint (Requejo et al. 2010). Other extended formulations, using e.g. Miller-Tucker-Zemlin (MTZ) inequalities to prevent the existence of circuits in the feasible solutions, can be derived (Requejo et al. 2010).

Some properties of the WMST
The well-known Minimum Spanning Tree problem (MST) is to find a spanning tree T_c of minimum cost C(T_c). For this combinatorial optimization problem there are several polynomial algorithms, such as Kruskal's and Prim's algorithms (see Ahuja et al. 1993 for descriptions of these algorithms). Consider a companion problem, the Minimum-weight Spanning Tree problem, which is to find a spanning tree T_w of minimum weight W(T_w). The tree T_w corresponds to a feasible solution for the WMST and the tree T_c corresponds to an unfeasible solution when W(T_c) > W. We have the following propositions.
Proposition 1 If W ≥ W(T_c) the WMST reduces to the MST and the tree T_c corresponds to the optimal solution.
Proposition 2 When W < W(T_c), there exists an optimal solution for the WMST if and only if W(T_w) ≤ W.
If W(T_w) > W, then the WMST has no solution. If W(T_w) ≤ W and C(T_w) = C(T_c), then the tree T_w corresponds to an optimal solution for the WMST.
In the case of W(T_w) ≤ W < W(T_c), when neither the tree T_c nor the tree T_w is an optimal solution for the WMST, we need to find another tree that is an optimal solution to the problem. To search for a tree different from T_c and from T_w, we may use in the objective function different positive coefficients associated to each variable x_e corresponding to edge e ∈ E. Denote by p_e these new coefficients, defined as a linear combination of the cost c_e and of the weight w_e associated to each edge e ∈ E. Thus p_e = a·w_e + b·c_e, with non-negative real scalars a and b.

Proposition 3 Consider a tree T_p obtained with the objective function coefficients p_e, e ∈ E.
Therefore, with tree T_p we may obtain a better upper bound or a better lower bound for ν(WMST), depending on the feasibility or unfeasibility of the tree T_p.

Lagrangian relaxation
In order to derive a Lagrangian relaxation, assign a non-negative Lagrangian multiplier λ to the weight constraint (2.1) and dualize the constraint in the usual Lagrangian way. This leads to the following relaxed problem.
For every λ ≥ 0, the solutions to this relaxed problem give lower bounds on the optimum value, i.e. ν(WMST_λ) ≤ ν(WMST).
For a given non-negative value of λ, the relaxed problem WMST_λ can be solved using any well known polynomial algorithm for the MST (Ahuja et al. 1993). For each λ define positive coefficients p_e = λw_e + c_e associated to each edge e = {i, j} ∈ E, that is, take b = 1 and a = λ. Let T_p be the tree that corresponds to the solution of problem WMST_λ, which is the minimum spanning tree obtained with the objective function coefficients p_e; therefore ν(WMST_λ) = P(T_p) − λW, with P(T_p) = ∑_{e∈E_Tp} p_e. Notice that different values of λ may yield different trees T_p, so that ν(WMST_λ) is a concave and piecewise linear function of λ. To obtain the best lower bound of the function ν(WMST_λ) we have to solve the Lagrangian dual problem max_{λ≥0} ν(WMST_λ), λ* being the non-negative multiplier which maximizes ν(WMST_λ).
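For a fixed λ the relaxed problem is just an MST under the aggregated coefficients p_e = λw_e + c_e, so evaluating the bound ν(WMST_λ) = P(T_p) − λW costs one MST computation. A minimal sketch with our own helper names and toy data (the Kruskal helper is repeated so the fragment is self-contained):

```python
# Sketch: one evaluation of the Lagrangian lower bound for a fixed multiplier
# lam, via an MST with aggregated coefficients p_e = lam*w_e + c_e.
# Edge format (u, v, cost, weight) and all names are our own assumptions.

def kruskal_mst(n, edges, key):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for e in sorted(edges, key=key):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def lagrangian_bound(n, edges, W, lam):
    """One evaluation of nu(WMST_lam) = P(T_p) - lam*W; returns (bound, W(T_p))."""
    Tp = kruskal_mst(n, edges, key=lambda e: lam * e[3] + e[2])
    P = sum(lam * e[3] + e[2] for e in Tp)
    return P - lam * W, sum(e[3] for e in Tp)

edges = [(0, 1, 4, 9), (0, 2, 1, 8), (1, 2, 2, 1), (1, 3, 5, 3), (2, 3, 3, 7)]
lb, wt = lagrangian_bound(4, edges, W=13, lam=1.0)
print(lb, wt)    # a valid lower bound; the tree found here is feasible (12 <= 13)
```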
Classically a Lagrangian relaxation is solved using a subgradient optimization procedure (Shor 1985). The subgradient optimization procedure starts by initializing the Lagrangian multiplier to some value λ_0. Then, iteratively at each iteration (k = 0, 1, …), it solves the relaxed problem WMST_λk and updates the Lagrangian multiplier λ_k by setting λ_{k+1} = max{0, λ_k + s_k·d_k}, using a direction d_k and a step-size s_k. Several directions d_k can be defined (Held et al. 1974). Together with an appropriate choice of the step size s_k (Shor 1985), this produces a convergent method. Finally, a stopping criterion is checked.
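The subgradient loop just described can be sketched as follows. The diminishing step size s_k = 1/(k+1) is a simplifying assumption of ours, standing in for the Held/Shor rules discussed later; names and data are illustrative, not the paper's code.

```python
# Subgradient sketch for max_{lam>=0} nu(WMST_lam): each iteration solves one
# MST with coefficients lam*w_e + c_e, takes the direction d_k = W(T_p) - W
# and sets lam_{k+1} = max{0, lam_k + s_k*d_k}. Step size is a simplification.

def kruskal_mst(n, edges, key):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for e in sorted(edges, key=key):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def subgradient(n, edges, W, lam=0.0, iters=30):
    best_lb = float("-inf")
    for k in range(iters):
        Tp = kruskal_mst(n, edges, key=lambda e: lam * e[3] + e[2])
        P = sum(lam * e[3] + e[2] for e in Tp)
        best_lb = max(best_lb, P - lam * W)    # nu(WMST_lam) at current lam
        d = sum(e[3] for e in Tp) - W          # subgradient direction d_k
        lam = max(0.0, lam + d / (k + 1))      # diminishing step size
    return best_lb

edges = [(0, 1, 4, 9), (0, 2, 1, 8), (1, 2, 2, 1), (1, 3, 5, 3), (2, 3, 3, 7)]
lb = subgradient(4, edges, W=13)
print(lb)   # approaches the dual optimum from below
```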
For the solution x^k = (x_e^k) of the Lagrangian relaxed problem WMST_λk, corresponding to the tree T_p^λk, we have that C(T_u^k) is an upper bound for ν(WMST_λk). For the Lagrangian multipliers λ_k of the subgradient optimization procedure, the following upper bound can be established.

Proposition 4 The Lagrangian multipliers λ_k generated by the subgradient optimization procedure are bounded.
Proof First notice that, by construction, λ_k ≥ 0. Any feasible tree T has weight W(T) ≤ W. Therefore, any feasible tree T is such that its cost C(T) ≥ C(T_w) − λ_k(W − W(T_w)). □

Solution procedure
In this section we propose a general framework for deriving approximate solutions for the WMST.
Iteratively several trees are generated. We start by generating the tree T_c and the tree T_w. Then, at each iteration, a different tree T_p is generated by using appropriate objective function coefficients p_e = a·w_e + b·c_e. For each e ∈ E, these objective function coefficients p_e are linear combinations of the cost c_e and of the weight w_e. Different algorithms are obtained depending on the settings for the parameters a and b.
The tree T_p can either be feasible or unfeasible. We keep track of the best feasible tree obtained, the tree T_u^k, and of the best unfeasible tree obtained, the tree T^k. The costs of these trees correspond to the best upper bound (UB) and to the best lower bound (LB), respectively. The tree T^k is initialized with tree T_c, T^0 := T_c. The tree T_u^k is initialized with tree T_w, T_u^0 := T_w, if its weight is less than or equal to W; otherwise there is no feasible solution. If tree T_p is feasible and its cost is less than or equal to the current UB value, which is the cost of the current tree T_u^k, then the UB value is updated. Otherwise tree T_p is unfeasible and, if its cost is greater than or equal to the current LB value, the cost of the current tree T^k, then the LB value is updated. If the interval reduction procedure returns two trees, both the LB and the UB values can be updated. Accordingly, the trees associated to the sequences of UB and LB values are also updated. Parameters a and b depend on the iteration k and are iteratively updated according to the algorithm setting. In some algorithm settings the multiplier λ_k belongs to an interval [ℓ_k, u_k], initialized to [ℓ_0, u_0] and iteratively reduced according to the setting. Different coefficient settings, yielding different Lagrangian dual variables and updating rules, will be discussed. The general framework is as follows; the specific algorithm settings are displayed in Tables 1 and 2 and will be discussed next.

Step 1 - Initialization.
Step 1.1 - Obtain a lower bound.
Find T_c = (V, E_Tc) of minimum cost C(T_c) and compute W(T_c).
If W(T_c) ≤ W then T_c corresponds to an optimal solution. STOP. End-if.
Step 1.2 - Obtain an upper bound.
Find T_w = (V, E_Tw) of minimum weight W(T_w) and compute C(T_w).
If W(T_w) > W then there is no solution. STOP.
Else set T_u^0 := T_w, T^0 := T_c and iteration k := 0; initialize the parameters λ_0, ν_0, ℓ_0 and u_0 as specified by the algorithm setting. End-if.
Step 2 - Compute an approximate tree.
Obtain the parameters a and b according to the algorithm setting; build the tree T_p with coefficients p_e = a·w_e + b·c_e.
Step 3 - Update trees and bounds.
If T_p is feasible and C(T_p) ≤ UB then update UB and the tree T_u^k; else, if T_p is unfeasible and C(T_p) ≥ LB, then update LB and the tree T^k. End-if.
Update the multipliers (and, when applicable, the interval [ℓ_k, u_k]) according to the algorithm setting.
Step 4 - Stopping criterion.
If the stopping criterion is satisfied, STOP. Else iteration k := k + 1 and go to Step 2. End-if.

Table 1 specifies for each algorithm setting the initialization of the parameters λ_0, ν_0, ℓ_0 and u_0, the definition of the parameters a and b, and the updating rules. For some algorithm settings an interval reduction is used; the details of these interval reduction procedures are given in Table 2. In Step 4 the tolerance tol is a small real positive value given as an input of the algorithm.
To evaluate the complexity of this algorithm one has to take into account the complexity of the algorithm used to build the trees; consider that such an algorithm has complexity φ(m, n), as it depends on the number m = |E| of edges and on the number n = |V| of nodes of the graph G. One also has to take into account the number of trees formed. If K is the total number of trees that can be formed, then this algorithm stops after O(log K) iterations (in the worst case, the number of trees is proportional to K). The main effort of the algorithm is the computation of a tree. Consequently, the complexity of this algorithm is O(φ(m, n) log K).
Different algorithms are obtained depending on the settings for parameters a and b in the definition of the new coefficients in Step 2, and on the interval reduction when applied. Notice that the subgradient optimization scheme perfectly fits this algorithm layout and is one of the settings discussed below. Next we discuss different settings for the positive coefficients p_e = a·w_e + b·c_e, with non-negative real scalars a and b, associated to each edge e ∈ E, and their update at each iteration.
First consider a setting for the coefficients p_e characterized by a convex linear combination of costs c_e and weights w_e; more precisely, associate the parameter a = 1 − λ_k with the weights and the parameter b = λ_k ∈ [0, 1] with the costs. These settings were proposed in Xue (2000) for the weight-constrained shortest path problem. It is proposed to iteratively reduce the interval [ℓ_k, u_k] ⊆ [0, 1], initializing ℓ_0 = 0 and u_0 = 1, see Table 1. An interval reduction is applied as displayed in Table 2. This setting is referred to as Alg1. Now consider settings for the coefficients p_e characterized by associating a parameter, the Lagrangian multiplier, with the weights, a = λ_k, and a parameter with value equal to one with the costs, b = 1. Some examples of such settings are given next. Jüttner et al. (2001) developed the LAgrangian Relaxation based Aggregated Cost (LARAC) algorithm, which solves the Lagrangian relaxation of the constrained shortest path problem. Xiao et al. (2005) establish the equivalence between the LARAC algorithm and other algorithms in Blokh and Gutin (1996), Handler and Zang (1980), Jüttner et al. (2001). Afterwards Mehlhorn and Ziegelmann (2001) also consider this algorithm and Xue (2003) presents a variant of it. Using the ideas of these algorithms, consider the setting a = λ_k and b = 1. This setting is referred to as Alg2.
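In the spirit of these LARAC-type methods, transplanted from shortest paths to spanning trees, the multiplier can be set to the value at which the aggregated costs of the best infeasible and best feasible trees coincide. The sketch below is our own adaptation; the stopping test, helper names and toy data are assumptions, not the exact Alg2 setting of Table 1.

```python
# LARAC-style sketch: maintain an infeasible tree Tc and a feasible tree Tw,
# set lam = (C(Tc) - C(Tw)) / (W(Tw) - W(Tc)) and re-solve the MST with the
# aggregated costs c_e + lam*w_e until the dual value stops improving.

def kruskal_mst(n, edges, key):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for e in sorted(edges, key=key):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def cost(T): return sum(e[2] for e in T)
def weight(T): return sum(e[3] for e in T)

def larac_tree(n, edges, W, max_iter=50):
    Tc = kruskal_mst(n, edges, key=lambda e: e[2])   # min cost (maybe infeasible)
    Tw = kruskal_mst(n, edges, key=lambda e: e[3])   # min weight (assumed feasible)
    if weight(Tc) <= W:
        return cost(Tc), cost(Tc)                    # Tc is already optimal
    lam = 0.0
    for _ in range(max_iter):
        lam = (cost(Tc) - cost(Tw)) / (weight(Tw) - weight(Tc))
        T = kruskal_mst(n, edges, key=lambda e: e[2] + lam * e[3])
        # T minimizes the aggregated cost; equality means lam is dual-optimal.
        if cost(T) + lam * weight(T) >= cost(Tc) + lam * weight(Tc) - 1e-9:
            break
        if weight(T) > W:
            Tc = T                                   # better infeasible tree
        else:
            Tw = T                                   # better feasible tree
    return cost(Tc) + lam * (weight(Tc) - W), cost(Tw)   # (lower, upper) bound

edges = [(0, 1, 4, 9), (0, 2, 1, 8), (1, 2, 2, 1), (1, 3, 5, 3), (2, 3, 3, 7)]
lb, ub = larac_tree(4, edges, W=13)
print(lb, ub)
```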
The settings for the classical subgradient algorithm consider a direction d_k and an appropriate step-size s_k to update the Lagrangian multiplier at each iteration of the algorithm. If the Held et al. (1974) setting for a direction is considered, then the direction is d_k = W(T_p^λk) − W. Using, as suggested in Shor (1985), the step-size s_k = θ(C(T_u^k) − ν(WMST_λk))/d_k², with θ ∈ ]0, 2[ and the upper bound C(T_u^k) to approximate the optimum value of the problem, leads to the setting λ_{k+1} = max{0, λ_k + s_k·d_k}.

The parameter θ ∈ ]0, 2[ needs to be carefully chosen; this tuning process is a drawback of this setting. This setting corresponds to the classical subgradient optimization procedure for a Lagrangian relaxation, see Table 1, and is referred to as Alg3.
In Xiao et al. (2005) another algorithm is proposed to solve the Lagrangian relaxation of the constrained shortest path problem. This algorithm uses a binary search to iteratively reduce the interval [ℓ_k, u_k] and obtain the tree that corresponds to the approximate solution. The setting for the approximate tree coefficients is a = λ_k = (ℓ_k + u_k)/2 and b = 1, updating the interval extremes ℓ_k and u_k as displayed in Table 2. This setting is referred to as Alg4. Amado and Bárcia (1996) propose an algorithm to solve several matroidal knapsacks, such as the Multiple Choice Knapsack Problem and the WMST. Their algorithm initializes the interval [ℓ_0, u_0] with ℓ_0 = 0 and u_0 = U, with U := max_{e∈E_Tc} {c_e, w_e}, and iteratively updates the reduced interval [ℓ_k, u_k] until a stopping criterion is satisfied and the tree that corresponds to the approximate solution is identified. By setting a = λ_k = (ℓ_k + u_k)/2 and b = 1, the reduced interval is obtained by comparing the values of ν(WMST_ℓk), ν(WMST_λk) and ν(WMST_uk). To ease notation write ν(λ) for ν(WMST_λ); ν(u_0) is obtained using the tree T_p^u0 of minimum P(T_p^u0), with p_e = u_0·w_e + c_e. The tree T_p corresponds to the tree T_p^λk. In the interval reduction procedure, to obtain the values ν(a_k) and ν(b_k) two more trees are obtained, tree T_p^ak and tree T_p^bk. The tree T_p is updated to the best tree chosen among the three trees obtained to reduce the interval. The interval reduction and the update of the tree T_p are displayed in Table 2. Using Proposition 4, here the initialization of the interval [ℓ_0, u_0] is done differently from Amado and Bárcia (1996), with ℓ_0 = 0 and u_0 given by the bound of Proposition 4. This setting is referred to as Alg5.
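The binary search interval reduction can be sketched as follows: the interval is halved according to whether the midpoint tree violates the weight limit, i.e. according to the sign of W(T_p) − W. The interval bounds, tolerance and toy data are our assumptions; the exact update rules are those of Table 2.

```python
# Bisection sketch on the multiplier interval [low, up]: at lam = (low+up)/2
# solve one MST with coefficients lam*w_e + c_e; if its weight exceeds W the
# dual value still increases with lam (raise low), otherwise lower up and
# record the feasible tree's cost as an upper bound.

def kruskal_mst(n, edges, key):
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for e in sorted(edges, key=key):
        ru, rv = find(e[0]), find(e[1])
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree

def bisection(n, edges, W, u0, tol=1e-6):
    low, up = 0.0, u0
    best_lb, best_ub = float("-inf"), float("inf")
    while up - low > tol:
        lam = (low + up) / 2.0
        Tp = kruskal_mst(n, edges, key=lambda e: lam * e[3] + e[2])
        P = sum(lam * e[3] + e[2] for e in Tp)
        best_lb = max(best_lb, P - lam * W)     # nu(WMST_lam) at the midpoint
        if sum(e[3] for e in Tp) > W:
            low = lam                           # unfeasible tree: increase lam
        else:
            up = lam                            # feasible tree: decrease lam
            best_ub = min(best_ub, sum(e[2] for e in Tp))
    return best_lb, best_ub

edges = [(0, 1, 4, 9), (0, 2, 1, 8), (1, 2, 2, 1), (1, 3, 5, 3), (2, 3, 3, 7)]
lb, ub = bisection(4, edges, W=13, u0=2.0)
print(lb, ub)
```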
As the Lagrangian function is concave, the interval extremes can be updated according to the gradient W − W(T_p^λk) of the Lagrangian function at λ_k and at the interval extremes ℓ_k and u_k. If the slopes at λ_k and at one of the extremes have the same sign, that extreme can be updated to λ_k; otherwise, both extremes (lower and upper) can be updated. Therefore, the interval reduction is simplified and the previous procedure for the interval reduction can be modified. The setting we propose is referred to as Alg6, and the new interval reduction procedure is displayed in Table 2.
Following Amado and Bárcia (1996), another Lagrangian based algorithm to solve several matroidal knapsacks can be used. This algorithm is obtained by dualizing not only the weight inequality (2.1) but also an extra valid inequality, inequality (5.1), added to the model as follows.

The valid inequality (5.1) added to the model is a cover inequality (Agra et al. 2016). Given a cover 𝒞, the cover inequality ∑_{e∈𝒞} x_e ≤ |𝒞| − 1 is valid for the WMST (Agra et al. 2016). Any other family of valid inequalities can be used; one example is the family of implicit cover inequalities proposed by Agra et al. (2016). By adding a valid inequality to the model and dualizing both constraints (2.1) and (5.1) in the Lagrangian way, a Lagrangian function with a better lower bound can be obtained. This is ν(WMST_λ) ≤ ν(WMST_λ,ν) ≤ ν(WMST), for non-negative Lagrangian multipliers λ and ν, with λ the multiplier associated to inequality (2.1) and ν the multiplier associated to inequality (5.1). This setting for the approximate tree coefficients p_e is a = λ and b = 1, with p_e := a·w_e + b·c_e for all e ∉ 𝒞 and p_e := a·w_e + b·c_e + ν for all e ∈ 𝒞. For the solution x = (x_e) of the Lagrangian relaxed problem WMST_λ,ν, corresponding to the tree T_p^λ,ν, define W(T_p^λ,ν) = ∑_{e∈E} w_e x_e, C(T_p^λ,ν) = ∑_{e∈E} c_e x_e and P(T_p^λ,ν) = λ·W(T_p^λ,ν) + C(T_p^λ,ν) + ν·∑_{e∈𝒞} x_e. Therefore ν(WMST_λ,ν) = P(T_p^λ,ν) − λW − ν(|𝒞| − 1). Several cover inequalities are dynamically added to the model. At each iteration, whenever an unfeasible tree is obtained, one cover inequality is constructed and added, by dualization, to the model. The algorithm proposed by Amado and Bárcia (1996) to solve this problem is the two-dimensional version of the one previously presented. This setting is referred to as Alg7; see Tables 1 and 2 for details. The algorithm initializes the interval [ℓ_0, u_0]. Additionally the multiplier ν must also be initialized in Step 1.2 to a small value; we may use ν_0 = tol. Iteratively the algorithm builds an approximate tree T_p. Notice that this tree T_p uses multiplier λ_k and multiplier ν_k, thus the tree T_p corresponds to tree T_p^λk,νk. Whenever this tree T_p is unfeasible, a cover inequality is constructed using a separation algorithm to identify a valid cover 𝒞_k (Agra et al. 2016). The corresponding cover inequality is added to the problem, associated with the multiplier ν_k. When reducing the interval [ℓ_k, u_k] for the multiplier λ_k using the previous simplified interval reduction procedure (the same as Alg6), the value of the multiplier ν_k must also be updated, taking into account the number ∑_{e∈𝒞_k} x_e of cover edges in the current tree. This algorithm we propose, referred to as Alg7, obtains good upper bounds and the best lower bounds for the WMST.
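One simple separation heuristic, consistent with the edge ordering found best in the computational experience (decreasing edge weights), is to collect the heaviest edges of the unfeasible tree until their total weight exceeds W; any such edge set is a cover. This is an illustrative sketch of ours, not the authors' separation algorithm.

```python
# Cover separation sketch: collect the heaviest edges of an unfeasible tree
# until their total weight exceeds W. The resulting set C is a cover, and the
# cover inequality  sum_{e in C} x_e <= |C| - 1  cuts off the current tree.

def build_cover(tree_edges, W):
    """tree_edges are (u, v, cost, weight) tuples with total weight > W."""
    cover, total = [], 0
    for e in sorted(tree_edges, key=lambda e: -e[3]):  # decreasing weight
        cover.append(e)
        total += e[3]
        if total > W:          # minimal prefix already exceeding the limit
            return cover
    raise ValueError("tree is feasible; no cover inequality is needed")

tree = [(0, 2, 1, 8), (1, 2, 2, 1), (2, 3, 3, 7)]   # W(T) = 16 > W = 13
cover = build_cover(tree, 13)
print(len(cover))    # cover inequality here: x_02 + x_23 <= 1
```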
The simplicity of all these procedures has its price, as it depends greatly on the ability of each algorithm setting to quickly find near optimal multipliers specific to each instance. As we will observe in the next section, different gaps are reported for the same problem instance, depending on the specificity of the overall algorithm settings and their ability to find approximate solutions.

Computational experience
Computational results assess the quality of the approximate solutions obtained with each setting and the corresponding updating process of the Lagrangian scheme presented in Sect. 5. The algorithms were implemented in C++ and all the tests were performed on an Intel(R) Core(TM)2 Duo CPU (T7100) 2.00 GHz processor with 4 GB of RAM. We present computational results for WMST instances defined on complete graphs with a number of nodes varying between 10 and 1000, in a total of 215 instances.
To generate an instance of the WMST, two values associated to each edge e, a cost c_e and a weight w_e, have to be defined. Afterwards, a (feasible) value for the weight limit W must also be defined. We built three sets of instances, corresponding to three different ways of generating costs and weights.
In the first set of instances, costs c_e and weights w_e are generated similarly to a set of instances described in Pisinger (2005) and named therein Spanner instances. We use the following values for W: W = 1000 for all instances with n ≤ 100, W = 1500 for all instances with n = 150, 200, W = 2000 for all instances with n = 300, W = 2500 for all instances with n = 400, W = 3000 for all instances with n = 500 and W = 3500 for all instances with n = 1000. The costs and the weights are multiples of a small set (the so-called spanner set in Pisinger 2005) of costs and weights following one particular distribution; we use the Uncorrelated distribution (which is in Pisinger's (2005) proposed list of distributions) and the two parameters s and m, where s is the size of the small set and m is the multiplier limit. Pisinger (2005) proposes s = 2 and m = 10. The small set of s pairs of costs and weights is constructed by randomly selecting w_j and c_j, both in [1, 100], for j ∈ {1, …, s}. Then the s costs and weights in the small set are normalized by dividing them by m + 1. Afterwards the costs c_e and weights w_e are constructed by (1) randomly selecting a pair of costs and weights (c_k, w_k), k ∈ {1, …, s}, from the normalized small set, (2) randomly selecting a multiplier a in [1, m], (3) setting w_e = a·w_k, and (4) setting c_e = a·c_k. Finally the weights of some edges are manipulated in such a way that the optimal solution has a desired predefined structure. After testing a few structures, we obtained some challenging instances when the optimal solution of the WMST instance has a large diameter, almost n − 1 but not equal to n − 1, in such a way that the tree is almost a path. Thus we name this instance set "Almost Path" (AP). To obtain such structured instances, they are generated in such a way that the optimal solution is a graph with very large diameter, but with diameter not equal to n − 1, as follows. (i) Obtain the minimal spanning tree. (ii) Assign big weight values to the edges in the minimal spanning tree; we use w_e = (W/n) × a/100, with a ∈ [50, 90]. (iii) For the remaining edges, assign the value w_e = round(r + W(1 − p)/(n − 1)) to their weight, with some p ∈ [0.5, 1] and r randomly selected in the interval [1, W × p/(n − 1)]. If w_e = 1, then assign the value w_e = r × a_1 + W(1 − p × a_2)/(n − 1), with a_1 and a_2 selected in the interval [0, 10].
For the second set of instances, named Random (R) instances, the costs c e and the weights w e are uniformly generated in the interval [1,1000].
For the third set of instances, named Euclidean (E) instances, costs c_e and weights w_e are obtained using Euclidean distances. After randomly generating the coordinates of n points/nodes in a 100 × 100 grid, the cost c_e of each edge e = {i, j} is the integer part of the Euclidean distance between points/nodes i and j. We proceed independently and similarly to obtain the weights.
To define a (feasible) value for the weight limit W for each instance of the sets R and E, we start by obtaining the weight W(T_c) of the minimum spanning tree and the weight W(T_w) of the minimum-weight spanning tree, and we select W to be one of ten values determined by these two weights, for i ∈ {1, …, 10}.
A total of 215 instances were generated, 95 of the set AP and 60 of each set R and E. For each set AP and each instance size between 10 and 150 we have 10 instances and for each instance size between 200 and 1000 we have 5 instances.For each set R and E and each instance size we have 5 instances.
To use the approximation schemes from Sect. 5, some parameters were defined as follows. After testing values within the interval [0.0001, 0.1] we fixed tol = 0.001. In Alg3 we tested several values for θ ∈ ]0, 2[. For the AP instances we fixed θ = 0.001. For the R instances this value was fixed at 0.001, 0.0065, 0.075, 0.25, 0.095, 0.125, 0.085, 0.06, 0.045, 0.04, 0.04 and 0.03, depending on the number of nodes. For the E instances this value was fixed at 0.0001 for n = 10, 20, 40, 60, 80, at 0.04 for n = 100, 150, 200, 300, 400, at 0.035 for n = 500 and at 0.015 for n = 1000. In Alg5 the interval upper bound u_0 must be initialized. We tested two initializations, the second one involving the quotient with denominator W − W(T_w); better results were obtained with the second initialization. In Alg7, for the construction of the cover inequality, after testing three ordering schemes [(1) increasing order of the edge costs c_e, (2) decreasing order of the edge weights w_e, (3) decreasing order of the quotient c_e/w_e] for selecting the edges of the tree T^k to be in the cover, we noticed that better results are obtained with ordering scheme (2).
In Jörnsten and Migdalas (1988) a Lagrangian decomposition procedure is proposed that separates the WMST into two subproblems, an MST and a Knapsack problem. At each iteration both an MST and a Knapsack subproblem have to be solved. In the algorithms described in Sect. 5 only an MST subproblem has to be solved. Although the theoretical bound reported in Jörnsten and Migdalas (1988) is superior, the performance of this decomposition approach depends greatly on the ability to quickly find near optimal multipliers specific to each instance. Additionally, when using this decomposition, the number of parameters to tune is larger. We were not able to obtain interesting computational results using this decomposition on our instances, and the values obtained are far from the theoretical ones. Therefore we do not report computational results using this Lagrangian decomposition.
In Agra et al. (2011) and Requejo et al. (2010) the best results to obtain the optimal value using the software Xpress 7.3 (Xpress Release 2012 with Xpress-Optimizer 23.01.03 and Xpress-Mosel 3.4.0, FICO Xpress Optimization Suite 2012) were obtained with a Branch and Cut algorithm based on a weighted MTZ (Miller-Tucker-Zemlin) formulation with the inclusion of cuts preventing cycles at the root node. This procedure will be denoted by HP (Hybrid Procedure) and is used to assess the quality of the approximate solutions obtained with the Lagrangian based algorithms from Sect. 5. To compare the performance of those algorithms with the performance of the HP, two gaps are calculated, the upper bound gap, gap_U, and the lower bound gap, gap_L. Denote by OPT the optimal value obtained with the HP, or the best value obtained with this procedure within a time limit of 10000 seconds. Denote by U_L the upper bound and by L_L the lower bound (the Lagrangian relaxation bound) obtained with a Lagrangian based approximation scheme. The upper bound gap is gap_U = (U_L − OPT)/OPT × 100 and the lower bound gap is gap_L = (OPT − L_L)/OPT × 100. Table 3 presents, for each algorithm, the percentage of instances having gap_U = 0, for each instance set AP, R and E and, in the last line of the table, the percentage over all instances. Generally, algorithms Alg2, Alg5, Alg6 and Alg7 obtain the highest percentage of null upper bound gaps gap_U, each with the same value of 26.05% (56 instances out of 215). The AP and E instances attain the highest percentages of null gap_U.
Table 4 presents, for each algorithm, the percentage of instances having gap_U less than the lower bound gap gap_L, for each instance set AP, R and E and, in the last line, for all instances. This indicates when the upper bound is closer to the optimum value than the lower bound. For the AP instances the upper bound value is closer to the optimum value than the lower bound value. For the R and E instances the lower bound, the Lagrangian bound, is closer to the optimal value than the upper bound value obtained using the Lagrangian scheme.
In Fig. 1 we compare the mean computational times (in seconds) of all the algorithms over all the instance sets. All the algorithms are fast in obtaining an approximate solution. Clearly Alg3, the classical subgradient algorithm for the Lagrangian relaxation, is most frequently the most time consuming, followed by Alg5.
Algorithms Alg5 and Alg6 are very similar and differ in the interval reduction procedure, which leads to a different number of computed trees. Both algorithms use the same initialization. In Fig. 2 we compare the mean number of trees that algorithms Alg5 and Alg6 build during their execution. Alg5 builds more trees than Alg6, which may explain the difference in execution time between the two algorithms and why Alg5 is much more time consuming than Alg6.
It is worth mentioning that, in order to try to reduce the number of trees computed in Alg5 and Alg6, we tested a Fibonacci search (Hassin 1981) for the interval reduction procedure. Compared with Alg5 the number of computed trees is smaller; however, compared with Alg6 the number of computed trees is higher. Further, for the trees with more than 400 nodes it is necessary to use a small value for the search parameter in order to obtain bounds of similar quality to those of Alg6, and the use of a small parameter value implies an increase in the number of computed trees. Additionally, a drawback of this procedure is that its performance is highly dependent on the parameter, which has to be tuned. We do not report computational results with this procedure because, even after testing several values for the parameter, no clear superiority over the Alg6 interval reduction setting was evident.

To compare the performance of the several Lagrangian based solution procedures, for each approximation scheme, each instance set AP, R and E, and each instance size, we present the mean upper bound gap and the mean lower bound gap together with the corresponding standard deviation values. These results are presented in Tables 5, 7 and 9, one table for each instance set. The top part of each table presents the mean gaps and the bottom part presents the corresponding standard deviation values. We also present the mean execution times (in seconds) and the corresponding standard deviation values. These results are presented in Tables 6, 8 and 10, one table for each instance set. The top part of each table presents the mean execution times and the bottom part presents the corresponding standard deviation values. In Fig. 3 we compare the lower bound gaps gap_L between Alg6 and Alg7 for the three sets of instances AP, R and E.
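For reference, a Fibonacci search over an interval can be sketched as below. This is only an illustrative sketch, not the authors' implementation: we assume the quantity being maximized is a unimodal function of one parameter, evaluated by `f` (in the WMST setting each evaluation of `f` would require building one spanning tree), and `eps` plays the role of the tuning parameter mentioned above.

```python
def fibonacci_search(f, a, b, eps):
    """Locate the maximum of a unimodal function f on [a, b].

    eps controls the final interval length; a smaller eps means more
    evaluations of f (i.e. more spanning trees in the WMST setting).
    """
    # Grow the Fibonacci sequence until it covers the required reduction.
    fib = [1, 1]
    while fib[-1] < (b - a) / eps:
        fib.append(fib[-1] + fib[-2])
    # Shrink [a, b] by the ratio F_{k-1}/F_k at each step.
    for k in range(len(fib) - 1, 2, -1):
        x1 = a + fib[k - 2] / fib[k] * (b - a)
        x2 = a + fib[k - 1] / fib[k] * (b - a)
        if f(x1) < f(x2):   # the maximum lies in [x1, b]
            a = x1
        else:               # the maximum lies in [a, x2]
            b = x2
    return (a + b) / 2.0
```

This simplified variant evaluates `f` twice per step; the classical Fibonacci search reuses one evaluation from the previous step, roughly halving the number of function evaluations (tree computations).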
Table 5 presents the mean gaps (top part) and corresponding standard deviation values (bottom part) for the AP instances. Algorithms Alg1 and Alg4 present the highest mean gap values. Algorithms Alg2, Alg5 and Alg6 have the same mean gap values. Algorithm Alg7 has the best mean lower bound gaps and has a mean upper bound gap gap_U equal to those of Alg2, Alg5 and Alg6. In Fig. 3 we compare the mean lower bound gaps gap_L between Alg6 (which are the same as for Alg2 and Alg5) and Alg7. For the upper bound gaps gap_U, algorithm Alg6 obtains lower gaps than Alg7 in 7.37% of the instances (7 out of 95) and Alg7 presents lower gap_U in 4.21% of the instances (4 out of 95). For the remaining 84 instances the upper bound gaps gap_U are equal for both algorithms.
Table 6 presents the mean execution times, in seconds (top part), and corresponding standard deviation values (bottom part) for the AP instances. Mean execution times are almost all below 1 second, except for five cases, four of which take less than 2 seconds. Algorithms Alg3 and Alg5 use more execution time than the others.
Table 7 presents the mean gaps (top part) and corresponding standard deviation values (bottom part) for the R instances. In general, and contrary to what happened with the AP instances, algorithms Alg1 and Alg4 do not have much higher gaps than the other algorithms. Algorithms Alg2, Alg5 and Alg6 have the same mean gap values. The upper bound mean gaps of Alg3 are the worst. In Fig. 3 we compare the lower bound gaps gap_L between Alg6 (which are the same as for Alg2 and Alg5) and Alg7. Algorithm Alg7 presents the lowest values for the lower bound gaps gap_L. All algorithms have the same upper bound gaps gap_U.
Table 8 presents the mean execution times, in seconds (top part), and corresponding standard deviation values (bottom part) for the R instances. As before, mean execution times are almost all below 1 second, except for ten cases, six of which take less than 2 seconds. Except for Alg3, all algorithms use mean computational times below 5 seconds. As before, algorithms Alg3 and Alg5 are the most time consuming; for instances with 300 or fewer nodes Alg5 is the most time consuming, while for instances with 400 or more nodes Alg3 is the most time consuming.

Table 9 presents the mean gaps (top part) and corresponding standard deviation values (bottom part) for the E instances. In general, and contrary to what happened with the AP instances, algorithms Alg1 and Alg4 do not have higher gaps than the other algorithms. In Fig. 3 we compare the lower bound gaps gap_L between Alg6 and Alg7. Algorithms Alg2, Alg5 and Alg6 have the same mean gap values. Algorithm Alg7 has the best lower bound mean gaps and has mean upper bound gaps gap_U equal to those of Alg2, Alg5 and Alg6, except for one instance with 1000 nodes.
Table 10 presents the mean execution times, in seconds (top part), and corresponding standard deviation values (bottom part) for the E instances. As before, mean execution times are almost all below 1 second, except for four cases, which take less than 2 seconds. As before, algorithms Alg3 and Alg5 are the most time consuming, with Alg3 being more time consuming than Alg5.
In Table 11 we compare the HP procedure (see page 14 for more details) with Alg7, the algorithm that obtains the best lower bounds. For these two procedures we display the mean gaps for the three sets of instances considered, AP, R and E, specified in the first line of the table. For each node set size (specified in lines 3 to 14) and each instance set (specified in the first line), columns two, four and six, named HP, show the mean linear programming gap. For each instance this gap is gap_LP = (OPT − LP)/OPT × 100, where LP is the linear programming bound obtained with the weighted MTZ formulation used in HP. Columns three, five and seven, named Alg7, show the mean lower bound gap for Alg7, which is gap_LB = (OPT − LB)/OPT × 100, where LB is the lower bound obtained by Alg7.
It is worth noting that, generally, the gaps decrease as the number of nodes increases. This can be explained by the fact that, in such cases, the number of edges that can replace an edge discarded from an infeasible solution also increases. Therefore obtaining a feasible solution does not get harder.
In Fig. 4, following Dolan and Moré (2002) and Gould and Scott (2016), we present performance profiles to compare the algorithms Alg1, Alg2, Alg3, Alg4, Alg5, Alg6 and Alg7 for each of the three sets of instances AP, E and R. We used n_AP = 95 instances of set AP and n_E = n_R = 60 instances of each of the sets E and R. For each set of instances two performance measures were considered: the computational times (in seconds), presented in the left part of the figure, and the lower bound gaps, in the right part of the figure.
We explain the construction of the performance profiles for the computational times; they are built similarly for the lower bound gaps. Let t_{ia} be the computational time (in seconds) used by algorithm a ∈ {Alg1, Alg2, Alg3, Alg4, Alg5, Alg6, Alg7} to obtain an approximate value for instance i from a set of instances. To build the performance profiles a baseline for comparison is required. Therefore, we compare the performance on instance i by algorithm a with the best performance by any algorithm on this instance; that is, we use the performance ratio r_{ia} = t_{ia}/t_i, where t_i = min{t_{ia}, a ∈ {Alg1, …, Alg7}}. To obtain an overall assessment of the performance of the algorithms define ρ_a(T) = s_a(T)/n_j, where s_a(T) is the number of instances such that the performance ratio r_{ia} ≤ T and where n_j = 95 or 60, depending on the set of instances under consideration. Thus, ρ_a(T) is a probability estimate for algorithm a that the performance ratio r_{ia} is within a factor T ∈ ℝ of the best possible ratio. The function ρ_a is the empirical (cumulative) distribution function of the performance ratio.
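The construction of ρ_a(T) described above can be sketched as follows. The function and argument names are ours, and the input layout (one list of times per algorithm, in the same instance order) is an assumption:

```python
def performance_profile(times, t_values):
    """Empirical distribution rho_a(T) of performance ratios r_ia.

    times: dict mapping algorithm name -> list of per-instance
           measurements, in the same instance order for every algorithm.
    Returns a dict mapping algorithm name -> [rho_a(T) for T in t_values].
    """
    algs = list(times)
    n = len(times[algs[0]])
    # Best (smallest) measurement per instance over all algorithms.
    best = [min(times[a][i] for a in algs) for i in range(n)]
    # Performance ratios r_ia = t_ia / t_i.
    ratios = {a: [times[a][i] / best[i] for i in range(n)] for a in algs}
    # rho_a(T) = fraction of instances with r_ia <= T.
    return {a: [sum(r <= t for r in ratios[a]) / n for t in t_values]
            for a in algs}
```

Plotting ρ_a(T) against T for every algorithm a on one set of axes gives the performance profile of Fig. 4.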
We presented several Lagrangian based schemes to approximate the solution of the WMST. Their simplicity has a price, as the quality of the approximation depends greatly on the ability to find near optimal multipliers quickly, which is specific to each instance. In many cases the method can only give a coarse approximation of the optimal value. As a consequence, different gaps and computational times are reported for the same problem instance depending on the overall algorithm settings.
A final remark can be made. When computational time is a concern, Alg2 is better suited for obtaining a good solution fast. If the quality of the solution is the main interest, Alg7 is a good recommendation for obtaining good lower and upper bounds.

Conclusions
Our computational results show that the Lagrangian based algorithms are fast (taking less than 13 seconds in our experiments) and present small gap values. Therefore these algorithms are a good choice for obtaining both a lower and an upper bound for the WMST. We present seven different settings: four were published by others, one, Alg3, is the classical subgradient setting, and two, Alg6 and Alg7, are new. Four of the algorithms, Alg2, Alg5, Alg6 and Alg7, are very efficient on all instance sets, and several optimal solutions were obtained with these settings. Algorithm Alg5 has the disadvantage of being very time consuming.
The lower bound values obtained with the Lagrangian based algorithms Alg2, Alg5 and Alg6 are equal to the lower bound values obtained with the linear programming relaxation of the weighted MTZ model used within the HP procedure. Algorithm Alg7 obtains better lower bounds than those obtained with the linear programming relaxation of the weighted MTZ model.
The trees T_c and T_w are two spanning trees of G: tree T_c is of minimum cost C(T_c) = ∑_{e∈E_{T_c}} c_e and tree T_w is of minimum weight W(T_w) = ∑_{e∈E_{T_w}} w_e. It holds that C(T_c) ≤ v(WMST) ≤ C(T_w).
Consider coefficients p_e = a w_e + b c_e, where a and b are non-negative real scalars. With these coefficients in the objective function, find a new spanning tree T_p = (V, E_{T_p}), with E_{T_p} ⊂ E, of minimum value P(T_p) = ∑_{e∈E_{T_p}} p_e. The spanning tree T_p of G has cost C(T_p) = ∑_{e∈E_{T_p}} c_e. Notice that the Minimum-cost Spanning Tree problem and the Minimum-weight Spanning Tree problem are particular cases of this Minimum Spanning Tree problem. If a = 0 and b = 1, then T_p ≡ T_c. If a = 1 and b = 0, then T_p ≡ T_w. The tree T_p obtained with the coefficients p_e may be feasible or infeasible. It holds that C(T_p) ≥ C(T_c), and the following result holds.
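A minimal sketch of computing T_p with Kruskal's algorithm under the combined coefficients p_e = a·w_e + b·c_e; the edge representation and function names are ours, and any standard MST algorithm could be used instead:

```python
def kruskal(n, edges, key):
    """Minimum spanning tree of a connected graph on nodes 0..n-1.

    edges: list of (u, v, c_e, w_e) tuples; key orders the edges.
    """
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for u, v, c, w in sorted(edges, key=key):
        ru, rv = find(u), find(v)
        if ru != rv:          # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, c, w))
    return tree

def min_tree_p(n, edges, a, b):
    """Spanning tree T_p minimising P(T_p) = sum of p_e = a*w_e + b*c_e."""
    return kruskal(n, edges, key=lambda e: a * e[3] + b * e[2])
```

With a = 0 and b = 1 the edges are ordered by cost and T_p coincides with T_c; with a = 1 and b = 0 it coincides with T_w, as stated in the text.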

Fig. 1 Comparing mean execution times (in seconds) for all the algorithms

Fig. 2 Comparing the mean number of trees of algorithms Alg5 and Alg6

Fig. 3 Comparing the lower bound gaps gap_L between Alg6 and Alg7 for the three sets of instances AP, R and E

Fig. 4 Performance profiles for the computational times (in seconds) in the left figure and for the gaps in the right figure, for each set of instances

Table 3 Percentage of instances with gap_U = 0, for each instance set AP, R and E and, in the last line, for all instances

Table 4 Percentage of instances with gap_U < gap_L, for each instance set AP, R and E and, in the last line, for all instances

Table 5 Mean gaps (top part) and corresponding standard deviation values (bottom part) for the AP instances

Table 10 Mean execution times, in seconds (top part), and corresponding standard deviation values (bottom part) for the E instances