
An extended supporting hyperplane algorithm for convex MINLP problems


Andreas Lundell, Jan Kronqvist, and Tapio Westerlund

Optimization and Systems Engineering, Åbo Akademi University, Piispankatu 8, FI-20500 Turku, Finland, andreas.lundell@abo.fi

Abstract The extended cutting plane (ECP) algorithm is a deterministic optimization method for solving convex mixed-integer nonlinear programming (MINLP) problems to global optimality as a sequence of mixed-integer linear programming (MILP) problems. The algorithm is based on Kelley's cutting plane method for continuous nonlinear programming (NLP) problems. In this paper, an extended supporting hyperplane (ESH) algorithm is presented. It is based on principles similar to those of the ECP algorithm; however, instead of cutting planes, supporting hyperplanes are generated.

Keywords: Convex MINLP, Extended supporting hyperplane (ESH) algorithm, Extended cutting plane (ECP) algorithm, Supporting hyperplanes, Cutting planes

1. Introduction

Solving convex MINLP problems efficiently may still be a difficult task, even though several versatile solution algorithms are currently available, such as outer approximation [2, 4], generalized Benders decomposition [6], branch and bound techniques using different NLP subsolvers [1, 9], and the ECP algorithm [13]. Extensions of these algorithms are also available; e.g., the ECP algorithm has been extended to handle quasi- and pseudoconvex [14] and nondifferentiable [3] MINLP problem classes. Various implementations of the algorithms exist, and many are available in modeling frameworks or optimization systems such as GAMS (www.gams.com), COIN-OR (www.coin-or.org) and the NEOS Server (www.neos-server.org). A review of MINLP methods can be found in [7]. The stability and efficiency of MINLP solvers are of paramount importance, especially when they are utilized in real-world applications. Global solution techniques for nonconvex MINLP problems may also require convex MINLP subsolvers [5, 10], in which case the performance of the parent solver largely depends on that of its subsolver.

In this paper, a new convex MINLP solution technique — the ESH algorithm — is proposed.

It is loosely based on the ECP algorithm (itself an extension of Kelley's method in [8]) and has some similarities to the supporting hyperplane method in [12]. In the ECP algorithm, MILP problems are solved iteratively until all nonlinear constraints of the MINLP problem are fulfilled to a given tolerance. In each iteration, the feasible region of the MILP problem is reduced by adding cutting planes, and each MILP solution provides a lower bound on the optimal solution of the MINLP problem. How and where cutting planes are generated is important for the efficiency of cutting plane based algorithms. In the ECP algorithm, the solution point of the MILP problem is used directly, whereas in the ESH algorithm only hyperplanes supporting the boundary of the nonlinear feasible region are generated. Two preprocessing steps are also used to rapidly generate supporting hyperplanes: linear programming (LP) problems (instead of MILP problems) are solved, together with a line search strategy for selecting the generation point.


2. The extended supporting hyperplane algorithm

The ESH algorithm, described in this section, has connections to the ECP method [13]; however, instead of cutting planes, it is based on generating supporting hyperplanes. It also uses two preprocessing steps to efficiently obtain a tight linear approximation of the feasible region of the convex MINLP problem to be solved, after which one or a few MILP problems are finally solved to satisfy the integer requirements.

The ESH algorithm can be used to find the optimal solution $x^*$ of the convex MINLP problem

$$x^* = \arg\min_{x \in C \cap L \cap Y} c^T x \qquad \text{(P)}$$

where $x = [x_1, x_2, \ldots, x_N]^T$ is a vector of variables in a bounded set

$$X = \{x \mid \underline{x}_i \le x_i \le \overline{x}_i,\ i = 1, \ldots, N\} \qquad (1)$$

and the feasible set $L \cap C \cap Y$ is defined by

$$C = \{x \mid g_m(x) \le 0,\ m = 1, \ldots, M,\ x \in X\},$$
$$L = \{x \mid Ax \le a,\ Bx = b,\ x \in X\},$$
$$Y = \{x \mid x_i \in \mathbb{Z},\ i \in I_Z,\ x \in X\}. \qquad (2)$$

$X$ is a compact set in the $N$-dimensional Euclidean space, $X \subset \mathbb{R}^N$, restricted by the variable bounds. The sets $L$ and $C$ are the convex regions satisfying the linear and (convex) nonlinear constraints, respectively. If the problem is an NLP problem, $I_Z = \emptyset$ and $Y = X$. If the variable vector $x$ contains integer variables $x_i$ included in the index set $I_Z$, then $Y$ corresponds to the nonrelaxed values these variables can assume. The objective function is written in linear form. In case of a nonlinear convex objective function $f$, a new objective function constraint $f(x) - x_{N+1} \le 0$ is included in $C$ and the objective is to minimize the auxiliary variable $x_{N+1}$.
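The linearization of the objective described above is the standard epigraph reformulation. The sketch below illustrates it on a hypothetical convex objective (sum of squares, not from the paper): the auxiliary variable $x_{N+1}$ becomes the only term in the linear objective, and $f(x) - x_{N+1} \le 0$ joins the nonlinear constraint set.

```python
# Epigraph reformulation sketch: a convex nonlinear objective f(x) is replaced
# by the linear objective c^T x over an extended variable vector, with the
# constraint f(x) - x_{N+1} <= 0 moved into the nonlinear set C.
# The function f below is an illustrative placeholder, not from the paper.

def f(x):
    # example convex objective: sum of squares
    return sum(xi * xi for xi in x)

def make_epigraph(f, n):
    """Return (c, g_obj) for min c^T x_ext with x_ext = (x_1..x_n, x_{n+1})."""
    c = [0.0] * n + [1.0]              # minimize the auxiliary variable x_{n+1}
    def g_obj(x_ext):                  # new constraint: f(x) - x_{n+1} <= 0
        return f(x_ext[:n]) - x_ext[n]
    return c, g_obj

c, g_obj = make_epigraph(f, 2)
# at x = (1, 2) with x_3 = 5 = f(x), the new constraint is active:
print(g_obj([1.0, 2.0, 5.0]))  # 0.0
```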

2.1 NLP step

In the ESH algorithm, an internal point $\tilde{x}_{\mathrm{NLP}}$ is first obtained from the convex NLP problem

$$\tilde{x}_{\mathrm{NLP}} = \arg\min_{x \in X} F(x), \quad \text{where } F(x) := \max_m \{g_m(x)\} \qquad \text{(P-NLP)}$$

using a suitable method [11]. Observe that $F$ is minimized within the region defined by the variable bounds only. Since $F$ is given by a max-function, it is convex if all constraint functions $g_m$ are convex, and generally quasiconvex if the functions $g_m$ give rise to convex level sets.

Note that (P-NLP) may be a nonsmooth problem if $M > 1$, even if all functions $g_m$ are smooth.

Assuming that (P) has a solution, there exists a solution to (P-NLP) such that $F(\tilde{x}_{\mathrm{NLP}}) \le 0$.

After this step, go to the first preprocessing step in Section 2.2. Note that it is not necessary to solve (P-NLP) to optimality if a strictly feasible solution with $F(\tilde{x}_{\mathrm{NLP}}) < 0$ can be obtained more easily.
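One way to handle the possible nonsmoothness of (P-NLP) is the standard smooth reformulation $\min t$ subject to $g_m(x) \le t$, which the following sketch solves with SciPy's SLSQP. The two constraints $g_1$, $g_2$ are illustrative examples, not from the paper.

```python
# Sketch of the NLP step: minimize F(x) = max_m g_m(x) over the box X.
# The nonsmooth max is rewritten as the smooth problem min t s.t. g_m(x) <= t,
# solved with scipy's SLSQP; the constraints g1, g2 are illustrative.
import numpy as np
from scipy.optimize import minimize

def g1(x):  # unit-disc constraint centred at (1, 1)
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2 - 1.0

def g2(x):  # linear convex constraint
    return x[0] + x[1] - 3.0

gs = [g1, g2]

def solve_pnlp(gs, bounds):
    n = len(bounds)
    # variables z = (x_1..x_n, t); objective is t alone
    cons = [{"type": "ineq", "fun": (lambda z, g=g: z[n] - g(z[:n]))} for g in gs]
    z0 = np.zeros(n + 1)
    res = minimize(lambda z: z[n], z0, method="SLSQP",
                   bounds=list(bounds) + [(None, None)], constraints=cons)
    return res.x[:n], res.x[n]   # internal point and its F-value

x_nlp, F_val = solve_pnlp(gs, [(0.0, 2.0), (0.0, 2.0)])
print(F_val < 0)  # a strictly internal point, F(x_NLP) < 0, suffices
```

As noted in the text, the solver can be stopped as soon as $F(\tilde{x}_{\mathrm{NLP}}) < 0$; full optimality is not required.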

2.2 LP1 step

After the solution to (P-NLP) is obtained, a first iterative preprocessing step is performed in which simple LP problems are solved (initially in $X$) and a line search procedure is conducted to obtain a tight overestimating set $\Omega_k$ of the convex set $C$. Initially, the counters are $k = 1$ and $J_0 = 0$, the set $\Omega_0 = X$, and the following relaxation of (P), considering only the variable bounds, is solved:

$$\tilde{x}^k_{\mathrm{LP}} = \arg\min_{x \in \Omega_{k-1}} c^T x. \qquad \text{(P-LP1)}$$

Assuming there exists a solution to (P), then either $F(\tilde{x}^k_{\mathrm{LP}}) > 0$ or $F(\tilde{x}^k_{\mathrm{LP}}) \le 0$. In the latter case, stop iterating and go to the LP2 step in Section 2.3. Otherwise, i.e., if $F(\tilde{x}^k_{\mathrm{LP}}) > 0$, the values $F(\tilde{x}_{\mathrm{NLP}})$ and $F(\tilde{x}^k_{\mathrm{LP}})$ have different signs (or $F(\tilde{x}_{\mathrm{NLP}})$ is already equal to zero), and it is possible to obtain points at which to generate new supporting hyperplanes.

The set $\Omega_k$, where $C \subset \Omega_k$, is now defined by the $J_k$ first supporting hyperplanes, i.e.,

$$\Omega_k = \{x \mid l_j(x) \le 0,\ j = 1, \ldots, J_k,\ x \in X\}. \qquad (3)$$

After solving (P-LP1), a line search is performed between $\tilde{x}_{\mathrm{NLP}}$ and $\tilde{x}^k_{\mathrm{LP}}$, i.e., the equation

$$x^k = \lambda \tilde{x}_{\mathrm{NLP}} + (1 - \lambda) \tilde{x}^k_{\mathrm{LP}} \qquad (4)$$

is used to find the value $\lambda = \lambda_F \in [0, 1]$ for which $F(x^k) = 0$. (In case $F(\tilde{x}_{\mathrm{NLP}}) = 0$, then $\lambda_F = 1$.) At the point $x^k$, a supporting hyperplane

$$l_k(x) = F(x^k) + \xi_F(x^k)^T (x - x^k) \le 0 \qquad (5)$$

is generated and added to $\Omega_k$. Here $\xi_F(x^k)$ is a gradient or subgradient of the corresponding function $F$ at $x^k$. The counter $J_k$ is increased by one if the line search is performed on $F$, in which case only one supporting hyperplane is created, for $F$. Supporting hyperplanes can also be added for other constraints with $g_m(x^k) > 0$; from the line search, it can be observed that $0 < \lambda_m \le \lambda_F$ for a violated constraint. If supporting hyperplanes are generated for a certain number of violated constraints (where $g_m(\tilde{x}^k_{\mathrm{LP}}) > 0$), they can be selected based on decreasing $\lambda_m$-values (from equation (4)), starting from $\lambda_m \le \lambda_F$. The number of hyperplanes added at iteration $k$ is $J_k - J_{k-1}$, where $J_k$ is the total number of hyperplanes in $\Omega_k$.
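The line search (4) and the hyperplane (5) can be sketched as follows, using a single illustrative constraint $F(x) = x_1^2 + x_2^2 - 1$ (not from the paper); bisection finds the boundary point, and the gradient gives the cut coefficients.

```python
# Sketch of the line search (4) and hyperplane generation (5): bisection finds
# lambda with F(x^k) = 0 on the segment between the internal point and the LP
# solution, then the supporting hyperplane l_k(x) <= 0 is built at x^k.
import numpy as np

def F(x):                      # illustrative convex constraint: unit disc
    return x[0] ** 2 + x[1] ** 2 - 1.0

def gradF(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

def line_search(F, x_int, x_lp, tol=1e-10):
    """Bisection for lambda in [0, 1] with F(lam*x_int + (1-lam)*x_lp) = 0.
    Requires F(x_int) < 0 and F(x_lp) > 0."""
    # lam = 1 gives the internal point (F < 0); lam = 0 the LP point (F > 0)
    a, b = 0.0, 1.0
    while b - a > tol:
        mid = 0.5 * (a + b)
        if F(mid * x_int + (1.0 - mid) * x_lp) > 0.0:
            a = mid            # still outside the feasible region
        else:
            b = mid
    lam = 0.5 * (a + b)
    return lam * x_int + (1.0 - lam) * x_lp

def hyperplane(F, gradF, xk):
    """Coefficients (w, w0) of l_k(x) = F(xk) + gradF(xk)^T (x - xk),
    rewritten as w @ x + w0 <= 0."""
    w = gradF(xk)
    w0 = F(xk) - w @ xk
    return w, w0

x_int = np.array([0.0, 0.0])       # internal point: F = -1 < 0
x_lp = np.array([2.0, 0.0])        # external LP point: F = 3 > 0
xk = line_search(F, x_int, x_lp)   # boundary point, approximately (1, 0)
w, w0 = hyperplane(F, gradF, xk)
print(w @ x_int + w0 <= 0.0)       # the internal point satisfies the cut
```

Because $F(x^k) \approx 0$ at the line-search point, the resulting cut supports the feasible region at its boundary rather than cutting through it, which is the key difference from the ECP cutting planes.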

The problem (P-LP1) is repeatedly resolved (increasing the counter $k$ by one for the next iteration) until a maximum number of iterations is reached, i.e., $k > K_{\mathrm{LP1}}$, or until $F(\tilde{x}^k_{\mathrm{LP}}) < \epsilon_{\mathrm{LP1}}$, or until $\Omega_k$ has reached a maximum number of supporting hyperplanes, i.e., $J_k > J_{\mathrm{LP1}}$. Then continue to the LP2 step in Section 2.3.
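The whole LP1 loop can be sketched with `scipy.optimize.linprog`: solve the LP over the current polyhedral overestimate, stop when the tolerance is met, otherwise add a supporting hyperplane at the line-search point. The problem data ($c$, $F$, internal point, bounds) are illustrative, not from the paper.

```python
# Minimal LP1-step sketch: solve min c^T x over the current polyhedral
# overestimate Omega_k (variable bounds plus accumulated hyperplanes) and
# add a supporting hyperplane at the line-search point until F < epsilon.
import numpy as np
from scipy.optimize import linprog

def F(x):                      # single illustrative convex constraint: unit disc
    return x[0] ** 2 + x[1] ** 2 - 1.0

def gradF(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

c = np.array([-1.0, -1.0])     # minimize -x1 - x2
bounds = [(-2.0, 2.0), (-2.0, 2.0)]
x_int = np.zeros(2)            # internal point with F < 0
A, b = [], []                  # hyperplanes l_j(x) <= 0 stored as A x <= b

for k in range(60):
    res = linprog(c, A_ub=np.array(A) if A else None,
                  b_ub=np.array(b) if b else None, bounds=bounds)
    x_lp = res.x
    if F(x_lp) <= 1e-5:        # epsilon_LP1 tolerance reached
        break
    # bisection line search for the boundary point on [x_int, x_lp]
    a_, b_ = 0.0, 1.0
    for _ in range(60):
        m = 0.5 * (a_ + b_)
        if F(m * x_int + (1.0 - m) * x_lp) > 0.0:
            a_ = m
        else:
            b_ = m
    xk = a_ * x_int + (1.0 - a_) * x_lp
    w = gradF(xk)              # hyperplane (5): F(xk) + w^T (x - xk) <= 0
    A.append(w)
    b.append(w @ xk - F(xk))
print(np.round(x_lp, 3))       # approaches (1/sqrt(2), 1/sqrt(2))
```

Each LP solution is a lower-bounding point outside $C$; the cuts generated at boundary points shrink the overestimate toward the nonlinear feasible region.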

2.3 LP2 step

In this preprocessing step, a problem corresponding to (P-LP1), with the linear constraints in $L$ included, is solved:

$$\tilde{x}^k_{\mathrm{LP}} = \arg\min_{x \in \Omega_{k-1} \cap L} c^T x. \qquad \text{(P-LP2)}$$

The solution $\tilde{x}^k_{\mathrm{LP}}$ gives either $F(\tilde{x}^k_{\mathrm{LP}}) > 0$ or $F(\tilde{x}^k_{\mathrm{LP}}) \le 0$. In the latter case, $\tilde{x}^k_{\mathrm{LP}}$ is an optimal solution of (P) if it is a continuous problem; otherwise $\tilde{x}^k_{\mathrm{LP}}$ is an integer-relaxed solution and we continue to the MILP step in Section 2.4. If $F(\tilde{x}^k_{\mathrm{LP}}) > 0$, the same line search procedure and supporting hyperplane generation strategy as in the LP1 step are performed. Then $k$ and $J_k$ are increased, and (P-LP2) is resolved. This continues until a maximum number of iterations is reached, i.e., $k > K_{\mathrm{LP2}}$, or until $F(\tilde{x}^k_{\mathrm{LP}}) < \epsilon_{\mathrm{LP2}}$, or until $\Omega_k$ has reached a maximum number of supporting hyperplanes, i.e., $J_k > J_{\mathrm{LP2}}$. After the preprocessing steps LP1 and LP2 have been performed, the set $L \cap C$ is already tightly overestimated by $\Omega_k$. When solving a convex MINLP problem, the integer requirements must still be considered; this is finally done in the MILP step. In case the original problem is continuous, terminate with $\tilde{x}^k_{\mathrm{LP}}$ as the solution.

2.4 MILP step

In the final step of the ESH algorithm, the integer requirements in (P) are considered by solving MILP relaxations of (P) in $\Omega_k$, $L$ and $Y$. The problems solved in this step are thus defined as

$$\tilde{x}^k_{\mathrm{MILP}} = \arg\min_{x \in \Omega_{k-1} \cap L \cap Y} c^T x. \qquad \text{(P-MILP)}$$

Note that it is not necessary to solve the MILP problem to optimality in each iteration; the final MILP iteration must, however, be solved to optimality to guarantee that the solution is the global optimum. Here the same MILP solution strategy as in [14] can be used.

If the termination criterion $F(\tilde{x}^k_{\mathrm{MILP}}) < \epsilon_{\mathrm{MILP}}$ is not fulfilled, more supporting hyperplanes are added to $\Omega_k$, similarly to the LP1 and LP2 steps, and the counters $k$ and $J_k$ are increased.

If $F(\tilde{x}^k_{\mathrm{MILP}}) < \epsilon_{\mathrm{MILP}}$ and $\tilde{x}^k_{\mathrm{MILP}}$ is a MILP optimal point, i.e., $\tilde{x}^k_{\mathrm{MILP}} \in Y$, then $\tilde{x}^k_{\mathrm{MILP}}$ is the global solution of the original problem (P) (to a tolerance of $\epsilon_{\mathrm{MILP}}$), obtained in a finite number of steps.
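The MILP step can be sketched with `scipy.optimize.milp` in place of a dedicated MILP solver: optimize $c^T x$ over the polyhedral overestimate with integrality enforced, and keep adding supporting hyperplanes at line-search points until the termination criterion holds. The problem data (a disc constraint, two integer variables) are illustrative, not from the paper.

```python
# Hedged sketch of the MILP step: solve (P-MILP) over the current polyhedral
# overestimate plus integrality, and add supporting hyperplanes until
# F(x_MILP) < epsilon_MILP. Problem data are illustrative.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def F(x):                      # illustrative convex constraint: disc, radius 2.5
    return x[0] ** 2 + x[1] ** 2 - 6.25

def gradF(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

c = np.array([-1.0, -1.0])         # maximize x1 + x2
bounds = Bounds([-4.0, -4.0], [4.0, 4.0])
integrality = np.array([1, 1])     # both variables integer (I_Z = {1, 2})
x_int = np.zeros(2)                # internal point, F(x_int) < 0
A, ub = [np.zeros(2)], [1.0]       # dummy row so the constraint set is nonempty

for k in range(20):
    res = milp(c, constraints=LinearConstraint(np.array(A), ub=np.array(ub)),
               integrality=integrality, bounds=bounds)
    x_m = res.x
    if F(x_m) < 1e-6:              # epsilon_MILP reached: x_m is optimal
        break
    a_, b_ = 0.0, 1.0              # line search toward the internal point
    for _ in range(60):
        m = 0.5 * (a_ + b_)
        if F(m * x_int + (1.0 - m) * x_m) > 0.0:
            a_ = m
        else:
            b_ = m
    xk = a_ * x_int + (1.0 - a_) * x_m
    w = gradF(xk)
    A.append(w)                    # cut: w^T x <= w^T xk - F(xk)
    ub.append(w @ xk - F(xk))
print(x_m, F(x_m))
```

Each cut supports $C$ at its boundary and removes the current infeasible integer point, so with finitely many integer points the loop terminates at an integer solution satisfying the nonlinear constraints.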

3. Conclusions

In this paper, an ESH algorithm for convex MINLP problems was presented. It incorporates two preprocessing steps utilizing LP to iteratively refine a set $\Omega$ of supporting hyperplanes, rendering a tighter linear overestimation $\Omega_0 \supseteq \Omega_1 \supseteq \cdots \supseteq \Omega_k \supseteq C \supseteq C \cap L$ of the convex sets $C$ and $C \cap L$. A MINLP optimal solution is finally guaranteed by subsequently solving MILP relaxations including the integer restrictions and adding additional hyperplanes to $\Omega$.

Acknowledgments

Financial support from the Foundation of Åbo Akademi University, as part of the grant for the Center of Excellence in Optimization and Systems Engineering, is gratefully acknowledged.

AL also acknowledges financial support from the Ruth and Nils-Erik Stenbäck Foundation.

References

[1] R. J. Dakin. A tree-search algorithm for mixed integer programming problems. The Computer Journal, 8(3):250–255, 1965.

[2] M. A. Duran and I. E. Grossmann. An outer-approximation algorithm for a class of mixed-integer nonlinear programs. Mathematical Programming, 36(3):307–339, 1986.

[3] V.-P. Eronen, M. M. Mäkelä, and T. Westerlund. Extended cutting plane method for a class of nonsmooth nonconvex MINLP problems. Optimization, (available online):1–21, 2013.

[4] R. Fletcher and S. Leyffer. Solving mixed integer nonlinear programs by outer approximation. Mathematical Programming, 66(1–3):327–349, 1994.

[5] C. A. Floudas and C. E. Gounaris. A review of recent advances in global optimization. Journal of Global Optimization, 45(1):3–38, 2009.

[6] A. M. Geoffrion. Generalized Benders decomposition. Journal of Optimization Theory and Applications, 10(4):237–260, 1972.

[7] I. E. Grossmann. Review of nonlinear mixed-integer and disjunctive programming techniques. Optimization and Engineering, 3(3):227–252, 2002.

[8] J. Kelley, Jr. The cutting-plane method for solving convex programs. Journal of the Society for Industrial and Applied Mathematics, 8(4):703–712, 1960.

[9] S. Leyffer. Integrating SQP and branch-and-bound for mixed integer nonlinear programming. Computational Optimization and Applications, 18(3):295–309, 2001.

[10] A. Lundell, A. Skjäl, and T. Westerlund. A reformulation framework for global optimization. Journal of Global Optimization, 57(1):115–141, 2013.

[11] Y. Nesterov. Introductory lectures on convex optimization: A basic course. Kluwer Academic Publishers, 2004.

[12] A. F. Veinott, Jr. The supporting hyperplane method for unimodal programming. Operations Research, 15(1):147–152, 1967.

[13] T. Westerlund and F. Pettersson. An extended cutting plane method for solving convex MINLP problems. Computers & Chemical Engineering, 19:131–136, 1995.

[14] T. Westerlund and R. Pörn. Solving pseudo-convex mixed-integer problems by cutting plane techniques. Optimization and Engineering, 3:253–280, 2002.

Proceedings of MAGO 2014, pp. 25 – 27.
