
User's Guide for FFSQP Version 3.7: A FORTRAN Code for Solving Constrained Nonlinear Minimax Optimization Problems, Generating Iterates Satisfying All Inequality and Linear Constraints

Jian L. Zhou, André L. Tits, and Craig T. Lawrence
Electrical Engineering Department and Institute for Systems Research
University of Maryland, College Park, MD 20742
Systems Research Center TR-92-107r2

Abstract

FFSQP is a set of FORTRAN subroutines for the minimization of the maximum of a set of smooth objective functions (possibly a single one, or even none at all) subject to general smooth constraints (if there is no objective function, the goal is simply to find a point satisfying the constraints). If the initial guess provided by the user is infeasible for some inequality constraint or some linear equality constraint, FFSQP first generates a feasible point for these constraints; subsequently the successive iterates generated by FFSQP all satisfy these constraints. Nonlinear equality constraints are turned into inequality constraints (to be satisfied by all iterates) and the maximum of the objective functions is replaced by an exact penalty function which penalizes nonlinear equality constraint violations only. The user has the option of either requiring that the modified objective function decrease at each iteration after feasibility for nonlinear inequality and linear constraints has been reached (monotone line search), or requiring a decrease within at most four iterations (nonmonotone line search). He/She must provide subroutines that define the objective functions and constraint functions and may either provide subroutines to compute the gradients of these functions or require that FFSQP estimate them by forward finite differences. FFSQP implements two algorithms based on Sequential Quadratic Programming (SQP), modified so as to generate feasible iterates. In the first one (monotone line search), a certain Armijo type arc search is used with the property that the step of one is eventually accepted, a requirement for superlinear convergence. In the second one the same effect is achieved by means of a nonmonotone search along a straight line. The merit function used in both searches is the maximum of the objective functions if there is no nonlinear equality constraint.

This research was supported in part by NSF's Engineering Research Centers Program No. NSFD-CDR88-03012, by NSF grant No. DMC-88-15996 and by a grant from the Westinghouse Corporation.


Conditions for External Use

1. The FFSQP routines may not be distributed to third parties. Interested parties should contact the authors directly.
2. If modifications are performed on the routines, these modifications will be communicated to the authors. The modified routines will remain the sole property of the authors.
3. Due acknowledgment must be made of the use of the FFSQP routines in research reports or publications. Whenever such reports are released for public access, a copy should be forwarded to the authors.
4. The FFSQP routines may only be used for research and development, unless it has been agreed otherwise with the authors in writing.

User's Guide for FFSQP Version 3.7 Released April 1997

Copyright (c) 1989-1997 by Jian L. Zhou, André L. Tits, and Craig T. Lawrence
All Rights Reserved.

Enquiries should be directed to

Prof. André L. Tits
Electrical Engineering Dept. and Institute for Systems Research
University of Maryland
College Park, MD 20742
U.S.A.

Phone : 301-405-3669
Fax   : 301-405-6707
E-mail: [email protected]


Contents

1 Introduction
2 Description of the Algorithms
3 Specification of Subroutine FFSQP
4 User-Accessible Stopping Criterion and Flags
5 Description of the Output
6 User-Supplied Subroutines
  6.1 Subroutine obj
  6.2 Subroutine constr
  6.3 Subroutine gradob
  6.4 Subroutine gradcn
7 Organization of FFSQP and Main Subroutines
  7.1 Main Subroutines
  7.2 Other Subroutines
  7.3 Reserved Common Blocks
8 Examples
9 Results for Test Problems
10 Programming Tips
11 Trouble-Shooting

1 Introduction

FFSQP (FORTRAN Feasible Sequential Quadratic Programming) is a set of FORTRAN subroutines for the minimization of the maximum of a set of smooth objective functions (possibly a single one or none at all) subject to nonlinear equality and inequality constraints, linear equality and inequality constraints, and simple bounds on the variables. Specifically, FFSQP tackles optimization problems of the form

    (P)    min  max_{i∈I^f} {f_i(x)}    s.t.  x ∈ X

where I^f = {1,...,nf} (I^f = ∅ if nf = 0) and X is the set of points x ∈ R^n satisfying

    bl ≤ x ≤ bu
    g_j(x) ≤ 0,                              j = 1,...,ni
    g_j(x) ≡ ⟨c_{j-ni}, x⟩ - d_{j-ni} ≤ 0,   j = ni+1,...,ti
    h_j(x) = 0,                              j = 1,...,ne
    h_j(x) ≡ ⟨a_{j-ne}, x⟩ - b_{j-ne} = 0,   j = ne+1,...,te

with bl ∈ R^n; bu ∈ R^n; f_i : R^n → R, i = 1,...,nf, smooth; g_j : R^n → R, j = 1,...,ni, nonlinear and smooth; c_j ∈ R^n, d_j ∈ R, j = 1,...,ti-ni; h_j : R^n → R, j = 1,...,ne, nonlinear and smooth; a_j ∈ R^n, b_j ∈ R, j = 1,...,te-ne. Note that it is allowed to have nf = 0, in which case problem (P) is one of finding a point that satisfies a given set of constraints.

If the initial guess provided by the user is infeasible for nonlinear inequality constraints and linear constraints, FFSQP first generates a point satisfying all these constraints by iterating on the problem of minimizing the maximum of these constraints. Then, using Mayne-Polak's scheme [1], nonlinear equality constraints are turned into inequality constraints

    h_j(x) ≤ 0,    j = 1,...,ne

and the original objective function max_{i∈I^f} {f_i(x)} is replaced by the modified objective function

    f_m(x, p) = max_{i∈I^f} {f_i(x)} - Σ_{j=1}^{ne} p_j h_j(x),

where the p_j, j = 1,...,ne, are positive penalty parameters and are iteratively adjusted (if nf = 0, the "max" can be defined to be any constant). The resulting optimization problem therefore involves only linear constraints and nonlinear inequality constraints. Subsequently, the successive iterates generated by FFSQP all satisfy these constraints. (For every j for which h_j(x_0) > 0, where x_0 is the initial point, "h_j(x) = 0" is first replaced by "-h_j(x) = 0" and -h_j is renamed h_j.) The user has

the option of either requiring that the exact penalty function (the maximum value of the objective functions if there are no nonlinear equality constraints) decrease at each iteration after feasibility for the original nonlinear inequality and linear constraints has been reached, or requiring a decrease within at most four iterations. He/She must provide subroutines that define the objective functions and constraint functions and may either provide subroutines to compute the gradients of these functions or require that FFSQP estimate them by forward finite differences.

Thus, FFSQP solves the original problem with nonlinear equality constraints by solving a modified optimization problem with only linear constraints and nonlinear inequality constraints. For the transformed problem, it implements algorithms that are described and analyzed in [2], [3] and [4], with some additional refinements. These algorithms are based on a Sequential Quadratic Programming (SQP) iteration, modified so as to generate feasible iterates. The merit function is the objective function. An Armijo-type line search is used to generate an initial feasible point when required. After obtaining feasibility, either (i) an Armijo-type line search may be used, yielding a monotone decrease of the objective function at each iteration [2]; or (ii) a nonmonotone line search (inspired from [5] and analyzed in [3] and [4] in this context) may be selected, forcing a decrease of the objective function within at most four iterations. In the monotone line search scheme, the SQP direction is first "tilted" if nonlinear constraints are present to yield a feasible direction, then possibly "bent" to ensure that close to a solution the step of one is accepted, a requirement for superlinear convergence. The nonmonotone line search scheme achieves superlinear convergence with no bending of the search direction, thus avoiding function evaluations at auxiliary points and subsequent solution of an additional quadratic program. After turning nonlinear equality constraints into inequality constraints, these algorithms are used directly to solve the modified problems. Certain procedures (see [6]) are adopted to obtain proper values of the p_j's in order to ensure that a solution of the modified problem is also a solution of the original problem, as described below. For the solution of the quadratic programming subproblems, FFSQP is set up to call QLD [7], which is provided with the FFSQP distribution for the user's convenience.
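As a small illustration of this transformation (the constraint and initial point below are hypothetical, not taken from the FFSQP distribution), suppose nf = 1, ne = 1 and the single nonlinear equality constraint is h_1(x) = 1 - x_1^2 - x_2^2 = 0, with initial point x_0 = (0, 0). Since h_1(x_0) = 1 > 0, FFSQP first replaces h_1 by -h_1, i.e., works with h_1(x) = x_1^2 + x_2^2 - 1 ≤ 0, which holds at x_0 and at all subsequent iterates. The modified objective then reads f_m(x, p) = f_1(x) - p_1 (x_1^2 + x_2^2 - 1). Since the penalized term -p_1 h_1(x) is nonnegative at the iterates, decreasing f_m drives h_1(x) toward zero as p_1 is adjusted upward, so that at a solution of the modified problem the original equality is (approximately) recovered.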

2 Description of the Algorithms

The algorithms described and analyzed in [2], [3] and [4] are as follows. Given a feasible iterate x, the basic SQP direction d0 is first computed by solving a standard quadratic program using a positive definite estimate H of the Hessian of the Lagrangian. d0 is a direction of descent for the objective function; it is almost feasible in the sense that it is at worst tangent to the feasible set if there are nonlinear constraints, and it is feasible otherwise. In [2], an essentially arbitrary feasible descent direction d1 = d1(x) is then computed.

Then for a certain scalar ρ = ρ(x) ∈ [0,1], a feasible descent direction d = (1-ρ) d0 + ρ d1 is obtained, asymptotically close to d0. Finally a second order correction d̃ = d̃(x, d, H) is computed, involving auxiliary function evaluations at x + d, and an Armijo type search is performed along the arc x + t d + t^2 d̃. The purpose of d̃ is to allow a full step of one to be taken close to a solution, thus allowing superlinear convergence to take place. Conditions are given in [2] on d1 and d̃ that result in a globally convergent, locally superlinearly convergent algorithm.

The algorithm in [3] is somewhat more sophisticated. An essential difference is that while feasibility is still required, the requirement of decrease of the max objective value is replaced by the weaker requirement that the max objective value at the new point be lower than its maximum over the last four iterates. The main payoff is that the auxiliary function evaluations can be dispensed with, except possibly at the first few iterations. First a direction d1 = d1(x) is computed, which is feasible even at Karush-Kuhn-Tucker points. Then for a certain scalar ρℓ = ρℓ(x) ∈ [0,1], a "local" feasible direction dℓ = (1-ρℓ) d0 + ρℓ d1 is obtained, and at x + dℓ the objective functions are tested and feasibility is checked. If the requirements pointed out above are satisfied, x + dℓ is accepted as next iterate. This will always be the case close to a solution. Whenever x + dℓ is not accepted, a "global" feasible descent direction dg = (1-ρg) d0 + ρg d1 is obtained with ρg = ρg(x) ∈ (0, ρℓ]. A second order correction d̃ = d̃(x, dg, H) is computed the same way as in [2], and a "nonmonotone" search is performed along the arc x + t dg + t^2 d̃. Here the purpose of d̃ is to suitably initialize the sequence for the "four iterate" rule. Conditions are given in [3] on d1, ρℓ, ρg, and d̃ that result in a globally convergent, locally superlinearly convergent algorithm. In [4], the algorithm of [3] is refined for the case of unconstrained minimax problems. The major difference over the algorithm of [3] is that there is no need of d1. As in [3], d̃ is required to initialize superlinear convergence.

The FFSQP implementation corresponds to a specific choice of d1, ρ, d̃, ρℓ, and ρg, with some modifications as follows. If the first algorithm is used, d1 is computed as a function not only of x but also of d0 (thus of H), as it appears beneficial to keep d1 relatively close to d0. In the case of the second algorithm, the construction of dℓ is modified so that the function evaluations at different auxiliary points can be avoided during early iterations (when ρg ≠ ρℓ); the quadratic program that yields d̃ involves only a subset of "active" functions, thus decreasing the number of function evaluations. The details are given below. The analysis in [2], [3] and [4] can be easily extended to these modified algorithms. Also, obvious simplifications are introduced concerning linear constraints: the iterates are allowed (for inequality constraints) or are forced (for equality constraints) to stay on the boundary of these constraints, and these constraints are not checked in the line search. Finally, FFSQP automatically switches to a "phase 1" mode if the initial guess provided by the user is not in the feasible set.

Below we call FFSQP-AL the algorithm with the Armijo line search, and FFSQP-NL the algorithm with the nonmonotone line search. Given I ⊆ I^f, we make use of the notation

    f_I(x) = max_{i∈I} {f_i(x)},

    f'(x, d, p) = max_{i∈I^f} { f_i(x) + ⟨∇f_i(x), d⟩ } - f_{I^f}(x) - Σ_{j=1}^{ne} p_j ⟨∇h_j(x), d⟩,

and

    f̃'_I(x+d, x, d̃, p) = max_{i∈I} { f_i(x+d) + ⟨∇f_i(x), d̃⟩ } - f_I(x) - Σ_{j=1}^{ne} p_j ⟨∇h_j(x), d̃⟩.

If nf = 0 (I^f = ∅) and ne > 0, then

    f'(x, d, p) = - Σ_{j=1}^{ne} p_j ⟨∇h_j(x), d⟩,    f̃'_I(x+d, x, d̃, p) = - Σ_{j=1}^{ne} p_j ⟨∇h_j(x), d̃⟩.

At each iteration k, the quadratic program QP(x_k, H_k, p_k) that yields the SQP direction d0_k is defined at x_k, for H_k symmetric positive definite, by

    min_{d0∈R^n}  (1/2) ⟨d0, H_k d0⟩ + f'(x_k, d0, p_k)
    s.t.  bl ≤ x_k + d0 ≤ bu
          g_j(x_k) + ⟨∇g_j(x_k), d0⟩ ≤ 0,   j = 1,...,ti
          h_j(x_k) + ⟨∇h_j(x_k), d0⟩ ≤ 0,   j = 1,...,ne
          ⟨a_j, x_k + d0⟩ = b_j,            j = 1,...,te-ne.

Let the ζ_{k,j}'s (with Σ_{j=1}^{nf} ζ_{k,j} = 1), ξ_{k,j}'s, λ_{k,j}'s, and μ_{k,j}'s denote the multipliers, for the various objective functions, simple bounds (only n possible active bounds at each iteration), inequality, and equality constraints respectively, associated with this quadratic program. Define the set of active objective functions, for any i such that ζ_{k,i} > 0, by (if nf = 0, i is immaterial and the following set is empty)

    I^f_k(d_k) = { j ∈ I^f : |f_j(x_k) - f_i(x_k)| ≤ 0.2 ||d_k|| ||∇f_j(x_k) - ∇f_i(x_k)|| } ∪ { j ∈ I^f : ζ_{k,j} > 0 }

and the set of active constraints by

    I^g_k(d_k) = { j ∈ {1,...,ti} : |g_j(x_k)| ≤ 0.2 ||d_k|| ||∇g_j(x_k)|| } ∪ { j ∈ {1,...,ti} : λ_{k,j} > 0 }.

Algorithm FFSQP-AL.

6

Parameters. = 0:1, = 0:01, = 0:1, = 0:5, = 2:1, 1 = 2 = 2:5, t = 0:1, 1 = 1, = 2. 2 = 2, Data. x0 2 Rn, 0, e 0 and p0;j = 2 for j = 1; : : : ; ne. Step 0: Initialization. Set k = 0 and H0 = the identity matrix. Set nset = 0. If x0 is infeasible for some constraint other than a nonlinear equality constraint, substitute a feasible point, obtained as discussed below. For j = 1; : : : ; ne, replace hj x by ,hj x whenever hj x0 0. Step 1: Computation of a search arc. i. Compute d0 , the solution of the quadratic program p xk ; Hk ; pk . If kd0 k and QP k k Pne 0 jhj xk j e, stop. If kdk k minf0:5 ; 0:01 m g where m is the machine j =1 e precision and Pn=1 jhj xk j e, go directly to Step 3 iii. If ni + ne = 0 and nf 1; j ~ set dk = d0 and dk = 0 and go to Step 2. If ni + ne = 0 and nf 1, set dk = d0 and k k go to Step 1 iv. ii. Compute d1 by solving the strictly convex quadratic program k

min2R hd0 , d1; d0 , d1i + k 2 k d 2Rn ; s:t: bl xk + d1 bu f 0xk ; d1; pk gj xk + hrgj xk ; d1i ; j = 1; : : : ; ni hcj ; xk + d1 i dj ; j = 1; : : : ; ti , ni hj xk + hrhj xk ; d1i ; j = 1; : : : ; ne haj ; xk + d1i = bj ; j = 1; : : : ; te , ne

1

iii. Set dk = 1 , k d0 + k d1 with k = kd0 k =kd0 k + vk , where vk = max0:5; kd1 k 1 : k k k k k ~ iv. Compute dk by solving the strictly convex quadratic program

~ d2Rn 1 ~ ~ ~ min 2 hdk + d; Hk dk + di + f~I0 f dk xk + dk ; xk ; d; pk k ~ s:t: bl xk + dk + d bu ~ gj xk + dk + hrgj xk ; di , min kdk k; kdk k 2 ; j 2 Ikg dk fj : j nig ~ hcj,ni ; xk + dk + di dj,ni ; j 2 Ikg dk fj : j ni g ~ hj xk + dk + hrhj xk ; di , min kdk k; kdk k 2 ; j = 1; : : : ; ne ~ haj ; xk + dk + di = bj ; j = 1; : : : ; te , ne:

~ ~ If the quadratic program has no solution or if kdk k kdk k, set dk = 0.

7

Step 2. Arc search. Let k = f 0xk ; dk ; pk if ni + ne 6= 0 and k = ,hd0 ; Hk d0 i otherwise. k k 2 Compute tk , the rst number t in the sequence f1; ; ; : : :g satisfying

~ fmxk + tdk + t2 dk ; pk fm xk ; pk + t k ~ gj xk + tdk + t2 dk 0; j = 1; : : : ; ni ~ hcj,n ; xk + tdk + t2 dk i dj,n ; 8j ni & j 62 Ikg dk ~ hj xk + tdk + t2 dk 0; j = 1; : : : ; ne:

i i

Specifically, the line search proceeds as follows. First, the linear constraints that were not used in computing d̃_k are checked until all of them are satisfied, resulting in a stepsize, say, t̄_k. Due to the convexity of linear constraints, these constraints will be satisfied for any t ≤ t̄_k. Then, for t = t̄_k, nonlinear constraints are checked first and, for both objectives and constraints, those with nonzero multipliers in the QP yielding d0_k are evaluated first. For t < t̄_k, it may be more efficient to first check the function that caused the previous trial value of t to be rejected (intuitively, it may be more likely to fail the test again). If this function is a constraint, it will always be checked first, followed by all other constraints, then objectives. If it is an objective however, there are two alternatives available to the user (see mode in § 3): (i) the "problem" objective is checked first, followed by all other objectives, then constraints; or (ii) all constraints are checked first, followed by the "problem" objective, then the other objectives. The former is likely more effective; the latter is helpful when some objectives are not defined outside the feasible set.

Step 3. Updates.

i. If nset 5n and tk t, set Hk+1 = H0 and nset = 0. Otherwise, set nset = nset + 1 and compute a new approximation Hk+1 to the Hessian of the Lagrangian using the BFGS formula with Powell's modi cation 8 . ii. Set xk+1 = xk + tk dk + t2 dk . k~ iii. Solve the unconstrained quadratic problem in

min 2Rne

nf X j =1

k;j rfj xk+1 + k +

ti X j =1

k;j rgj xk+1+ k;j rhj xk+1 +

ne X j =1

te X j =ne+1

j rhj xk+1 ;

2

where the

k;j 's, k , k;j 's and the k;j 's are the K-T multipliers associated with QP xk ; Hk ; pk for the objective functions, variable bounds, linear equality constraints,

8 and inequality constraints respectively.3 Update pk as follows: for j = 1; : : : ; ne, if pk;j + j 1 pk+1;j = and kd0 k minf0:5 ; 0:01p m g k : maxf , ; p g otherwise. 1 j k;j

8

pk;j

iv. Increase k by 1. v. Go back to Step 1.

2

Algorithm FFSQP-NL.

Parameters. = 3:0, = 0:01, = 0:1, = 0:5, = 0:2, = 0:5, = 2:5, C = 0:01, d = 5:0, t = 0:1, 1 = 0:1, 2 = 2, = 2. Data. x0 2 Rn, 0, e 0 and p0;j = 2 for j = 1; : : : ; ne. Step 0: Initialization. Set k = 0, H0 = the identity matrix, and C0 = C: If x0 is infeasible for constraints other than nonlinear equality constraints, substitute a feasible point, obtained as discussed below. Set x,3 = x,2 = x,1 = x0 and nset = 0. For j = 1; : : : ; ne, replace hj x by ,hj x whenever hj x0 0. Step 1: Computation of a new iterate. i. Compute d0 , the solution of quadratic program QP xk ; Hk ; pk . k 0 e If kdk k and Pn=1 jhj xkP e, stop. If kd0 k minf0:5 ; 0:01p m g where m is j j k e the machine precision and n=1 jhj xk j e , go directly to Step 2 iii. If ni + ne = 0 j ~ and nf 1; set dk = d0 and dk = 0 and go to Step 1 viii. If ni + ne = 0 and nf 1; k set ` = g = 0 and go to Step 1 v. k k ii. Compute d1 by solving the strictly convex quadratic program k

min2R kd1k2 + 2 d 2Rn ; s:t: bl xk + d1 bu gj xk + hrgj xk ; d1i ; j = 1; : : : ; ni hcj ; xk + d1 i dj ; j = 1; : : : ; ti , ni hj xk + hrhj xk ; d1i ; j = 1; : : : ; ne haj ; xk + d1i = bj ; j = 1; : : : ; te , ne

1

3

This is a re nement saving much computation and memory of the scheme proposed in 1 .

9

iii. Set vk = minfCk kd0 k2 ; kd0 kg. De ne values g for j = 1; : : : ; ni by k k k;j if gj xk + hrgj xk ; d0 i ,vk k or equal to the maximum in 0; 1 such that

g k;j

equal to zero

gj xk + hrgj xk ; 1 , d0 + d1 i ,vk k k h for j = 1; : : : ; ne. Let otherwise. Similarly, de ne values k;j

` k

= max j=1;:::;nif max

g g; max f h g k;j j =1;:::;ne k;j

:

iv. De ne

as the largest number in 0; ` such that k f 0xk ; 1 , d0 + d1 ; pk f 0xk ; d0 ; pk : k k k ` , set ` = minf ` ; g g: If k 1 & tk,1 1 or k k k k v. Construct a local" direction d`k = 1 , `k d0 + `k d1 : k k Set M = 3, k = f 0xk ; d0 ; pk if ni + ne 6= 0, and M = 2, k = ,hd0 ; Hk d0 i otherwise. k k k If fm xk + d`k ; pk `=0;:::;M ffmxk,`; pk g + k max and

g k

gj xk + d`k 0; j = 1; : : : ; ni

set tk = 1, xk+1 = xk + d` and go to Step 2. k

vi. Construct a global" direction

hj xk + d`k 0; j = 1; : : : ; ne;

~ vii. Compute dk by solving the strictly convex quadratic program 1 ~ ~ ~ min 2 hdg + d; Hk dg + di + f~I0 f dg xk + dg ; xk ; d; pk k k k ~ k k d2Rn s.t. bl xk + dg + d bu k ~ ~ gj xk + dg + hrgj xk ; di , min kdg k; kdg k ; j 2 Ikg dg fj : j ni g k k k k ~ hcj,ni ; xk + dg + di dj,ni ; j 2 Ikg dg fj : j ni g k k ~ hj xk + dg + hrhj xk ; di , min kdg k; kdg k ; j = 1; : : : ; ne k k k ~ haj ; xk + dg + di = bj ; j = 1; : : : ; te , ne: k ~ ~ If the quadratic program has no solution or if kdk k kdg k, set dk = 0. k

dg = 1 , g d0 + g d1 : k k k k k

10

viii. Set M = 3, k = f 0 xk ; dg ; pk if ni + ne 6= 0, and M = 2, k = ,hd0 ; Hk d0 i otherwise. k k k 2 Compute tk , the rst number t in the sequence f1; ; ; : : :g satisfying ~ fm xk + tdg + t2dk ; pk max ffmxk,`; pk g + t k k

`=0;:::;M g + t2 d 0; ~k gj xk + tdk ~ hcj,ni ; xk + tdg + t2 dk i dj,ni ; k ~ hj xk + tdg + t2 dk 0; k = xk + tk dg + t2 dk : k k~

and set xk+1 Speci cally, the line search proceeds as follows. First, the linear constraints that were ~ not used in computing dk are checked until all of them are satis ed, resulting in a stepsize, say, tk . Due to the convexity of linear constraints, these constraints will be satis ed for any t tk . Then, for t = tk , nonlinear constraints are checked rst and, for both objectives and constraints, those with nonzero multipliers in the QP yielding d0 are evaluated rst. For t tk , either the function that caused the previous value k of t to be rejected is checked rst and all functions of the same type objective" or constraint" as the latter will then be checked rst; or constraints will be always checked rst if it is a constraint that caused the previous value of t to be rejected, that constraint will be checked rst; see mode in x 3. Step 2. Updates. i. If nset 5n and tk t, set Hk+1 = H0 and nset = 0. Otherwise, set nset = nset + 1 and compute a new approximation Hk+1 to the Hessian of the Lagrangian using the BFGS formula with Powell's modi cation 8 . ii. If kd0 k d, set Ck+1 = maxf0:5Ck ; C g: Otherwise, if gj xk + d` 0; j = 1; : : : ; ni , k k set Ck+1 = Ck . Otherwise, if ` 1, set Ck+1 = 10Ck . k iii. Solve the unconstrained quadratic problem in

2Rne

j = 1; : : : ; ni j ni & j 62 Ikg dg k j = 1; : : : ; ne

min

nf X

j =1

k;j rfj xk+1 + k +

ti X j =1

k;j rgj xk+1+ k;j rhj xk+1 +

ne X j =1

te X j =ne+1

j rhj xk+1 ;

2

where the

k;j 's, k , k;j 's and the k;j 's are the K-T multipliers associated with QP xk ; Hk ; pk for the objective functions, variable bounds, linear equality constraints, and inequality constraints respectively.4

4

See footnote to corresponding step in description of FFSQP-AL.

11 Update pk as follows: for j = 1; : : : ; ne,

8 :

pk+1;j =

iv. Increase k by 1.

pk;j

if pk;j + j 1 and kd0 k minf0:5 ; 0:01p m g k maxf 1 , j ; pk;j g otherwise.

v. Go back to Step 1.

Remark: The Hessian matrix is reset in both algorithms whenever the stepsize is too small and the updating of the matrix goes through n iterations. This is helpful in some situations where the Hessian matrix becomes singular.

If the initial guess x0 provided by the user is not feasible for some inequality constraint or some linear equality constraint, FFSQP first solves the strictly convex quadratic program

    min_{v∈R^n}  ⟨v, v⟩
    s.t.  bl ≤ x0 + v ≤ bu
          ⟨c_j, x0 + v⟩ ≤ d_j,   j = 1,...,ti-ni
          ⟨a_j, x0 + v⟩ = b_j,   j = 1,...,te-ne.

Then, starting from the point x = x0 + v, it will iterate, using algorithm FFSQP-AL, on the problem

    min_{x∈R^n}  max_{j=1,...,ni} {g_j(x)}
    s.t.  bl ≤ x ≤ bu
          ⟨c_j, x⟩ ≤ d_j,   j = 1,...,ti-ni
          ⟨a_j, x⟩ = b_j,   j = 1,...,te-ne

until max_{j=1,...,ni} {g_j(x)} ≤ 0 is achieved. The corresponding iterate x will then be feasible for all constraints other than nonlinear equality constraints of the original problem.

3 Specification of Subroutine FFSQP

Only a double precision version of FFSQP is currently available. The specification of FFSQP is as follows:


      subroutine FFSQP(nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,
     *                 miter,inform,bigbnd,eps,epseqn,udelta,bl,bu,
     *                 x,f,g,iw,iwsize,w,nwsize,obj,constr,
     *                 gradob,gradcn)
      integer nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,miter,
     *        inform,iwsize,nwsize
      integer iw(iwsize)
      double precision bigbnd,eps,epseqn,udelta
      double precision bl(nparam),bu(nparam),x(nparam),
     *                 f(nf),g(nineq+neq),w(nwsize)
c     double precision bl(1),bu(1),x(1),f(1),g(1),w(1)
c
c     When nf=0 and/or nineq+neq=0, the dimension for f and/or g
c     should be at least 1 due to the F77 standard.  See more
c     details below.
c
      external obj,constr,gradob,gradcn

Important: all real variables and arrays must be declared as double precision in the routine that calls FFSQP. The following are specifications of parameters and workspace.

nparam (Input) Number of free variables, i.e., the dimension of x.

nf (Input) Number of objective functions (nf in the algorithm description), which could possibly be zero.

nineqn (Input) Number (possibly zero) of nonlinear inequality constraints (ni in the algorithm description).

nineq (Input) Total number (possibly equal to nineqn) of inequality constraints (ti in the algorithm description).

neqn (Input) Number (possibly zero) of nonlinear equality constraints (ne in the algorithm description).

neq (Input) Total number (possibly equal to neqn) of equality constraints (te in the algorithm description).

mode (Input) mode = CBA with the following meanings:

  A = 0 : (P) is to be solved.
  A = 1 : (PL1) is to be solved, where (PL1) is defined as follows:

          (PL1)    min  max_{i∈I^f} |f_i(x)|    s.t.  x ∈ X

          with X the same as for (P). It is handled in this code by splitting |f_i(x)| as f_i(x) and -f_i(x) for each i. The user is required to provide only f_i(x) for i ∈ I^f.
  B = 0 : Algorithm FFSQP-AL is selected, resulting in a decrease of the modified objective function at each iteration.
  B = 1 : Algorithm FFSQP-NL is selected, resulting in a decrease of the modified objective function within at most four iterations (see Algorithm FFSQP-NL).
  C = 1 : For t < t̄_k (see the end of the algorithm statement) during the line search, the function that caused the previous value of t to be rejected is checked first, and all functions of the same type ("objective" or "constraint") as the latter will then be checked first. Recommended for most users.
  C = 2 : Constraints will always be checked first at each trial point during the line search. If it is a constraint that caused the previous value of t to be rejected, that constraint will be checked first. Useful when objective functions are not defined or are difficult to evaluate outside of the feasible region; note however that if gradients are evaluated by finite differences, in rare instances, objective functions may be evaluated at infeasible "perturbed" points.
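The digits are combined as mode = 100*C + 10*B + A. A one-line sketch (the particular value is only an illustration consistent with the table above):

c     ordinary problem (P) (A=0), nonmonotone line search FFSQP-NL
c     (B=1), line-search ordering recommended for most users (C=1),
c     i.e. mode = CBA = 110
      mode=110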

iprint (Input) Parameter indicating the desired output (see § 5 for details):
  iprint = 0 : No information except for user-input errors is displayed. This value is imposed during phase 1.
  iprint = 1 : Objective and constraint values at the initial feasible point are displayed. At the end of execution, status (inform), iterate, objective values, constraint values, number of evaluations of objectives and nonlinear constraints, norm of the Kuhn-Tucker vector, and sum of feasibility violation are displayed.
  iprint = 2 : At the end of each iteration, the same information as with iprint = 1 is displayed.
  iprint = 3 : At each iteration, the same information as with iprint = 2, including detailed information on the search direction computation, on the line search, and on the update, is displayed.
  iprint = 10*N + M (N any positive integer, M = 2 or 3) : Information corresponding to iprint = M is displayed at every (10*N)th iteration and at the last iteration.

miter (Input) Maximum number of iterations allowed by the user before termination of execution.

inform (Output) Parameter indicating the status of the execution of FFSQP:
  inform = 0 : Normal termination of execution, in the sense that either the default stopping criterion is satisfied (||d0|| ≤ eps and, if neqn ≠ 0, Σ_{j=1}^{ne} |h_j(x)| ≤ epseqn) or one of the user-supplied stopping criteria is satisfied (see § 4).
  inform = 1 : The user-provided initial guess is infeasible for linear constraints and FFSQP is unable to generate a point satisfying all these constraints.
  inform = 2 : The user-provided initial guess is infeasible for nonlinear inequality constraints and linear constraints, and FFSQP is unable to generate a point satisfying all these constraints.
  inform = 3 : The maximum number miter of iterations has been reached before a solution is obtained.
  inform = 4 : The line search fails to find a new iterate (trial step size being smaller than the machine precision epsmac computed by FFSQP).
  inform = 5 : Failure of the QP solver in attempting to construct d0. A more robust QP solver may succeed.
  inform = 6 : Failure of the QP solver in attempting to construct d1. A more robust QP solver may succeed.
  inform = 7 : Input data are not consistent (with printout indicating the error).
  inform = 8 : Two consecutive iterates are numerically equivalent before a stopping criterion is satisfied.
  inform = 9 : One of the penalty parameters exceeded bigbnd. The algorithm is having trouble satisfying a nonlinear equality constraint.

bigbnd (Input) (see also bl and bu below) It plays the role of Infinite Bound.

eps (Input) Final norm requirement for the Newton direction d0_k (ε in the algorithm description). It must be bigger than the machine precision epsmac (computed by FFSQP). If the user does not have a good feeling of what value should be chosen, a very small number could be provided and iprint = 2 selected, so that the user would be able to keep track of the process of optimization and terminate FFSQP at the appropriate time.

epseqn (Input) Maximum violation of nonlinear equality constraints allowed by the user at an optimal point (ε_e in the algorithm description). It is in effect only if ne ≠ 0 and must be bigger than the machine precision epsmac (computed by FFSQP).

udelta (Input) The perturbation size the user suggests to use in approximating gradients by finite difference. The perturbation size actually used is defined by sign(x_i)*max{udelta, rteps*max(1, |x_i|)} for each component x_i of x (rteps is the square root of epsmac). udelta should be set to zero if the user has no idea how to choose it.
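The perturbation actually applied to component i can be reproduced in the calling program. A minimal sketch (the function name perturb is ours, not part of FFSQP):

      double precision function perturb(xi,udelta,epsmac)
c     perturbation used for forward finite differences:
c     sign(xi)*max{udelta, rteps*max(1,|xi|)},  rteps = sqrt(epsmac)
      double precision xi,udelta,epsmac,rteps
      rteps=dsqrt(epsmac)
      perturb=dsign(dmax1(udelta,rteps*dmax1(1.d0,dabs(xi))),xi)
      return
      end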

bl (Input) Array of dimension nparam containing lower bounds for the components of x. To specify a non-existent lower bound (i.e., bl(j) = -infinity for some j), the value used must satisfy bl(j) ≤ -bigbnd.

bu (Input) Array of dimension nparam containing upper bounds for the components of x. To specify a non-existent upper bound (i.e., bu(j) = +infinity for some j), the value used must satisfy bu(j) ≥ bigbnd.

x (Input) Initial guess. (Output) Iterate at the end of execution.

f Array of dimension max{1, nf}. (Output) If nf ≥ 1, values of the functions f_i, i = 1,...,nf, at x at the end of execution.

g Array of dimension max{1, nineq+neq}. (Output) If ni+ne ≥ 1, values of the constraints at x at the end of execution.

iw Workspace vector of dimension iwsize.

iwsize (Input) Workspace length for iw. It must be at least as big as 6*nparam + 8*max{1, nineq+neq} + 7*max{1, nf} + 30. This estimate is usually very conservative and the smallest suitable value will be displayed if the user-supplied value is too small.

w (Input) Workspace of dimension nwsize. (Output) Estimate of the Lagrange multipliers at the end of execution of phase 2, in the first nparam + nineq + neq + nff entries, where nff = 0 if (in mode) A = 0 and nf = 1, and nff = nf otherwise. They are ordered as ξ's (variables), λ's (inequality constraints), μ's (equality constraints), and ζ's (objective functions). λ_j ≥ 0 for all j = 1,...,ti and μ_j ≥ 0 for all j = 1,...,te. ξ_i > 0 indicates that x_i reaches its upper bound and ξ_i < 0 indicates that x_i reaches its lower bound. When (in mode) A = 0 and nf > 1, ζ_i ≥ 0. When A = 1, ζ_i > 0 refers to +f_i(x) and ζ_i < 0 to -f_i(x).

nwsize (Input) Workspace length for w. It must be at least as big as 4*nparam^2 + 5*max{1, nineq+neq}*nparam + 3*max{1, nf}*nparam + 26*(nparam + max{1, nf}) + 45*max{1, nineq+neq} + 100. This estimate is usually very conservative and the smallest suitable value will be displayed if the user-supplied value is too small. (A small sketch of computing these bounds is given after the obj entry below.)

obj (Input) Name of the user-defined subroutine that computes the value of the objective functions f_i(x), i = 1,...,nf. This name must be declared as external in the calling routine and passed as an argument to FFSQP. The detailed specification is given in § 6.1 below.
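Both workspace bounds above are easily evaluated in the calling program. A minimal sketch (the problem dimensions are illustrative, and the grouping 26*(nparam + max{1,nf}) in the nwsize bound is our reading of the formula above):

      program wssize
c     conservative workspace bounds for FFSQP, as given in the
c     iwsize and nwsize entries above
      integer nparam,nf,nineq,neq,mg,mf,iwsz,nwsz
      nparam=3
      nf=1
      nineq=1
      neq=1
      mg=max(1,nineq+neq)
      mf=max(1,nf)
      iwsz=6*nparam+8*mg+7*mf+30
      nwsz=4*nparam**2+5*mg*nparam+3*mf*nparam
     *     +26*(nparam+mf)+45*mg+100
      write(*,*) 'iwsize >= ',iwsz,'   nwsize >= ',nwsz
      end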

constr (Input) Name of the user-defined subroutine that computes the value of the constraints. This name must be declared as external in the calling routine and passed as an argument to FFSQP. The detailed specification is given in § 6.2 below.

gradob (Input) Name of the subroutine that computes the gradients of the objective functions f_i(x), i = 1,...,nf. This name must be declared as external in the calling routine and passed as an argument to FFSQP. The user must pass the subroutine name grobfd and declare it as external, if he/she wishes that FFSQP evaluate these gradients automatically, by forward finite differences. The detailed specification is given in § 6.3 below.

gradcn (Input) Name of the subroutine that computes the gradients of the constraints. This name must be declared as external in the calling routine and passed as an argument to FFSQP. The user must pass the subroutine name grcnfd and declare it as external, if he/she wishes that FFSQP evaluate these gradients automatically, by forward finite differences. The detailed specification is given in § 6.4 below.

4 User-Accessible Stopping Criterion and Flags

As is clear from the description of the two algorithms, the optimization process normally terminates if both ||d0_k|| ≤ ε and Σ_{j=1}^{ne} |h_j(x_k)| ≤ ε_e are satisfied. Very small values of either of these two parameters may require exceedingly long execution time, depending on the complexity of the underlying problem and the nonlinearity of the various functions. FFSQP allows users to specify three additional parameters that control three pre-selected stopping criteria via the following common block

      double precision objeps,objrep,gLgeps
      logical xisnew
      common /fsqpus/ objeps,objrep,gLgeps,xisnew

if they wish to. If the user does not assign any of these values, FFSQP will never terminate on the corresponding test. FFSQP will perform the corresponding checks at appropriate places during the optimization process and will terminate when either the default stopping criterion is satisfied or one of the following conditions is met:

1. neqn = 0 and |f_prev - f| ≤ objeps, where f_prev and f are the values of the objective function at the previous and at the current iterate respectively.
2. neqn = 0 and |f_prev - f| / |f_prev| ≤ objrep.
3. ||∇_x L|| ≤ gLgeps and Σ_{j=1}^{ne} |h_j(x_k)| ≤ ε_e, where L is the Lagrangian, i.e.,

    ∇_x L(x_k, ζ_k, ξ_k, λ_k, μ_k, p_k) = Σ_{j=1}^{nf} ζ_{k,j} ∇f_j(x_k) + ξ_k + Σ_{j=1}^{ti} λ_{k,j} ∇g_j(x_k)
                                          + Σ_{j=1}^{ne} (μ_{k,j} - p_{k,j}) ∇h_j(x_k) + Σ_{j=ne+1}^{te} μ_{k,j} ∇h_j(x_k),

and ζ_k, ξ_k, λ_k, μ_k are the multipliers from the computation of d0_k.

Note: FFSQP makes use of a block data to initialize the three variables in the common

5 Description of the Output

No output will be displayed before a feasible starting point is obtained. The following information is displayed at the end of execution when iprint = 1 or at each iteration when iprint = 2:

iteration inform x

Total number of iterations iprint = 1 or iteration number iprint = 2.

See x 3. It is displayed only at the end of execution. Iterate. Value of objective functions fix; 8i = 1; : : : ; nf at x. displayed only if nf 1 The maximum value of the set of objective functions i.e., max fix or max jfixj; 8i = 1; : : : ; nf at x. displayed only if B = 1 in mode Largest value of the maximum of the objective functions over the last four see FFSQP-NL iterations including the current one. Values of the constraints at x. Number of evaluations so far of individual scalar objective function fix for 1 i nf : Number of evaluations so far of individual scalar nonlinear constraints. Norm of the Newton direction d0 . k

objectives objmax

objective max4

constraints ncallf

ncallg d0norm

19

ktnorm : Norm of the Kuhn-Tucker vector at the current iteration. The Kuhn-Tucker vector is given by

    ∇L(x_k, ζ_k, ξ_k, λ_k, μ_k, p_k) = Σ_{j=1}^{nf} ζ_{k,j} ∇f_j(x_k) + ξ_k + Σ_{j=1}^{ti} λ_{k,j} ∇g_j(x_k)
                                        + Σ_{j=1}^{ne} (μ_{k,j} - p_{k,j}) ∇h_j(x_k) + Σ_{j=ne+1}^{te} μ_{k,j} ∇h_j(x_k).

SNECV : Sum of the violation of the nonlinear equality constraints at a solution.

For iprint = 3, in addition to the same information as for iprint = 2, the following is printed at every iteration.

Details in the computation of a search direction:

d0 : Quasi-Newton direction d0_k.

d1 : First order direction d1_k.

d1norm : Norm of d1_k.

d : (B = 0 in mode) Feasible descent direction d_k = (1-ρ_k) d0_k + ρ_k d1_k.

dnorm : (B = 0 in mode) Norm of d_k.

rho : (B = 0 in mode) Coefficient ρ_k in constructing d_k.

dl : (B = 1 in mode) Local direction dℓ_k = (1-ρℓ_k) d0_k + ρℓ_k d1_k.

dlnorm : (B = 1 in mode) Norm of dℓ_k.

rhol : (B = 1 in mode) Coefficient ρℓ_k in constructing dℓ_k.

dg : (B = 1 in mode) Global search direction dg_k = (1-ρg_k) d0_k + ρg_k d1_k.

dgnorm : (B = 1 in mode) Norm of dg_k.

rhog : (B = 1 in mode) Coefficient ρg_k in constructing dg_k.

dtilde : Second order correction d̃_k.

dtnorm : Norm of d̃_k.

Details in the line search:

trial step : Trial steplength t in the search direction.

trial point : Trial iterate along the search arc with trial step.

trial objectives : This gives the indices i and the corresponding values of the functions f_i(x) - Σ_{j=1}^{ne} p_j h_j(x) for 1 ≤ i ≤ nf, up to the one which fails in the line search at the trial point. The indices i are not necessarily in the natural order (see remark at the end of Step 2 in FFSQP-AL and at the end of Step 1 viii in FFSQP-NL).

trial penalized objective : This gives the value of the penalized objective function when there is no objective function, i.e., -Σ_{j=1}^{ne} p_j h_j(x).

trial constraints : This gives the indices j and the corresponding values of the nonlinear constraints for 1 ≤ j ≤ ni+ne, up to the one which is not feasible at the trial point. The indices j are not necessarily in the natural order (see remark at the end of Step 2 in FFSQP-AL and at the end of Step 1 viii in FFSQP-NL).

Details in the updates:

delta : Perturbation size for each variable in finite difference gradient computation.

gradf : Gradients of the functions f_i(x), i = 1,...,nf, at the new iterate.

gradg : Gradients of the constraints at the new iterate.

P : Penalty parameters for the nonlinear equality constraints at the new iterate.

psmu : Solution of the least squares problem estimating the K-T multipliers for the nonlinear equality constraints. These values are used to update the penalty parameters.

multipliers : Multiplier estimates ordered as ξ's, λ's, μ's, and ζ's from the quadratic program computing d0_k. λ_j ≥ 0 for all j = 1,...,ti and μ_j ≥ 0 for all j = 1,...,te. ξ_i > 0 indicates that x_i reaches its upper bound and ξ_i < 0 indicates that x_i reaches its lower bound. When in mode A = 0 and nf > 1, ζ_i ≥ 0. When in mode A = 1, ζ_i > 0 refers to +f_i(x) and ζ_i < 0 to -f_i(x) (cf. § 3 under item w). Note: The μ's are not estimates of the equality constraint multipliers at the solution; they are estimates for the corresponding inequality constraints in the modified problem.

hess : Estimate of the Hessian matrix of the Lagrangian.

Ck : The value C_k as defined in Algorithm FFSQP-NL.


6 User-Supplied Subroutines

At least two of the following four Fortran 77 subroutines, namely obj and constr, must be provided by the user in order to define the problem. The names of all four routines can be changed at the user's will, as they are passed as arguments to FFSQP.

6.1 Subroutine obj

The subroutine obj, to be provided by the user, computes the value of the objective functions. A dummy subroutine must be provided (due to Fortran 77 compiling requirement) if nf = 0 (this may happen when the user is only interested in finding a feasible point). The specification of obj for FFSQP is

      subroutine obj(nparam,j,x,fj)
      integer nparam,j
      double precision x(nparam),fj
c
c     for given j, assign to fj the value of the jth objective
c     evaluated at x
c
      return
      end

Arguments:

nparam (Input) Dimension of x.
j (Input) Number of the objective to be computed.
x (Input) Current iterate.
fj (Output) Value of the jth objective function at x.
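As a small illustration of how j selects the objective (the two objective functions below are hypothetical, not from the FFSQP distribution), a two-objective minimax problem (nf = 2) could be coded as:

      subroutine obj(nparam,j,x,fj)
      integer nparam,j
      double precision x(nparam),fj
c     j selects which of the nf objectives is evaluated
      goto (10,20),j
 10   fj=(x(1)-1.d0)**2+x(2)**2
      return
 20   fj=(x(1)+1.d0)**2+x(2)**2
      return
      end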

6.2 Subroutine constr

The subroutine constr, to be provided by the user, computes the value of the constraints. If there are no constraints, a dummy subroutine must be provided anyway (due to Fortran 77 compiling requirement). The specification of constr for FFSQP is as follows

      subroutine constr(nparam,j,x,gj)
      integer nparam,j
      double precision x(nparam),gj
c
c     for given j, assign to gj the value of the jth constraint
c     evaluated at x
c
      return
      end

Arguments:

nparam (Input) Dimension of x.
j (Input) Number of the constraint to be computed.
x (Input) Current iterate.
gj (Output) Value of the jth constraint at x.

The order of the constraints must be as follows. First the nineqn (possibly zero) nonlinear inequality constraints. Then the nineq - nineqn (possibly zero) linear inequality constraints. Finally, the neqn (possibly zero) nonlinear equality constraints, followed by the neq - neqn (possibly zero) linear equality constraints.
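As an illustration of this ordering (the constraint functions below are hypothetical, not part of the FFSQP distribution), a problem with nineqn = 1, nineq = 2, neqn = 0, neq = 1 could be coded as:

      subroutine constr(nparam,j,x,gj)
      integer nparam,j
      double precision x(nparam),gj
c     nineqn=1, nineq=2, neqn=0, neq=1:
c     j=1 : nonlinear inequality   x1**2 + x2**2 - 4 <= 0
c     j=2 : linear inequality      x1 - x2 - 1       <= 0
c     j=3 : linear equality        x1 + x2 - 1        = 0
      goto (10,20,30),j
 10   gj=x(1)**2+x(2)**2-4.d0
      return
 20   gj=x(1)-x(2)-1.d0
      return
 30   gj=x(1)+x(2)-1.d0
      return
      end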

6.3 Subroutine gradob

The subroutine gradob computes the gradients of the objective functions. The user may omit to provide this routine and require that forward finite difference approximation be used by FFSQP, by passing grobfd instead (see argument gradob of FFSQP in § 3). The specification of gradob for FFSQP is as follows

      subroutine gradob(nparam,j,x,gradfj,dummy)
      integer nparam,j
      double precision x(nparam),gradfj(nparam)
      double precision dummy
      external dummy
c
c     assign to gradfj the gradient of the jth objective function
c     evaluated at x
c
      return
      end

Arguments:

nparam (Input) Dimension of x.
j (Input) Number of the objective for which the gradient is to be computed.
x (Input) Current iterate.
gradfj (Output) Gradient of the jth objective function at x.
dummy (Input) Used by grobfd (internally assigned the name of the objective function subroutine by FFSQP). Note that dummy is passed as an argument to gradob to allow for forward finite difference computation of the gradient.

6.4 Subroutine gradcn

The subroutine gradcn computes the gradients of the constraints. The user may omit to provide this routine and require that forward finite difference approximation be used by FFSQP, by passing grcnfd instead (see argument gradcn of FFSQP in § 3). The specification of gradcn for FFSQP is as follows

      subroutine gradcn(nparam,j,x,gradgj,dummy)
      integer nparam,j
      double precision x(nparam),gradgj(nparam)
      double precision dummy
      external dummy
c
c     assign to gradgj the gradient of the jth constraint
c     evaluated at x
c
      return
      end

Arguments:

nparam (Input) Dimension of x.
j (Input) Number of the constraint for which the gradient is to be computed.
x (Input) Current iterate.
gradgj (Output) Gradient of the jth constraint evaluated at x.
dummy (Input) Used by grcnfd (internally assigned the name of the constraint function subroutine by FFSQP). Note that dummy is passed as an argument to gradcn to allow for forward finite difference computation of the gradients.

7 Organization of FFSQP and Main Subroutines

7.1 Main Subroutines

FFSQP first checks for inconsistencies of input parameters using the subroutine check. It then checks if the starting point given by the user satisfies the linear constraints and, if not, generates a point satisfying these constraints using subroutine initpt. It then calls FFSQP1 for generating a point satisfying linear and nonlinear inequality constraints. Finally, it attempts to find a point satisfying the optimality condition, using again FFSQP1.

check : Check that all upper bounds on variables are no smaller than lower bounds; check that all input integers are nonnegative and appropriate (nineq ≥ nineqn, etc.); and check that eps (and, if neqn ≠ 0, epseqn) are at least as large as the machine precision epsmac (computed by FFSQP).

initpt : Attempt to generate a feasible point satisfying simple bounds and all linear constraints.

FFSQP1 : Main subroutine used possibly twice by FFSQP, first for generating a feasible iterate (as explained at the end of § 2) and second for generating an optimal iterate from that feasible iterate.

FFSQP1 uses the following subroutines:

dir : Compute the various directions d0_k, d1_k and d̃_k.

step : Compute a step size along a certain search direction. It is also called to check if x_k + dℓ_k is acceptable in Step 1 (v) of Algorithm FFSQP-NL.

hesian : Perform the Hessian matrix updating.

out : Print the output for iprint = 1 or iprint = 2.

grobfd : (optional) Compute the gradient of an objective function by forward finite differences with mesh size equal to sign(x_i)*max{udelta, rteps*max(1, |x_i|)} for each component x_i of x (rteps is the square root of epsmac, the machine precision computed by FFSQP).

grcnfd : (optional) Compute the gradient of a constraint by forward finite differences with mesh size equal to sign(x_i)*max{udelta, rteps*max(1, |x_i|)} for each component x_i of x (rteps is the square root of epsmac, the machine precision computed by FFSQP).

7.2 Other Subroutines

In addition to QLD, the following subroutines are used:

diagnl matrvc di1 nullvc dqp resign error sbout1 estlam sbout2 fool scaprd fuscmp shift indexs slope matrcp small

7.3 Reserved Common Blocks

The following named common blocks are used in FFSQP and QLD:

fsqpp1 fsqpp2 fsqpp3 fsqpq1 fsqpq2 fsqplo fsqpqp fsqpus CMACHE

8 Examples

The first problem is borrowed from [9] (Problem 32). It involves a single objective function, simple bounds on the variables, nonlinear inequality constraints, and linear equality constraints. The objective function f is defined for x ∈ R^3 by

    f(x) = (x1 + 3 x2 + x3)^2 + 4 (x1 - x2)^2.

The constraints are

    0 ≤ xi,  i = 1,...,3
    x1^3 - 6 x2 - 4 x3 + 3 ≤ 0
    1 - x1 - x2 - x3 = 0.

The feasible initial guess is x0 = (0.1, 0.7, 0.2)^T with corresponding value of the objective function f(x0) = 7.2. The final solution is x* = (0, 0, 1)^T with f(x*) = 1. A suitable main program is as follows.

c
c     problem description
c
      program sampl1
c
      integer iwsize,nwsize,nparam,nf,nineq,neq
      parameter (iwsize=29, nwsize=219)
      parameter (nparam=3, nf=1)
      parameter (nineq=1, neq=1)
      integer iw(iwsize)
      double precision x(nparam),bl(nparam),bu(nparam),
     *                 f(nf+1),g(nineq+neq+1),w(nwsize)
      external obj32,cntr32,grob32,grcn32
c
      integer mode,iprint,miter,nineqn,neqn,inform
      double precision bigbnd,eps,epseqn,udelta
c
      mode=100
      iprint=1
      miter=500
      bigbnd=1.d+10
      eps=1.d-08
      epseqn=0.d0
      udelta=0.d0
c
c     nparam, nf, nineq and neq are set in the parameter statements
c     above
c
      nineqn=1
      neqn=0
c
      bl(1)=0.d0
      bl(2)=0.d0
      bl(3)=0.d0
      bu(1)=bigbnd
      bu(2)=bigbnd
      bu(3)=bigbnd
c
c     give the initial value of x
c
      x(1)=0.1d0
      x(2)=0.7d0
      x(3)=0.2d0
c
      call FFSQP(nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,
     *           miter,inform,bigbnd,eps,epseqn,udelta,bl,bu,x,f,g,
     *           iw,iwsize,w,nwsize,obj32,cntr32,grob32,grcn32)
      end

Following are the subroutines defining the objective and constraints and their gradients.

      subroutine obj32(nparam,j,x,fj)
      integer nparam,j
      double precision x(nparam),fj
c
      fj=(x(1)+3.d0*x(2)+x(3))**2+4.d0*(x(1)-x(2))**2
      return
      end
c
      subroutine grob32(nparam,j,x,gradfj,dummy)
      integer nparam,j
      double precision x(nparam),gradfj(nparam),fa,fb
      external dummy
c
      fa=2.d0*(x(1)+3.d0*x(2)+x(3))
      fb=8.d0*(x(1)-x(2))
      gradfj(1)=fa+fb
      gradfj(2)=fa*3.d0-fb
      gradfj(3)=fa
      return
      end
c
      subroutine cntr32(nparam,j,x,gj)
      integer nparam,j
      double precision x(nparam),gj
c
      go to (10,20),j
 10   gj=x(1)**3-6.0d0*x(2)-4.0d0*x(3)+3.d0
      return
 20   gj=1.0d0-x(1)-x(2)-x(3)
      return
      end

28

c subroutine grcn32nparam,j,x,gradgj,dummy integer nparam,j double c go to 10,20,j 10 gradgj1=3.d0*x1**2 gradgj2=-6.d0 gradgj3=-4.d0 return 20 gradgj1=-1.d0 gradgj2=-1.d0 gradgj3=-1.d0 return end precision xnparam,gradgjnparam external dummy

The le containing the user-provided subroutines is then compiled together with fsqpd.f and qld.f. After running the algorithm on a SUN 4 SPARC station 1, the following output is obtained:

FFSQP Version 3.7 Released April 1997

Copyright c 1989 --- 1997 J.L. Zhou, A.L. Tits, and C.T. Lawrence All Rights Reserved

The given initial point is feasible for inequality constraints and linear equality constraints: x 0.10000000000000E+00 0.70000000000000E+00 0.20000000000000E+00 objectives constraints 0.72000000000000E+01 -0.19990000000000E+01 0.55511151231258E-16


iteration x 3 -0.98607613152626E-31 0.00000000000000E+00 0.10000000000000E+01 objectives constraints d0norm ktnorm ncallf ncallg 0.10000000000000E+01 -0.10000000000000E+01 0.00000000000000E+00 0.13945222387368E-30 0.10609826585190E-29 3 5

inform

0

Normal termination: You have obtained a solution !!

Our second example is taken from example 6 in [10]. The problem is as follows.

    min_{x∈R^6}  max_{i=1,...,163} |f_i(x)|
    s.t.  -x1 + s ≤ 0
          x1 - x2 + s ≤ 0
          x2 - x3 + s ≤ 0
          x3 - x4 + s ≤ 0
          x4 - x5 + s ≤ 0
          x5 - x6 + s ≤ 0
          x6 - 3.5 + s ≤ 0,

where

    f_i(x) = 1/15 + (2/15) Σ_{j=1}^{6} cos(2π x_j sin θ_i) + (1/15) cos(7π sin θ_i),
    θ_i = (π/180)(8.5 + 0.5 i),  i = 1,...,163,
    s = 0.425.

The feasible initial guess is x0 = (0.5, 1, 1.5, 2, 2.5, 3)^T with the corresponding value of the objective function max_{i=1,...,163} |f_i(x0)| = 0.22051991555531. A suitable main program is as follows.

c
c     problem description
c
      program sampl2
c
      integer iwsize,nwsize,nparam,nf,nineq,neq
      parameter (iwsize=1029, nwsize=7693)
      parameter (nparam=6, nf=163)
      parameter (nineq=7, neq=0)
      integer iw(iwsize)
      double precision x(nparam),bl(nparam),bu(nparam),
     *                 f(nf+1),g(nineq+neq+1),w(nwsize)
      external objmad,cnmad,grobfd,grcnfd
c
      integer mode,iprint,miter,nineqn,neqn,inform
      double precision bigbnd,eps,epseqn,udelta
c
      mode=111
      iprint=1
      miter=500
      bigbnd=1.d+10
      eps=1.0d-08
      epseqn=0.d0
      udelta=0.d0
c
c     nparam, nf, nineq and neq are set in the parameter statements
c     above
c
      nineqn=0
      neqn=0
c
      bl(1)=-bigbnd
      bl(2)=-bigbnd
      bl(3)=-bigbnd
      bl(4)=-bigbnd
      bl(5)=-bigbnd
      bl(6)=-bigbnd
      bu(1)=bigbnd
      bu(2)=bigbnd
      bu(3)=bigbnd
      bu(4)=bigbnd
      bu(5)=bigbnd
      bu(6)=bigbnd
c
c     give the initial value of x
c
      x(1)=0.5d0
      x(2)=1.d0
      x(3)=1.5d0
      x(4)=2.d0
      x(5)=2.5d0
      x(6)=3.d0
c
      call FFSQP(nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,
     *           miter,inform,bigbnd,eps,epseqn,udelta,bl,bu,x,f,g,
     *           iw,iwsize,w,nwsize,objmad,cnmad,grobfd,grcnfd)
      stop
      end

We choose to compute the gradients of the functions by means of finite difference approximation. Thus only the subroutines that define the objectives and constraints are needed, as follows.

      subroutine objmad(nparam,j,x,fj)
      integer nparam,j,i
      double precision x(nparam),theta,pi,fj
c
      pi=3.14159265358979d0
      theta=pi*(8.5d0+dble(j)*0.5d0)/180.d0
      fj=0.d0
      do 10 i=1,6
 10     fj=fj+dcos(2.d0*pi*x(i)*dsin(theta))
      fj=(2.d0*fj+dcos(2.d0*pi*3.5d0*dsin(theta)))/15.d0
     *   +1.d0/15.d0
      return
      end
c
      subroutine cnmad(nparam,j,x,gj)
      integer nparam,j
      double precision x(nparam),ss,gj
c
      ss=0.425d0
      goto (10,20,30,40,50,60,70),j
 10   gj=ss-x(1)
      return
 20   gj=ss+x(1)-x(2)
      return
 30   gj=ss+x(2)-x(3)
      return
 40   gj=ss+x(3)-x(4)
      return
 50   gj=ss+x(4)-x(5)
      return
 60   gj=ss+x(5)-x(6)
      return
 70   gj=ss+x(6)-3.5d0
      return
      end

After running the algorithm on a SUN 4 SPARC station 1, the following output is obtained (the results for the set of objectives have been deleted to save space):

FFSQP Version 3.7

Released April 1997

Copyright c 1989 --- 1997 J.L. Zhou, A.L. Tits, and C.T. Lawrence All Rights Reserved

The given initial point is feasible for inequality constraints and linear equality constraints: x 0.50000000000000E+00 0.10000000000000E+01 0.15000000000000E+01 0.20000000000000E+01 0.25000000000000E+01 0.30000000000000E+01


objmax constraints 0.22051986506559E+00 -0.75000000000000E-01 -0.75000000000000E-01 -0.75000000000000E-01 -0.75000000000000E-01 -0.75000000000000E-01 -0.75000000000000E-01 -0.75000000000000E-01

iteration x

7 0.42500000000000E+00 0.85000000000000E+00 0.12750000000000E+01 0.17000000000000E+01 0.21840763196688E+01 0.28732755096448E+01

objective max4 objmax constraints

0.11421841317792E+00 0.11310472749825E+00 0.00000000000000E+00 0.00000000000000E+00 0.00000000000000E+00 0.00000000000000E+00 -0.59076319668800E-01 -0.26419918997596E+00 -0.20172449035524E+00

d0norm ktnorm ncallf

0.15659306304681E-09 0.20560174107850E-10 1141

inform

0

Normal termination: You have obtained a solution !!

Our third example is borrowed from [9] (Problem 71). It involves both equality and inequality nonlinear constraints and is defined by

    min_{x∈R^4}  x1 x4 (x1 + x2 + x3) + x3
    s.t.  1 ≤ xi ≤ 5,  i = 1,...,4
          x1 x2 x3 x4 - 25 ≥ 0
          x1^2 + x2^2 + x3^2 + x4^2 - 40 = 0.

The feasible initial guess is x0 = (1, 5, 5, 1)^T with the corresponding value of the objective function f(x0) = 16. A suitable program that invokes FFSQP to solve this problem is given below.

c
c     problem description
c
      program sampl3
c
      integer iwsize,nwsize,nparam,nf,nineq,neq
      parameter (iwsize=33, nwsize=284)
      parameter (nparam=4, nf=1)
      parameter (nineq=1, neq=1)
      integer iw(iwsize)
      double precision x(nparam),bl(nparam),bu(nparam),f(nf+1),
     *                 g(nineq+neq+1),w(nwsize)
      external obj,cntr,gradob,gradcn
c
      integer mode,iprint,miter,neqn,nineqn,inform
      double precision bigbnd,eps,epseqn,udelta
c
      mode=100
      iprint=1
      miter=500
      bigbnd=1.d+10
      eps=1.0d-07
      epseqn=7.d-06
      udelta=0.d0
c
      neqn=1
      nineqn=1
c
      bl(1)=1.d0
      bl(2)=1.d0
      bl(3)=1.d0
      bl(4)=1.d0
      bu(1)=5.d0
      bu(2)=5.d0
      bu(3)=5.d0
      bu(4)=5.d0
c
c     give the initial value of x
c
      x(1)=1.d0
      x(2)=5.d0
      x(3)=5.d0
      x(4)=1.d0
c
      call FFSQP(nparam,nf,nineqn,nineq,neqn,neq,mode,iprint,
     *           miter,inform,bigbnd,eps,epseqn,udelta,bl,bu,x,f,g,
     *           iw,iwsize,w,nwsize,obj,cntr,gradob,gradcn)
      end

Following are the subroutines that define the objective, constraints and their gradients.

      subroutine obj(nparam,j,x,fj)
      integer nparam,j
      double precision x(nparam),fj
c
      fj=x(1)*x(4)*(x(1)+x(2)+x(3))+x(3)
      return
      end
c
      subroutine gradob(nparam,j,x,gradfj,dummy)
      integer nparam,j
      double precision x(nparam),gradfj(nparam)
      external dummy
c
      gradfj(1)=x(4)*(x(1)+x(2)+x(3))+x(1)*x(4)
      gradfj(2)=x(1)*x(4)
      gradfj(3)=x(1)*x(4)+1.d0
      gradfj(4)=x(1)*(x(1)+x(2)+x(3))
      return
      end
c
      subroutine cntr(nparam,j,x,gj)
      integer nparam,j
      double precision x(nparam),gj
c
      goto (10,20),j
 10   gj=25.d0-x(1)*x(2)*x(3)*x(4)
      return
 20   gj=x(1)**2+x(2)**2+x(3)**2+x(4)**2-40.d0
      return
      end
c
      subroutine gradcn(nparam,j,x,gradgj,dummy)
      integer nparam,j
      double precision x(nparam),gradgj(nparam)
      external dummy
c
      goto (10,20),j
 10   gradgj(1)=-x(2)*x(3)*x(4)
      gradgj(2)=-x(1)*x(3)*x(4)
      gradgj(3)=-x(1)*x(2)*x(4)
      gradgj(4)=-x(1)*x(2)*x(3)
      return
 20   gradgj(1)=2.d0*x(1)
      gradgj(2)=2.d0*x(2)
      gradgj(3)=2.d0*x(3)
      gradgj(4)=2.d0*x(4)
      return
      end

After running the algorithm on a SUN 4 SPARC station 1, the following output is obtained

FFSQP Version 3.7 Released April 1997

Copyright c 1989 --- 1997


J.L. Zhou, A.L. Tits, and C.T. Lawrence All Rights Reserved

The given initial point is feasible for inequality constraints and linear equality constraints: x 0.10000000000000E+01 0.50000000000000E+01 0.50000000000000E+01 0.10000000000000E+01 objectives constraints 0.16000000000000E+02 0.00000000000000E+00 -0.12000000000000E+02

iteration x

7 0.10000000000000E+01 0.47429996304683E+01 0.38211499930642E+01 0.13794082919438E+01

objectives constraints SNECV d0norm ktnorm ncallf ncallg

0.17014017289156E+02 -0.40500935938326E-12 -0.40500935938326E-12 0.40500935938326E-12 0.11204977083373E-07 0.19065826201858E-07 7 30

inform

0

Normal termination: You have obtained a solution !!


9 Results for Test Problems

These results are provided to allow the user to compare FFSQP5 with his her favorite code see also 2 4 and were all obtained with C = 1 in mode. Table 1 contains results obtained for some non-minimax test problems from 9 the same initial points as in 9 were selected. prob indicates the problem number as in 9 , nineqn the number of nonlinear constraints, ncallf the total number of evaluations of the objective function, ncallg the total number of evaluations of the scalar nonlinear constraint functions, iter the total number of iterations, objective the nal value of the objective, d0norm the norm of SQP direction d0 at the nal iterate, eps the norm requirement for d0 in the stopping criterion. In most cases, eps was selected so as to achieve the same eld precision as in 9 . Whether FFSQP-AL 0 or FFSQP-NL 1 is used is indicated in column B". Results obtained on selected minimax problems are summarized in Table 2. Problems bard, davd2, f&r, hettich, and wats are from 11 ; cb2, cb3, r-s, wong and colv are from 12; Examples 5.1-5 the latest test results on problems bard down to wong can be found in 13 ; kiw1 and kiw4 are from 14 results for kiw2 and kiw3 are not reported due to data disparity; mad1 to mad8 are from 10, Examples 1-8 ; polk1 to polk4 are from 15 . Some of these test problems allow one to freely select the number of variables; problems wats-6 and wats-20 correspond to 6 and 20 variables respectively, and mad8-10, mad8-30 and mad8-50 to 10, 30 and 50 variables respectively. All of the above are either unconstrained or linearly constrained minimax problems. Unable to nd nonlinearly constrained minimax test problems in the literature, we constructed problems p43m through p117m from problems 43, 84, 113 and 117 in 9 by removing certain constraints and including instead additional objectives of the form fix = f x + igix where the i's are positive scalars and gix 0: Speci cally, p43m is constructed from problem 43 by taking out the rst two constraints and including two corresponding objectives with i = 15 for both; p84m similarly corresponds to problem 84 without constraints 5 and 6 but with two corresponding additional objectives, with i = 20 for both; for p113m, the rst three linear constraints from problem 113 were turned into objectives, with i = 10 for all; for p117m, the rst two nonlinear constraints were turned into objectives, again with i = 10 for both. The gradients of all the functions were computed by nite di erence approximation except for polk1 through polk4 for which gradients were computed analytically. In Table 2, the meaning of columns B, nparam, nineqn, ncallf,ncallg, iter, d0norm and eps are as in Table 1 but ncallf is the total number of evaluations of scalar objective functions. nf is the number of objective functions in the max, and objmax is the nal value

The results listed in the tables were actually obtained using CFSQP, the C version. The results for FFSQP should be identical.


Table 3 contains results for problems with nonlinear equality constraints from [9]. Most columns are the same as described above. eps is not displayed in the table as it is set to 1.E-4 for all of the problems except p46, where it was 5.E-3, and p27, where it was 1.E-3 (increased due to slow convergence). epseqn is the norm requirement on the values of the equality constraints and is chosen close to the corresponding values in [9]. It can be checked that the second order sufficient conditions of optimality are not satisfied at the known optimal solution for problems 26, 27, 46 and 47.

10 Programming Tips

In both FFSQP-AL and FFSQP-NL, at each trial point in the arc search, evaluation of objectives/constraints is discontinued as soon as it has been found that one of the inequalities in the arc search test fails (see Section 2). With a minor exception (see below), objectives/constraints within a given type (linear equalities, linear inequalities, nonlinear inequalities, nonlinear equalities, objectives) are evaluated in the order in which they are defined in the user-supplied subroutines. In consequence, the user concerned with CPU time will place earlier in his/her list the functions whose evaluation is less expensive. The order in which FFSQP evaluates the various objectives and constraints during the line search varies from trial point to trial point, as the functions deemed more likely to cause rejection of the trial steps are evaluated first. On the other hand, in many applications it is far more efficient to evaluate all (or at least more than one) of the objectives and constraints concurrently, as they are all obtained as byproducts of expensive simulations (e.g., involving finite element computation). This situation can be accommodated by making use of the flag xisnew as outlined in Section 4. Note, however, that this will not be of much help if the gradients are computed via grobfd/grcnfd, as xisnew will be set to .true. by FFSQP with every perturbation of a component of x. As an alternative to grobfd/grcnfd, the user could provide his/her own finite difference gradient computation subroutines (possibly minor modifications of grobfd/grcnfd) with proper handling of xisnew.
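To make the use of xisnew more concrete, the following sketch (not part of FFSQP) shows how a user-supplied objective subroutine might obtain all objective values from one expensive simulation and then serve subsequent calls from a cache. The simulation routine mysim, the cache size nfmax, and the common block /fsqpus/ used here to access xisnew are assumptions made purely for this example; the actual mechanism by which xisnew is shared with FFSQP, and who resets it, should be taken from Section 4, and the argument list follows the obj subroutine described in Section 6.

      subroutine obj(nparam, j, x, fj)
c     Illustrative sketch only: returns the j-th objective value.
c     All objectives are assumed to be byproducts of one expensive
c     simulation (mysim is a hypothetical user routine), so the
c     simulation is rerun only when FFSQP signals a new x through
c     the flag xisnew.
      implicit none
      integer nparam, j
      double precision x(nparam), fj
c     Placeholder common block; use the mechanism documented in
c     Section 4 of this manual.
      logical xisnew
      common /fsqpus/ xisnew
c     nfmax must be at least the number of objective functions.
      integer nfmax
      parameter (nfmax = 10)
      double precision fcache(nfmax)
      save fcache
      if (xisnew) then
c        One simulation fills the whole cache of objective values.
         call mysim(nparam, x, fcache)
c        Mark the cache as valid for the current x.
         xisnew = .false.
      endif
      fj = fcache(j)
      return
      end

A constraint subroutine written along the same lines can share the same cache, so that a single simulation serves all objectives and constraints at a given trial point.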

11 Trouble-Shooting

It is important to keep in mind some limitations of FFSQP. First, like most codes targeted at smooth problems, it is likely to encounter difficulties when confronted with nonsmooth functions such as, for example, functions involving matrix eigenvalues. Second, because FFSQP generates feasible iterates, it may be slow if the feasible set is very "thin" or oddly shaped. Third, concerning equality constraints, if hj(x) ≥ 0 for all x ∈ R^n and if hj(x0) = 0 for some j at the initial point x0, then the interior of the feasible set defined by hj(x) ≤ 0 for such j is empty. This may cause difficulties for FFSQP because, in FFSQP, hj(x) = 0 is directly turned into hj(x) ≤ 0 for such j. The user is advised either to give an initial point that is infeasible for all nonlinear equality constraints, or to change the sign of hj so that hj(x) < 0 can be achieved at some point, for all such nonlinear equality constraints.

A common failure mode for FFSQP, corresponding to inform = 5 or 6, is failure of the QP solver in constructing d0 or d1. This is often due to linear dependence, or near dependence, of the gradients of equality constraints or of active inequality constraints. Sometimes this problem can be circumvented by making use of a more robust, but likely slower, QP solver; we have designed an interface, available upon request, that allows the user to use QPSOL [16] instead of QLD. The user may also want to inspect the Jacobian matrix and identify which constraints are the culprit (a sketch of such a check is given at the end of this section). Eliminating redundant constraints, or formulating the constraints differently without changing the feasible set, may then be the way to go.

Finally, when FFSQP fails in the line search (inform = 4), it is typically due to inaccurate computation of the search direction. Two possible reasons are: (i) insufficient accuracy of the QP solver; again, it may be appropriate to substitute a different QP solver; (ii) insufficient accuracy of the gradient computation, e.g., when gradients are computed by finite differences. A remedy may be to provide analytical gradients or, more astutely, to resort to "automatic differentiation".
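As an illustration of the Jacobian check suggested above for the inform = 5 or 6 case, the following sketch prints the singular values of the matrix whose rows are the gradients of the equality and active inequality constraints at the offending point; a smallest singular value that is tiny relative to the largest one points at (near-)dependent gradients. This is not part of FFSQP: it assumes LAPACK's dgesvd is available, and that the user has already assembled the m-by-n gradient matrix a, for instance from his/her own gradient subroutines or by finite differences; the routine name jacchk and the dimension limit mnmax are made up for the example.

      subroutine jacchk(m, n, a, lda)
c     Illustrative sketch only: prints the singular values of the
c     m x n matrix a whose rows are constraint gradients, using
c     LAPACK's dgesvd.  A nearly zero smallest singular value
c     suggests (near-)linear dependence among the gradients.
c     The contents of a are destroyed by dgesvd.
      implicit none
      integer m, n, lda
      double precision a(lda,n)
      integer mnmax, lwork
      parameter (mnmax = 50, lwork = 10*mnmax)
      double precision s(mnmax), work(lwork), u(1), vt(1)
      integer info, i, k
      if (m .gt. mnmax .or. n .gt. mnmax) then
         write (*,*) 'jacchk: increase mnmax'
         return
      endif
      k = min(m, n)
c     Singular values only; u and vt are not referenced ('N','N').
      call dgesvd('N', 'N', m, n, a, lda, s, u, 1, vt, 1,
     *            work, lwork, info)
      if (info .ne. 0) then
         write (*,*) 'jacchk: dgesvd failed, info =', info
         return
      endif
      write (*,*) 'singular values of constraint gradient matrix:'
      do 10 i = 1, k
         write (*,*) i, s(i)
 10   continue
      if (s(k) .gt. 0.d0) then
         write (*,*) 'ratio largest/smallest =', s(1)/s(k)
      else
         write (*,*) 'smallest singular value is zero'
      endif
      return
      end

If the check exposes a constraint whose gradient is (nearly) a combination of the others, removing it or reformulating it as discussed above will often allow the QP subproblems to be solved.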

Acknowledgment

The authors are indebted to Dr. E.R. Panier for many invaluable comments and suggestions. The authors would also like to acknowledge the many users of FFSQP whose valuable feedback has contributed greatly to the quality of the software.

References

[1] D.Q. Mayne & E. Polak, "Feasible Directions Algorithms for Optimization Problems with Equality and Inequality Constraints," Math. Programming 11 (1976), 67-80.
[2] E.R. Panier & A.L. Tits, "On Combining Feasibility, Descent and Superlinear Convergence in Inequality Constrained Optimization," Math. Programming 59 (1993), 261-276.
[3] J.F. Bonnans, E.R. Panier, A.L. Tits & J.L. Zhou, "Avoiding the Maratos Effect by Means of a Nonmonotone Line Search. II. Inequality Constrained Problems - Feasible Iterates," SIAM J. Numer. Anal. 29 (1992), 1187-1202.
[4] J.L. Zhou & A.L. Tits, "Nonmonotone Line Search for Minimax Problems," J. Optim. Theory Appl. 76 (1993), 455-476.
[5] L. Grippo, F. Lampariello & S. Lucidi, "A Nonmonotone Line Search Technique for Newton's Method," SIAM J. Numer. Anal. 23 (1986), 707-716.

[6] C.T. Lawrence & A.L. Tits, "Nonlinear Equality Constraints in Feasible Sequential Quadratic Programming," Optimization Methods and Software 6 (1996), 265-282.
[7] K. Schittkowski, QLD: A FORTRAN Code for Quadratic Programming, User's Guide, Mathematisches Institut, Universität Bayreuth, Germany, 1986.
[8] M.J.D. Powell, "A Fast Algorithm for Nonlinearly Constrained Optimization Calculations," in Numerical Analysis, Dundee 1977, Lecture Notes in Mathematics 630, G.A. Watson, ed., Springer-Verlag, 1978, 144-157.
[9] W. Hock & K. Schittkowski, Test Examples for Nonlinear Programming Codes, Lecture Notes in Economics and Mathematical Systems 187, Springer-Verlag, 1981.
[10] K. Madsen & H. Schjær-Jacobsen, "Linearly Constrained Minimax Optimization," Math. Programming 14 (1978), 208-223.
[11] G.A. Watson, "The Minimax Solution of an Overdetermined System of Non-linear Equations," J. Inst. Math. Appl. 23 (1979), 167-180.
[12] C. Charalambous & A.R. Conn, "An Efficient Method to Solve the Minimax Problem Directly," SIAM J. Numer. Anal. 15 (1978), 162-187.
[13] A.R. Conn & Y. Li, "An Efficient Algorithm for Nonlinear Minimax Problems," University of Waterloo, Research Report CS-88-41, Waterloo, Ontario N2L 3G1, Canada, November 1989.
[14] K.C. Kiwiel, Methods of Descent in Nondifferentiable Optimization, Lecture Notes in Mathematics 1133, Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1985.
[15] E. Polak, D.Q. Mayne & J.E. Higgins, "A Superlinearly Convergent Algorithm for Minmax Problems," Proceedings of the 28th IEEE Conference on Decision and Control, Tampa, Florida, December 1989.
[16] P.E. Gill, W. Murray, M.A. Saunders & M.H. Wright, "User's Guide for QPSOL Version 3.2: A Fortran Package for Quadratic Programming," Systems Optimization Laboratory, Stanford University, Technical Report SOL 84-6, Stanford, CA 94305, 1984.


prob B nparam nineqn ncallf ncallg iter p12 0 2 1 7 14 7 p29 p30 p31 p32 p33 p34 p43 p44 p51 p57 p66 p67 p70 p76 p84 p85 p86 p93 p100 p110 p113 p117 p118

1 0 1 0 1 0 1 0 1 0 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

3 3 3 3 3 3 4 4 5 2 3 3 4 4 5 5 5 6 7 10 10 15 15

1 1 1 1 2 2 3 0 0 1 2 14 1 0 6 38 0 2 4 0 5 5 0

7 11 11 18 24 9 9 3 3 4 5 7 9 10 11 6 6 8 9 7 7 8 9 21 62 44 39 6 6 4 4 46 28 7 7 15 13 21 18 9 9 12 12 20 18 19 19

12 20 15 35 24 19 17 5 4 11 10 28 24 46 46 0 0

-.300000000E+02 7 -.300000000E+02 10 -.226274170E+02 10 -.226274170E+02

18 24 7 9 3 3 4 5 7 9 8 11 6 6 6 8 3 3 8 9 21 62 31 36 6 6 4 4 45 28 5 6 12 13 14 15 8 8 12 12 19 17 19 19 .100000000E+01 .100000000E+01 .600000000E+01 .600000000E+01 .100000000E+01 .100000000E+01 -.400000000E+01 -.400000000E+01 -.834032443E+00 -.834032445E+00 -.440000000E+02 -.440000000E+02 -.150000000E+02 -.150000000E+02 .193107145E-15 .229528403E-17 .306463061E-01 .306463061E-01 .518163274E+00 .518163274E+00 -.116211927E+04 -.116211927E+04 .940197325E-02 .940197325E-02 -.468181818E+01 -.468181818E+01 -.528033513E+07 -.528033513E+07 -.240503113E+01 -.173019329E+01 -.323486790E+02 -.323486790E+02 .135075968E+03 .135076796E+03 .680630057E+03 .680630106E+03 -.457784697E+02 -.457784697E+02 .243063768E+02 .243064566E+02 .323486790E+02 .323486790E+02 .664820450E+03 .664820450E+03

objective

5 5 30 24 305 868 34 38 0 30 29 1819 1064 0 58 36 102 94 0 108 99 219 93 0

.17E-06 .19E-06 .51E-06 .25E-05 .41E-07 .62E-07 .10E-05 .22E-05 .14E-30 .0 .16E-09 .30E-10 .19E-08 .39E-11 .71E-05 .11E-05 .0 .0 .12E-06 .15E-08 .26E-05 .26E-05 .19E-08 .54E-08 .23E-05 .25E-05 .49E-08 .99E-07 .16E-04 .16E-04 .0 .19E-14 .60E-03 .23E-03 .45E-06 .45E-08 .30E-03 .26E-03 .20E-04 .86E-04 .77E-07 .77E-07 .13E-03 .14E-03 .12E-04 .52E-05 .42E-29 .42E-29

d0norm

.10E-05 .10E-05 .10E-04 .10E-04 .10E-06 .10E-06 .10E-04 .10E-04 .10E-07 .10E-07 .10E-07 .10E-07 .10E-07 .10E-07 .10E-04 .10E-04 .10E-07 .10E-07 .10E-05 .10E-05 .10E-04 .10E-04 .10E-07 .10E-07 .10E-04 .10E-04 .10E-06 .10E-06 .10E-03 .10E-03 .10E-07 .10E-07 .10E-02 .10E-02 .10E-05 .10E-05 .10E-02 .10E-02 .10E-03 .10E-03 .10E-05 .10E-05 .10E-02 .10E-02 .10E-03 .10E-03 .10E-07 .10E-07

eps

Table 1: Results for Inequality Constrained Problems


0 1 cb2 0 1 cb3 0 1 colv 0 1 dav 0 1 fr 0 1 hett 0 1 rs 0 1 wat6 0 1 wat20 0 1 wong 0 1 kiwi1 0 1 kiwi4 0 1 mad1 0 1 mad2 0 1 mad4 0 1 mad5 0 1 mad6 0 1 mad810 0 1 mad830 0 1 mad850 0 1 polk1 0 1 polk2 0 1 polk3 0 1 polk4 0 1 p43m 0 1 p84m 0 1 p113m 0 1 p117m 0 1

prob bard B nparam

3

nineqn

0

15 3 3 6

nf

ncallf

2 2 15 4 2 4 4 6 20 7 5 2 2 2 2 2 6 10 30 50 2 10 11 2 4 5 10 15

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 4 5 3

20 2 5 4 31 31 5 10 2 3 3 3 3 163 18

168 105 30 18 15 15 240 102 342 220 32 20 125 75 71 68 623 433 1953 1023 182 171 159 130 40 23 24 18 21 15 24 21 31 21 1084 1141 291 234 1044 2932 1986 42 22 217 133 236 180 45 24 67 55 17 9 108 84 124 54

ncallg

0

iter

0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

98 2 2 10 3 3 3 4 3

0 0 0 0 0 32 22 20 12 127 105 144 57

8 7 6 6 3 5 21 17 12 11 9 10 13 11 9 12 12 13 32 32 19 26 11 13 9 9 5 6 5 5 5 7 7 7 6 7 10 13 * 17 20 20 11 11 45 46 17 17 8 8 13 15 4 3 14 14 21 17

.50816326E-01 .50816868E-01 .19522245E+01 .19522245E+01 .20000016E+01 .20000000E+01 .32348679E+02 .32348679E+02 .11570644E+03 .11570644E+03 .49489521E+01 .49489521E+01 .24593569E-02 .24593670E-02 -.44000000E+02 -.44000000E+02 .12717343E-01 .12717091E-01 .895507R08,07 .89770948E-07 .68063006E+03 .68063006E+03 .22600162E+02 .22600162E+02 .22204460E-15 .16254000E-08 -.38965952E+00 -.38965951E+00 -.33035714E+00 -.33035714E+00 -.44891079E+00 -.44891077E+00 -.10000000E+01 -.99999971E+00 .11310473E+00 .11310473E+00 .38117396E+00 .38117396E+00 .54762051E+00 .57927622E+00 .57927694E+00 .27182818E+01 .27182818E+01 .54598152E+02 .54598150E+02 .37034827E+01 .37034827E+01 .40993875E-06 .36460425E+00 -.44000000E+02 -.44000000E+02 -.52803351E+07 -.52803351E+07 .24306209E+02 .24306210E+02 .32348679E+02 .32348679E+02

objmax

.63E-10 .42E-05 .10E-06 .82E-06 .75E-06 .94E-09 .66E-06 .16E-05 .10E-05 .47E-06 .28E-06 .16E-06 .20E-06 .81E-06 .19E-06 .30E-07 .19E-05 .18E-08 .13E-05 .15E-05 .27E-05 .40E-05 .37E-06 .60E-06 .26E-07 .47E-07 .40E-10 .89E-10 .42E-08 .21E-07 .85E-08 .12E-07 .91E-11 .29E-06 .20E-10 .16E-09 .24E-15 .19E-09 .53E-08 .42E-08 .16E-06 .27E-05 .42E-05 .27E-05 .22E-09 .29E-05 .37E-05 .76E-08 .18E-05 .18E-05 .48E-05 .0 .17E-07 .39E-05 .41E-05 .16E-05 .39E-05

d0norm

.50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .10E-05 .10E-05 .50E-07 .50E-07 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05 .50E-05

eps

Table 2: Results for Minimax Problems


prob B nparam ncallf ncallg iter p6 0 2 15 26 9 p7 p26 p39 p40 p42 p46 p47 p56 p60 p61 p63 p71 p74 p75 p77 p78 p79 p80 p81 p107 p325

1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1

2 3 4 4 4 5 5 7 3 3 3 4 4 4 5 5 5 5 5 9 2

15 35 15 38 38 14 12 5 5 9 7 33 29 21 24 18 14 8 10 20 10 9 5 7 6 16 41 15 34 13 14 7 8 12 9 17 7 24 8 19 28 6 6

16 49 16 79 38 56 26 27 21 15 12 108 136 146 88 147 60 18 15 57 24 17 7 30 19 93 123 87 102 51 42 44 30 77 35 78 21 108 24 211 220 23 19

10 12 12 31 32 13 12 5 5 6 6 18 29 20 24 15 14 8 10 9 8 8 5 7 6 17 42 16 35 12 13 7 8 12 9 10 7 13 8 15 22 6 6

.795230184E-12 .933789794E-16 -.173205081E+01 -.173205081E+01 .265644172E-13 .234894106E-13 -.100000000E+01 -.100000064E+01 -.250000002E+01 -.250000822E+01 .138578644E+02 .138578645E+02 .143831290E-04 .324777175E-05 .418148076E-11 .156450390E-11 -.345600001E+01 -.345600000E+01 .325682026E-01 .325682007E-01 -.143646142E+03 -.143646142E+03 .961715172E+03 .961715179E+03 .170140173E+02 .170140173E+02 .512649811E+04 .512649811E+04 .517441270E+04 .517441270E+04 .241505129E+00 .241505239E+00 -.291970042E+01 -.291970041E+01 .787768236E-01 .787768274E-01 .539498478E-01 .539498474E-01 .539498479E-01 .539498419E-01 .505501180E+04 .505501180E+04 .379134146E+01 .379134153E+01

objective

.19E-05 .10E-07 .26E-08 .22E-08 .86E-04 .83E-04 .52E-07 .25E-04 .14E-05 .48E-05 .27E-05 .94E-05 .14E-02 .19E-03 .90E-05 .78E-04 .23E-04 .47E-06 .35E-04 .11E-05 .14E-08 .87E-06 .23E-06 .13E-04 .11E-07 .16E-04 .59E-05 .21E-04 .11E-05 .12E-06 .99E-05 .18E-04 .66E-05 .33E-04 .48E-04 .14E-04 .13E-06 .49E-07 .47E-04 .23E-04 .28E-08 .85E-08 .15E-07 .21E-07

d0norm epseqn

.40E-06 .40E-06 .35E-08 .35E-08 .16E-04 .16E-04 .75E-04 .75E-04 .85E-04 .85E-04 .45E-05 .45E-05 .50E-04 .50E-04 .60E-04 .60E-04 .25E-06 .25E-06 .55E-04 .55E-04 .25E-06 .25E-06 .60E-05 .60E-05 .70E-05 .70E-05 .65E-05 .65E-05 .10E-07 .10E-07 .35E-04 .35E-04 .15E-05 .15E-05 .15E-03 .15E-03 .15E-07 .15E-07 .80E-06 .80E-06 .10E-07 .10E-07 .10E-07 .10E-07

.87E-09 .22E-06 .29E-10 .13E-12 .11E-09 .36E-07 .48E-08 .64E-06 .10E-07 .43E-05 .16E-08 .58E-07 .14E-06 .45E-04 .71E-08 .38E-07 .24E-07 .11E-08 .70E-07 .44E-07 .11E-09 .10E-07 .54E-08 .54E-05 .41E-12 .28E-08 .24E-06 .74E-09 .53E-09 .16E-06 .67E-09 .19E-05 .35E-07 .10E-08 .41E-08 .38E-06 .46E-11 .11E-07 .99E-09 .17E-06 .60E-15 .22E-11 .56E-07 .74E-09

SNECV

Table 3: Results for General Problems
