


Research Reports on Mathematical and Computing Sciences Series B : Operations Research Department of Mathematical and Computing Sciences Tokyo Institute of Technology 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan

Discretization and Localization in Successive Convex Relaxation Methods for Nonconvex Quadratic Optimization Problems*
Masakazu Kojima† and Levent Tunçel‡

July 1998, B-341

Abstract. Based on the authors' previous work, which established theoretical foundations of two conceptual successive convex relaxation methods, i.e., the SSDP (Successive Semidefinite Programming) Relaxation Method and the SSILP (Successive Semi-Infinite Linear Programming) Relaxation Method, this paper proposes their implementable variants for general quadratic optimization problems. These problems have a linear objective function c^T x to be maximized over a nonconvex compact feasible region F described by a finite number of quadratic inequalities. We introduce two new techniques, "discretization" and "localization," into the SSDP and SSILP Relaxation Methods. The discretization technique makes it possible to approximate the infinite number of semi-infinite SDPs (or semi-infinite LPs), which appear at each iteration of the original methods, by a finite number of standard SDPs (or standard LPs) with a finite number of linear inequality constraints. We establish:

• Given any open convex set U containing F, an implementable discretization of the SSDP (or SSILP) Relaxation Method generates a compact convex set C such that F ⊆ C ⊆ U in a finite number of iterations.

The localization technique is for the cases where we are only interested in upper bounds on the optimal objective value (for a fixed objective function vector c) but not in a global approximation of the convex hull of F. This technique allows us to generate a convex relaxation of F that is accurate only in certain directions in a neighborhood of the objective direction c. This cuts off redundant work to make the convex relaxation accurate in unnecessary directions. We establish:

• Given any positive number ε, an implementable localization-discretization of the SSDP (or SSILP) Relaxation Method generates an upper bound of the objective values within ε of their maximum in a finite number of iterations.

Key words:

Nonconvex Quadratic Program, Semidefinite Programming, Linear Matrix Inequality, Global Optimization, SDP Relaxation, Semi-Infinite LP Relaxation, Interior-Point Method

* This manuscript was also issued as Research Report 98-34, Department of Combinatorics and Optimization, University of Waterloo, Ontario, Canada, July 1998.
† Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan. e-mail: kojima@is.titech.ac.jp
‡ Department of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada. e-mail: ltuncel@math.uwaterloo.ca

1. Introduction.

We consider a general quadratic maximization problem (abbreviated by QP) having linear and quadratic terms both in its objective function and in its inequality constraints. We can always reduce such a problem to a QP with a linear objective function and quadratic terms only in its inequality constraints. In fact, if we have a quadratic objective function f(x) to be maximized in a given problem, we can replace f(x) by a new variable x_0 and add the inequality x_0 − f(x) ≤ 0 to the constraints. Therefore we deal with the QP of the form

maximize c^T x subject to x ∈ F.   (1)

Here

R^n ≡ the n-dimensional Euclidean space;
a^T ≡ the transposition of a vector a ∈ R^n;
c ∈ R^n;
F ≡ {x ∈ R^n : p(x) ≤ 0 (∀p(·) ∈ P_F)};
P_F ⊆ Q ≡ the set of quadratic functions on R^n.
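As a toy illustration of this reduction (our own example, not an instance from the paper), maximizing the nonconvex quadratic objective x_1 x_2 over the unit disk becomes a QP of the form (1):

```latex
\begin{array}{ll}
\max & x_1 x_2\\
\text{s.t.} & x_1^2 + x_2^2 - 1 \le 0
\end{array}
\qquad\Longrightarrow\qquad
\begin{array}{ll}
\max & x_0\\
\text{s.t.} & x_0 - x_1 x_2 \le 0,\\
            & x_1^2 + x_2^2 - 1 \le 0 .
\end{array}
```

At any maximizer the new constraint is active (x_0 = x_1 x_2), so the two problems have the same optimal value; the objective is now linear and every constraint remains quadratic.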
We note that QP (1) covers various important nonconvex mathematical programs such as general quadratic programs [12, 16, etc.], mixed 0-1 linear and quadratic integer programs [15, etc.], linear complementarity problems [5, etc.], and bilinear matrix inequalities [13, 17, etc.].

Our major concern is a convex relaxation of QP (1):

maximize c^T x subject to x ∈ C,   (2)

where C denotes a compact convex subset of R^n including F. Between QP (1) and its convex relaxation (2), we know that the maximum objective value of (1) is bounded from above by the maximum objective value of (2), and that if a maximizer x̂ ∈ C of (2) lies in F, then it is a maximizer of QP (1). Furthermore, since the objective function c^T x of QP (1) is linear, if we take the convex hull of F, denoted by c.hull(F), for C in (2), then the maximum objective values of QP (1) and its convex relaxation (2) coincide with each other. Throughout the paper we assume that F is compact, and that a compact convex subset C_0 of R^n which includes F is known in advance. The set C_0 will serve as an initial relaxation of c.hull(F).

In their previous paper [10], the authors presented two types of successive convex relaxation methods for approximating c.hull(F), which extend the Lovász–Schrijver lift-and-project procedures [11], with the use of SDP (semidefinite programming) and semi-infinite LP (linear programming) relaxations originally developed for combinatorial optimization problems, to general nonconvex quadratic programs. We will call the first method the SSDP (Successive Semidefinite Programming) Relaxation Method, and the second the SSILP (Successive Semi-Infinite Linear Programming) Relaxation Method. See also [1, 2, 6, 7, 8, 14, 15, 16, 18, 19, 21, etc.] for SDP and semi-infinite LP relaxations. The semi-infinite LP relaxation is called "a reformulation-convexification" in [18]. In [10], the authors established the following theoretical foundation of their methods:



Both of the methods generate a sequence of compact convex sets C_k ⊆ R^n (k = 0, 1, ...) such that

(a) c.hull(F) ⊆ C_{k+1} ⊆ C_k (monotonicity),
(b) if F = ∅ then C_k = ∅ for some finite number k (detecting infeasibility),
(c) if F ≠ ∅ then ∩_{k=1}^{+∞} C_k = c.hull(F) (asymptotic convergence).

Remark 1.1

The paper [10] dealt with a quadratic "minimization" problem, while we deal with a quadratic "maximization" problem (1) for the convenience of the succeeding discussions. This difference is not essential at all. In particular, all the main results in [10] remain unchanged and valid for QP (1) because they are independent of the objective function vector c.

The current paper is a continuation of that study [10] toward an implementation of the SSDP and SSILP Relaxation Methods. A major difficulty in implementing these methods is that we are required to solve an infinite number of semi-infinite SDPs (or semi-infinite LPs) to generate a new approximation C_{k+1} of c.hull(F). Here, by a semi-infinite SDP (or a semi-infinite LP) we mean a semidefinite program (or a linear program, respectively) with infinitely many inequality constraints. Under the current technology, we are not able to solve an infinite number of semi-infinite programs in practice. Another difficulty lies in the slow convergence of C_k to c.hull(F); see the numerical example in Section 7.2 of [10]. Of these two major difficulties, the first is the more crucial: we first need to resolve it to implement the methods at all, and only then can we start discussing various practical issues, including the global and local convergence of the methods and applications of the methods to special cases.

The purpose of this paper is to propose implementable variants of the SSDP and SSILP Relaxation Methods for QP (1). We will not address the slow convergence and possible computational inefficiency of the proposed methods. To make the SSDP and SSILP Relaxation Methods more practical, it seems necessary to combine them with some existing practical techniques such as branch-and-bound and cutting plane methods. But those issues are beyond the scope of the current paper, and they should be future research subjects.
For our theoretical analyses and results, we allow cases where the representation P_F of the feasible region F consists of infinitely many quadratic inequalities. To develop implementable variants of the SSDP and SSILP Relaxation Methods for QP (1), however, we need to assume that P_F is finite. We bring two new techniques, "discretization" and "localization," into the SSDP and SSILP Relaxation Methods. The "discretization" makes it possible to approximate the infinite number of semi-infinite SDPs (or semi-infinite LPs), which appear at each iteration of the method, by a finite number of standard SDPs (or standard LPs, respectively) with a finite number of inequalities. We establish the following result in Theorem 3.3:

• For any open convex subset U of R^n including F, a discretization of the SSDP (or SSILP) Relaxation Method generates a sequence of compact convex sets C_k ⊆ R^n (k = 0, 1, ...) satisfying (a), (b) above and

(c)' if F ≠ ∅ then c.hull(F) ⊆ C_k ⊆ U for some finite number k.

The properties (a), (b), (c) and (c)' mentioned above are independent of the objective function c^T x of QP (1). In other words, the original successive convex relaxation methods and their discretized variant mentioned above do not utilize any information on the objective function c^T x of QP (1). In many cases, however, we are only interested in a good approximate solution of QP (1) and a good upper bound for the objective values of QP (1), but not in an approximation of the entire c.hull(F). For such cases, it is reasonable to consider a convex relaxation of F that is accurate only in certain directions that lie in a neighborhood of the objective direction c, but not necessarily accurate in other directions. The localization technique is developed for generating such a convex relaxation of F. We establish the following result in Theorem 3.5:




• For any ε > 0, a localization-discretization of the SSDP (or SSILP) Relaxation Method generates a sequence of compact convex sets C_k ⊆ R^n (k = 0, 1, ...) satisfying (a), (b) above and

(c)" if F ≠ ∅ then max{c^T x : x ∈ C_k} < max{c^T x : x ∈ F} + ε for some finite number k.

Theoretically, we could impose property (c)" on a discretized variant of the SSDP (or SSILP) Relaxation Method without using the localization technique, but we study the localization technique for practical purposes, because it eliminates some redundant work to make the convex relaxation accurate in unnecessary directions.

Section 2 is devoted to notation, symbols and conditions used throughout the paper. In Section 3, we present our main results, including Theorems 3.3 and 3.5 referred to above. We sketch an outline of the proofs of Theorems 3.3 and 3.5 in Section 4.1, and give a series of fundamental lemmas in Sections 4.2 and 4.3, which are necessary to prove Theorem 3.3 in Section 5 and Theorem 3.5 in Section 6. Section 7 discusses relations between the SSDP and SSILP Relaxation Methods applied to 0-1 quadratic programs and the Lovász–Schrijver lift-and-project procedures [11].

2. Notation, Symbols and Conditions.

Let S^n and S^n_+ denote the set of n × n symmetric matrices and the set of n × n symmetric positive semidefinite matrices, respectively. We often use the notation qf(·; γ, q, Q) to designate the constant term γ, the linear term 2q^T x and the quadratic term x^T Q x involved in a quadratic function p(·) defined on R^n, where γ ∈ R, q ∈ R^n and Q ∈ S^n:

p(x) = qf(x; γ, q, Q) ≡ γ + 2q^T x + x^T Q x (∀x ∈ R^n).

With this convention, we can write the set Q of quadratic functions on R^n, the set Q_+ of convex quadratic functions on R^n and the set L of linear functions on R^n as

Q ≡ {qf(·; γ, q, Q) : γ ∈ R, q ∈ R^n, Q ∈ S^n},
Q_+ ≡ {qf(·; γ, q, Q) : γ ∈ R, q ∈ R^n, Q ∈ S^n_+},
L ≡ {qf(·; γ, q, Q) : γ ∈ R, q ∈ R^n, Q = O},

respectively. We say that P ⊆ Q (or P ⊆ L) is a quadratic (or linear) inequality representation of a closed subset C of R^n if

C = {x ∈ R^n : p(x) ≤ 0 (∀p(·) ∈ P)}

holds.

Concerning QP (1), we use the following notation:

c ∈ R^n; P_F ⊆ Q; F ≡ {x ∈ R^n : p(x) ≤ 0 (∀p(·) ∈ P_F)}; ζ* ≡ sup{c^T x : x ∈ F}.

Here we assume that sup{d^T x : x ∈ ∅} = −∞ for ∀d ∈ R^n. Throughout the paper, we assume:

Condition 2.1

(i) F is compact.

(ii) A compact convex set C_0 ⊆ R^n such that F ⊆ C_0 and

δ_0 ≡ the diameter of C_0 = max{||x′ − x|| : x′, x ∈ C_0} > 0

is known in advance.

(iii) ||c|| = 1.

Note that if δ_0 were zero in (ii), then F would consist of at most one point; hence QP (1) would be trivial. Also, (iii) is not essential, because we may assume it without loss of generality whenever c ≠ 0. In addition to Condition 2.1 above, we need the following condition to implement Algorithms 3.1 and 3.2, which are variants of the SSDP and SSILP Relaxation Methods [10], respectively, on a computer.
Condition 2.2

(iv) The quadratic inequality representation P_F of F is finite.

(v) The compact convex set C_0 containing F (see (ii) above) has a finite convex quadratic (or linear) inequality representation P̃_0:

F ⊆ C_0 ≡ {x ∈ R^n : p(x) ≤ 0 (∀p(·) ∈ P̃_0)} for some finite P̃_0 ⊆ Q_+ (or some finite P̃_0 ⊆ L).   (3)

We remark that all the theoretical analyses and results in Sections 3, 4, 5 and 6 remain valid even when Condition 2.2 is not satisfied, i.e., when the representations P_F and P̃_0 are infinite.

Let

c.hull(A) ≡ the convex hull of A ⊆ R^n;
c.cone(P) ≡ the closure of the convex cone generated by P ⊆ Q;
Q • X ≡ the (trace) inner product of two symmetric matrices Q and X, i.e., Q • X ≡ Σ_i Σ_j Q_ij X_ij.

We let K denote a closed convex cone in R^m and K* ≡ {v ∈ R^m : u^T v ≥ 0 (∀u ∈ K)} its dual. Also let

B(x_c, ρ) ≡ the closed ball with center x_c ∈ R^n and radius ρ > 0 ≡ {x ∈ R^n : ||x − x_c|| ≤ ρ} (∀x_c ∈ R^n and ∀ρ > 0);
B_int(x_c, ρ) ≡ the open ball with center x_c ∈ R^n and radius ρ > 0 ≡ {x ∈ R^n : ||x − x_c|| < ρ} (∀x_c ∈ R^n and ∀ρ > 0);
D ≡ the set of direction vectors with unit length ≡ {d ∈ R^n : ||d|| = 1};
D^n ≡ {A = (a_1, a_2, ..., a_n) : a_j ∈ D (∀j = 1, 2, ..., n)};
D(w, κ) ≡ D ∩ B(w, κ) = {u ∈ D : ||u − w|| ≤ κ} (∀κ > 0 and ∀w ∈ D).

We will regard A ∈ D^n as an n × n matrix.

Let C be a compact subset of R^n. For ∀d, d_1, d_2 ∈ D and ∀x ∈ R^n, define

α(C, d) ≡ sup{d^T x : x ∈ C};
ℓsf(x; C, d) ≡ d^T x − α(C, d);
r2sf(x; C, d_1, d_2) ≡ −(d_1^T x − α(C, d_1))(d_2^T x − α(C, d_2));
r1sf(x; C, d) ≡ r2sf(x; C, d, −d) = −(d^T x − α(C, d))(−d^T x − α(C, −d)).

We call ℓsf(·; C, d) a linear supporting function for C in a direction d ∈ D, r2sf(·; C, d_1, d_2) a rank-2 (quadratic) supporting function for C in a pair of directions d_1, d_2 ∈ D, and r1sf(·; C, d) a rank-1 (quadratic) supporting function for C in a direction d ∈ D. Let

P^L(C, D′) ≡ {ℓsf(·; C, d) : d ∈ D′} (∀D′ ⊆ D);
P^2(C, D_1, D_2) ≡ {r2sf(·; C, d_1, d_2) : d_1 ∈ D_1, d_2 ∈ D_2} (∀D_1, D_2 ⊆ D);
P^1(C, D′) ≡ {r1sf(·; C, d) : d ∈ D′} (∀D′ ⊆ D).

For ∀P ⊆ Q, let

F̂(C_0, P) ≡ {x ∈ C_0 : ∃X ∈ S^n such that the (1+n) × (1+n) matrix [1, x^T; x, X] ∈ S^{1+n}_+ and γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P)}
≡ an SDP relaxation of {x ∈ C_0 : qf(x; γ, q, Q) ≤ 0 (∀qf(·; γ, q, Q) ∈ P)};

F̂^L(C_0, P) ≡ {x ∈ C_0 : ∃X ∈ S^n such that γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P)}
≡ a semi-infinite LP relaxation of {x ∈ C_0 : qf(x; γ, q, Q) ≤ 0 (∀qf(·; γ, q, Q) ∈ P)}.
Throughout the paper, V = (v_1, v_2, ..., v_n) ∈ D^n denotes a fixed n × n nonsingular matrix. We will use the notation

D_V ≡ {v_1, v_2, ..., v_n, −v_1, −v_2, ..., −v_n};
D_I ≡ {e_1, e_2, ..., e_n, −e_1, −e_2, ..., −e_n}.

Let D′ ⊆ D. For ∀τ ≥ 0, a subset D″ of D′ is a τ-net of D′ if for ∀d ∈ D′ there exists a d′ ∈ D″ such that ||d′ − d|| ≤ τ. By definition, if τ > τ′ ≥ 0 and D″ is a τ′-net of D′, then D″ is a τ-net of D′. In particular, D′ itself is a τ-net of D′ for any τ ≥ 0. It should be noted that we can take a finite τ-net D″ of D′ whenever τ > 0. Thus a τ-net D″ of D′ with τ > 0 leads us to a finite discretization of D′.
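One concrete way to build a finite τ-net of the whole sphere D is to normalize the points of a uniform grid on the boundary of the cube [−1, 1]^n; refining the grid shrinks τ. This Python sketch is our own illustration (not a construction from the paper):

```python
import itertools
import math
import random

def cube_grid_net(n, m):
    """Finite subset of D = {d : ||d|| = 1}: normalize every point of an
    m-per-axis grid lying on the boundary of the cube [-1, 1]^n."""
    net = set()
    grid = [2.0 * i / (m - 1) - 1.0 for i in range(m)]
    for p in itertools.product(grid, repeat=n):
        if max(abs(c) for c in p) == 1.0:          # boundary points only
            r = math.sqrt(sum(c * c for c in p))
            net.add(tuple(c / r for c in p))
    return sorted(net)

def empirical_covering_radius(net, n, trials=2000, seed=0):
    """Largest sampled distance from a random unit direction to its nearest
    net point: an empirical lower estimate of the tau of this tau-net."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        r = math.sqrt(sum(c * c for c in d))
        d = [c / r for c in d]
        worst = max(worst, min(math.dist(d, p) for p in net))
    return worst
```

For n = 2 and m = 9 the net consists of 32 unit directions, and the sampled covering radius stays well below 0.5; increasing m roughly halves it, at the price of more directions (and hence more cuts per iteration).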

3. Main Results.

Algorithms 3.1 and 3.2 below are variants of the SSDP (Successive SDP) Relaxation Method given in Section 3 of [10] and the SSILP (Successive Semi-Infinite LP) Relaxation Method given in Section 6 of [10], respectively.

Algorithm 3.1 (SSDP Relaxation Method)

Step 0: Choose a pair of direction sets D_1, D_2 ⊆ D ≡ {d ∈ R^n : ||d|| = 1}. Let k = 0.
Step 1: If C_k = ∅ then stop. Compute ζ_k = sup{c^T x : x ∈ C_k}.
Step 2: Compute α(C_k, d) = sup{d^T x : x ∈ C_k} (∀d ∈ D_1 ∪ D_2).
Step 3-L: Let P_k = P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2).
Step 4: Let

C_{k+1} = F̂(C_0, P_F ∪ P_k) ≡ {x ∈ C_0 : ∃X ∈ S^n such that [1, x^T; x, X] ∈ S^{1+n}_+ and γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P_F ∪ P_k)}.

Step 5: Let k = k + 1, and go to Step 1.
Algorithm 3.2 (SSILP Relaxation Method)

Steps 0, 1, 2 and 3-L: The same as Steps 0, 1, 2 and 3-L of Algorithm 3.1, respectively.
Step 4: Let

C_{k+1} = F̂^L(C_0, P_F ∪ P_k) ≡ {x ∈ C_0 : ∃X ∈ S^n such that γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P_F ∪ P_k)}.
Step 5: The same as Step 5 of Algorithm 3.1.

The difference between the two algorithms lies in their Step 4: Algorithm 3.1 employs the SDP relaxation F̂(C_0, P_F ∪ P_k), while Algorithm 3.2 employs the semi-infinite LP relaxation F̂^L(C_0, P_F ∪ P_k). The SDP relaxation is at least as good as the semi-infinite LP relaxation, i.e., F̂(C_0, P_F ∪ P_k) ⊆ F̂^L(C_0, P_F ∪ P_k). Each algorithm generates a sequence {C_k (k = 0, 1, 2, ...)} of convex subsets of C_0 and a sequence {ζ_k (k = 0, 1, 2, ...)} of real numbers. If we replaced Step 3-L by

Step 3-R: Choose a quadratic inequality representation P_k for C_k,

then Algorithms 3.1 and 3.2 would be identical to the original SSDP Relaxation Method in Section 3 of [10] and the original SSILP Relaxation Method given in Section 6 of [10], respectively. In this case, the choice of direction sets D_1, D_2 ⊆ D in Step 0 and the computation of α(C_k, d) (∀d ∈ D_1 ∪ D_2) in Step 2 would be unnecessary.
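The lifted constraints in Step 4 replace each product x_i x_j by an independent matrix entry X_ij. On the feasible set itself, taking X = x x^T recovers every original quadratic inequality exactly, which is the elementary reason why F is contained in both relaxations. A small self-contained Python check of this identity (our own illustration, not code from the paper):

```python
import random

def qf(x, gamma, q, Q):
    """qf(x; gamma, q, Q) = gamma + 2 q^T x + x^T Q x."""
    n = len(x)
    lin = 2.0 * sum(q[i] * x[i] for i in range(n))
    quad = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return gamma + lin + quad

def lifted(x, X, gamma, q, Q):
    """Lifted form gamma + 2 q^T x + Q . X used in Step 4 of both algorithms."""
    n = len(x)
    lin = 2.0 * sum(q[i] * x[i] for i in range(n))
    dot = sum(Q[i][j] * X[i][j] for i in range(n) for j in range(n))
    return gamma + lin + dot

rng = random.Random(7)
n = 3
x = [rng.uniform(-1, 1) for _ in range(n)]
q = [rng.uniform(-1, 1) for _ in range(n)]
Q = [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
X = [[x[i] * x[j] for j in range(n)] for i in range(n)]  # rank-one lift X = x x^T
assert abs(qf(x, 0.5, q, Q) - lifted(x, X, 0.5, q, Q)) < 1e-12
```

The relaxation arises precisely because the algorithms drop the rank-one condition X = x x^T, keeping only linear constraints in (x, X) (plus, in Algorithm 3.1, positive semidefiniteness of [1, x^T; x, X]).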

In both Algorithms 3.1 and 3.2, P_k = P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2) does not form a quadratic inequality representation of C_k in general, i.e., C_k can be a proper subset of {x ∈ R^n : p(x) ≤ 0 (∀p(·) ∈ P_k)}. But if we take

D_1 = D_2 = D ≡ {d ∈ R^n : ||d|| = 1},

then P_k = P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2) forms a quadratic inequality representation of C_k. Thus we can regard this case as a common special case of Algorithm 3.1 and the original SSDP Relaxation Method given in Section 3 of [10], or a common special case of Algorithm 3.2 and the original SSILP Relaxation Method given in Section 6 of [10]. In this case, we can derive the convergence of the sequence {C_k (k = 0, 1, 2, ...)} to the convex hull c.hull(F) of the feasible region F of QP (1) and the convergence of the sequence {ζ_k (k = 0, 1, 2, ...)} to the optimal value ζ* of QP (1). More precisely, we have that

(a) c.hull(F) ⊆ C_{k+1} ⊆ C_k and ζ* ≤ ζ_{k+1} ≤ ζ_k (∀k = 0, 1, 2, ...) (monotonicity),
(b) if F = ∅ then C_k = ∅ for some finite number k (detecting infeasibility),
(c) if F ≠ ∅ then ∩_{k=1}^{+∞} C_k = c.hull(F) and ζ_k → ζ* as k → ∞ (asymptotic convergence).

See Theorems 3.4 and 6.1 of [10]. Another common special case is obtained by taking

D_1 = D_V ≡ {v_1, v_2, ..., v_n, −v_1, −v_2, ..., −v_n} and D_2 = D.

In this case, D_1 is finite and P_k = P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2) also forms a quadratic inequality representation of C_k (see (i) of Lemma 4.2 in Section 4). We can derive the properties (a), (b) and (c) above (see Corollary 3.4). We should mention, however, that Algorithms 3.1 and 3.2 with this choice of direction sets D_1, D_2 ⊆ D are still impractical, because D_2 = D consists of a continuum of direction vectors:

• Step 2 requires computing a continuum of scalars α(C_k, d) = sup{d^T x : x ∈ C_k} (∀d ∈ D_V ∪ D);
• Step 4 requires a continuum of linear inequalities γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P_F ∪ P_k) to represent C_{k+1}.

To resolve these difficulties in implementing Algorithms 3.1 and 3.2, we need to choose a finite set for D_2 too in Step 0. In Theorem 3.3 below, we will employ a τ-net of D, a finite discretization of D which uniformly approximates D, for the direction set D_2. Even with this compromise, we can maintain the monotonicity property (a) and the detecting-infeasibility property (b). But we can no longer expect the asymptotic convergence property (c), so we need to introduce a notion of approximation of c.hull(F) in a finite number of iterations: given a small convex neighborhood U of F (a small open convex set U containing F), we will regard a convex set C_k as a good approximation of c.hull(F), relative to U, if c.hull(F) ⊆ C_k ⊆ U holds. Now we are ready to state our main results.
Theorem 3.3 Suppose that Condition 2.1 holds. Let U ⊆ R^n be an arbitrary open convex set containing F. There exists a τ > 0 such that if we take a direction set D_1 containing D_V and a τ-net D_2 of D in Step 0 of Algorithm 3.1 (or Algorithm 3.2), then {C_k (k = 0, 1, 2, ...)} and {ζ_k (k = 0, 1, 2, ...)} generated by Algorithm 3.1 (or Algorithm 3.2, respectively) satisfy the properties (a), (b) above and the following property (c)'.

(c)' If F ≠ ∅ then c.hull(F) ⊆ C_k ⊆ U for some finite number k.

We will give a proof of the theorem in Section 5 after preparing several fundamental lemmas in Section 4. As a corollary, we obtain:

Corollary 3.4 Suppose that Condition 2.1 holds. Take

D_1 = D_I ≡ {e_1, e_2, ..., e_n, −e_1, −e_2, ..., −e_n} and D_2 = D

in Step 0 of Algorithm 3.1 (or Algorithm 3.2). Then {C_k (k = 0, 1, 2, ...)} and {ζ_k (k = 0, 1, 2, ...)} generated by Algorithm 3.1 (or Algorithm 3.2, respectively) satisfy the properties (a), (b) and (c).

Proof: Since D itself is a τ-net of D for any τ > 0, the desired result follows from Theorem 3.3.

We will see in Section 7.1 that if we apply Algorithms 3.1 and 3.2 to a 0-1 convex quadratic program using D_1 = D_I and D_2 = D as in the corollary above, then they work in a similar way to the Lovász–Schrijver lift-and-project procedures [11]. See also Remark 3.8.

Since c.hull(F) is compact, given any ε > 0, we can choose a convex neighborhood U of F such that if property (c)' holds then so does the following property

(c)" if F ≠ ∅ then ζ* ≡ max{c^T x : x ∈ F} ≤ ζ_k ≡ max{c^T x : x ∈ C_k} < ζ* + ε for some finite number k.

Usually property (c)' is unnecessary in practice and property (c)" is enough, because we are only interested in a good approximate solution of QP (1) and a good bound for the optimal value, but not in an approximation of the entire convex hull c.hull(F) of the feasible region F of QP (1). For such cases, we present a localization technique for generating a sequence {C_k (k = 0, 1, 2, ...)} of convex relaxations of F that becomes accurate only in certain directions in a neighborhood of the objective direction c (in the sense of (c)" with a sufficiently small positive ε) and cuts off the redundant work of making the convex relaxations accurate in unnecessary directions.
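To make the reduction from (c)' to (c)" concrete (our own filling-in of this step, using Condition 2.1 (iii), ||c|| = 1): take U = c.hull(F) + B_int(0, ε), an open convex set containing F. Every x ∈ U splits as x = y + z with y ∈ c.hull(F) and ||z|| < ε, so

```latex
c^T x \;=\; c^T y + c^T z
\;\le\; \zeta^{*} + \|c\|\,\|z\|
\;<\; \zeta^{*} + \epsilon ,
```

and hence c.hull(F) ⊆ C_k ⊆ U implies ζ_k = max{c^T x : x ∈ C_k} < ζ* + ε, which is exactly (c)".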
Theorem 3.5 Suppose that Condition 2.1 holds. Let ε > 0 and κ > 0. There exists a τ > 0 such that if we take D_1 ⊇ D_V and a τ-net D_2 of D(c, κ) ≡ {d ∈ D : ||d − c|| ≤ κ} in Step 0 of Algorithm 3.1 (or Algorithm 3.2), then {C_k (k = 0, 1, 2, ...)} and {ζ_k (k = 0, 1, 2, ...)} generated by Algorithm 3.1 (or Algorithm 3.2, respectively) satisfy the properties (a), (b) and (c)".

We give a proof of Theorem 3.5 in Section 6. As a corollary, we obtain:
Corollary 3.6 Suppose that Condition 2.1 holds. Let κ > 0. Take

D_1 = D_I ≡ {e_1, e_2, ..., e_n, −e_1, −e_2, ..., −e_n} and D_2 = D(c, κ)

in Step 0 of Algorithm 3.1 (or Algorithm 3.2). Then {C_k (k = 0, 1, 2, ...)} and {ζ_k (k = 0, 1, 2, ...)} generated by Algorithm 3.1 (or Algorithm 3.2, respectively) satisfy the properties (a), (b) and the following property (d).

(d) if F ≠ ∅ then ζ_k converges to ζ* as k → ∞.

Proof: Since D(c, κ) itself is a τ-net of D(c, κ) for any τ > 0, the desired result follows from Theorem 3.5.

In the remainder of this section, we assume that Condition 2.2 holds in addition to Condition 2.1, and that we take finite direction sets both for D_1 and D_2 in Step 0 of Algorithms 3.1 and 3.2. Let P̃_0 denote the finite convex quadratic inequality representation of the compact convex set C_0 containing F assumed in (v) of Condition 2.2.

We first make some comments on the implementation of Algorithm 3.1. We can rewrite C_0 in terms of a finite number of linear matrix inequalities [4, 20, etc.]:

C_0 = {x ∈ R^n : ∃X ∈ S^n such that [1, x^T; x, X] ∈ S^{1+n}_+ and γ̃ + 2q̃^T x + Q̃ • X ≤ 0 (∀qf(·; γ̃, q̃, Q̃) ∈ P̃_0)}.

This follows from Theorem 2.4 of [10]. For ∀k ≥ 1, C_k is described in terms of linear matrix inequalities such that

C_k = {x ∈ R^n : ∃X ∈ S^n such that [1, x^T; x, X] ∈ S^{1+n}_+ and γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P_F ∪ P̃_0 ∪ P_{k−1})}.

In both cases, with k = 0 and with k ≥ 1, the maximization of a linear function a^T x over C_k turns out to be an SDP with finitely many inequality constraints. In particular, Steps 1 and 2 of Algorithm 3.1 are carried out by solving finitely many SDPs with the linear objective functions c^T x and d^T x (d ∈ D_1 ∪ D_2) to be maximized over C_k. Furthermore, the number of SDPs to be solved in each iteration is finite. Therefore Algorithm 3.1 is implementable on a computer under Conditions 2.1 and 2.2.

Now we suppose that the compact set C_0 containing F has a finite linear inequality representation P̃_0, and we make some comments on the implementation of Algorithm 3.2. In this case, C_0 and C_k (k ≥ 1) are written as

C_0 = {x ∈ R^n : γ̃ + 2q̃^T x ≤ 0 (∀qf(·; γ̃, q̃, O) ∈ P̃_0)},
C_k = {x ∈ R^n : ∃X ∈ S^n such that γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P_F ∪ P̃_0 ∪ P_{k−1})}.

In both cases, the maximization of a linear function a^T x over C_k turns out to be an LP with finitely many inequality constraints. In particular, Steps 1 and 2 of Algorithm 3.2 are carried out by solving finitely many LPs with the linear objective functions c^T x and d^T x (d ∈ D_1 ∪ D_2) to be maximized over C_k. Furthermore, the number of LPs to be solved in each iteration is finite. Therefore Algorithm 3.2 is implementable on a computer in this case.

Remark 3.7
We can replace Step 3-L of Algorithms 3.1 and 3.2 by

Step 3-1: Let P_k = P^1(C_k, D_1) ∪ P^2(C_k, D_1, D_2).

In fact, Step 3-1 works more effectively than Step 3-L, since we know that

F ⊆ F̂(C_0, P_F ∪ P^1(C_k, D_1) ∪ P^2(C_k, D_1, D_2)) ⊆ F̂(C_0, P_F ∪ P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2)),
F ⊆ F̂^L(C_0, P_F ∪ P^1(C_k, D_1) ∪ P^2(C_k, D_1, D_2)) ⊆ F̂^L(C_0, P_F ∪ P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2)).

These relations follow from Lemma 4.1. All the theorems and corollaries above, and the lemmas in the succeeding sections, remain valid with this replacement.
Remark 3.8

Suppose that −D_1 ≡ {−d_1 ∈ D : d_1 ∈ D_1} ⊆ D_2. (Specifically, this case applies to Corollary 3.4, where we take D_2 = D.) In this case, we see that

P^1(C_k, D_1) ⊆ P^2(C_k, D_1, D_2);

hence

F̂(C_0, P_F ∪ P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2)) ⊇ F̂(C_0, P_F ∪ P^1(C_k, D_1) ∪ P^2(C_k, D_1, D_2)) = F̂(C_0, P_F ∪ P^2(C_k, D_1, D_2)) ⊇ F̂(C_0, P_F ∪ P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2)),

and therefore

F̂(C_0, P_F ∪ P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2)) = F̂(C_0, P_F ∪ P^2(C_k, D_1, D_2)).

Similarly

F̂^L(C_0, P_F ∪ P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2)) = F̂^L(C_0, P_F ∪ P^2(C_k, D_1, D_2)).

Therefore, we can replace Step 3-L of Algorithms 3.1 and 3.2 by

Step 3-2: Let P_k = P_F ∪ P^2(C_k, D_1, D_2).

It should be noted that Steps 3-L, 3-1 and 3-2 all work equally effectively.
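The premise −D_1 ⊆ D_2 matters because the rank-1 function in a direction d is, by definition, the rank-2 function for the pair (d, −d); when −d ∈ D_2 that pair is available to P^2. A small self-contained numeric check of this containment (our own sketch, reusing the polytope support function idea from the earlier sketch):

```python
def alpha(vertices, d):
    """Support function of C = conv(vertices)."""
    return max(sum(a * b for a, b in zip(d, v)) for v in vertices)

def r2sf(x, vertices, d1, d2):
    """Rank-2 supporting function for C in the pair (d1, d2)."""
    g1 = sum(a * b for a, b in zip(d1, x)) - alpha(vertices, d1)
    g2 = sum(a * b for a, b in zip(d2, x)) - alpha(vertices, d2)
    return -g1 * g2

def r1sf(x, vertices, d):
    """Rank-1 supporting function: the rank-2 one for the pair (d, -d)."""
    return r2sf(x, vertices, d, tuple(-c for c in d))

triangle = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0)]
D1 = [(1.0, 0.0), (0.0, 1.0)]
D2 = D1 + [(-1.0, 0.0), (0.0, -1.0)]   # -D1 is contained in D2
x = (0.5, 0.25)
for d in D1:
    # every r1sf(.; C, d) coincides with some r2sf(.; C, d, d2), d2 in D2
    assert any(abs(r1sf(x, triangle, d) - r2sf(x, triangle, d, d2)) < 1e-12
               for d2 in D2)
```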

4. Lemmas.

4.1. Outline of the Proofs of Theorems 3.3 and 3.5.

The proofs of our main theorems (Theorems 3.3 and 3.5) are rather lengthy and complicated, requiring several lemmas presented in Sections 4.2, 4.3, 5 and 6. Before going into technical details, we briefly state an outline of the proofs and the roles of those lemmas.

Among the lemmas, Lemma 4.1 plays an important role not only in the proofs of our main theorems but also in Lemmas 4.2 and 4.3. It gives us an alternative representation of the SDP relaxation F̂(C_0, P_F ∪ P_k) of F used in Step 4 of Algorithm 3.1 in terms of convex quadratic inequalities, and an alternative representation of the semi-infinite LP relaxation F̂^L(C_0, P_F ∪ P_k) of F used in Step 4 of Algorithm 3.2 in terms of linear inequalities. These representations are originally due to [6] and [3], respectively. By Lemmas 4.1 and 4.2, we know that C_k (k = 0, 1, 2, ...) are compact convex subsets of C_0, and that property (a) holds whenever we take D_1 ⊇ D_V as in Theorems 3.3 and 3.5. These facts imply that the intersection C_∞ ≡ ∩_{k=0}^∞ C_k is a compact convex set containing F, and that C_∞ = ∅ if and only if C_k = ∅ for some finite number k. If C_∞ = ∅ then F = ∅ and C_k = ∅ for some finite number k. Therefore we may restrict ourselves to the case C_∞ ≠ ∅ from then on to prove the properties (b), (c)' and (c)".

To prove properties (b) and (c)' of Theorem 3.3, we prepare an (η, ρ)-approximation of c.hull(F), denoted by c.hull(F; η, ρ), at the beginning of Section 5, where η ≥ 0 and ρ > 0 are parameters which determine the accuracy of the approximation. Lemma 5.1 presents the existence of η > 0 and ρ > 0 such that

F ⊆ c.hull(F; η, ρ) ⊆ U and c.hull(F; η, ρ) = ∅ if F = ∅.

Thus both properties (b) and (c)' are reduced to the relation C_∞ ⊆ c.hull(F; η, ρ). We prove this relation by contradiction. In other words, we assume that ∃x̂ ∈ C_∞ \ c.hull(F; η, ρ), and show that for every sufficiently large k, the kth iteration of Algorithms 3.1 and 3.2 "cuts off" x̂: x̂ ∉ C_{k+1}. We construct such a linear cut based on Lemmas 4.1 and 4.6.

To prove (b) and (c)" of Theorem 3.5, we will introduce the set

C′_0 ≡ {x′ ∈ C_0 : c^T x′ ≥ sup{c^T x : x ∈ F} + ε}

at the beginning of Section 6. If F = ∅ then sup{c^T x : x ∈ F} = −∞; hence C′_0 = C_0. Then both properties (b) and (c)" of Theorem 3.5 are reduced to the relation C_∞ ∩ C′_0 = ∅. We prove this relation by contradiction, using Lemmas 6.1, 6.2, 4.1 and 4.6 in a similar way as in the proof of Theorem 3.3. Lemmas 4.4 and 4.5 are used to derive Lemma 4.6, and Lemma 4.3 is used to simplify the discussions in the proofs of Theorems 3.3 and 3.5.

4.2. Basic Properties of Algorithm 3.1.

Lemma 4.1

(i) F̂(C_0, P_F ∪ P_k) = {x ∈ C_0 : p(x) ≤ 0 (∀p(·) ∈ c.cone(P_F ∪ P_k) ∩ Q_+)}, where Q_+ ≡ {qf(·; γ, q, Q) : γ ∈ R, q ∈ R^n, Q ∈ S^n_+} ≡ the set of convex quadratic functions on R^n.

(ii) F̂^L(C_0, P_F ∪ P_k) = {x ∈ C_0 : p(x) ≤ 0 (∀p(·) ∈ c.cone(P_F ∪ P_k) ∩ L)}, where L ≡ {qf(·; γ, q, Q) : γ ∈ R, q ∈ R^n, Q = O} ≡ the set of linear functions on R^n.

Proof: See Theorem 2.4 and Corollary 2.5 of [10].

The next lemma provides a proof of assertion (a) of Theorems 3.3 and 3.5.

Lemma 4.2 Suppose that D_1 ⊇ D_V. Let {C_k (k = 0, 1, ...)} denote the sequence generated by Algorithm 3.1 or 3.2.

(i) C_{k+1} ⊆ {x ∈ C_0 : d^T x − α(C_k, d) ≤ 0 (∀d ∈ D_1 ∪ D_2)}.
(ii) C_{k+1} ⊆ C_k (∀k = 0, 1, 2, ...).
(iii) F ⊆ C_k (∀k = 0, 1, 2, ...).

Proof:

(i) By definition, C_{k+1} ⊆ C_0. Letting x′ ∈ C_{k+1}, we will show that d^T x′ − α(C_k, d) ≤ 0 (∀d ∈ D_1 ∪ D_2). Since L ⊆ Q_+, we know by Lemma 4.1 that

C_{k+1} ⊆ {x ∈ C_0 : p(x) ≤ 0 (∀p(·) ∈ c.cone(P_F ∪ P_k) ∩ L)}.   (4)

Recall that P_k = P^L(C_k, D_1) ∪ P^2(C_k, D_1, D_2); see also Section 2 for the definitions of P^L(C_k, D_1) and P^2(C_k, D_1, D_2). If p(·) is a linear function in P_k then

p(·) ∈ c.cone(P_F ∪ P_k) ∩ L ⊆ c.cone(P_F ∪ P_k) ∩ Q_+;

hence 0 ≥ p(x′). Specifically, ℓsf(·; C_k, d_1) ∈ P^L(C_k, D_1) ⊆ P_k is a linear function (∀d_1 ∈ D_1), so that we have

0 ≥ ℓsf(x′; C_k, d_1) ≡ d_1^T x′ − α(C_k, d_1) (∀d_1 ∈ D_1).

Therefore it remains to show that

0 ≥ ℓsf(x′; C_k, d_2) ≡ d_2^T x′ − α(C_k, d_2) (∀d_2 ∈ D_2).   (5)

First we consider the case that

0 = v^T x′ − α(C_k, v) (∀v ∈ D_V ⊆ D_1),

i.e.,

v_j^T x′ − max{v_j^T x : x ∈ C_k} = 0 = −v_j^T x′ − max{−v_j^T x : x ∈ C_k} (j = 1, 2, ..., n).

Since the vectors v_1, v_2, ..., v_n are linearly independent, we conclude that C_k consists of the single point x′. Hence

0 = d_2^T x′ − α(C_k, d_2) (∀d_2 ∈ D_2).

Thus (5) holds. Now assume that

0 > v^T x′ − α(C_k, v) ≡ v^T x′ − max{v^T x : x ∈ C_k} (∃v ∈ D_V).

Then

0 > (v^T x′ − α(C_k, v)) + (−v^T x′ − α(C_k, −v)) = −(α(C_k, v) + α(C_k, −v)).

To derive (5), let d_2 ∈ D_2. We consider the quadratic function

p(x) = −(v^T x − α(C_k, v))(d_2^T x − α(C_k, d_2)) − (−v^T x − α(C_k, −v))(d_2^T x − α(C_k, d_2))
     = (α(C_k, v) + α(C_k, −v))(d_2^T x − α(C_k, d_2))
     ∈ c.cone(P^2(C_k, D_1, D_2)) ∩ L ⊆ c.cone(P_F ∪ P_k) ∩ L ⊆ c.cone(P_F ∪ P_k) ∩ Q_+.

By (4), we have that p(x′) ≤ 0. Since α(C_k, v) + α(C_k, −v) > 0, we conclude that d_2^T x′ − α(C_k, d_2) ≤ 0. Thus (5) holds.

(ii) We shall only deal with the case where the sequence fCk (k = 0; 1; . . . )g is generated by Algorithm 3.1. We can similarly deal with the other case where the sequence is generated by Algorithm 3.2 if we replace Q+ by L below. By de nition, we know that C1  C0 . Let k  1. Assuming that Ck  Ck01 , we will show that Ck+1  Ck . Let x0 2 Ck+1 . It follows from Ck  Ck01 that

By (4), we have that p(x0 )  0. (Ck ; d2 )  0. Thus (5) holds.

[ P k) \ Q : Since ( (Ck ; v ) + (Ck ; 0v)) > 0, we conclude that dT x0 0  
+ 2

(Ck ; d)  supfdT x : x 2 Ck g  (Ck01 ; d)  supfdT x : x 2 Ck01 g (8d 2 D1 [ D2 ):
In view of (i), we know that 0 0

 dT x0 0 (Ck ; d)  dT x0 0 (Ck0 ; d) (8d 2 D [ D );  0(dT x0 0 (Ck ; d ))(dT x0 0 (Ck ; d ))  0(dT x0 0 (Ck0 ; d ))(dT x0 0 (Ck0 ; d )) (8d 2 D
1 1 2 1 1 2 2 1 1 1 2 1 2 1

9 > =

1

and 8d2 2 D2 ):

> ;

(6)

12

Now we will show that if p(·) ∈ c.cone(P_F ∪ P^{k−1}) ∩ Q_+ then 0 ≥ p(x'); then x' ∈ C_k follows from Lemma 4.1. We can represent p(·) ∈ c.cone(P_F ∪ P^{k−1}) ∩ Q_+ as

  p(x) = Σ_{i=1}^m λ_i p_i(x) + Σ_{d_1 ∈ D_1} μ(d_1)(d_1^T x − α(C_{k−1}, d_1))
         − Σ_{d_1 ∈ D_1, d_2 ∈ D_2} μ(d_1, d_2)(d_1^T x − α(C_{k−1}, d_1))(d_2^T x − α(C_{k−1}, d_2))

for some p_i(·) ∈ P_F (i = 1, 2, …, m), λ_i ≥ 0 (i = 1, 2, …, m), μ(d_1) ≥ 0 (d_1 ∈ D_1) and μ(d_1, d_2) ≥ 0 (d_1 ∈ D_1 and d_2 ∈ D_2). By (6), we then obtain that

  p(x') ≤ Σ_{i=1}^m λ_i p_i(x') + Σ_{d_1 ∈ D_1} μ(d_1)(d_1^T x' − α(C_k, d_1))
          − Σ_{d_1 ∈ D_1, d_2 ∈ D_2} μ(d_1, d_2)(d_1^T x' − α(C_k, d_1))(d_2^T x' − α(C_k, d_2))
        ≤ 0.

Here the last inequality holds because x' ∈ C_{k+1} and the quadratic function appearing on the right-hand side belongs to c.cone(P_F ∪ P^k) ∩ Q_+. Thus we have shown that x' ∈ C_k.

(iii) By definition, we know that F ⊆ C_0. Now assume that F ⊆ C_k. By Lemma 4.1, we obtain that

  F ⊆ {x ∈ C_0 : p(x) ≤ 0 (∀p(·) ∈ c.cone(P_F ∪ P^k) ∩ Q_+)} = F̂(C_0; P_F ∪ P^k) = C_{k+1} in the case of Algorithm 3.1;
  F ⊆ {x ∈ C_0 : p(x) ≤ 0 (∀p(·) ∈ c.cone(P_F ∪ P^k) ∩ L)} = F̂^L(C_0; P_F ∪ P^k) = C_{k+1} in the case of Algorithm 3.2.

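As a concrete illustration of the supporting functions used in the proof above (a sketch with assumed names, not from the paper: C_k is modeled by a finite point set so that α(C_k, d) is a plain maximum), the rank-2 function r2sf(·; C_k, d_1, d_2) is nonpositive everywhere on C_k because both of its linear factors are:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 30
P = rng.standard_normal((m, n))      # points whose convex hull plays the role of C_k

def alpha(d):
    """Support function alpha(C_k, d) = max{d.x : x in C_k}."""
    return (P @ d).max()

def lsf(x, d):
    """Linear supporting function: d.x - alpha(C_k, d) <= 0 on C_k."""
    return d @ x - alpha(d)

def r2sf(x, d1, d2):
    """Rank-2 quadratic supporting function: -(lsf)(lsf) <= 0 on C_k."""
    return -lsf(x, d1) * lsf(x, d2)

d1, d2 = rng.standard_normal(n), rng.standard_normal(n)
d1 /= np.linalg.norm(d1); d2 /= np.linalg.norm(d2)

for x in P:
    assert lsf(x, d1) <= 1e-12 and r2sf(x, d1, d2) <= 1e-12
```

The product of two nonpositive linear factors is nonnegative, so its negation is a valid quadratic inequality for C_k; this is exactly the mechanism exploited throughout the proof.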
We will show below that Algorithms 3.1 and 3.2 are invariant under any one-to-one affine transformation. Let h : R^n → R^n be a one-to-one affine transformation of the form h(x) = A(x − x⁰) for x ∈ R^n, where x⁰ ∈ R^n denotes a constant vector and A an n × n constant nonsingular matrix. Let {C_k (k = 0, 1, …)} denote the sequence generated by Algorithm 3.1 or 3.2 with D_1, D_2 ⊆ D at Step 0. Let

  F' ≡ {y = h(x) : x ∈ F},  P'_F ≡ {p(h^{−1}(·)) : p(·) ∈ P_F},
  C'_0 ≡ {y = h(x) : x ∈ C_0},
  D'_1 ≡ {A^{−T} d_1 / ‖A^{−T} d_1‖ : d_1 ∈ D_1},  D'_2 ≡ {A^{−T} d_2 / ‖A^{−T} d_2‖ : d_2 ∈ D_2}.

Then P'_F forms a quadratic inequality representation of F'. Now, applying Algorithm 3.1 or 3.2 to the compact set F' (with its quadratic inequality representation P'_F), the initial compact convex set C'_0 ⊇ F' and the pair D'_1 and D'_2 of direction sets, we obtain a sequence of compact convex sets C'_k (k = 0, 1, …) such that c.hull(F') ⊆ C'_{k+1} ⊆ C'_k (k = 0, 1, …).

Lemma 4.3 C'_k = {y = h(x) : x ∈ C_k} (k = 0, 1, …).

Proof: We shall only deal with the case where the sequence {C_k (k = 0, 1, …)} is generated by Algorithm 3.1. We can deal similarly with the other case, where the sequence is generated by Algorithm 3.2, if we replace Q_+ by L in the last part of the proof. We will prove C'_k = {y = h(x) : x ∈ C_k} (k = 0, 1, …) by induction. By definition, we know that C'_0 = {y = h(x) : x ∈ C_0}. Assuming that C'_k = {y = h(x) : x ∈ C_k}, we will show that C'_{k+1} = {y = h(x) : x ∈ C_{k+1}}. Since C'_k = {y = h(x) : x ∈ C_k} by the induction hypothesis, we see that if d' = A^{−T} d / ‖A^{−T} d‖ ∈ D'_1 ∪ D'_2 for some d ∈ D_1 ∪ D_2 then

  α(C'_k, d') ≡ max{(d')^T y : y ∈ C'_k}
             = max{(A^{−T} d)^T A(x − x⁰) / ‖A^{−T} d‖ : x ∈ C_k}
             = max{d^T(x − x⁰) : x ∈ C_k} / ‖A^{−T} d‖
             = (α(C_k, d) − d^T x⁰) / ‖A^{−T} d‖.

Hence we can rewrite each linear supporting function ℓsf(·; C'_k, d') for C'_k in a direction d' ∈ D'_1 ∪ D'_2 as

  ℓsf(y; C'_k, d') = (d')^T y − α(C'_k, d')
                   = ((A^{−T} d)^T y − α(C_k, d) + d^T x⁰) / ‖A^{−T} d‖
                   = (d^T h^{−1}(y) − α(C_k, d)) / ‖A^{−T} d‖
                   = ℓsf(h^{−1}(y); C_k, d) / ‖A^{−T} d‖,

where d' = A^{−T} d / ‖A^{−T} d‖ ∈ D'_1 ∪ D'_2 for some d ∈ D_1 ∪ D_2, and each rank-2 quadratic supporting function r2sf(·; C'_k, d'_1, d'_2) for C'_k in a pair of directions d'_1 ∈ D'_1 and d'_2 ∈ D'_2 as

  r2sf(y; C'_k, d'_1, d'_2) = −((d'_1)^T y − α(C'_k, d'_1))((d'_2)^T y − α(C'_k, d'_2))
    = −(d_1^T h^{−1}(y) − α(C_k, d_1))(d_2^T h^{−1}(y) − α(C_k, d_2)) / (‖A^{−T} d_1‖ ‖A^{−T} d_2‖)
    = r2sf(h^{−1}(y); C_k, d_1, d_2) / (‖A^{−T} d_1‖ ‖A^{−T} d_2‖),

where d'_1 = A^{−T} d_1 / ‖A^{−T} d_1‖ ∈ D'_1 for some d_1 ∈ D_1 and d'_2 = A^{−T} d_2 / ‖A^{−T} d_2‖ ∈ D'_2 for some d_2 ∈ D_2. Thus we obtain that

  c.cone(P^L(C'_k, D'_1)) = c.cone({ℓsf(·; C'_k, d'_1) : d'_1 ∈ D'_1}) = c.cone({ℓsf(h^{−1}(·); C_k, d_1) : d_1 ∈ D_1}),
  c.cone(P^2(C'_k, D'_1, D'_2)) = c.cone({r2sf(h^{−1}(·); C_k, d_1, d_2) : d_1 ∈ D_1, d_2 ∈ D_2}),
  c.cone(P'^k) = c.cone(P^L(C'_k, D'_1) ∪ P^2(C'_k, D'_1, D'_2)) = c.cone({p(h^{−1}(·)) : p(·) ∈ P^k}).

Also it is easily verified that p(·) ∈ Q_+ if and only if p(h^{−1}(·)) ∈ Q_+. Therefore

  c.cone(P'_F ∪ P'^k) = c.cone({p(h^{−1}(·)) : p(·) ∈ P_F ∪ P^k}).

Applying Lemma 4.1, we consequently obtain that

  C'_{k+1} = {y ∈ C'_0 : p'(y) ≤ 0 (∀p'(·) ∈ c.cone(P'_F ∪ P'^k) ∩ Q_+)}
           = {y ∈ C'_0 : p(h^{−1}(y)) ≤ 0 (∀p(·) ∈ c.cone(P_F ∪ P^k) ∩ Q_+)}
           = {h(x) : x ∈ C_0, p(x) ≤ 0 (∀p(·) ∈ c.cone(P_F ∪ P^k) ∩ Q_+)}
           = {h(x) : x ∈ C_{k+1}}.
In view of Lemma 4.3, we only have to deal with the special case where we take the ith unit vector e_i for v_i (i = 1, 2, …, n) when we prove Theorem 3.3 in Section 5 and Theorem 3.5 in Section 6. To see this, suppose that we want to construct direction sets D'_1, D'_2 ⊆ D in the transformed space such that

  D'_1 ⊇ {e_1, e_2, …, e_n, −e_1, −e_2, …, −e_n},
  D'_2 : a δ'-net of a direction set D̃'_2 ⊆ D for some δ' > 0.

Consider the one-to-one affine transformation h(x) = V^T x, and define the homeomorphism g from D ≡ {d ∈ R^n : ‖d‖ = 1} onto itself by g(d) ≡ V^{−1} d / ‖V^{−1} d‖ (∀d ∈ D). Then, if D_1 ⊇ D_V,

  D'_1 ≡ {g(d_1) : d_1 ∈ D_1} ⊇ {e_1, e_2, …, e_n, −e_1, −e_2, …, −e_n}.

Furthermore, there exists a δ > 0 such that if D_2 is a δ-net of D̃_2 ≡ g^{−1}(D̃'_2) then D'_2 ≡ g(D_2) is a δ'-net of D̃'_2 ⊆ D.
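The support-function identity α(C'_k, d') = (α(C_k, d) − d^T x⁰)/‖A^{−T} d‖ used in the proof of Lemma 4.3 is easy to check numerically. The sketch below is not part of the paper; C_k is modeled by a finite point list, so that α is a plain maximum, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n)) + 3.0 * np.eye(n)  # nonsingular (diagonally dominated)
x0 = rng.standard_normal(n)

X = rng.standard_normal((20, n))       # finite point set standing in for C_k
Y = (X - x0) @ A.T                     # images y = h(x) = A(x - x0)

def alpha(P, d):
    """Support function of conv(P): alpha(C, d) = max{d.x : x in C}."""
    return (P @ d).max()

d = rng.standard_normal(n)
Ainv_t_d = np.linalg.solve(A.T, d)     # A^{-T} d
dp = Ainv_t_d / np.linalg.norm(Ainv_t_d)

lhs = alpha(Y, dp)                                       # alpha(C', d')
rhs = (alpha(X, d) - d @ x0) / np.linalg.norm(Ainv_t_d)  # transformed formula
assert abs(lhs - rhs) < 1e-9
```

The identity holds exactly because the maximizing point is the same on both sides; only the positive scale 1/‖A^{−T}d‖ and the shift d^T x⁰ change, which is why the supporting functions transform by positive multiples and the generated cones coincide.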
4.3. Convex Cones of Rank-2 Quadratic Supporting Functions for Compact Convex Sets.

Throughout this section, we assume that

  D_V = D_I ≡ {e_1, e_2, …, e_n, −e_1, −e_2, …, −e_n}.

Let M̃ ∈ S^n. Then we can write M̃ as

  M̃ = [π̃_{ij}], π̃_{ij} = π̃⁺_{ij} − π̃⁻_{ij}, 0 ≤ π̃⁺_{ij} ≤ 1, 0 ≤ π̃⁻_{ij} ≤ 1,
  M̃ = M̃⁺ − M̃⁻, M̃⁺ = [π̃⁺_{ij}] ∈ S^n, M̃⁻ = [π̃⁻_{ij}] ∈ S^n.

We will construct a family of quadratic functions f⁺(·; ρ, θ, w) and f⁻(·; ρ, θ, w), with parameters θ ∈ (0, π/8] and w ∈ D, which satisfies the properties (i), (ii), (iii) and (iv) listed in Lemma 4.4. For ∀θ ∈ (0, π/8], ∀w ∈ D and ∀i = 1, 2, …, n, define

  η_i(θ, w) = ‖w cos θ + e_i sin θ‖,   η̄_i(θ, w) = ‖w cos θ − e_i sin θ‖,
  b_i(θ, w) = (w cos θ + e_i sin θ)/η_i(θ, w) ∈ D,   b̄_i(θ, w) = (w cos θ − e_i sin θ)/η̄_i(θ, w) ∈ D,
  κ_i(θ, w) = η_i(θ, w)/(2 sin θ),   κ̄_i(θ, w) = η̄_i(θ, w)/(2 sin θ).   (7)

For ∀ρ > 0, ∀θ ∈ (0, π/8], ∀w ∈ D and ∀i = 1, 2, …, n,

  b_i(θ, w)^T x − ρ ≤ 0,  −e_i^T x − ρ ≤ 0,  b̄_i(θ, w)^T x − ρ ≤ 0,  e_i^T x − ρ ≤ 0

are linear valid inequalities for the n-dimensional ball B(0, ρ). For ∀x ∈ R^n, ∀ρ > 0, ∀θ ∈ (0, π/8] and ∀w ∈ D, define

  f⁺(x; ρ, θ, w) ≡ −Σ_{i=1}^n Σ_{j=1}^n π̃⁺_{ij} (κ_i(θ, w)(b_i(θ, w)^T x − ρ)(−e_j^T x − ρ) + κ̄_i(θ, w)(b̄_i(θ, w)^T x − ρ)(e_j^T x − ρ)),   (8)

  f⁻(x; ρ, θ, w) ≡ −Σ_{i=1}^n Σ_{j=1}^n π̃⁻_{ij} (κ_i(θ, w)(b_i(θ, w)^T x − ρ)(e_j^T x − ρ) + κ̄_i(θ, w)(b̄_i(θ, w)^T x − ρ)(−e_j^T x − ρ)).   (9)
Lemma 4.4

(i) Let D'_2 = {b_j(θ, w), b̄_j(θ, w) (∀j = 1, 2, …, n)}. Then

  f⁺(·; ρ, θ, w), f⁻(·; ρ, θ, w) ∈ c.cone(P^2(B(0, ρ), D_V, D'_2))

for ∀ρ > 0, ∀θ ∈ (0, π/8] and ∀w ∈ D.

(ii) For ∀x ∈ R^n, ∀ρ > 0, ∀θ ∈ (0, π/8] and ∀w ∈ D,

  f⁺(ρx; ρ, θ, w) = ρ²f⁺(x; 1, θ, w) and f⁻(ρx; ρ, θ, w) = ρ²f⁻(x; 1, θ, w).

(iii) For ∀w ∈ D,

  f⁺(w; 1, θ, w) → 0 and f⁻(w; 1, θ, w) → 0 as θ ∈ (0, π/8] → 0.

(iv) For ∀ρ > 0, ∀θ ∈ (0, π/8] and ∀w ∈ D,

  the Hessian matrix of f⁺(·; ρ, θ, w) = M̃⁺;  the Hessian matrix of f⁻(·; ρ, θ, w) = −M̃⁻.

Proof: (i) and (ii) follow directly from the construction of the quadratic functions f⁺(·; ρ, θ, w) and f⁻(·; ρ, θ, w) above. To prove (iii), let w ∈ D and i be fixed. It suffices to show that

  τ_i(θ) ≡ κ_i(θ, w)(b_i(θ, w)^T w − 1) and τ̄_i(θ) ≡ κ̄_i(θ, w)(b̄_i(θ, w)^T w − 1)

converge to zero as θ ∈ (0, π/8] tends to 0 (i = 1, 2, …, n). We will only show the convergence τ_i(θ) → 0 as θ ∈ (0, π/8] → 0 below; the convergence τ̄_i(θ) → 0 as θ ∈ (0, π/8] → 0 can be derived similarly. We see from (7) that

  τ_i(θ) = κ_i(θ, w)(b_i(θ, w)^T w − 1)
         = (cos θ + w^T e_i sin θ − ‖w cos θ + e_i sin θ‖) / (2 sin θ)
         = (cos θ + w^T e_i sin θ − (2w^T e_i sin θ cos θ + 1)^{1/2}) / (2 sin θ).   (10)

Since both the numerator and the denominator above converge to zero as θ ∈ (0, π/8] tends to 0, we calculate their derivatives at θ = 0. The derivative of the numerator turns out to be

  −sin θ + w^T e_i cos θ − w^T e_i (cos²θ − sin²θ) / (2w^T e_i sin θ cos θ + 1)^{1/2},

which vanishes at θ = 0. On the other hand, the derivative 2 cos θ of the denominator 2 sin θ in (10) does not vanish at θ = 0. Thus we see that τ_i(θ) converges to 0 as θ ∈ (0, π/8] tends to 0.

Finally, to prove (iv), let ρ > 0, θ ∈ (0, π/8] and w ∈ D be fixed. By (7) and (8), the Hessian matrix of the quadratic function f⁺(·; ρ, θ, w) is

  −Σ_{i=1}^n Σ_{j=1}^n π̃⁺_{ij} (−κ_i(θ, w)(b_i(θ, w)e_j^T + e_j b_i(θ, w)^T) + κ̄_i(θ, w)(b̄_i(θ, w)e_j^T + e_j b̄_i(θ, w)^T)) / 2
    = Σ_{i=1}^n Σ_{j=1}^n π̃⁺_{ij} (e_i e_j^T + e_j e_i^T)/2   (since κ_i(θ, w)b_i(θ, w) − κ̄_i(θ, w)b̄_i(θ, w) = e_i)
    = M̃⁺.

Similarly we can derive the latter identity in assertion (iv).

The lemma above is a variant of Lemma 4.2 of [10]; see also Lemma 6.2 of [10]. Although the proof above is quite similar to that of Lemma 4.2 of [10], we have given a complete proof to keep the current paper self-contained.
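Assertion (iii) can also be checked numerically. The sketch below (illustration only, with assumed names) evaluates the quantity τ_i(θ) = κ_i(θ, w)(b_i(θ, w)^T w − 1) of (10) directly from the definitions in (7); by the derivative argument above it vanishes roughly linearly in θ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
w = rng.standard_normal(n)
w /= np.linalg.norm(w)                 # w in D, the unit sphere

def tau(theta, i=0):
    ei = np.eye(n)[i]
    eta = np.linalg.norm(np.cos(theta) * w + np.sin(theta) * ei)   # eta_i(theta, w)
    b = (np.cos(theta) * w + np.sin(theta) * ei) / eta             # b_i(theta, w)
    kappa = eta / (2.0 * np.sin(theta))                            # kappa_i(theta, w)
    return kappa * (b @ w - 1.0)

vals = [abs(tau(t)) for t in (0.1, 0.01, 0.001)]
assert vals[0] > vals[1] > vals[2]     # shrinks as theta -> 0
assert vals[2] < 1e-3
```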
Lemma 4.5 Suppose that

  0 < γ̄, 0 < κ and 0 < δ̃.   (11)

Then there exists a Δ > 0 such that every Δ-net D_2 of D(c, κ) satisfies the following property.

P1: Assume that

  δ ∈ (0, δ̃] and w ∈ D(c, κ/2).   (12)

Then there are quadratic functions

  g⁺(·), g⁻(·) ∈ c.cone(P^2(B(0, δ), D_V, D_2))

such that

  −γ̄/4 ≤ g⁺(δw) ≤ 0, the Hessian matrix of g⁺(·) = M̃⁺;
  −γ̄/4 ≤ g⁻(δw) ≤ 0, the Hessian matrix of g⁻(·) = −M̃⁻.   (13)
Proof: We shall only show the existence of a Δ = Δ⁺ > 0 such that every Δ⁺-net D_2 of D(c, κ) satisfies property P1 restricted to the relations involving the quadratic function g⁺(·). We can similarly derive the existence of a Δ = Δ⁻ > 0 such that every Δ⁻-net D_2 of D(c, κ) satisfies property P1 restricted to the relations involving the quadratic function g⁻(·). Letting Δ = min{Δ⁺, Δ⁻}, we obtain the desired Δ > 0.

For ∀δ > 0, ∀θ ∈ (0, π/8] and ∀w ∈ D(c, κ/2), we consider the quadratic function f⁺(·; δ, θ, w) of the form (8). By the definitions of b_i(θ, w) and b̄_i(θ, w) in (7) and by Lemma 4.4, for ∀w' ∈ D(c, κ/2) there is a θ(w') > 0 such that

  b_i(θ(w'), w') ∈ D(w', κ/2) ⊆ D(c, κ) (∀i = 1, 2, …, n),
  b̄_i(θ(w'), w') ∈ D(w', κ/2) ⊆ D(c, κ) (∀i = 1, 2, …, n),
  −γ̄/(8δ̃²) ≤ f⁺(w'; 1, θ(w'), w') ≤ 0,
  the Hessian matrix of f⁺(·; δ, θ(w'), w') = M̃⁺ (∀δ > 0).   (14)

For simplicity of notation, we will write

  b_i(w') ≡ b_i(θ(w'), w'),  b̄_i(w') ≡ b̄_i(θ(w'), w'),  κ_i(w') ≡ κ_i(θ(w'), w'),  κ̄_i(w') ≡ κ̄_i(θ(w'), w')  (∀i = 1, 2, …, n).

For

  ∀x ∈ R^n, ∀δ ∈ (0, δ̃], ∀w' ∈ D, ∀A = (a_1, a_2, …, a_n) ∈ D^n, ∀Ā = (ā_1, ā_2, …, ā_n) ∈ D^n and ∀M⁺ = [π⁺_{ij}] ∈ S^n,

define

  g⁺(x; δ, w', A, Ā, M⁺) ≡ −Σ_{i=1}^n Σ_{j=1}^n π⁺_{ij} (κ_i(w')(a_i^T x − δ)(−e_j^T x − δ) + κ̄_i(w')(ā_i^T x − δ)(e_j^T x − δ)).

Then

  g⁺(·; δ, w', A, Ā, M⁺) ∈ c.cone(P^2(B(0, δ), D_V, D)) (∀w' ∈ D, ∀δ ∈ (0, δ̃], ∀A ∈ D^n and ∀Ā ∈ D^n).

We observe that

  the Hessian matrix of g⁺(·; δ, w', A, Ā, M⁺)
    = −Σ_{i=1}^n Σ_{j=1}^n π⁺_{ij} ((−κ_i(w')a_i e_j^T + κ̄_i(w')ā_i e_j^T) + (−κ_i(w')a_i e_j^T + κ̄_i(w')ā_i e_j^T)^T)/2
    = Σ_{i=1}^n Σ_{j=1}^n π⁺_{ij} ((κ_i(w')a_i − κ̄_i(w')ā_i)e_j^T + e_j(κ_i(w')a_i − κ̄_i(w')ā_i)^T)/2
    = Σ_{i=1}^n Σ_{j=1}^n π⁺_{ij} (h_i(w', A, Ā)e_j^T + e_j h_i(w', A, Ā)^T)/2
    = (H(w', A, Ā)M⁺ + M⁺H(w', A, Ā)^T)/2.

Here

  h_i(w', A, Ā) ≡ κ_i(w')a_i − κ̄_i(w')ā_i (i = 1, 2, …, n),
  H(w', A, Ā) ≡ (h_1(w', A, Ā), h_2(w', A, Ā), …, h_n(w', A, Ā)),
  M⁺ ≡ [π⁺_{ij}] ∈ S^n.

Note that if we take a_i = b_i(w') and ā_i = b̄_i(w') (i = 1, 2, …, n), we have

  g⁺(x; δ, w', A, Ā, M̃⁺) = f⁺(x; δ, θ(w'), w') (∀δ > 0 and ∀x ∈ R^n),  H(w', A, Ā) = I.

By the continuity with respect to perturbations of b_i(w') and b̄_i(w') at δ = 1 and x = w', there is a Δ⁺(w') > 0 such that whenever

  u ∈ D(w', Δ⁺(w')),  a_i ∈ D(b_i(w'), Δ⁺(w')),  ā_i ∈ D(b̄_i(w'), Δ⁺(w')) (∀i = 1, 2, …, n),

the Lyapunov type linear equation

  H(w', A, Ā)M⁺ + M⁺H(w', A, Ā)^T = 2M̃⁺,

whose left-hand side is twice the Hessian matrix of g⁺(·; δ, w', A, Ā, M⁺), has a solution M⁺ = M̂⁺(w', A, Ā) ∈ S^n such that

  M̂⁺(w', A, Ā) = [π̂_{ij}(w', A, Ā)] ∈ S^n,  π̂_{ij}(w', A, Ā) ≤ 1/2 (i = 1, 2, …, n; j = 1, 2, …, n),
  g⁺(·; δ, w', A, Ā, M̂⁺(w', A, Ā)) ∈ c.cone(P^2(B(0, δ), D_V, D)),
  −γ̄/(4δ̃²) ≤ g⁺(u; 1, w', A, Ā, M̂⁺(w', A, Ā)) ≤ 0,
  the Hessian matrix of g⁺(·; δ, w', A, Ā, M̂⁺(w', A, Ā)) = M̃⁺.

(Note that the Lyapunov type linear equation has a unique solution M⁺ for every M̃⁺ if H is positive definite.) Since D(c, κ/2) is compact, we can find a finite number of unit direction vectors w^1, w^2, …, w^ℓ ∈ D(c, κ/2) such that

  ∪_{j=1}^ℓ D(w^j, Δ⁺(w^j)) ⊇ D(c, κ/2).

Let Δ⁺ ≡ min{Δ⁺(w^1), Δ⁺(w^2), …, Δ⁺(w^ℓ)}, and let D_2 be a Δ⁺-net of D(c, κ). Now assume (12). Then we can find a j ∈ {1, 2, …, ℓ} such that w ∈ D(w^j, Δ⁺(w^j)), and the relations in (14) hold for w' = w^j. Furthermore there exist

  a_i ∈ D(b_i(w^j), Δ⁺(w^j)) ∩ D_2,  ā_i ∈ D(b̄_i(w^j), Δ⁺(w^j)) ∩ D_2 (∀i = 1, 2, …, n).

Therefore we can conclude that

  g⁺(·; δ, w^j, A, Ā, M̂⁺(w^j, A, Ā)) ∈ c.cone(P^2(B(0, δ), D_V, D_2)),
  −γ̄/(4δ̃²) ≤ g⁺(w; 1, w^j, A, Ā, M̂⁺(w^j, A, Ā)) ≤ 0,
  −γ̄/4 ≤ δ²g⁺(w; 1, w^j, A, Ā, M̂⁺(w^j, A, Ā)) = g⁺(δw; δ, w^j, A, Ā, M̂⁺(w^j, A, Ā)) ≤ 0,
  the Hessian matrix of g⁺(·; δ, w^j, A, Ā, M̂⁺(w^j, A, Ā)) = M̃⁺.
Lemma 4.6 As in Lemma 4.5, suppose that (11) holds. Then there exists a Δ > 0 such that every Δ-net D_2 of D(c, κ) satisfies the following property.

P2: In addition to (12) of Lemma 4.5, assume that

  C_k : a compact convex subset of R^n,  C_{k+1} ⊆ C_k (k = 0, 1, …),
  δw ∈ C_∞ ≡ ∩_{k=0}^∞ C_k ⊆ B(0, δ).   (15)

Then there are quadratic functions

  f_k⁺(·), f_k⁻(·), f_k(·) ∈ c.cone(P^2(C_k, D_V, D_2)) (k = 0, 1, …)

that satisfy

  −γ̄/2 ≤ f_k⁺(δw) ≤ 0,  the Hessian matrix of f_k⁺(·) = M̃⁺,
  −γ̄/2 ≤ f_k⁻(δw) ≤ 0,  the Hessian matrix of f_k⁻(·) = −M̃⁻,
  −γ̄ ≤ f_k(δw) ≤ 0,  the Hessian matrix of f_k(·) = M̃ ≡ M̃⁺ − M̃⁻   (16)

for ∀ sufficiently large k.
Proof: By Lemma 4.5, there exists a Δ > 0 such that every Δ-net D_2 of D(c, κ) satisfies property P1. Assume that (12) and (15) hold. Then, by property P1, there are quadratic functions g⁺(·), g⁻(·) ∈ c.cone(P^2(B(0, δ), D_V, D_2)) for which the relations in (13) hold. In the rest of the proof, modifying g⁺(·) ∈ c.cone(P^2(B(0, δ), D_V, D_2)), we shall only construct quadratic functions f_k⁺(·) ∈ c.cone(P^2(C_k, D_V, D_2)) (k = 0, 1, …) that satisfy the first two relations in (16) for ∀ sufficiently large k. Modifying g⁻(·) ∈ c.cone(P^2(B(0, δ), D_V, D_2)), we can similarly construct quadratic functions f_k⁻(·) ∈ c.cone(P^2(C_k, D_V, D_2)) (k = 0, 1, …) that satisfy the middle two relations in (16) for ∀ sufficiently large k. Finally, letting f_k(·) ≡ f_k⁺(·) + f_k⁻(·) (k = 0, 1, 2, …), we obtain quadratic functions f_k(·) ∈ c.cone(P^2(C_k, D_V, D_2)) (k = 0, 1, 2, …) that satisfy the last two relations in (16) for ∀ sufficiently large k.

We recall that the quadratic function g⁺(·) ∈ c.cone(P^2(B(0, δ), D_V, D_2)) can be written as

  g⁺(x) = −Σ_{i=1}^n Σ_{j=1}^n π⁺_{ij} (κ_i(a_i^T x − δ)(−e_j^T x − δ) + κ̄_i(ā_i^T x − δ)(e_j^T x − δ))

for some π⁺_{ij} ≤ 1/2, κ_i ≥ 0, κ̄_i ≥ 0, a_i ∈ D_2 and ā_i ∈ D_2 (i = 1, 2, …, n; j = 1, 2, …, n). For ∀k = 0, 1, 2, … and ∀i = 1, 2, …, n, let

  α_{ik} = sup{a_i^T x : x ∈ C_k},  ᾱ_{ik} = sup{ā_i^T x : x ∈ C_k},
  β_{ik} = sup{e_i^T x : x ∈ C_k},  β̄_{ik} = sup{−e_i^T x : x ∈ C_k}.

Then, for ∀k = 0, 1, 2, …,

  a_i^T x − α_{ik} ≤ 0,  ā_i^T x − ᾱ_{ik} ≤ 0,  e_i^T x − β_{ik} ≤ 0,  −e_i^T x − β̄_{ik} ≤ 0 (i = 1, 2, …, n)

are all linear valid inequalities for C_k. Since δw ∈ C_∞ = ∩_{k=0}^∞ C_k ⊆ B(0, δ), we know that δw satisfies all the inequalities above for ∀k = 0, 1, 2, …. It follows from C_{k+1} ⊆ C_k ⊆ B(0, δ) that

  α_{ik} ≥ α_{i(k+1)} ≥ α_{i∞} ≡ lim_{k→∞} α_{ik} = sup{a_i^T x : x ∈ C_∞} ≤ δ,
  ᾱ_{ik} ≥ ᾱ_{i(k+1)} ≥ ᾱ_{i∞} ≡ lim_{k→∞} ᾱ_{ik} = sup{ā_i^T x : x ∈ C_∞} ≤ δ,
  β_{ik} ≥ β_{i(k+1)} ≥ β_{i∞} ≡ lim_{k→∞} β_{ik} = sup{e_i^T x : x ∈ C_∞} ≤ δ,
  β̄_{ik} ≥ β̄_{i(k+1)} ≥ β̄_{i∞} ≡ lim_{k→∞} β̄_{ik} = sup{−e_i^T x : x ∈ C_∞} ≤ δ

(i = 1, 2, …, n). For ∀k = 0, 1, 2, …, we define the quadratic function f_k⁺(·) by

  f_k⁺(x) ≡ −Σ_{i=1}^n Σ_{j=1}^n π⁺_{ij} (κ_i(a_i^T x − α_{ik})(−e_j^T x − β̄_{jk}) + κ̄_i(ā_i^T x − ᾱ_{ik})(e_j^T x − β_{jk})).

Then each quadratic function f_k⁺(·) belongs to c.cone(P^2(C_k, D_V, D_2)) and has the same Hessian matrix as the quadratic function g⁺(·); hence the second identity of (16) holds for ∀k = 0, 1, 2, …. We also see that

  0 ≥ f_{k+1}⁺(δw) ≥ f_k⁺(δw) (∀k = 0, 1, 2, …),
  lim_{k→∞} f_k⁺(δw) = −Σ_{i=1}^n Σ_{j=1}^n π⁺_{ij} (κ_i(a_i^T δw − α_{i∞})(−e_j^T δw − β̄_{j∞}) + κ̄_i(ā_i^T δw − ᾱ_{i∞})(e_j^T δw − β_{j∞})) ≥ g⁺(δw).

Hence, for ∀ sufficiently large k, we have that

  −γ̄/2 ≤ g⁺(δw) − γ̄/4 ≤ f_k⁺(δw) ≤ 0.

Thus we obtain the first relation of (16) for ∀ sufficiently large k.
5. Proof of Theorem 3.3.

For ∀γ ≥ 0 and ∀ρ > 0, define

  F(γ) ≡ {x ∈ C_0 : p(x) ≤ γ (∀p(·) ∈ P_F)},
  c.hull(F, γ, ρ) ≡ ∩{B(x_c, ρ') : F(γ) ⊆ B(x_c, ρ'), x_c ∈ R^n, 0 < ρ' ≤ ρ}
                  = the intersection of all n-dimensional balls including F(γ) with radii not greater than ρ.

By definition, we know that

  F = F(0) ⊆ F(γ_1) ⊆ F(γ_2) (0 ≤ ∀γ_1 < ∀γ_2),
  c.hull(F, γ, ρ_1) ⊇ c.hull(F, γ, ρ_2) and ∩_{ρ>0} c.hull(F, γ, ρ) = c.hull(F(γ)) (0 ≤ ∀γ, 0 < ∀ρ_1 < ∀ρ_2).   (17)

(For the last identity, see Lemma 4.1 of [10].) Note that F(γ) depends on the quadratic inequality representation P_F, which is fixed throughout the paper. We call c.hull(F, γ, ρ) a (γ, ρ)-approximation of c.hull(F).
Lemma 5.1 Let U ⊆ R^n be an open convex set containing F. Then there exist γ > 0 and ρ > 0 such that c.hull(F, γ, ρ) is contained in U:

  F ⊆ c.hull(F, γ, ρ) ⊆ U.

(Note that U can be empty when F is empty.)

Proof: We first show that c.hull(F(γ)) ⊆ U for some γ > 0. Assume on the contrary that there exist sequences {γ_k > 0 (k = 1, 2, …)} and {x^k ∈ R^n (k = 1, 2, …)} for which

  γ_k → 0 as k → ∞;  x^k ∈ F(γ_k), i.e., p(x^k) ≤ γ_k (∀p(·) ∈ P_F) (∀k = 1, 2, …);  x^k ∉ U (∀k = 1, 2, …).

Since the sequence {x^k (k = 1, 2, …)} lies in the compact set C_0 \ U, we may assume that it converges to some x̄ ∈ C_0 \ U. On the other hand, we see by continuity that for ∀p(·) ∈ P_F,

  p(x̄) = lim_{k→∞} p(x^k) ≤ lim_{k→∞} γ_k = 0;

hence x̄ ∈ F. This contradicts the assumption that F ⊆ U. Thus we have shown that c.hull(F(γ)) ⊆ U for some γ > 0. Now recall the relations in (17). Since each c.hull(F, γ, ρ) is a compact subset of C_0, we can conclude that

  c.hull(F, γ, ρ) ⊆ U for ∀ sufficiently large ρ > 0.
Proof of Theorem 3.3: We will simultaneously deal with the two cases, the one where the sequence {C_k (k = 0, 1, 2, …)} is generated by Algorithm 3.1 and the other where the sequence is generated by Algorithm 3.2. By Lemma 4.3, we may assume without loss of generality that v_i = e_i (i = 1, 2, …, n); see also the paragraph below Lemma 4.3. By Lemma 4.2, we know that property (a) holds whenever we take D_1 ⊇ D_V as assumed in Theorem 3.3. So we only deal with properties (b) and (c)'.

(i) By Lemma 5.1, c.hull(F, γ, ρ) ⊆ U for some γ > 0 and ρ > 0. If F is empty, we can choose γ > 0 and ρ > 0 such that c.hull(F, γ, ρ) is empty. Let γ̄ ≡ γ, κ ≡ 4 and δ̃ ≡ ρ + ρ_0, where ρ_0 ≡ max{‖x' − x‖ : x', x ∈ C_0}. Then D(c, κ/2) = D. By Lemma 4.6, there exists a Δ > 0 such that every Δ-net D_2 of D satisfies property P2. We will prove that this Δ > 0 fulfills the requirement of Theorem 3.3. Let a Δ-net D_2 of D be fixed.

(ii) Let C_∞ ≡ ∩_{k=0}^∞ C_k. Since each C_k is compact, so is C_∞. If C_∞ = ∅ then C_k = ∅ for some finite number k and F = ∅; hence property (b) follows in this case. Therefore we will only deal with the case C_∞ ≠ ∅ in the rest of the proof.

(iii) Since c.hull(F, γ, ρ) ⊆ U, it suffices to show that C_∞ ⊆ c.hull(F, γ, ρ). We assume on the contrary that there exists an x̄ ∈ C_∞ \ c.hull(F, γ, ρ), and we will derive a contradiction. If F(γ) = ∅, let ρ' ∈ (0, ρ], x_c ∈ R^n and x̃ ∈ R^n be such that ρ' < ‖x̄ − x_c‖ ≤ ρ and x̃ = x̄. Otherwise, we can choose an x̃ ∈ F(γ), and, since x̄ ∉ c.hull(F, γ, ρ), we can take an n-dimensional ball B(x_c, ρ') with ρ' ∈ (0, ρ] such that F(γ) ⊆ B(x_c, ρ') and x̄ ∉ B(x_c, ρ'). In both cases, let δ̂ ≡ sup{‖x − x_c‖ : x ∈ C_∞}, and choose an x̂ ∈ C_∞ such that δ̂ = ‖x̂ − x_c‖. Then δ̂ ≥ ‖x̄ − x_c‖ > ρ', so that x̂ ∉ B(x_c, ρ') and hence x̂ ∉ F(γ). We also see that

  δ̂ = ‖x̂ − x̃ + x̃ − x_c‖ ≤ ρ_0 + ρ = δ̃.

Hence we obtain that

  δ̂ ∈ (0, δ̃],  x̂ ∉ F(γ) and x̂ ∈ C_∞ ≡ ∩_{k=0}^∞ C_k ⊆ B(x_c, δ̂).   (18)

(iv) Let w ≡ (x̂ − x_c)/δ̂ ∈ D. We also see from x̂ ∉ F(γ) that there is a p_0(·) ∈ P_F such that p_0(x̂) > γ. Let M̃ be the negative of the Hessian matrix of the quadratic function p_0(·). By Lemma 4.3, we may assume without loss of generality that x_c = 0 in (18); if necessary, apply the parallel translation which maps x_c to the origin. Note that such a transformation preserves the Hessian matrix of the quadratic function p_0(·). Then all the assumptions (12) and (15) in property P2 are satisfied. Therefore, if k is sufficiently large then there is a quadratic function

  f_k(·) ∈ c.cone(P^2(C_k, D_V, D_2)) ⊆ c.cone(P^k)

for which (16) holds. We consider the quadratic function p_0(·) + f_k(·). The Hessian matrix of this function vanishes; hence

  p_0(·) + f_k(·) ∈ c.cone(P_F ∪ P^2(C_k, D_V, D_2)) ∩ L ⊆ c.cone(P_F ∪ P^k) ∩ L.

We also see that

  p_0(x̂) + f_k(x̂) = p_0(δ̂w) + f_k(δ̂w) > γ − γ = 0.

Therefore

  x̂ = δ̂w ∉ {x ∈ C_0 : p(x) ≤ 0 (∀p(·) ∈ c.cone(P_F ∪ P^k) ∩ L)} = F̂^L(C_0; P_F ∪ P^k) (by Lemma 4.1).

On the other hand, we know that C_{k+1} ⊆ F̂^L(C_0; P_F ∪ P^k); hence x̂ = δ̂w ∉ C_{k+1}. This is a contradiction to x̂ = δ̂w ∈ C_∞ = ∩_{k=0}^∞ C_k. This completes the proof of the theorem.
6. Proof of Theorem 3.5.

Theorem 3.5 is proved in a similar way as Theorem 3.3. In particular, the proof relies on Lemma 4.6. Lemmas 6.1 and 6.2 furnish the two constants δ̃ > 0 and γ̄ > 0 which appear in assumption (11) of Lemma 4.6.

Lemma 6.1 Let ε > 0 and ρ_0 = max{‖x − x'‖ : x, x' ∈ C_0}. Suppose that the set

  C'_0 ≡ {x' ∈ C_0 : c^T x' ≥ sup{c^T x : x ∈ F} + ε}   (19)

is nonempty.
(i) Let δ ≡ (ρ_0² + 2ε²)/ε. Then

  F ⊆ int(∩_{x' ∈ C'_0} B(x' − δc, δ)).

(ii) Let κ > 0. Suppose that C_1 ⊆ C_0 is a nonempty compact convex set which contains F and intersects with C'_0 at an x̄ ∈ C_1 ∩ C'_0. Let

  δ̃ ≡ max{(ρ_0² + 2ε²)/ε, 4ρ_0/κ},  δ̄ ≡ δ̃ + ρ_0,
  G ≡ C_0 \ int(∩_{x' ∈ C'_0} B(x' − δ̃c, δ̃)).   (20)

Then there exist x̂ ∈ C_1, w ∈ D and δ̂ ∈ [δ̃, δ̄] such that

  x̂ ∈ G (hence x̂ ∉ F),   (21)
  w ∈ D(c, κ/2) (i.e., w ∈ D and ‖w − c‖ ≤ κ/2),   (22)
  C_1 ⊆ B(x̂ − δ̂w, δ̂) (i.e., ‖x − (x̂ − δ̂w)‖ ≤ δ̂ for ∀x ∈ C_1).   (23)

(Note that F can be empty.)
Proof: (i) If F = ∅ then the inclusion relation in (i) is obvious. So we only deal with the case that F ≠ ∅. By assumption, we first observe that

  δ² ≥ (ρ_0²/ε + 2ε)² > ρ_0² + 3ε².

Let τ be a positive number such that τ + √(δ² − ρ_0² − 3ε²) ≤ δ. We will show that

  B(x, τ) ⊆ B(x' − δc, δ) (∀x ∈ F and ∀x' ∈ C'_0).   (24)

Then the inclusion relation in (i) follows. Assume that x ∈ F and x' ∈ C'_0. Then δ ≥ (ρ_0² + 2ε²)/ε > 0 and

  c^T x' − ε ≥ c^T x = c^T x' + c^T(x − x') ≥ c^T x' − ρ_0.   (25)

Here the last inequality follows from the relations ‖c‖ = 1 (by (iii) of Condition 2.1) and x, x' ∈ C_0. Hence

  δ − ε ≥ ω ≡ δ + c^T x − c^T x' ≥ δ − ρ_0 > 0.   (26)

Let x'' = c^T(x − x')c + x'. Note that the first term c^T(x − x')c is the orthogonal projection of the vector x − x' onto the direction c. Then we obtain that

  c^T(x'' − x) = c^T x'' − c^T x = c^T(x − x') + c^T x' − c^T x = 0   (27)

and

  ρ_0² ≥ ‖x − x'‖² (since x', x ∈ C_0)
       = ‖x − x'' + x'' − x'‖²
       = ‖x − x''‖² + ‖c^T(x − x')c‖² (by (27))
       ≥ ‖x − x''‖².   (28)

It follows that

  ‖x − (x' − δc)‖² = ‖(x − x'') + (δc + x'' − x')‖²
                   = ‖(x − x'') + (δ + c^T x − c^T x')c‖² (since x'' = c^T(x − x')c + x')
                   = ‖x − x''‖² + ω² (by (27) and ‖c‖ = 1)
                   ≤ ρ_0² + (δ − ε)² (by (28) and (26))
                   = ρ_0² + δ² − 2δε + ε²
                   ≤ ρ_0² + δ² + ε² − 2(ρ_0² + 2ε²) (since δ ≥ (ρ_0² + 2ε²)/ε implies δε ≥ ρ_0² + 2ε²)
                   = δ² − ρ_0² − 3ε².

Hence we obtain that

  x ∈ B(x' − δc, √(δ² − ρ_0² − 3ε²)).

Now assume that x̃ ∈ B(x, τ). Then we see that

  ‖x̃ − (x' − δc)‖ = ‖x̃ − x + x − (x' − δc)‖ ≤ ‖x̃ − x‖ + ‖x − (x' − δc)‖ ≤ τ + √(δ² − ρ_0² − 3ε²) ≤ δ.

Thus we have shown relation (24).

(ii) Let δ̂ ≡ max{‖x − (x̄ − δ̃c)‖ : x ∈ C_1}, and choose x̂ ∈ C_1 such that δ̂ = ‖x̂ − (x̄ − δ̃c)‖. We then see that

  C_1 ⊆ B(x̄ − δ̃c, δ̂);
  δ̂ ≥ δ̃ (since x̄ ∈ C_1);
  x̂ ∈ C_0 \ int B(x̄ − δ̃c, δ̃) ⊆ G (hence (21) holds);
  δ̂ = ‖x̂ − (x̄ − δ̃c)‖ ≤ ‖x̂ − x̄‖ + δ̃ ≤ ρ_0 + δ̃ = δ̄;

hence δ̂ ∈ [δ̃, δ̄]. Now we define w ≡ (x̂ − (x̄ − δ̃c))/δ̂ ∈ D. Then x̄ − δ̃c = x̂ − δ̂w; hence

  C_1 ⊆ B(x̄ − δ̃c, δ̂) = B(x̂ − δ̂w, δ̂).

Thus we have shown (23). We also see that

  ‖w − c‖ = ‖(x̂ − (x̄ − δ̃c))/δ̂ − c‖
          = ‖(x̂ − x̄)/δ̂ + ((δ̃ − δ̂)/δ̂)c‖
          ≤ 2ρ_0/δ̂ (since x̂, x̄ ∈ C_0, δ̃ ≤ δ̂ ≤ δ̃ + ρ_0 and ‖c‖ = 1)
          ≤ 2ρ_0/δ̃ (since δ̂ ∈ [δ̃, δ̄])
          ≤ κ/2 (since 4ρ_0/κ ≤ δ̃).

Thus we have shown (22).
Lemma 6.2 Let G be a compact subset of C_0 such that F ∩ G = ∅. Then

  inf_{x ∈ G} sup_{p(·) ∈ P_F} p(x) > 0.

Proof: Assume on the contrary that inf_{x ∈ G} sup_{p(·) ∈ P_F} p(x) ≤ 0. Then there exists a sequence {x^k ∈ G (k = 1, 2, …)} such that lim_{k→∞} sup_{p(·) ∈ P_F} p(x^k) ≤ 0. It follows that for ∀p(·) ∈ P_F,

  p(x^k) ≤ sup_{p(·) ∈ P_F} p(x^k) (k = 1, 2, …).

Since G is compact, we may assume without loss of generality that x^k → x̄ ∈ G as k → ∞. Taking the limit as k → ∞ in the inequality above, we obtain that p(x̄) ≤ 0 for ∀p(·) ∈ P_F, which implies that x̄ ∈ F ∩ G. This is a contradiction to the assumption F ∩ G = ∅.
Proof of Theorem 3.5: We will simultaneously deal with the two cases, the one where the sequence {C_k (k = 0, 1, 2, …)} is generated by Algorithm 3.1 and the other where the sequence is generated by Algorithm 3.2. By Lemma 4.3, we may assume without loss of generality that v_i = e_i (i = 1, 2, …, n); see also the paragraph below Lemma 4.3. By Lemma 4.2, we know that property (a) holds whenever we take D_1 ⊇ D_V as assumed in Theorem 3.3. So we only deal with properties (b) and (c)''.

(i) Define C'_0 by (19). Assume first that C'_0 = ∅. In this case, we must have ζ* ≡ sup{c^T x : x ∈ F} > −∞; hence F ≠ ∅, and (b) of Theorem 3.3 cannot occur. If F ≠ ∅ then ζ_0 = max{c^T x : x ∈ C_0} < ζ* + ε; hence (c)'' holds with k = 0. Therefore we only deal with the case where C'_0 ≠ ∅. Define δ̃ > 0, δ̄ > 0 and G ⊆ C_0 by (20). Then G forms a compact subset of C_0 and, by (i) of Lemma 6.1, we know that F ∩ G = ∅. Hence, by Lemma 6.2, we can take a γ̄ > 0 such that for ∀x ∈ G there is a p(·) ∈ P_F for which p(x) > γ̄. Applying Lemma 4.6, we can find a Δ > 0 such that every Δ-net D_2 of D(c, κ) satisfies property P2. We will prove that this Δ fulfills all the requirements of Theorem 3.5. Let a Δ-net D_2 of D(c, κ) be fixed.

(ii) Let C_∞ ≡ ∩_{k=0}^∞ C_k. Since each C_k is compact, so is C_∞. Define ζ_∞ ≡ lim_{k→∞} ζ_k. We will show that

  ζ_∞ = sup{c^T x : x ∈ C_∞}.   (29)

Since C_∞ ⊆ C_k (k = 0, 1, 2, …), we know that ζ_k ≥ sup{c^T x : x ∈ C_∞} (k = 0, 1, 2, …); hence ζ_∞ ≡ lim_{k→∞} ζ_k ≥ sup{c^T x : x ∈ C_∞}. Thus it remains to show ζ_∞ ≤ sup{c^T x : x ∈ C_∞}. If C_k = ∅ for some finite number k, then both ζ_∞ and sup{c^T x : x ∈ C_∞} become −∞. Now assume that C_k ≠ ∅ (∀k = 0, 1, …). Then, by the definition of the sequence {ζ_k (k = 0, 1, 2, …)} ⊆ R, there exists a sequence {x^k (k = 0, 1, 2, …)} ⊆ R^n such that x^k ∈ C_k ⊆ C_0 and ζ_k = c^T x^k. Since C_0 is compact, the sequence {x^k (k = 0, 1, 2, …)} has a limit point x^∞ ∈ C_0. Since x^k ∈ C_j (∀j = 0, 1, 2, …, k; ∀k = 0, 1, …), we see that x^∞ is a common accumulation point of the sets C_k (∀k = 0, 1, …). Since each C_k is compact, we have that x^∞ ∈ C_k (∀k = 0, 1, …); hence x^∞ ∈ C_∞ ≡ ∩_{k=0}^∞ C_k. By continuity, we obtain that

  sup{c^T x : x ∈ C_∞} ≥ c^T x^∞ = lim_{k→∞} ζ_k = ζ_∞.

Thus we have shown the identity (29).

(iii) To prove property (c)'', it suffices to show that ζ_∞ < ζ* + ε if F ≠ ∅, while, to prove property (b), it suffices to show that C_∞ = ∅ if F = ∅. Assume on the contrary that ζ_∞ ≥ ζ* + ε when F ≠ ∅ and that C_∞ ≠ ∅ when F = ∅. In both cases, there is an x̄ ∈ C_∞ ∩ C'_0. By (ii) of Lemma 6.1, applied to C_1 := C_∞, there exist x̂ ∈ C_∞, w ∈ D and δ̂ ∈ [δ̃, δ̄] satisfying (21), (22) and (23). Define x_c ≡ x̂ − δ̂w. We see from (21) that there is a p_0(·) ∈ P_F for which p_0(x̂) > γ̄.

(iv) The rest of the proof is exactly the same as the argument given in part (iv) of the proof of Theorem 3.3, and is omitted here.
7. The Successive Convex Relaxation Methods and the Lovász–Schrijver Procedures.

In this section, we connect our Algorithm 3.1 (the SSDP Relaxation Method) and Algorithm 3.2 (the SSILP Relaxation Method) to the Lovász–Schrijver lift-and-project procedures given in the paper [11], and our Corollary 3.4 to the related theoretical results of [11]. To make the connection, we focus on 0-1 programs, and assume throughout this section that F is a subset of {0, 1}^n.

7.1. 0-1 Convex Programs.

We begin with the case where a compact convex subset C_0 of [0, 1]^n satisfying F = C_0 ∩ {0, 1}^n is given in advance, as in the original Lovász–Schrijver procedures. We will discuss a more general case later in Section 7.2. Let

  K_0 ≡ c.cone({(1; x) ∈ R^{1+n} : x ∈ C_0}) = {λ(1; x) ∈ R^{1+n} : λ ≥ 0 and x ∈ C_0},
  K_I ≡ c.cone({(1; x) ∈ R^{1+n} : x ∈ c.hull(F)}) = the convex cone spanned by the 0-1 vectors in K_0.

Here the 0th coordinate is special; it is used in homogenizing the sets of interest from R^n to R^{1+n}. Clearly,

  C_0 = {x ∈ R^n : (1; x) ∈ K_0} and c.hull(F) = {x ∈ R^n : (1; x) ∈ K_I}.

The closed convex cone K_0 serves as an initial convex relaxation of K_I in the Lovász–Schrijver lift-and-project procedures. Given the current convex relaxation K_k of K_I, first a closed convex cone in the space S^{1+n} of (1+n) × (1+n) symmetric matrices is defined (the lifting operation). Then a projection of this cone onto R^{1+n} gives the next convex relaxation K_{k+1} of K_I.

Now, we define the lifting operation in general. Let K and T be closed convex cones in R^{1+n}, and let K* ≡ {v ∈ R^{1+n} : u^T v ≥ 0 (∀u ∈ K)} and T* denote their duals, respectively. A (1+n) × (1+n) symmetric matrix Y is in M_+(K, T) if

(i) Y ∈ S^{1+n}_+,
(ii) Y e_0 = Diag(Y), where e_0 ∈ R^{1+n} denotes the unit vector with the 0th coordinate 1 and Diag(Y) the (1+n)-dimensional vector composed of the diagonal elements of Y,
(iii) u^T Y v ≥ 0 (∀u ∈ K* and ∀v ∈ T*). (This condition is equivalent to Y K* ⊆ T or Y T* ⊆ K.)

Now, we describe the projection operation:

  N_+(K, T) ≡ {Y e_0 : Y ∈ M_+(K, T)}.

The iterated operators N_+^k(K, T) are defined as follows: N_+^0(K, T) ≡ K, and N_+^k(K, T) ≡ N_+(N_+^{k−1}(K, T), T) (∀k = 1, 2, …).

Another procedure studied in [11] uses a weaker relaxation, obtained by removing condition (i) in the lifting operation. Let M(K, T) ⊆ S^{1+n} and N(K, T) ⊆ R^{1+n} denote the corresponding sets for this procedure. We will refer to the first procedure, which uses the lifting M_+(K, T) and the projection N_+(K, T), as the N_+ procedure, and call the other (using M(K, T) and N(K, T)) the N procedure.
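For intuition (an illustrative sketch, not taken from [11]): for any 0-1 point x, the rank-one lift Y = (1; x)(1; x)^T satisfies conditions (i) and (ii) of the lifting operation, since x_i² = x_i makes the diagonal of Y equal to its 0th column; condition (iii) holds as well, because u^T Y v = (u^T z)(z^T v) ≥ 0 whenever z ≡ (1; x) lies in both K and T:

```python
import numpy as np

x = np.array([1.0, 0.0, 1.0])          # a 0-1 point
z = np.concatenate(([1.0], x))         # homogenized vector (1; x)
Y = np.outer(z, z)                     # rank-one lifted matrix

# (i) Y is positive semidefinite
assert np.all(np.linalg.eigvalsh(Y) >= -1e-12)
# (ii) Y e0 = Diag(Y): the 0th column equals the diagonal, since x_i^2 = x_i
assert np.allclose(Y[:, 0], np.diag(Y))
```

This is exactly why the lifted cones relax K_I: every generator (1; x) of K_I with x ∈ F produces a feasible Y, so its projection Y e_0 = (1; x) survives every iteration.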

mentioned above. See Theorem 5.3 and its proof of [10]. In the case of 0-1 integer linear programs, one of the important choices for T is to x it to be the convex cone of all 0-1 vectors in R1+n with x0 = 1; we assume in the remainder of this section that where ei 2 R1+n denotes the unit vector with the ith coordinate 1. (We used the notation N+ (K; T ), whereas N+ (K) is used in [11] for this choice of T .) Lovsz and Schrijver proved the a following two theorems.
Theorem 7.1

T  c.conef(e

0

+ e1 ); (e0 + e2 ); . . . ; (e0 + en )g;

(Theorem 1.4 of [11])

K  N (K ; T )  N (K ; T )  . . .  N n (K ; T ) = KI
0 + 0 2 + 0 + 0

and
Theorem 7.2

K  N (K ; T )  N (K ; T )  . . .  N n (K ; T ) = KI :
0 0 2 0 0

(Theorem 1.6 of [11]) Let K be a closed convex cone in R1+n . Suppose that there exists a weak separation oracle for K. Then the weak separation problem for N (K; T ) and for N+ (K; T ) can be solved in polynomial time. (For the de nition of weak separation oracle, see [9].) This result hinges upon the facts that conditions (i) and (ii) can be checked in polynomial time and that condition (iii) on Y can also be checked in polynomial time since T 3 turns out to be the polyhedral cone, and a weak separation oracle for K exists. Now let us see how our Algorithm 3.1 (the SSDP Relaxation Method) and Algorithm 3.2 (the SSILP Relaxation Method) apply to 0-1 convex quadratic programs. Let P I denote the set of quadratic functions c.conefe0 ; e1 ; . . . ; en ; (e0 0 e1 ); (e0 0 e2 ); . . . ; (e0 0 en )g;

xi (xi 0 1)  0 and
maximize subject to

0 xi(xi 0 1)  0 (8i = 1; 2; . . . ; n):
)

(30)

Then we can rewrite the 0-1 quadratic program under consideration as

cT x x 2 F  fx 2 C

0

: p(x)  0 (8p(1) 2 P I )g:

(31)
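The two inequality families in (30) pin every coordinate to {0, 1}: x_i(x_i - 1) ≤ 0 forces x_i ∈ [0, 1], while the reversed inequality forces x_i ≤ 0 or x_i ≥ 1. A tiny sketch (the helper name is ours) checking both families at once:

```python
def satisfies_P_I(x, tol=1e-12):
    """A point satisfies BOTH x_i(x_i - 1) <= 0 and -x_i(x_i - 1) <= 0
    from (30) exactly when every coordinate x_i is 0 or 1."""
    return all(abs(xi * (xi - 1.0)) <= tol for xi in x)

print(satisfies_P_I([0.0, 1.0, 1.0]))   # True: a 0-1 vector
print(satisfies_P_I([0.5, 1.0]))        # False: -x_1(x_1 - 1) = 0.25 > 0
```

This is why adjoining P_I to the constraint set turns the 0-1 combinatorial condition into a system of quadratic inequalities, as in (31).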

We will show below that if we take direction sets D_1 and D_2 as in Corollary 3.4, then Algorithm 3.1 and Algorithm 3.2 work in a similar way as the Lovász-Schrijver N_+ and N procedures, respectively. Here (1, x) denotes the column vector in R^{1+n} obtained by appending x ∈ R^n to the 0th coordinate 1.

Lemma 7.3 Let {C_k (k = 0, 1, ...)} be the sequence generated by Algorithm 3.1 (the SSDP Relaxation Method) or Algorithm 3.2 (the SSILP Relaxation Method) applied to the 0-1 QP (31), using D_1 = D_I ≡ {e_1, e_2, ..., e_n, -e_1, -e_2, ..., -e_n} and D_2 = D. Then

    N_+^k(K_0, T) ⊇ c.cone({(1, x) : x ∈ C_k}) when we use Algorithm 3.1;    (32)
    N^k(K_0, T) ⊇ c.cone({(1, x) : x ∈ C_k}) when we use Algorithm 3.2    (33)

(k = 0, 1, 2, ...).
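The proof below manipulates support-function values α(C, d) ≡ max{d^T x : x ∈ C} along the directions ±e_i, and the rank-2 quadratic functions built from pairs of directions, which are nonpositive on C by construction. For a polytope given by its vertex list both are elementary to compute; a sketch (helper names and the explicit sign convention are our reading of the proof, not code from the paper):

```python
def alpha(vertices, d):
    """Support number alpha(C, d) = max{d^T x : x in C} for the polytope
    C = conv(vertices)."""
    return max(sum(di * xi for di, xi in zip(d, x)) for x in vertices)

def rank2_value(vertices, d1, d2, x):
    """Value at x of -(alpha(C, d1) - d1^T x)(alpha(C, d2) - d2^T x),
    a rank-2 quadratic function that is nonpositive everywhere on C."""
    s1 = alpha(vertices, d1) - sum(di * xi for di, xi in zip(d1, x))
    s2 = alpha(vertices, d2) - sum(di * xi for di, xi in zip(d2, x))
    return -s1 * s2

# C_k = [0, 1]^2, given by its four vertices.
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(alpha(square, (1, 0)))     # 1 = alpha(C_k, e_1), as in (35)
print(alpha(square, (-1, 0)))    # 0 = alpha(C_k, -e_1), as in (35)
print(rank2_value(square, (1, 0), (-1, 0), (0.3, 0.7)))  # nonpositive inside C_k
```

For a general convex C_k the maximum over vertices is replaced by an LP or SDP over C_k, which is exactly what the discretized algorithms solve at each iteration.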

Proof: We will only prove the relation (32); the other relation (33) can be proved similarly. For simplicity of notation, define

    C̄_k ≡ c.cone({(1, x) : x ∈ C_k}) (∀k = 0, 1, 2, ...),
    K_{k+1} ≡ N_+(K_k, T) (∀k = 0, 1, 2, ...).

By definition, N_+^0(K_0, T) = K_0 = C̄_0. Assuming that C̄_k ⊆ K_k, we will prove that C̄_{k+1} ⊆ K_{k+1}. If C_k is empty then so are C̄_k and C̄_{k+1}; hence the desired relation follows. If C_k consists of a single point x̂, then we have either {x̂} = F = C_k or F = ∅. We can derive that

    C̄_{k+1} = K_I = c.cone({(1, x̂)}) if {x̂} = F = C_k,    C̄_{k+1} = ∅ if F = ∅.

Thus the desired relation follows in both cases. Now we assume that C_k consists of at least two points. Then there is a j ∈ {1, 2, ..., n} such that

    0 < max{e_j^T x : x ∈ C_k} - min{e_j^T x : x ∈ C_k}
      = max{e_j^T x : x ∈ C_k} + max{-e_j^T x : x ∈ C_k}
      = α(C_k, e_j) + α(C_k, -e_j),    (34)

where e_j ∈ R^n denotes the unit vector with the jth coordinate 1. Since C_k ⊆ [0, 1]^n, we also see that

    -1 ≤ α(C_k, -e_i) ≤ 0 ≤ α(C_k, e_i) ≤ 1 (∀i = 1, 2, ..., n).    (35)

On the other hand, we see by Remark 3.8 that

    C_{k+1} = F̂(C_0, P_I ∪ P^2(C_k, D_I, D)).

Hence each y = λ(1, x) with x ∈ C_{k+1} and λ > 0 is characterized as follows: there exists X ∈ S^n such that

    Y = λ [ 1  x^T ;  x  X ] ∈ S^{1+n}_+,
    γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P_I),    (36)
    γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P^2(C_k, D_I, D)).    (37)

Recall that P_I consists of the quadratic functions in (30), so that we can rewrite inequality (36) as the equality Y e_0 = Diag(Y) imposed on each Y ∈ M_+(K_k, T), where e_0 ∈ R^{1+n} denotes the unit vector with the 0th coordinate 1. Letting

    J_1 ≡ c.cone({(α(C_k, d_1), -d_1) : d_1 ∈ D_I}),
    J_2 ≡ c.cone({(α(C_k, d_2), -d_2) : d_2 ∈ D}),

we can rewrite (37) as

    0 ≤ u^T Y v (∀u ∈ J_2 and ∀v ∈ J_1).

We then see that

    J_1 = c.cone({(α(C_k, e_i), -e_i), (α(C_k, -e_i), e_i) : i = 1, 2, ..., n})
        ⊇ c.cone({(1, -e_i), (0, e_i) : i = 1, 2, ..., n})    (by (34) and (35))
        = T*,
    J_2 = C̄_k* ⊇ K_k* (since C̄_k ⊆ K_k).

Hence (37) implies

    0 ≤ u^T Y v (∀u ∈ K_k* and ∀v ∈ T*).

Thus we have shown that if y = λ(1, x) for some x ∈ C_{k+1} and λ > 0, then there exists X ∈ S^n such that

    (i)' Y = λ [ 1  x^T ;  x  X ] ∈ S^{1+n}_+,
    (ii)' Y e_0 = Diag(Y),
    (iii)' u^T Y v ≥ 0 (∀u ∈ K_k* and ∀v ∈ T*).

These three conditions characterize each y = λ(1, x) ∈ K_{k+1} = N_+(K_k, T). Therefore we have shown that C̄_{k+1} ⊆ K_{k+1}.

Remark 7.4 If F does not lie on any of the hyperplanes x_j = 0 or x_j = 1 (∀j = 1, 2, ..., n), then we have that

    α(C_k, -e_i) = 0 and α(C_k, e_i) = 1 (∀i = 1, 2, ..., n; ∀k = 1, 2, ...)

and J_1 = T*. In this case, we see that C̄_{k+1} = K_{k+1} and the equality holds in (32) and (33).

Remark 7.5 If α(C_k, e_i) < 1 and/or α(C_k, -e_i) < 0 in (35), then we know that

    F ∩ {x ∈ [0, 1]^n : x_i = 1} = ∅ and/or F ∩ {x ∈ [0, 1]^n : x_i = 0} = ∅.

In this case, using Lemma 4.1, we can derive that

    C_{k+1} ∩ {x ∈ [0, 1]^n : x_i = 1} = ∅ and/or C_{k+1} ∩ {x ∈ [0, 1]^n : x_i = 0} = ∅.

This fact is consistent with Lemma 1.3 and the proof of Theorem 1.6 in [11]. Actually, Lemma 1.3 of [11] combined with our Lemma 7.3 implies

    C_{k+1} ⊆ {x ∈ [0, 1]^n : x_i = 0} and/or C_{k+1} ⊆ {x ∈ [0, 1]^n : x_i = 1}.

Now, we are ready to prove that equality holds in (32) and (33).

Theorem 7.6 Let {C_k (k = 0, 1, ...)} be the sequence generated by Algorithm 3.1 (the SSDP Relaxation Method) or Algorithm 3.2 (the SSILP Relaxation Method) applied to the 0-1 QP (31), using D_1 = D_I ≡ {e_1, e_2, ..., e_n, -e_1, -e_2, ..., -e_n} and D_2 = D. Then

    N_+^k(K_0, T) = c.cone({(1, x) : x ∈ C_k}) when we use Algorithm 3.1;    (38)
    N^k(K_0, T) = c.cone({(1, x) : x ∈ C_k}) when we use Algorithm 3.2    (39)

(k = 0, 1, 2, ...).

Proof: Consider the proof of Lemma 7.3. To prove the equations (38) and (39), we take the induction hypothesis to be the equality (which holds for k = 0) rather than the containment. So, it suffices to prove that Y ∈ M(K_k, T) implies

    0 ≤ u^T Y v (∀u ∈ J_2 and ∀v ∈ J_1).

By Remark 7.4, the equations (38) and (39) hold if

    α(C_k, -e_i) = 0 and α(C_k, e_i) = 1 (∀i = 1, 2, ..., n; ∀k = 1, 2, ...).

Now, suppose there exists r ∈ {1, 2, ..., n} such that α(C_k, e_r) < 1. Then by Remark 7.5, Y ∈ M(K_k, T) implies y_rr = y_r0 = 0. Let y_j denote the jth column of Y. Y ∈ M(K_k, T) also implies y_0 - y_j ∈ K_k (j = 1, 2, ..., n). Since y_r0 = 0 and K_k lies in the nonnegative orthant, we must have y_rj = 0 (j = 1, 2, ..., n). Since Y is symmetric, we also have y_ir = 0 (i = 1, 2, ..., n). We proved that

    Y (α(C_k, e_r), -e_r) ∈ K_k for all such r.

We have the same arguments for the indices r for which α(C_k, -e_r) < 0 (by the symmetry of the algorithms for 0-1 convex quadratic programs under the complementation of the variables). We proved that all the extreme rays v of J_1 that are different from those of T* also satisfy

    0 ≤ u^T Y v (∀u ∈ K_k*)

for every Y ∈ M(K_k, T). By the induction hypothesis, we obtain J_2 = K_k*, which completes the proof.

7.2. 0-1 Nonconvex Quadratic Programs.

We now deal with the general case where no compact convex subset C_0 of [0, 1]^n satisfying F = C_0 ∩ {0, 1}^n is given in advance. Instead, we consider the 0-1 nonconvex quadratic program of the form

    maximize   c^T x
    subject to x ∈ F ≡ {x ∈ [0, 1]^n : p(x) ≤ 0 (∀p(·) ∈ P_F)}.    (40)

Here P_I ⊆ P_F ⊆ Q.

Lemma 7.7 Let

    C_0 = F̂([0, 1]^n, P_F ∪ P^2([0, 1]^n, D_I, D))
        = { x ∈ [0, 1]^n : there exists X ∈ S^n such that Y = [ 1  x^T ;  x  X ] ∈ S^{1+n}_+ and
            γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P_F ∪ P^2([0, 1]^n, D_I, D)) }

(an SDP relaxation of F). Then F = C_0 ∩ {0, 1}^n.

Proof: It was shown in Lemma 5.2 of [10] that

    F = F̂([0, 1]^n, P_F ∪ P^1([0, 1]^n, D_I)) ∩ {0, 1}^n.

Since

    F ⊆ F̂([0, 1]^n, P_F ∪ P^2([0, 1]^n, D_I, D)) ⊆ F̂([0, 1]^n, P_F ∪ P^1([0, 1]^n, D_I)),

F = C_0 ∩ {0, 1}^n follows.

This lemma stays valid even if we replace F̂([0, 1]^n, P_F ∪ P^2([0, 1]^n, D_I, D)) by F̂_L([0, 1]^n, P_F ∪ P^2([0, 1]^n, D_I, D)) (a semi-infinite LP relaxation). (For the proof, we need Lemma 4.4.) As a consequence, we can reduce the 0-1 nonconvex quadratic program (40) to the 0-1 convex quadratic program (31) by applying one iteration of the SDP Relaxation Method or one iteration of the Semi-infinite LP Relaxation Method. Therefore, in the case of 0-1 nonconvex quadratic programs (40), our Algorithms 3.1 and 3.2 terminate in at most (1 + n) iterations if we choose the direction sets D_1 and D_2 as in Corollary 3.4. By using arguments similar to those in the proof of Theorem 1.6 of [11], we can also prove that whenever the representation P_F of F is finite, we can solve the weak separation problem for C_k in polynomial time for any k = O(1).

8. Concluding Discussions.

(A) Let ε > 0 and κ > 0. By Theorem 3.5, there exists a δ > 0 such that if we take D_1 ⊇ D_V and a δ-net D_2 of D(c, κ) in Step 0 of Algorithm 3.1 (or Algorithm 3.2), then we obtain an upper bound on the optimal objective value of the QP (1) within ε in a finite number of iterations. From the proof of Theorem 3.5 given in Section 6, we know that the value of such a δ depends on ε > 0, κ > 0, D_1, the diameter of C_0 and the quadratic inequality representation P_F of the feasible region F of the QP (1). However, it seems difficult to characterize the largest δ which works for the purpose at hand. In practice, we might want to avoid guessing δ and instead adjust it dynamically: we may start with a large value of δ and decrease it from one iteration to the next as we see appropriate. Some computational experience can be very helpful in designing strategies for choosing appropriate sequences δ_k > 0.
(B) Application to Quadratic Matrix Inequalities.

Let A_ij ∈ S^n (i, j = 0, 1, 2, ..., n), and let C_0 be a nonempty compact convex subset of R^n. Consider the maximization problem with a linear objective function c^T x over the feasible region

    F = { x = (x_1, x_2, ..., x_n)^T ∈ C_0 : x_0 = 1, Σ_{i=0}^{n} Σ_{j=0}^{n} A_ij x_i x_j ∈ S^n_+ }.

This problem covers bilinear matrix inequalities with bounded variables. We can rewrite F in terms of quadratic inequalities:

    F = { x = (x_1, x_2, ..., x_n)^T ∈ C_0 : x_0 = 1, -d^T ( Σ_{i=0}^{n} Σ_{j=0}^{n} A_ij x_i x_j ) d ≤ 0 (∀d ∈ D) }.
Therefore all the theoretical results (Theorem 3.3, Corollary 3.4, Theorem 3.5 and Corollary 3.6) hold for this problem. But the quadratic inequality representation of F is not finite, so that Condition 2.2, which we have imposed on the quadratic inequality representation of F to implement Algorithms 3.1 and 3.2, does not hold. In this case, however, we can rewrite the set C_{k+1} to be computed in Step 3 as

    C_{k+1} = { x ∈ R^n : there exists X ∈ S^n such that Y = [ 1  x^T ;  x  X ] ∈ S^{1+n}_+,
                Σ_{i=0}^{n} Σ_{j=0}^{n} A_ij Y_ij ∈ S^n_+ and
                γ + 2q^T x + Q • X ≤ 0 (∀qf(·; γ, q, Q) ∈ P̃_0 ∪ P_k) }

(see Section 2.6 of [10] to derive the representation above). Here P̃_0 denotes a convex quadratic inequality representation of C_0. If both P̃_0 and P_k are finite, then the maximization of a linear function a^T x over C_{k+1} turns out to be a standard SDP having a finite number of linear matrix inequality constraints. Therefore Algorithm 3.1 applied to this problem is implementable.

(C) Finally, our methods provide a canonical way of generating very large scale LP and SDP problems (essentially of arbitrary size) with certain structure. This provides yet more motivation for the study of computational techniques for the solution of such problems.

Acknowledgment: The authors are grateful to Professor Yang Dai, Ms. Akiko Takeda and Mr. Mituhiro Fukuda for their careful reading of the draft of the paper and their valuable comments and suggestions.

References

[1] F. Alizadeh, "Interior point methods in semidefinite programming with applications to combinatorial optimization," SIAM Journal on Optimization 5 (1995) 13-51.

[2] E. Balas, S. Ceria and G. Cornuéjols, "A lift-and-project cutting plane algorithm for mixed 0-1 programs," Mathematical Programming 58 (1993) 295-323.

[3] E. Balas and W. R. Pulleyblank, "The perfect matching subgraph polytopes of a bipartite graph," Networks 13 (1983) 495-516.

[4] S. Boyd, L. El Ghaoui, E. Feron and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory (SIAM, Philadelphia, 1994).

[5] R. W. Cottle, J. S. Pang and R. E. Stone, The Linear Complementarity Problem (Academic Press, New York, 1992).

[6] T. Fujie and M. Kojima, "Semidefinite relaxation for nonconvex programs," Journal of Global Optimization 10 (1997) 367-380.

[7] M. X. Goemans, "Semidefinite programming in combinatorial optimization," Mathematical Programming 79 (1997) 143-161.

[8] M. X. Goemans and D. P. Williamson, "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming," Journal of the Association for Computing Machinery 42 (1995) 1115-1145.

[9] M. Grötschel, L. Lovász and A. Schrijver, Geometric Algorithms and Combinatorial Optimization (Springer, New York, 1988).

[10] M. Kojima and L. Tunçel, "Cones of matrices and successive convex relaxations of nonconvex sets," Technical Report B-338, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Meguro, Tokyo, Japan, March 1998. Also issued as CORR 98-6, Dept. of Combinatorics and Optimization, Faculty of Mathematics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada.

[11] L. Lovász and A. Schrijver, "Cones of matrices and set-functions and 0-1 optimization," SIAM Journal on Optimization 1 (1991) 166-190.

[12] R. Horst and H. Tuy, Global Optimization, Second, Revised Edition (Springer-Verlag, Berlin, 1992).

[13] M. Mesbahi and G. P. Papavassilopoulos, "A cone programming approach to the bilinear matrix inequality problem and its geometry," Mathematical Programming 77 (1997) 247-272.

[14] Yu. E. Nesterov, "Semidefinite relaxation and nonconvex quadratic optimization," CORE Discussion Paper #9744, June 1997.

[15] S. Poljak, F. Rendl and H. Wolkowicz, "A recipe for semidefinite relaxation for (0,1)-quadratic programming," Journal of Global Optimization 7 (1995) 51-73.

[16] M. V. Ramana, An Algorithmic Analysis of Multiquadratic and Semidefinite Programming Problems, PhD thesis, Johns Hopkins University, Baltimore, MD, 1993.

[17] M. G. Safonov, K. G. Goh and J. H. Ly, "Control system synthesis via bilinear matrix inequalities," in: Proceedings of the 1994 American Control Conference, Baltimore, MD, 1994.

[18] H. D. Sherali and C. H. Tuncbilek, "A reformulation-convexification approach for solving nonconvex quadratic programming problems," Journal of Global Optimization 7 (1995) 1-31.

[19] N. Z. Shor, "Dual quadratic estimates in polynomial and boolean programming," Annals of Operations Research 25 (1990) 163-168.

[20] L. Vandenberghe and S. Boyd, "Semidefinite programming," SIAM Review 38 (1996) 49-95.

[21] Y. Ye, "Approximating quadratic programming with quadratic constraints," Working paper, Dept. of Management Sciences, The University of Iowa, April 1997.


