An algebraic geometry approach to nonlinear parametric optimization in control


Ioannis A. Fotiou*, Philipp Rostalski*, Bernd Sturmfels† and Manfred Morari*

arXiv:math/0509288v1 [math.OC] 13 Sep 2005

Abstract— We present a method for nonlinear parametric optimization based on algebraic geometry. The problem to be studied, which arises in optimal control, is to minimize a polynomial function with parameters subject to semialgebraic constraints. The method uses Gröbner bases computation in conjunction with the eigenvalue method for solving systems of polynomial equations. In this way, certain companion matrices are constructed off-line. Then, given the parameter value, an on-line algorithm is used to efficiently obtain the optimizer of the original optimization problem in real time.

I. INTRODUCTION

Optimal control is a very active area of research with broad industrial applications [1]. It is among the few control methodologies providing a systematic way to perform nonlinear control synthesis that also handles system constraints. To a great extent, it is thanks to this capability of dealing with constraints that model predictive control (MPC) has proven to be very successful in practice [2], [3]. Model predictive control uses on-line optimization to obtain the solution of the optimal control problem in real time, and has proven most effective in applications. Typically, the optimal control problem can be formulated as a discrete-time mathematical program, whose solution yields a sequence of control moves. Of these control moves only the first is applied, according to the receding horizon control (RHC) scheme.

The optimal control problem is formulated as a mathematical program, which can be a linear program (LP), a quadratic program (QP) or a general nonlinear program (NLP). For hybrid systems, the corresponding mathematical programs can be mixed-integer programs (MILPs, MIQPs or MINLPs) [4]. The class of the optimization problem depends on the objective function and on the class of systems for which one wants to derive an optimal controller.

Technology and cost factors, however, make the implementation of receding horizon control difficult, if not impossible in some cases. To circumvent these issues, the solution of the optimal control problem is computed off-line, by solving the corresponding mathematical program parametrically [5]. That is, we compute the explicit formula giving the solution of the program (control inputs) as a function of the problem parameters (measured state). The solution is then efficiently implemented on-line as a lookup table.
* Automatic Control Laboratory, Swiss Federal Institute of Technology, CH-8092 Zurich, Switzerland. † Department of Mathematics, University of California at Berkeley, 94720-3840 CA, USA. Corresponding Author: Email: fotiou@control.ee.ethz.ch

In the present work, we extend the concept of the explicit solution to the class of nonlinear polynomial systems with polynomial cost function. By polynomial systems we mean those systems whose state update equation is given by a polynomial vector field. For this class of systems, the resulting mathematical program is a nonlinear (polynomial) parametric optimization problem. While the explicit solution is not generally possible in the nonlinear case, we stress the fact that a partial precomputation of the optimal control law is still feasible using algebraic techniques [6]. In this paper, we use the eigenvalue method [7] in conjunction with Gröbner bases computation to perform nonlinear parametric optimization of polynomial functions subject to polynomial constraints.

II. PARAMETRIC OPTIMIZATION

Let u ∈ Rm be the decision-variable vector and x ∈ Rn be the parameter vector. The class of optimization problems that this paper deals with can generally assume the following form:

min_u J(u, x)   s.t.   g(u, x) ≤ 0,        (1)

where J(u, x) ∈ R[x_1, ..., x_n, u_1, ..., u_m] is the objective function and g ∈ R[x_1, ..., x_n, u_1, ..., u_m]^q is a vector polynomial function representing the constraints of the problem. By parametric optimization, we mean minimizing the function J(u, x) with respect to u for any given value of the parameter x ∈ X ⊆ Rn, where X is the set of admissible parameters. Therefore, the polynomial parametric optimization problem consists in finding a computational procedure for evaluating the maps

u*(x): Rn → Rm,  x ↦ u*,        (2)
J*(x): Rn → R,   x ↦ J*,

where

u* = arg min_u J(u, x),   J* = min_u J(u, x).        (3)

For the sake of simplicity, we assume that the feasible set defined by g(u, x) is compact, so that the minimum is attained. Also, in order for (2) not to be point-to-set maps, we focus our attention on one (arbitrary) optimizer.
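As a simple illustration of the maps (2)–(3), consider a hypothetical scalar instance (not taken from the paper): J(u, x) = u^2 − 2xu with the single constraint g(u, x) = u − 1 ≤ 0. For x ≤ 1 the unconstrained minimizer u = x is feasible, while for x > 1 the constraint becomes active, so that

u*(x) = min(x, 1),   J*(x) = −x^2 for x ≤ 1 and J*(x) = 1 − 2x for x > 1.

Even in this trivial case the solution maps are only piecewise polynomial in x, which indicates why a single closed-form expression is in general out of reach for polynomial problems.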

A. Posing the problem

Our point of departure is the observation that the cornerstone of continuous constrained optimization is the set of Karush-Kuhn-Tucker (KKT) conditions. All local and global minima of problem (1) (satisfying certain constraint qualifications) occur at the so-called "critical points" [8], namely the solution set of the following system:

∇_u J(u, x) + Σ_{i=1}^q λ_i ∇_u g_i(u, x) = 0
λ_i g_i(u, x) = 0        (4)
λ_i ≥ 0
g(u, x) ≤ 0.

For the class of problems we consider, the first two relations of the KKT conditions (4) form a square system of polynomial equations. Various methods have been proposed in the literature for solving systems of polynomial equations, both numerical and symbolic [9], [10], [11]. Here we consider symbolic methods, since our aim is to solve the optimization problem parametrically. We should point out that the underlying philosophy is to move as much as possible of the computational burden of solving the nonlinear program (1) off-line, leaving an easy task for the on-line implementation.

B. Off-line vs. on-line computations

The explicit representation of the optimal control law as a state feedback has been successfully investigated for the linear, quadratic and piecewise affine case. Among the advantages of the explicit representation is that one is able to analyze the controller, derive Lyapunov functions [12], perform dynamic programming iterations in an effective way [13], and even compute the infinite horizon solution for certain classes of constrained optimal control problems [14]. Unfortunately, such an explicit representation is not always possible. The enabling factor in the case of linear systems (or piecewise affine systems) is the fact that the KKT system (4) can be solved analytically. In the general polynomial case studied here, we have to solve a system of (nonlinear) polynomial equations. The next best alternative to an explicit solution is then to bring the system into such a form that, once the parameters are specified, the solution can be extracted easily and fast.

III. THE EIGENVALUE METHOD

In this section we briefly describe the method of eigenvalues ([7], Chapter 2, §4) for solving systems of polynomial equations. This method is used in conjunction with Gröbner bases to perform parametric optimization.

A. Solving systems of polynomial equations

Suppose we have a system of m polynomial equations f_i in m variables u_i:

f_1(u_1, ..., u_m) = 0,  ...,  f_m(u_1, ..., u_m) = 0.        (5)

These equations form an ideal I ⊆ K[u_1, ..., u_m], where K denotes an arbitrary field:

I := ⟨f_1, ..., f_m⟩.        (6)

The solution points we are interested in are the points of the variety over the algebraic closure K̄ of K,

V(I) = {s ∈ K̄^m : f_1(s) = 0, ..., f_m(s) = 0},        (7)

i.e. the set of common zeros of all polynomials in the ideal I. These points can be computed by means of Gröbner bases. An obvious choice would be a projection-based algorithm using lexicographic Gröbner bases, see ([15], Chapter 2, §8). Since the computation of a lexicographic Gröbner basis is very time consuming, we focus on a different method. The first step we take towards solving (5) is computing a Gröbner basis with respect to an arbitrary term order, e.g. the graded reverse lexicographic term order. We define G = {γ_1, ..., γ_t} to be this Gröbner basis of I.

B. The generalized companion matrix

Consider a polynomial function h ∈ K[u_1, ..., u_m]. The Gröbner basis G and the division algorithm make it possible to uniquely write any polynomial h ∈ K[u_1, ..., u_m] in the following form:

h = c_1(u)γ_1 + ··· + c_t(u)γ_t + \overline{h}^G,        (8)

where \overline{h}^G is the unique remainder of the division of h with respect to the Gröbner basis G. The polynomial h can in turn be multiplied with another polynomial function f ∈ K[u_1, ..., u_m] and their product expressed as follows:

f · h = d_1(u)γ_1 + ··· + d_t(u)γ_t + \overline{f · h}^G.        (9)

In the generic case, the ideal I will be zero-dimensional, which means that the corresponding quotient ring

A = K[u_1, ..., u_m]/I        (10)

is a finite-dimensional K-vector space ([15], Chapter 5, §2). The quotient ring of an ideal can be thought of as the set of all polynomials that do not belong to the ideal but belong to the underlying ring. Denote by b = [b_1, ..., b_l]^T the vector of the standard monomials. A monomial is standard if it is not divisible by any leading monomial of a polynomial in the Gröbner basis. These standard monomials of G form a basis

B = {b_1, ..., b_l}        (11)

for the K-vector space A. As a result, every remainder can be expressed with respect to this basis as an inner product

r_i = a_i^T · b,        (12)

where a_i ∈ K^l. We can now define the map m_h: A → A as follows: if \overline{p}^G ∈ A, then

m_h(\overline{p}^G) := \overline{h · p}^G = \overline{h · \overline{p}^G}^G ∈ A.        (13)
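To make the normal form (8) concrete, here is a minimal computational sketch, assuming SymPy's groebner/reduce interface; the toy ideal ⟨u1^2 − 2, u2^2 − 3⟩ and all names are illustrative, not from the paper:

import sympy as sp

u1, u2 = sp.symbols('u1 u2')
# Toy zero-dimensional ideal; its grevlex Groebner basis is the generating set itself.
G = sp.groebner([u1**2 - 2, u2**2 - 3], u1, u2, order='grevlex')

h = u1**3*u2 + u2**2
_, r = G.reduce(h)   # quotients and the unique remainder of (8)
print(r)             # 2*u1*u2 + 3: a K-linear combination of standard monomials

The remainder lives in the quotient ring A and is expressed in the standard monomials {1, u1, u2, u1·u2}, i.e. the basis (11) for this example.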

The following proposition holds.

Proposition 1: Let h ∈ K[u_1, ..., u_m]. Then the map m_h: A → A is K-linear.

The proof of Proposition 1 can be found in ([15], p. 51). Since A is a finite-dimensional vector space and the map m_h is linear, its representation with respect to a basis of this vector space is given by a square matrix M_h. The l × l matrix M_h is called the generalized companion matrix.

C. Computing the companion matrix

To compute the matrix M_h, assume that we have the basis B = {b_1, ..., b_l} consisting of the standard monomials b_i of the Gröbner basis G. Then, for each one of them, compute the remainder r_i of the polynomial h · b_i with respect to the Gröbner basis G:

\overline{h · b_i}^G = r_i,   ∀ b_i ∈ B.        (14)

All r_i ∈ A can in turn be expressed as an inner product

r_i = a_i^T · b        (15)

with respect to the basis B. By collecting the vectors a_i for all basis elements [7], we can construct a representation of the map m_h with respect to the basis B, i.e. calculate the matrix M_h as follows:

M_h ≡ [a_ij] = [a_1^T; ···; a_l^T].        (16)

Computing the companion matrix is a standard algebraic procedure implemented in various packages, e.g. in Maple 10.

D. Evaluating polynomial functions on a variety

Consider a polynomial function h ∈ R[u_1, ..., u_m]. The amazing fact about the matrix M_h is that the set of its eigenvalues is exactly the set of values of h over the variety V(I) defined by the ideal I. More precisely, V(I) is the set of all solution points in complex m-space Cm of the system (5). The following theorem holds.

Theorem 1: Let I ⊆ C[u_1, ..., u_m] be a zero-dimensional ideal, and let h ∈ C[u_1, ..., u_m]. Then, for λ ∈ C, the following are equivalent:
1) λ is an eigenvalue of the matrix M_h;
2) λ is a value of the function h on the variety V(I).

The proof can be found in ([7], p. 54). To obtain the coordinates of the solution set of (5), we evaluate the functions

h_1: u ↦ u_1,  ...,  h_m: u ↦ u_m        (17)

on the variety V(I) defined by the ideal I, where u above denotes the vector (u_1, ..., u_m). This can be done by means of the associated companion matrices of the functions h_i. The following theorem, taken from ([9], p. 22), is the basis for the calculation of these point coordinates.

Theorem 2: The complex zeros of the ideal I are the vectors of joint eigenvalues of the companion matrices M_{u_1}, ..., M_{u_m}, that is,

V(I) = {(u_1, ..., u_m) ∈ Cm : ∃v ∈ Cm ∀i: M_{u_i} v = u_i v}.

It has to be noted that any vector-valued polynomial function h on Rm can be evaluated over a zero-dimensional variety in the same way.
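Continuing the toy ideal ⟨u1^2 − 2, u2^2 − 3⟩ from the sketch above, the following hedged example assembles the companion matrices (16) from Gröbner remainders (14)–(15) and recovers the four solutions (±√2, ±√3) from the eigenvectors of a random combination of companion matrices (anticipating (23) below). The hand-entered monomial basis and all names are illustrative assumptions, not the paper's implementation:

import numpy as np
import sympy as sp

u1, u2 = sp.symbols('u1 u2')
G = sp.groebner([u1**2 - 2, u2**2 - 3], u1, u2, order='grevlex')
B = [sp.Integer(1), u1, u2, u1*u2]   # standard monomials (11), entered by hand

def companion(h):
    # Matrix of the multiplication map m_h on A = K[u1, u2]/I, cf. (16).
    M = sp.zeros(len(B), len(B))
    for i, bi in enumerate(B):
        r = G.reduce(sp.expand(h*bi))[1]      # remainder (14)
        p = sp.Poly(r, u1, u2)
        for j, bj in enumerate(B):            # coordinates w.r.t. B (15)
            M[i, j] = p.coeff_monomial(bj)
    return np.array(M).astype(float)

M1, M2 = companion(u1), companion(u2)

# Random linear combination: generically simple eigenvalues, shared eigenvectors.
rng = np.random.default_rng(1)
c1, c2 = rng.standard_normal(2)
_, V = np.linalg.eig(c1*M1 + c2*M2)

# Each eigenvector is proportional to b(s) = [1, u1, u2, u1*u2] evaluated at a
# root s, so normalizing by the first entry exposes the coordinates (Theorem 2).
for k in range(V.shape[1]):
    v = V[:, k] / V[0, k]
    print('u1 = %+.6f  u2 = %+.6f' % (v[1].real, v[2].real))

The printout lists the four points (±1.414214, ±1.732051), i.e. the joint eigenvalues promised by Theorem 2.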

IV. THE ALGORITHM

In this section, we present the proposed algorithm, which consists of two parts: the off-line part, where the generalized companion matrices for the optimization problem are constructed, and the on-line part, where this precomputed information is used and, given the value of the parameter x, the optimal solution is efficiently extracted.

A. Idea

Under certain regularity conditions, if J* (defined in (3)) exists and occurs at an optimizer u*, the KKT system (4) holds at u*. Consequently, J* is the minimum of J(u, x) over the semialgebraic set defined by the KKT equations and inequalities (4). These conditions can be separated into a set of inequalities and a square system of polynomial equations. The method of eigenvalues for solving systems of polynomial equations, as described in Section III, can be used for the latter. This method assumes that the ideal generated by the KKT system (4) is zero-dimensional. By ignoring the inequalities, a superset of all critical points is computed and, in a second step, all infeasible points are removed. Finally, among the feasible candidate points, those with the smallest cost function value have to be found via discrete optimization. By discrete optimization we mean choosing among a finite set the point which yields the smallest objective function value.

B. Off-line Part

In K[u_1, ..., u_m, λ_1, ..., λ_q], where K is the field of rational functions R(x_1, ..., x_n) in the parameter x, we define the KKT ideal

I_KKT = ⟨∇_u J(u, x) + Σ_{i=1}^q λ_i ∇_u g_i(u, x), λ_i g_i(u, x)⟩        (18)

containing all the equations within the KKT system (4). All critical points for the optimization problem (1) with fixed x form a subset of the real points of the KKT variety

V_KKT^R ⊆ V_KKT = V(I_KKT).        (19)

Using the method described in Section III, we can compute these by means of the generalized companion matrices. The algebraic part of the algorithm, i.e. the computation of the companion matrices, can be done parametrically. For one thing, one could use Gröbner bases computation for the ideal I_KKT and try to compute the corresponding companion matrices M_{u_i} and M_{λ_i} directly. Owing to the structure of the polynomial equations of the KKT system (18), however, this problem is very poorly conditioned. The difficulties stem from the fact that the ideal I_KKT is by construction decomposable. It contains terms like λ_i g_i(u, x), which lead to a reducible variety V(I_KKT). To overcome this obstacle, we factorize the generators of the Gröbner basis (i.e. the polynomials appearing in relation (18)) and express the ideal I_KKT as an intersection of super-ideals I_{j,KKT}. The super-ideal I_{j,KKT} denotes the ideal constructed by fixing a subset of p active constraints ḡ_i(u, x) among the set of all q constraints g_i(u, x) (see (18)). The corresponding Lagrange multipliers are denoted by λ̄_i. This leads to

I_{j,KKT} = ⟨∇_u J(u, x) + Σ_{i=1}^p λ̄_i ∇_u ḡ_i(u, x), ḡ_i(u, x)⟩        (20)

with the feasibility inequalities

λ̄_i ≥ 0,   g_i(x, u) ≤ 0.        (21)

Therefore, the ideal I_KKT can be expressed as an intersection of θ := #P({g_i(u, x)}_{i=1}^q) = 2^q super-ideals, where θ is the cardinality of the power set of all q constraints. Namely,

I_KKT = ∩_{j=1}^θ I_{j,KKT}.        (22)

Relations (20) and (21) lead to a large number of super-ideals which are much better numerically conditioned than the original problem, even though they are not necessarily radical. Since many of the sub-varieties V(I_{j,KKT}) are empty, a Gröbner basis computation for each ideal I_{j,KKT} identifies these infeasible cases in advance and reduces the subsequent companion matrix computations tremendously by discarding them.

The number of solutions over K̄ in the non-empty sub-varieties V_{j,KKT} = V(I_{j,KKT}) can be calculated by means of the Hilbert polynomial ([15], Chapter 9, §3). For zero-dimensional varieties this polynomial reduces to an integer, which is equal to the number of solutions counting multiplicity. If the sub-variety has only a single solution, the coordinates u_i of the candidate solution can be computed analytically as rational functions of the parameters x. In this case, the polynomials in the Gröbner basis form a set of linear equations in the decision variables that can be solved analytically. For all sub-varieties with more than one solution, a companion matrix has to be computed. The result is a set of companion matrices whose entries are rational functions of the parameter x.

Specialization of the parameters gives a map from the field K to the field R of real numbers. If the real parameters are chosen generically enough, then the given Gröbner basis remains a Gröbner basis, but for special choices of the parameters some trouble may arise. For instance, it may happen that a specialization leads to zero denominators. To handle this case, comprehensive Gröbner bases can be used [16]. The parametric computation is guaranteed to be correct only if the sequence of leading coefficients of the result and the sequence of greatest common denominators removed in the computations are nonzero [16]. If ordinary methods such as Buchberger's algorithm are used to compute Gröbner bases, these issues have to be kept in mind.

A summary of the off-line algorithm appears in Algorithm 1.

Algorithm 1 Off-line Part:
Input: Objective function J(x, u) and constraints g_i(x, u) ≤ 0.
Output: Set of feasible sub-varieties V_{j,KKT} with their generalized companion matrices M_{j,u_i} and M_{j,λ̄_i}, or an explicit function u*_{j,i} for their candidate optimizers.
1: for all combinations of active and inactive constraints do
2:   construct I_{j,KKT}
3:   calc. Gröbner basis G_j for I_{j,KKT}
4:   if G_j = ⟨1⟩ then
5:     discard the super-ideal
6:   else
7:     calculate the number of solutions of V_{j,KKT} by means of the Hilbert polynomial
8:     if #V_{j,KKT} = 1 then
9:       express all u*_{j,i} as rational functions in the parameter x
10:    else
11:      compute generalized companion matrices M_{j,u_i} and M_{j,λ̄_i} for all decision variables u_i
12:    end if
13:  end if
14: end for
15: return: M_{j,u_i} and M_{j,λ̄_i}, resp. u*_{j,i} and λ̄*_{j,i}
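To see the super-ideal enumeration (20)–(22) at work, consider this hedged sketch for a hypothetical scalar instance (objective, constraint and solver calls are illustrative; the paper's own implementation is in Maple). Both sub-varieties here have cardinality one, so the candidates come out as rational functions of x, matching the explicit branch of Algorithm 1:

import sympy as sp

u, lam, x = sp.symbols('u lambda x', real=True)
J = u**2 - 2*x*u      # illustrative objective
g = u - 1             # illustrative constraint g(u, x) <= 0

# Super-ideal 1: constraint inactive (lambda = 0) -> grad_u J = 0.
inactive = sp.solve([sp.diff(J, u)], [u], dict=True)
# -> [{u: x}], feasible iff g = x - 1 <= 0

# Super-ideal 2: constraint active -> grad_u J + lambda*grad_u g = 0 and g = 0.
active = sp.solve([sp.diff(J, u) + lam*sp.diff(g, u), g], [u, lam], dict=True)
# -> [{u: 1, lambda: 2*x - 2}], feasible iff lambda >= 0, i.e. x >= 1

print(inactive, active)

On-line, for a measured x, one keeps the candidates with g ≤ 0 and λ ≥ 0 and picks the cheapest, recovering u*(x) = min(x, 1) from the illustration in Section II.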

C. On-line Part

In order to evaluate the point coordinates of the KKT sub-varieties, we need to compute eigenvectors and eigenvalues of the companion matrices. Generally, eigenvalue computation cannot be done parametrically. The parameter x has to be fixed to a numerical value, and this computation is done on-line. Given the precomputed generalized companion matrices M_{j,u_i} and M_{j,λ̄_i} (resp. an explicit expression for all sub-varieties with a linear Gröbner basis) for all possible feasible combinations of active and inactive constraints, the on-line algorithm takes the value of the parameter x and computes the optimum J* and the optimizer u*. The three main steps of the algorithm are:
1) calculate all critical points;
2) remove infeasible solutions;
3) find the feasible solution u* with the smallest objective function value J* = J(u*).

Since all companion matrices have been computed parametrically, the remaining part that has to be done is linear algebra. For every non-empty sub-variety V_{j,KKT}, a set of right eigenvectors {v} is computed for the companion matrices M_{j,u_i} of the j-th sub-variety, see Theorem 2. Because all companion matrices for a sub-variety V_{j,KKT} commute pairwise, they form a commutative sub-algebra within the non-commutative algebra of l × l matrices, where l is the companion matrix dimension (11), see also [7]. Therefore, it suffices to calculate the eigenvectors for a single arbitrary matrix in this sub-algebra, because they all share the same eigenvectors. To avoid computational problems, we choose a matrix M_{j,rand} in this sub-algebra as a random linear combination of the companion matrices associated with the decision variables M_{j,u_i}, i.e.

M_{j,rand} = c_1 M_{j,u_1} + ··· + c_m M_{j,u_m} + c_{m+1} M_{j,λ̄_1} + ··· + c_{m+p} M_{j,λ̄_p},        (23)

where the c_i ∈ R are randomly chosen. This ensures, with a low probability of failure, that the corresponding eigenvalues all have algebraic multiplicity one ([7], Chapter 2, §4). The sets of eigenvectors {v}_j can now be used to compute all candidate critical points and their Lagrange multipliers λ̄_{j,k} for the sub-variety V_{j,KKT}.

To avoid unnecessary computations, we first calculate the candidate Lagrange multipliers λ̄_{j,i} for each sub-variety V_{j,KKT}. In this way, complex or infeasible candidate points, i.e. those with λ̄_{j,i} < 0 for some i, can be immediately discarded before the candidate optimizers u*_{j,i} are computed. For all sub-varieties with cardinality one, the problem of computing the critical points reduces to an evaluation of the precomputed functions. For all non-discarded candidate solutions, it remains to be checked whether they are feasible, i.e. g(u*_{j,i}, x) ≤ 0. To achieve that, a set of feasible local candidate optimizers S = {u*_{j,i}} is calculated by collecting all feasible candidate optimizers. After computing the objective function value J(u*_{j,i}, x) for all candidate optimizers, the optimal solution

J* = min_{u*_{j,i} ∈ S} J(u*_{j,i}, x)

and the optimizer

u*_i = arg min_{u*_{j,i} ∈ S} J(u*_{j,i}, x)

for the optimization problem (1) can be easily obtained via discrete optimization over the finite set S. A summary of the on-line algorithm is given in Algorithm 2.

Algorithm 2 On-line Part:
Companion matrices M_{u_i} and M_{λ̄_i} for all non-empty sub-varieties V_{j,KKT}, resp. explicit expressions for cardinality-one sub-varieties, have to be provided.
Input: Value of the parameter x (state measurement taken in real time).
Output: Optimal cost J* and optimizer u*_i.
1: for all feasible sub-varieties V_{j,KKT} with #V_{j,KKT} > 1 do
2:   specialize the parameter x in M_{u_i} and M_{λ̄_i}
3:   calc. a set of common eigenvectors {v} for the companion matrix M_{j,rand}
4:   solve M_{j,λ̄_i} v = λ̄_{j,i} v to obtain the joint eigenvalues, i.e. candidates for λ̄_{j,i}
5:   discard all eigenvectors with corresp. λ̄_{j,i} < 0
6:   use the remaining eigenvectors to calc. joint eigenvalues of M_{j,u_i} to obtain candidates for u*_{j,i}
7: end for
8: for all feasible sub-varieties V_{j,KKT} with #V_{j,KKT} = 1 do
9:   evaluate λ̄_{j,i}(x) for all i
10:  if ∃i: λ̄_{j,i}(x) < 0 then
11:    discard sub-variety V_{j,KKT}
12:  else
13:    evaluate u*_{j,i}(x)
14:  end if
15: end for
16: for all evaluated candidate points {u*_{j,i}}_j do
17:  if g_k(u*_{j,i}, x) > 0 then
18:    discard candidate point u*_{j,i}
19:  else
20:    evaluate J(u*_{j,i}, x)
21:  end if
22: end for
23: compare J(u*_{j,i}, x) for the calculated candidates u*_{j,i} and choose the optimal J* and corresponding u*_i
24: return: optimal cost J* and optimizer u*_i
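As a minimal sketch of the on-line step for a sub-variety with more than one solution, assume a hypothetical unconstrained scalar sub-problem whose stationarity equation is u^3 − x = 0. Its parametric companion matrix on the basis {1, u, u^2} can be written down off-line by hand; on-line, one merely specializes x and takes eigenvalues (Theorem 1). All names are illustrative:

import numpy as np

def M_u(x):
    # Multiplication-by-u matrix modulo <u^3 - x> on the basis {1, u, u^2}:
    # u*1 -> u, u*u -> u^2, u*u^2 = u^3 -> x.
    return np.array([[0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [x,   0.0, 0.0]])

x_meas = 8.0                                  # parameter value measured on-line
candidates = np.linalg.eigvals(M_u(x_meas))   # critical-point candidates
real = [c.real for c in candidates if abs(c.imag) < 1e-9]
print(real)                                   # complex pair discarded; u = 2 remains

This mirrors lines 2–6 of Algorithm 2: specialization is a cheap numerical substitution, while the expensive symbolic work has already been done off-line.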

V. OPTIMAL CONTROL APPLICATION

In this section we first give a description of the model predictive control optimization problem, to show the connection between parametric optimization and optimal control.

A. Nonlinear model predictive control

Consider the nonlinear discrete-time system with state vector x ∈ Rn and input vector u ∈ Rm,

x(k + 1) = f(x(k), u(k)),        (24)

subject to the inequality constraints

g(u(k), x(k)) ≤ 0,   k = 0, ..., N,        (25)

where N is the prediction horizon and g ∈ R[x_1, ..., x_n, u_1, ..., u_m]^q is a vector polynomial function representing the constraints of the problem. We consider the problem of regulating system (24) to the origin. For that purpose, we define the following cost function

J(U_0^{N−1}, x_0) = Σ_{k=0}^{N−1} L_k(x(k), u(k)) + L_N(x(N), u(N)),

where U_0^{N−1} := [u(0), ..., u(N−1)] is the optimization vector consisting of all the control inputs for k = 0, ..., N−1 and x(0) = x_0 is the initial state of the system. Therefore, computing the control input is equivalent to solving the following nonlinear constrained optimization program:

min_u J(U_0^{N−1}, x_0)
s.t.  x(k + 1) = f(x(k), u(k)),        (26)
      g(u(k), x(k)) ≤ 0,   k = 0, ..., N.

Forming a vector u of decision variables with u_k = u(k) and renaming x(0), problem (26) is written in the more compact form

min_u J(u, x)   s.t.   g(u, x) ≤ 0,        (27)

where J(u, x) is a polynomial function in u and x, u ∈ Rm is the decision variable vector and the initial state x = x(0) ∈ Rn is the parameter vector. This is exactly problem (1), a nonlinear parametric optimization problem. Our goal is to obtain the vector of control moves u.

B. Illustrative example

In this section we illustrate the application of the proposed method by means of a simple example. The off-line algorithm, including the algebraic methods and the case enumeration (22), has been implemented in Maple. A Maple-generated input file is used to initialize Matlab, in order to compute the optimizer on-line.

Consider the Duffing oscillator [17], a nonlinear oscillator of second order. An equation describing it in continuous time is

ÿ(t) + 2ζẏ(t) + y(t) + y(t)^3 = u(t),        (28)

where y ∈ R is the continuous state variable and u ∈ R the control input. The parameter ζ is the damping coefficient and is known (here ζ = 0.3). The control objective is to regulate the state to the origin. To derive the discrete-time model, a forward difference approximation is used (with a sampling period of h = 0.05 time units). The resulting state-space model with discrete state vector x ∈ R2 and input u ∈ R is

[x_1(k+1); x_2(k+1)] = [1, h; −h, 1 − 2ζh] [x_1(k); x_2(k)] + [0; h] u(k) + [0; −h x_1^3(k)].

An optimal control problem with prediction horizon N = 3, weight matrices

Q = [1, 0; 0, 1],   R = 1/10,

and state constraints ||x(k + j)||_∞ ≤ 5, ∀j = 1, ..., N, leads to the following optimization problem:

J* = min_{u(k), u(k+1), u(k+2)}  Σ_{i=1}^{3} x(k+i)^T Q x(k+i) + Σ_{i=0}^{2} u(k+i) R u(k+i)
s.t.  ||x(k + j)||_∞ ≤ 5,   ∀j = 1, ..., N.

Of these twelve constraints, there are ten constraints involving u(k + i), which have to be considered during the optimization. As described in Section IV, the KKT variety is split into 2^{#{g_i}} = 2^10 = 1024 sub-varieties. For all of them a Gröbner basis needs to be computed. It turns out that only 29 of these are feasible, i.e. have a Gröbner basis different from unity. Only these cases have to be considered further in the on-line algorithm. Among them there are 24 sub-varieties V_{j,KKT} with a linear Gröbner basis. For these, a closed-form expression for the candidate optimizers u*_{j,i} can be computed. For the remaining five cases, companion matrices have to be computed, requiring eigenvalue computation in the on-line algorithm. These sub-varieties V_{j,KKT} have five solutions counting multiplicities, i.e. the companion matrices are 5 × 5 matrices.

The trajectory of the controlled system starting from the initial state x_1(0) = 2.5 and x_2(0) = 1 is shown in Figure 2. Figure 1 shows the state-space evolution of the controlled Duffing oscillator and its free response without the controller. In the uncontrolled case, a weak dynamic behavior and a violation of the constraint x_2(t) > −5 can be observed.

Fig. 1. State-space diagram of the Duffing oscillator.
The precomputation of the companion matrices and the solutions u*_{j,i} took less than one minute on an Intel Pentium 3 GHz with 1 GB RAM. The on-line algorithm needed less than 3.5 s to obtain the global optimum, even with a naive brute-force on-line search for the minimization over the finite set of candidate points. It has to be noted that most of these 3.5 s is consumed by the evaluation of expressions with the Matlab Symbolic Math Toolbox. An efficient implementation, in C for instance, would be orders of magnitude faster.

Fig. 2. State and input evolution of the controlled Duffing oscillator.

VI. CONCLUSIONS AND OUTLOOK

The main contribution of this paper is a new algorithm for nonlinear parametric optimization of polynomial functions subject to polynomial constraints. The algorithm uses Gröbner bases and the eigenvalue method for solving systems of polynomial equations to evaluate the map from the space of parameters to the corresponding optimal value and optimizer. The algorithm is very general, computationally robust and can be applied to a wide range of problems. The punchline of the proposed approach is the precomputation of the generalized companion matrices, thus partially presolving the optimization problem and moving the computational burden off-line. The method has been developed with model predictive control in mind. The connection to optimal control problems has been illustrated by applying the method to the Duffing oscillator.

Finally, there is ongoing research on exploiting the structure of specific control problems, including sparseness and the relaxation of genericity assumptions. More specifically, sparse resultant techniques are being investigated to compute the companion matrices. Combining this method with recently proposed "sum of squares programming" methods, based on semidefinite representations of finite varieties [18], seems to be a promising direction for further research. Moreover, the integration of the proposed scheme with dynamic programming is also being explored.

REFERENCES
[1] S. J. Qin and T. A. Badgwell, "An overview of nonlinear MPC applications," in Nonlinear Model Predictive Control: Assessment and Future Directions, F. Allgöwer and A. Zheng, Eds. Birkhäuser, 1999.
[2] M. Morari and J. H. Lee, "Model predictive control: past, present and future," Computers and Chemical Engineering, vol. 23, pp. 667–682, 1999.
[3] C. E. Garcia, D. M. Prett, and M. Morari, "Model predictive control: theory and practice - a survey," Automatica, vol. 25, pp. 335–348, 1989.
[4] A. Bemporad and M. Morari, "Control of systems integrating logic, dynamics, and constraints," Automatica, vol. 35, no. 3, pp. 407–427, Mar. 1999.
[5] A. Bemporad, M. Morari, V. Dua, and E. N. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems," Automatica, vol. 38, pp. 3–20, 2002.
[6] I. A. Fotiou, P. A. Parrilo, and M. Morari, "Nonlinear parametric optimization using cylindrical algebraic decomposition," in Proc. of the Conf. on Decision & Control, 2005.
[7] D. Cox, J. Little, and D. O'Shea, Using Algebraic Geometry. New York: Springer, 1998.
[8] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge: Cambridge University Press, 2004.
[9] B. Sturmfels, Solving Systems of Polynomial Equations, ser. CBMS Regional Conference Series in Mathematics. American Mathematical Society, 2002, no. 97.
[10] P. A. Parrilo and B. Sturmfels, "Minimizing polynomial functions," DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 2000.
[11] D. Manocha, "Solving systems of polynomial equations," IEEE Computer Graphics and Applications, vol. 14, pp. 46–55, 1994.
[12] F. J. Christophersen, M. Baotić, and M. Morari, "Stability analysis of hybrid systems with a linear performance index," in Proc. of the Conf. on Decision & Control, Atlantis, Paradise Island, Bahamas, Dec. 2004, pp. 4589–4594.
[13] F. Borrelli, M. Baotić, A. Bemporad, and M. Morari, "Dynamic programming for constrained optimal control of discrete-time linear hybrid systems," Automatica, vol. 41, pp. 1709–1721, Oct. 2005.
[14] M. Baotić, F. J. Christophersen, and M. Morari, "Infinite time optimal control of hybrid systems with a linear performance index," in Proc. of the Conf. on Decision & Control, Maui, Hawaii, USA, Dec. 2003, pp. 3191–3196.
[15] D. Cox, J. Little, and D. O'Shea, Ideals, Varieties and Algorithms: An Introduction to Computational Algebraic Geometry. New York: Springer, 1992.
[16] V. Weispfenning, "Comprehensive Gröbner bases," Journal of Symbolic Computation, vol. 14, pp. 1–29, 1992.
[17] D. W. Jordan and P. Smith, Nonlinear Ordinary Differential Equations, ser. Oxford Applied Mathematics and Computer Science. Oxford University Press, 1987.
[18] M. Laurent, "Semidefinite representations for finite varieties," Preprint, 2004. To appear in Mathematical Programming.
