
Math. Control Signals Systems (1997) 10:31-40. © 1997 Springer-Verlag London Limited

Mathematics of Control, Signals, and Systems

The Lyapunov Exponent and Joint Spectral Radius of Pairs of Matrices Are Hard--When Not Impossible--To Compute and To Approximate*

John N. Tsitsiklis and Vincent D. Blondel

Abstract. We analyze the computability and the complexity of various definitions of spectral radii for sets of matrices. We show that the joint and generalized spectral radii of two integer matrices are not approximable in polynomial time, and that two related quantities--the lower spectral radius and the largest Lyapunov exponent--are not algorithmically approximable.

Key words. Lyapunov exponent, Lyapunov indicator, Joint spectral radius, Generalized spectral radius, Discrete differential inclusion, Computational complexity, NP-hard, Algorithmic solvability.

1. Introduction

The spectral radius of a real matrix A is defined by

ρ(A) := max{|λ| : λ is an eigenvalue of A}.

This definition can be extended in various ways to sets of matrices. Due to their numerous practical applications, these possible extensions have been the object of intense attention in recent years. In this paper we analyze some of these extensions from a computational-complexity point of view. Let ‖·‖ be any matrix norm (in what follows we always assume that matrix norms are submultiplicative, i.e., that ‖AB‖ ≤ ‖A‖‖B‖). The well-known identity ρ(A) = lim_{k→∞} ‖A^k‖^{1/k} (see, for example, Corollary 5.6.14 of [HJ]) justifies the generalizations of the concept of spectral radius to sets of matrices given next. Let Σ be a set of matrices in R^{n×n}; the joint spectral radius ρ̄(Σ) is defined [RS]

* Date received: April 10, 1996. Date revised: March 10, 1997. This work was completed while Blondel was visiting Tsitsiklis at MIT. This research was supported by the ARO under Grant DAAL03-92-G-0115.
1 Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, U.S.A. jnt@mit.edu.
2 Institute of Mathematics, University of Liège, B-4000 Liège, Belgium. vblondel@ulg.ac.be.


by

ρ̄(Σ) = limsup_{k→∞} ρ̄_k(Σ),

where ρ̄_k(Σ) = sup{‖A₁A₂⋯A_k‖^{1/k} : each A_i ∈ Σ} for k ≥ 1. It is shown in [DL] (notice that our notations are different from those used there) that ρ̄(Σ) ≤ ρ̄_k(Σ) for all k ≥ 1, and therefore the joint spectral radius can be given in the equivalent form ρ̄(Σ) = lim_{k→∞} ρ̄_k(Σ). Similarly to ρ̄ we define the lower spectral radius ρ̲(Σ) by

ρ̲(Σ) = liminf_{k→∞} ρ̲_k(Σ),

where ρ̲_k(Σ) = inf{‖A₁A₂⋯A_k‖^{1/k} : each A_i ∈ Σ} for k ≥ 1. As for the single-matrix case, the quantities ρ̄_k(Σ) and ρ̲_k(Σ) generally depend on the matrix norm used, but the limiting values ρ̄(Σ) and ρ̲(Σ) do not. To see this, remember that any two submultiplicative norms ‖·‖₁ and ‖·‖₂ are related by α‖A‖₁ ≤ ‖A‖₂ ≤ β‖A‖₁ for some 0 < α ≤ β. For any product A₁A₂⋯A_k we have

α^{1/k} ‖A₁A₂⋯A_k‖₁^{1/k} ≤ ‖A₁A₂⋯A_k‖₂^{1/k} ≤ β^{1/k} ‖A₁A₂⋯A_k‖₁^{1/k},

and by letting k tend to infinity we conclude that the joint and lower spectral radii are well defined independently of the matrix norm used.

The joint and lower spectral radii correspond in a certain sense to two extreme cases. With the joint spectral radius we calculate the largest possible average norm that can be obtained by multiplying matrices from Σ, whereas with the lower spectral radius we calculate the lowest possible such norm. We now define an additional quantity that is intermediate between these two extreme cases. We assume that we have a probability distribution P over the set Σ and that we generate an infinite sequence (A_i)_{i≥1} of elements of Σ by picking each matrix A_i randomly and independently according to the assumed probability distribution P. A probability distribution is said to be nontrivial if nonzero probabilities are attached to all matrices of Σ. The largest Lyapunov exponent (also called the top Lyapunov exponent or asymptotic growth rate) associated with P and Σ is defined by (see [O], see also [CKN] for a more readable account)

λ(Σ, P) = lim_{k→∞} (1/k) E[log(‖A₁⋯A_k‖)].

It can be shown that this limit exists and, as for the previous cases, does not depend on the matrix norm used (see [O] for a proof of the first of these statements). In order for our development to be uniform we transform the largest Lyapunov exponent into the Lyapunov spectral radius ρ_P(Σ) by defining ρ_P(Σ) = e^{λ(Σ, P)}. Basic inequalities relating ρ̲, ρ_P, and ρ̄ are given by

ρ̲(Σ) ≤ ρ_P(Σ) ≤ ρ̄(Σ).

Moreover, since ρ(A) = lim_{k→∞} ‖A^k‖^{1/k}, the definitions of ρ̲, ρ_P, and ρ̄ coincide with the usual spectral radius when Σ consists of a single matrix.
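For fixed k the quantities ρ̄_k(Σ) and ρ̲_k(Σ) are computable by direct enumeration of all length-k products. The sketch below is ours, not from the paper; it uses the spectral norm via NumPy, and the pair A₀, A₁ is an arbitrary example:

```python
import itertools
import numpy as np

def radii_k(matrices, k):
    """Return (sup, inf) of ||A_1 ... A_k||^(1/k) over all length-k products,
    i.e. the finite-k approximations of the joint and lower spectral radii."""
    norms = []
    for choice in itertools.product(matrices, repeat=k):
        P = np.linalg.multi_dot(choice) if k > 1 else choice[0]
        norms.append(np.linalg.norm(P, 2) ** (1.0 / k))  # spectral norm
    return max(norms), min(norms)

# Example pair (an arbitrary illustration).
A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])

for k in (1, 2, 4, 8):
    hi, lo = radii_k([A0, A1], k)
    print(k, hi, lo)
```

Along the doubling sequence k = 1, 2, 4, 8 the upper values are nonincreasing, since submultiplicativity gives ρ̄_{2k}(Σ) ≤ ρ̄_k(Σ); the cost of the enumeration, of course, grows as |Σ|^k.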


Additional definitions similar to those of ρ̲, ρ_P, and ρ̄ are possible by replacing the norm appearing in their definitions by a spectral radius. For example, the generalized spectral radius ρ̄^ρ(Σ) defined by Daubechies and Lagarias [DL] is obtained by setting

ρ̄^ρ(Σ) = limsup_{k→∞} ρ̄^ρ_k(Σ),

where ρ̄^ρ_k(Σ) = sup{(ρ(A₁A₂⋯A_k))^{1/k} : each A_i ∈ Σ} for k ≥ 1. Similar definitions lead to the spectral quantities ρ̲^ρ and ρ_P^ρ. It has been conjectured in [DL] and established by Berger and Wang that the generalized spectral radius ρ̄^ρ coincides with the joint spectral radius ρ̄ when Σ is finite (see Theorem IV of [BW] or Theorem 1 of [E] for an elementary proof). Gurvits has also shown [G2, Theorem B.1] that ρ̲^ρ coincides with ρ̲ when Σ is finite. In what follows we always assume that the set Σ is finite and, for convenience, deal only with the three spectral radii defined in terms of norms.

The generalized spectral radius was introduced in [DL] for studying concepts associated with Markov chains, random walks, and wavelets. The logarithm of the joint spectral radius also appears in the context of discrete linear inclusions, where it is called the Lyapunov indicator; see, for example, [B1]. In system-theoretic terms, the generalized spectral radius can be associated with the stability properties of time-varying systems in the worst case over all possible time variations, or with the stability of "asynchronous" [T] or "desynchronized" [K2] systems.

The definition of the lower spectral radius is natural for formalizing control design notions associated with discrete-time systems. Instead of viewing the order of matrix multiplication as an externally imposed time variation, we view it as a control action, and we are interested in the stability properties that can be obtained by choosing control actions in the best possible way. Despite this natural interpretation, the definition of the lower spectral radius seems quite recent (the first reference seems to be [G2]; see also [BT2] for connections with control concepts). Finally, the largest Lyapunov exponent appears in the context of discrete linear differential inclusions (see [BGFB] and references therein) and is related to time-varying systems in which time variations are random.
Besides system-theoretic interpretations, Lyapunov exponents are pervasive in many areas of applied mathematics such as mathematical demography [C], [R2], percolation processes [D], statistics [R1], and Kalman filtering [B2]. Other references and descriptions of applications appear in the yearly conference proceedings [ACE] and in the survey [CKN].

We now briefly survey how these quantities can be computed or approximated. By letting k tend to infinity, the inequalities (with our notations)

ρ̄^ρ_k(Σ) ≤ ρ̄(Σ) ≤ ρ̄_k(Σ),  k ≥ 1,

proved in Lemma 3.1 of [DL] can be used to derive algorithms which compute arbitrarily precise approximations of ρ̄(Σ) (see, for example, [G1] for one such algorithm). These approximating algorithms can in turn be used in procedures that


decide, after finitely many steps, whether ρ̄ > 1 or ρ̄ < 1 (such procedures are given, e.g., by Brayton and Tong [BT3] in a system-theory context and by Barabanov [B1] in the context of discrete linear inclusions). These procedures may not terminate when ρ̄ happens to be equal to one. The existence of algorithms for computing arbitrarily precise approximations of ρ̄ does not rule out the possibility that the decision problem "ρ̄ ≤ 1" is undecidable. It is so far unknown whether this problem, which was the original motivation for the research reported in this paper, is algorithmically solvable (see [LW] for a discussion of this issue and for a description of its connection with the finiteness conjecture; see also the discussion in [G2]). A negative result in this direction is given by Kozyakin, who shows [K2] that the set of pairs of 2 × 2 matrices that have a joint spectral radius less than one is not semialgebraic.

In our first result (Theorem 1) we show that, unless P = NP, approximating algorithms for ρ̄ cannot possibly run in polynomial time. More precisely, we show that, unless P = NP, there is no algorithm that can compute ρ̄(Σ) with a relative error bounded by ε > 0 in time polynomial in the size of Σ and ε (see later for more precise definitions). As a corollary we show that it is NP-hard to decide if all possible products of two given matrices are stable.

The situations for the largest Lyapunov exponent and for the lower spectral radius are somewhat different from that of the joint spectral radius. Computable upper bounds for ρ_P for the case where Σ consists of nonnegative matrices are given in [GA], and analytic solutions are available for special cases (see, for example, [LR] for an analytic solution for the case where Σ consists of two 2 × 2 matrices, one of which is singular). In general, no exact, or even approximate, computational methods other than simulation seem to be available for computing ρ_P or ρ̲.
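The simulation route just mentioned is easy to sketch. The example below is ours, not from the paper: the matrix pair and the uniform distribution are arbitrary choices, the vector is renormalized at every step to avoid overflow, and tracking a single (here, positive) vector estimates the top exponent almost surely under standard conditions:

```python
import numpy as np

def lyapunov_exponent(matrices, probs, k=100_000, seed=0):
    """Monte Carlo estimate of the top Lyapunov exponent
    lambda = lim (1/k) E[log ||A_1 ... A_k||] for i.i.d. draws."""
    rng = np.random.default_rng(seed)
    n = matrices[0].shape[0]
    v = np.ones(n) / np.sqrt(n)   # track a vector instead of the full product
    log_growth = 0.0
    for _ in range(k):
        A = matrices[rng.choice(len(matrices), p=probs)]
        v = A @ v
        norm = np.linalg.norm(v)  # assumed nonzero (true for these matrices)
        log_growth += np.log(norm)
        v /= norm                 # renormalize to prevent overflow
    return log_growth / k

A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
lam = lyapunov_exponent([A0, A1], [0.5, 0.5])
rho_P = np.exp(lam)  # the corresponding Lyapunov spectral radius
```

This yields only a point estimate with no error guarantee, which is consistent with the negative results of Section 3: no algorithm can certify an approximation of ρ_P.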
The problem of computing ρ_P has been known for at least 20 years, and we quote from Kingman [K1, p. 897] (the same quotation appears in [C]): "Pride of place among the unsolved problems of subadditive ergodic theory must go to the calculation of the constant γ (a constant that is equal to the logarithm of ρ_P). In none of the applications described here is there an obvious mechanism for obtaining an exact numerical value, and indeed this usually seems to be a problem of some depth."

In our second result (Theorem 2) we show that no approximating algorithm exists for ρ̲ and ρ_P. More precisely, let p be any function satisfying ρ̲(Σ) ≤ p(Σ) ≤ ρ_P(Σ) for some nontrivial probability distribution P and for all Σ. We show that the problem of computing p exactly, or even approximately, is algorithmically undecidable. We also show that, when all the matrices in Σ are constrained to have nonnegative coefficients, the problem of computing p becomes NP-hard. If the decision problem "p < 1" were decidable for such a function p, then the associated decision procedure could be used to compute arbitrarily precise approximations of p. Since p is not computable when ρ̲(Σ) ≤ p(Σ) ≤ ρ_P(Σ), we conclude, as a corollary to Theorem 2, that "p < 1" is undecidable for the Lyapunov spectral radius, for the lower spectral radius, and for all intermediate functions between these two.


For convenience of the exposition we restrict our attention in what follows to pairs of matrices with integer entries. Our results being negative, they equally apply to sets of k > 2 matrices or to infinite sets, and to matrices with real entries. An earlier version of this paper appears in the conference proceedings [TB].

2. Approximability of the Joint Spectral Radius

As explained in the Introduction, the joint spectral radius can be approximated to arbitrary precision. We show in this section that, unless P = NP, approximating algorithms cannot run in polynomial time. Following Papadimitriou [P1], we say that a function p(Σ) is polynomial-time approximable if there exists an algorithm p*(Σ, ε) which, for every rational number ε > 0 and every set of matrices Σ with p(Σ) > 0, returns an approximation of p(Σ) with a relative error of at most ε (i.e., such that |p* − p| ≤ ε|p|) in time polynomial in the size of Σ and ε. By the "size of Σ and ε" we mean the description size, or "bit size," of Σ and ε. For example, if ε is the ratio of two relatively prime numbers p and q, the size of ε can be taken to be log(pq).
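The arbitrary-precision approximation alluded to above follows directly from the inequalities ρ̄^ρ_k(Σ) ≤ ρ̄(Σ) ≤ ρ̄_k(Σ) of [DL]. A naive enumeration (ours, exponential in k; the point of this section is precisely that such exponential cost is unavoidable unless P = NP) brackets ρ̄ between the two bounds:

```python
import itertools
import numpy as np

def jsr_bracket(matrices, k):
    """Bracket the joint spectral radius using all length-k products:
    max rho(P)^(1/k)  <=  jsr  <=  max ||P||^(1/k)."""
    lo, hi = 0.0, 0.0
    for choice in itertools.product(matrices, repeat=k):
        P = np.linalg.multi_dot(choice) if k > 1 else choice[0]
        lo = max(lo, np.abs(np.linalg.eigvals(P)).max() ** (1.0 / k))
        hi = max(hi, np.linalg.norm(P, 2) ** (1.0 / k))
    return lo, hi

A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
for k in (1, 2, 4, 8):
    print(k, jsr_bracket([A0, A1], k))  # bracket narrows along this sequence
```

For this particular (arbitrary) pair the lower bound already equals the golden ratio at k = 2, via the product A₀A₁; in general the gap between the two bounds can close slowly.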

Theorem 1. Unless P = NP, the joint (generalized) spectral radius ρ̄ of two matrices is not polynomial-time approximable. This is true even for the special case where Σ consists of two matrices with {0, 1} entries.

Proof. Our proof proceeds by reduction from the classical SAT problem (see [GJ] for a definition of SAT); it is inspired by the proof of Theorem 6 in [PT] and is similar to the proof of Theorem 2 in [BT1] (however, we were unable to deduce this theorem from Theorem 2 in [BT1]). Starting from an instance of SAT with n variables x₁, ..., x_n and m clauses C₁, ..., C_m, we construct two directed graphs G₀ and G₁. The graphs have identical nodes but different edges. Besides the start node s, there is a node u_{ij} associated with each clause C_i and variable x_j, a 0th node u_{0j} associated with each variable x_j, and an (n + 1)th node u_{i(n+1)} associated with each clause C_i. Edges are constructed as follows: for i = 1, ..., m and j = 1, ..., n there is

• an edge (u_{ij}, u_{i(j+1)}) in both G₀ and G₁ if the variable x_j does not appear in clause C_i;
• an edge (u_{ij}, u_{0j}) in G₀ and an edge (u_{ij}, u_{i(j+1)}) in G₁ if the variable x_j appears in clause C_i negatively;
• an edge (u_{ij}, u_{0j}) in G₁ and an edge (u_{ij}, u_{i(j+1)}) in G₀ if the variable x_j appears in clause C_i positively.

For i = 1, ..., m there are edges (s, u_{i1}) in both graphs. The graphs have edges (u_{0j}, u_{0(j+1)}) for j = 1, ..., n − 1 and have an edge from u_{0n} to s. There are no edges leaving from the nodes u_{i(n+1)} for i = 1, ..., m. Let r denote the total number of nodes (r = (n + 1)(m + 1)). We construct two r × r matrices A₀ and A₁. Associated with the graph G₀ (resp. G₁) is the r × r matrix


A₀ (resp. A₁) whose (i, j)th entry is equal to one if there is an edge from node j to node i in G₀ (resp. G₁), and is equal to zero otherwise. To any given node α we associate a column vector x(α) of dimension r whose entries are all zero, with the exception of the entry corresponding to the node, which is equal to one. We need two observations.

1. Let a partition of the nodes be given by P_{n+2} = {s}, P_{n+1} = {u_{i1} : i = 1, ..., m}, P_n = {u_{01}, u_{i2} : i = 1, ..., m}, ..., P₂ = {u_{0(n−1)}, u_{in} : i = 1, ..., m}, and P₁ = {u_{0n}, u_{i(n+1)} : i = 1, ..., m}. We use g_α to denote the index of the partition to which the node α belongs. Any edge (from G₀ or G₁) leaving from a node of partition P_h goes to a node of partition P_{h−1}. Furthermore, the unique edge leaving from partition P₁ goes back to partition P_{n+2}. Thus, any path in G₀ and G₁ that starts from node α either gets to a node u_{i(n+1)}, from which there is no outgoing edge, or it visits node s after g_α steps. In matrix terms this implies the following. Let α be an arbitrary node and let g_α be its associated partition index. If h is a positive integer equal to g_α modulo (n + 2) and A is a product of h factors in {A₀, A₁}, then

A x(α) = μ x(s)

for some nonnegative scalar μ.

2. Let q₁, ..., q_n ∈ {0, 1} be a truth assignment of the boolean variables x_j and consider the product A_{q_n} ⋯ A_{q_1}. The vector A_{q_n} ⋯ A_{q_1} x(u_{i1}) is equal to x(u_{0n}) if the clause C_i is satisfied and is equal to x(u_{i(n+1)}) otherwise. Let A_* be any of A₀ or A₁. There are no edges leaving from u_{i(n+1)}, there is one edge from u_{0n} to s, and there are edges from s to u_{i1} for i = 1, ..., m. Thus we have A_* x(u_{i(n+1)}) = 0, A_* x(u_{0n}) = x(s), and A_* x(s) = Σ_{i=1}^m x(u_{i1}). From this we conclude

A_* A_{q_n} ⋯ A_{q_1} A_* x(s) = A_* A_{q_n} ⋯ A_{q_1} Σ_{i=1}^m x(u_{i1}) = A_* Σ_{i=1}^m A_{q_n} ⋯ A_{q_1} x(u_{i1}) = λ x(s),

where λ is equal to the number of clauses that are satisfied by the given truth assignment.

With these two observations we now prove the theorem. Assume first that the instance of SAT is satisfied by the assignment x_i = q_i for q₁, ..., q_n ∈ {0, 1}, and define A by A = A_* A_{q_n} ⋯ A_{q_1} A_* with A_* any of A₀ or A₁. Since all m clauses are satisfied we have A x(s) = m x(s) and thus ρ̄(A₀, A₁) ≥ m^{1/(n+2)}.

Assume now that the instance of SAT is not satisfiable. Let y_i = Σ_{α ∈ P_i} x(α) for i = 1, ..., n + 2 and consider the vector max norm ‖·‖. Let A be a product of n + 2 factors in {A₀, A₁}. Since the instance of SAT is not satisfiable we have ‖A y_i‖ ≤ (m − 1)‖y_i‖ = m − 1 for i = 1, ..., n + 2. Now let e denote the vector whose entries are all equal to one. Then e = Σ_{i=1}^{n+2} y_i and Ae = Σ_{i=1}^{n+2} A y_i. The


nonzero entries of A y_i are at the same places as the nonzero entries of y_i. Hence, ‖Ae‖ = ‖Σ A y_i‖ = max_i ‖A y_i‖ ≤ m − 1. The entries of A are all nonnegative and so ‖A‖ = ‖Ae‖ for the max row sum matrix norm. Thus we have ‖A‖ ≤ m − 1 whenever A is a product of n + 2 factors in {A₀, A₁}. From this we conclude that ρ̄(A₀, A₁) ≤ (m − 1)^{1/(n+2)}.

Suppose now that p*(Σ, ε) is an algorithm which, for every ε > 0 and Σ with p(Σ) > 0, returns an approximation of p(Σ) with |p* − p| ≤ ε|p|. By running this algorithm on the pair of {0, 1} matrices A₀, A₁ obtained from the instance and on a sufficiently small ε (e.g., we can take ε < (m/(m − 1))^{1/(n+2)} − 1), we are able to distinguish ρ̄(A₀, A₁) ≥ m^{1/(n+2)} from ρ̄(A₀, A₁) ≤ (m − 1)^{1/(n+2)}. The algorithm thus allows us to decide the instance of SAT. Since all transformations are performed in polynomial time, the algorithm cannot possibly run in time polynomial in the size of Σ and ε unless P = NP. ∎
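The reduction above is easy to carry out mechanically. The following sketch is our illustration (the clause encoding and all names are ours): it builds A₀ and A₁ for a small SAT instance and checks that A_* A_{q_n} ⋯ A_{q_1} A_* x(s) counts satisfied clauses, as in observation 2:

```python
import numpy as np

def build_matrices(n, m, clauses):
    """Build A0, A1 for a SAT instance with n variables and m clauses.
    `clauses` is a list of sets of signed literals: {1, -2} means x1 or not x2."""
    # Node indexing: s, then u[i][j] for i=1..m, j=1..n+1, then u0[j] for j=1..n.
    idx = {"s": 0}
    k = 1
    for i in range(1, m + 1):
        for j in range(1, n + 2):
            idx[(i, j)] = k; k += 1
    for j in range(1, n + 1):
        idx[(0, j)] = k; k += 1
    r = k  # equals (n + 1) * (m + 1)
    A = [np.zeros((r, r), dtype=int), np.zeros((r, r), dtype=int)]

    def edge(g, frm, to):  # entry (to, frm) = 1 means an edge frm -> to
        A[g][idx[to], idx[frm]] = 1

    for i, clause in enumerate(clauses, start=1):
        for g in (0, 1):
            edge(g, "s", (i, 1))           # s -> u_{i1} in both graphs
        for j in range(1, n + 1):
            if j in clause:                 # x_j appears positively in C_i
                edge(1, (i, j), (0, j)); edge(0, (i, j), (i, j + 1))
            elif -j in clause:              # x_j appears negatively in C_i
                edge(0, (i, j), (0, j)); edge(1, (i, j), (i, j + 1))
            else:                           # x_j absent from C_i
                edge(0, (i, j), (i, j + 1)); edge(1, (i, j), (i, j + 1))
    for j in range(1, n):
        for g in (0, 1):
            edge(g, (0, j), (0, j + 1))    # chain u_{0j} -> u_{0(j+1)}
    for g in (0, 1):
        edge(g, (0, n), "s")               # u_{0n} -> s
    return A[0], A[1], idx, r

# Tiny instance: (x1 or x2) and (not x1 or x2); n = 2 variables, m = 2 clauses.
A0, A1, idx, r = build_matrices(2, 2, [{1, 2}, {-1, 2}])
xs = np.zeros(r, dtype=int); xs[idx["s"]] = 1

q = [1, 1]          # assignment x1 = x2 = True satisfies both clauses
A = [A0, A1]
v = A0 @ xs         # leading A_* (either matrix works)
for qi in q:        # then A_{q1}, ..., A_{qn}
    v = A[qi] @ v
v = A0 @ v          # trailing A_*
print(v[idx["s"]])  # number of satisfied clauses, here 2
```

With m = 2 satisfied clauses this exhibits the eigenvalue m of the (n + 2)-fold product, i.e., the lower branch ρ̄ ≥ m^{1/(n+2)} of the gap used in the proof.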

Remarks.

1. Since the problem remains NP-hard when the matrices have {0, 1} entries, a corollary of the theorem is the following:

Corollary 1. Unless P = NP, the joint (or generalized) spectral radius of two n × n matrices with {0, 1} entries is not approximable with relative error 10^{−k} (k a positive integer) in a number of operations polynomial in n and k.

2. If a polynomial-time algorithm were available for checking the stability of all products of two given matrices, then the algorithm could be used to approximate the joint spectral radius in polynomial time. Thus we have:

Corollary 2. Consider all possible products of two given real matrices A₀ and A₁. It is NP-hard to decide if all products are stable. This is true even if the matrices have {0, 1} entries.

3. As indicated by a reviewer, it may be possible to improve the theorem by proving that, for a suitably small constant ε, and unless P = NP, no polynomial-time approximation algorithm of relative error ε exists for ρ̄. Such a result would have to be derived from negative results on the approximability of the MAXSAT problem.

4. L. Gurvits has kindly communicated to us that he has also proved Corollary 2 using a different reduction (unpublished).

3. Approximability of the Lower Spectral Radius and the Lyapunov Exponent

In this section we show that the lower spectral radius and the Lyapunov spectral radius, and intermediate quantities between these two, cannot be approximated by an algorithm. Let p be a quantity that we wish to compute and let us fix some positive constants K and L with L < 1. Consider an algorithm which on input Σ outputs a number p*(Σ). We say that this algorithm is a (K, L)-approximation

algorithm if for every Σ we have

|p* − p| ≤ K + Lp.


This definition allows for an absolute error of K and a relative error of L. Despite the latitude allowed by this definition, we show below that (K, L)-approximation algorithms do not exist for the Lyapunov and the lower spectral radii.

In order to prove our result we need the following definition. We say that a set of matrices Σ is mortal if there exist some k ≥ 1 and matrices A_i ∈ Σ such that A₁A₂⋯A_k = 0. The following result can be found in [BT1] (see, in particular, Theorems 1 and 2, and Remark 2 after Theorem 1) and builds on an earlier result by Paterson [P2].

Proposition. Mortality of two integer matrices of size n × n is undecidable for n = 6n_p + 6, where n_p is any number of pairs of words for which Post's correspondence problem is undecidable. (We may take n_p = 7; see below.)

Post's correspondence problem is a classical undecidable problem on words (for a description of the problem and a proof of its undecidability see, e.g., [HU]). In a recent contribution Matiyasevich and Sénizergues [MS] have shown that Post's correspondence problem is undecidable as soon as there are seven pairs of words. Thus we can take n_p = 7, and the mortality of pairs of 48 × 48 integer matrices is undecidable.

We are now able to prove our theorem. The proof essentially uses the fact that any (K, L)-approximation algorithm can be used to decide the mortality of matrices.
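Although mortality is undecidable in general, a zero product can always be searched for exhaustively up to a fixed length; it is only the negative case that cannot be certified. A minimal sketch (ours, with arbitrary example matrices):

```python
import itertools
import numpy as np

def mortal_up_to(matrices, max_len):
    """Search for a zero product A_1 ... A_k with k <= max_len.
    Returns the index sequence of a zero product, or None if none exists
    up to that length (which proves nothing about longer products)."""
    for k in range(1, max_len + 1):
        for choice in itertools.product(range(len(matrices)), repeat=k):
            P = np.eye(matrices[0].shape[0], dtype=int)
            for i in choice:
                P = P @ matrices[i]
            if not P.any():  # all entries zero
                return choice
    return None

# Example: projections onto complementary axes; their product is zero.
A = np.array([[1, 0], [0, 0]])
B = np.array([[0, 0], [0, 1]])
print(mortal_up_to([A, B], 3))  # (0, 1): A @ B = 0
```

The semi-decision procedure (halt when a zero product is found, run forever otherwise) is exactly what the undecidability result forbids strengthening to a full decision procedure.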


Theorem 2. Let n_p be a number of pairs of words for which Post's correspondence problem is undecidable. Fix any K > 0 and L with 0 ≤ L < 1. Let p be a function defined on pairs of matrices and assume that ρ̲(Σ) ≤ p(Σ) ≤ ρ_P(Σ) for some nontrivial probability distribution P and for all pairs Σ.

1. There exists no (K, L)-approximation algorithm for computing p. This is true even for the special case where Σ consists of one (6n_p + 7) × (6n_p + 7) integer matrix and one (6n_p + 7) × (6n_p + 7) integer diagonal matrix.
2. For the special case where Σ consists of two integer matrices with {0, 1} entries, there exists no polynomial-time (K, L)-approximation algorithm for computing p unless P = NP.

Proof.

Let K > 0 and 0 ≤ L < 1 be given and let p be as above. Suppose that there exists a (K, L)-approximation algorithm for p and let Σ be an arbitrary family of n × n integer matrices. We claim that the (K, L)-approximation algorithm can be used to decide whether or not Σ is mortal. This will establish the theorem. We form a family Σ′ of (n + 1) × (n + 1) matrices as follows. For each A ∈ Σ, we construct B ∈ Σ′ by letting B = diag{cA, d}, where c and d are positive constants satisfying K + d(L + 1) < (1 − L)c − K. Suppose that Σ is mortal. Then it is easily seen that ρ̲(Σ′) = ρ_P(Σ′) = d and


thus p(Σ′) = d. In this case, applying a (K, L)-approximation algorithm to Σ′ would give a result p* bounded by p* ≤ K + (L + 1)d.

Suppose now that Σ is not mortal. The matrices in Σ′ have integer entries that are either equal to zero or at least c in magnitude. Since Σ is not mortal, any product of k matrices has some entry whose magnitude is at least c^k, and it follows that ρ̲(Σ′) ≥ c and thus p(Σ′) ≥ c. In this case, applying a (K, L)-approximation algorithm to Σ′ would give a result p* satisfying p − p* ≤ Lp + K, or

p* ≥ (1 − L)p − K ≥ (1 − L)c − K.

Having chosen c and d so that K + (L + 1)d < (1 − L)c − K, the result of the (K, L)-approximation algorithm applied to Σ′ allows us to determine whether Σ is mortal or not. The mortality problem is undecidable even for the case where Σ consists of two (6n_p + 6) × (6n_p + 6) integer matrices. The fact that one of the matrices may be taken to be diagonal follows from the observation that the Lyapunov exponent and lower spectral radius are left unchanged by a similarity transformation of the matrices, combined with the fact that the matrices used in the paper [P2], to which [BT1] refers, are all diagonalizable. The first part of the theorem is therefore proved.

For proving the second part of the theorem, we invoke the same reduction and use the fact that checking whether two matrices with {0, 1} entries are mortal is an NP-complete problem. ∎

Remarks.

1. Note that the matrices in Σ′ are not irreducible. It is not clear whether a similar negative result can be obtained if we restrict the set Σ to irreducible matrices.

2. If an algorithm were available for checking the presence of a stable matrix in the set of all products of two given matrices, then the algorithm could be used to approximate the lower spectral radius. Thus we have:

Corollary 3. Consider all possible products of two given real matrices A₀ and A₁. It is undecidable whether one of the products is stable. This is true even if the two matrices are integer, of size 49 × 49, and one of them is diagonal.

References

[ACE] L. Arnold, H. Crauel, and J.-P. Eckmann (eds.), Lyapunov Exponents, Proceedings of a Conference Held in Oberwolfach, Lecture Notes in Mathematics, vol. 1486, Springer-Verlag, Berlin, 1991.
[B1] N. E. Barabanov, Lyapunov indicators of discrete inclusions, parts I, II, and III, Avtomat. i Telemekh., 2 (1988), 40-46; 3 (1988), 24-29; and 5 (1988), 17-24 (translation in Automat. Remote Control).
[BW] M. Berger and Y. Wang, Bounded semigroups of matrices, Linear Algebra Appl., 166 (1992), 21-27.
[BT1] V. D. Blondel and J. N. Tsitsiklis, When is a pair of matrices mortal?, Technical report LIDS-P-2314, LIDS, MIT, January 1996. To appear in Information Processing Letters.
[BT2] V. D. Blondel and J. N. Tsitsiklis, Complexity of stability and controllability of elementary hybrid systems, preprint (1997).


[B2] P. Bougerol, Filtre de Kalman Bucy et exposants de Lyapounov, in Lyapunov Exponents (L. Arnold et al., eds.), Lecture Notes in Mathematics, vol. 1486, Springer-Verlag, Berlin, 1991, pp. 112-122.
[BGFB] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, SIAM Studies in Applied Mathematics, vol. 15, SIAM, Philadelphia, PA, 1994.
[BT3] R. Brayton and C. Tong, Constructive stability and asymptotic stability of dynamical systems, IEEE Trans. Circuits and Systems, 27 (1980), 1121-1130.
[C] J. Cohen, Subadditivity, generalized products of random matrices and operations research, SIAM Rev., 30 (1988), 69-86.
[CKN] J. Cohen, H. Kesten, and M. Newman (eds.), Random Matrices and Their Applications, Contemporary Mathematics, vol. 50, American Mathematical Society, Providence, RI, 1986.
[D] R. Darling, The Lyapunov exponent for products of infinite-dimensional random matrices, in Lyapunov Exponents (L. Arnold et al., eds.), Lecture Notes in Mathematics, vol. 1486, Springer-Verlag, Berlin, 1991, pp. 206-215.
[DL] I. Daubechies and J. C. Lagarias, Sets of matrices all infinite products of which converge, Linear Algebra Appl., 162 (1992), 227-263.
[E] L. Elsner, The generalized spectral radius theorem: an analytic-geometric proof, Linear Algebra Appl., 220 (1995), 151-159.
[GJ] M. R. Garey and D. S. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, New York, 1979.
[GA] R. Gharavi and V. Anantharam, An upper bound for the largest Lyapunov exponent of a Markovian random matrix product of nonnegative matrices, preprint (1995).
[G1] G. Gripenberg, Computing the joint spectral radius, Linear Algebra Appl., 234 (1996), 43-60.
[G2] L. Gurvits, Stability of discrete linear inclusion, Linear Algebra Appl., 231 (1995), 47-85.
[HU] J. E. Hopcroft and J. D. Ullman, Formal Languages and Their Relation to Automata, Addison-Wesley, Reading, MA, 1969.
[HJ] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.
[K1] J. Kingman, Subadditive ergodic theory, Ann. Probab., 1 (1973), 883-909.
[K2] V. S. Kozyakin, Algebraic unsolvability of problem of absolute stability of desynchronized systems, Avtomat. i Telemekh., 6 (1990), 41-47 (translation in Automat. Remote Control).
[LW] J. C. Lagarias and Y. Wang, The finiteness conjecture for the generalized spectral radius of a set of matrices, Linear Algebra Appl., 214 (1995), 17-42.
[LR] R. Lima and M. Rahibe, Exact Lyapunov exponent for infinite products of random matrices, J. Phys. A, 27 (1994), 3427-3437.
[MS] Y. Matiyasevich and G. Sénizergues, Decision problems for semi-Thue systems with a few rules, preprint (1996).
[O] V. I. Oseledec, A multiplicative ergodic theorem. Lyapunov characteristic numbers for dynamical systems, Trans. Moscow Math. Soc., 19 (1968), 197-231.
[P1] C. H. Papadimitriou, Computational Complexity, Addison-Wesley, Reading, MA, 1994.
[PT] C. H. Papadimitriou and J. N. Tsitsiklis, The complexity of Markov decision processes, Math. Oper. Res., 12 (1987), 441-450.
[P2] M. Paterson, Unsolvability in 3 × 3 matrices, Stud. Appl. Math., 49 (1970), 105-107.
[R1] K. Ravishankar, Power law scaling of the top Lyapunov exponent of a product of random matrices, J. Stat. Phys., 54 (1989), 531-537.
[R2] J. Roerdink, The biennial life strategy in a random environment, J. Math. Biol., 26 (1988), 199-215.
[RS] G.-C. Rota and G. Strang, A note on the joint spectral radius, Indag. Math., 22 (1960), 379-381.
[T] J. N. Tsitsiklis, On the stability of asynchronous iterative processes, Math. Systems Theory, 20 (1987), 137-153.
[TB] J. N. Tsitsiklis and V. D. Blondel, The spectral radius of a pair of matrices is hard to compute, Proceedings of the 35th Conference on Decision and Control, Kobe, December 1996, pp. 3192-3197.
