
Jordan Matrices on the Equivalence of the I(1) Conditions for VAR Systems*

EUROPEAN UNIVERSITY INSTITUTE
DEPARTMENT OF ECONOMICS

EUI Working Paper ECO No. 99/12

Fragiskos Archontakis
Department of Economics, European University Institute
Via dei Roccettini, 9, San Domenico di Fiesole (FI), I-50016, ITALY
email: archonta@datacomm.iue.it

February 1999

BADIA FIESOLANA, SAN DOMENICO (FI)

All rights reserved. No part of this paper may be reproduced in any form without permission of the author.

© 1999 Fragiskos Archontakis. Printed in Italy in April 1999. European University Institute, Badia Fiesolana, I-50016 San Domenico (FI), Italy.

Abstract: The Jordan Form of the VAR's companion matrix is used to prove the equivalence between the statement that there are no Jordan blocks of order two or higher in the Jordan matrix and the conditions of Granger's Representation Theorem for an I(1) series. Furthermore, a Diagonal polynomial matrix containing the unit roots associated with the VAR system is derived and related to Granger's Representation Theorem.

*This work is part of my Ph.D. thesis under the guidance of Søren Johansen. Thanks are due to N. Haldrup, R. Kiefer and C. Osbat for commenting on earlier drafts of the paper, N. Hargreaves for correcting the language, and N. Haldrup and M. Salmon for providing useful material. All remaining errors are mine.

1 Introduction

After the seminal work by Engle and Granger (1987), integrated and cointegrated economic time series have been amongst the most popular topics in the econometrics literature. This is because in this area of research the empirical and theoretical studies on scalar and vector autoregressive processes with unit roots and the studies on the economic theory of long-run equilibrium relationships have been combined, giving as a product the theory of cointegration; see for instance the textbooks by Banerjee et al (1993), Hendry (1995) and Johansen (1996) for an extensive analysis. The references just mentioned treat mostly models of I(1) processes, that is, processes that can be made stationary by differencing. However, it has turned out that I(2) series are likely to appear in economics. For example, if inflation appears to be a nonstationary series then prices should be I(2). Johansen (1992) gives a representation theorem for I(2) processes and conditions for an I(2) vector time series to cointegrate. More recently, Gregoir and Laroque (1993) have presented some further theoretical results on the polynomial error correction model (ECM) and polynomial cointegration; see also Granger and Lee (1990). Work on structure theory has also been carried out by d'Autume (1990) and Engle and Yoo (1991). From the point of view of statistical and probability theory in I(2) systems, Johansen (1995, 1996, 1997), Kitamura (1995) and Paruolo (1996) cope with the problems of estimation and inference. The present paper is closely related to the one by d'Autume, who has analyzed the general case of integration order higher than one by the use of the Jordan matrix associated with the model under investigation. He proves that by considering the Jordan Canonical Form of the companion matrix of the system one can determine the degree of integration $d$ as the dimension of the largest Jordan block corresponding to the unit roots.

For instance, let us assume that there are three Jordan blocks, associated with the unit roots, appearing in the Jordan Canonical Form of the companion matrix of the system. We further assume that there are two blocks of size one and one block of size two. It follows that $d = 2$, i.e. the degree of integration of the model is two.

In this paper we use the structure of the Jordan Canonical Form of a matrix in order to show the equivalence between the result derived by d'Autume and the condition underlying Granger's Representation Theorem for an integrated of order one series; that is, there are no Jordan blocks of order two or higher if and only if $|\alpha_\perp' \Gamma \beta_\perp| \neq 0$, see also Johansen (1991). We claim that the method under consideration is a general method that can be applied to high-dimensional autoregressive systems with several lags and different orders of integration. We prove here the result for the case of an I(1) vector time series. Another objective of the paper is to show that the conditions of Granger's Representation Theorem for an I(1) system are equivalent to a precise structure of a Diagonal polynomial matrix containing the unit roots associated with the system.

The rest of the paper is organized as follows. In section 2 we introduce the Companion Form representation for a vector autoregressive (VAR) model and the Jordan Canonical Form. A Diagonal Form of a polynomial matrix is introduced and connected with the Jordan Form. Section 3 gives the main theoretical result of the paper for I(1) variables, namely that the results of d'Autume (1990) characterizing the degree of integration of the system via the Jordan Form of the companion matrix are equivalent to the conditions derived by Johansen (see Johansen, 1991 and 1996) of Granger's Representation Theorem. Finally, section 4 concludes.

A word on notation used in the paper. Upper case letters have been chosen for representing the random variables. The distinction between univariate and multivariate time series will be clear from the context. Of wide use is the backshift lag operator $L$. It is defined on a time series $X_t$ (scalar or vector) as $LX_t = X_{t-1}$ and in general $L^k X_t = X_{t-k}$, for $k = 0, 1, 2, \dots$ Trivially $L^0 X_t = X_t$, that is, $L^0 = 1$, the identity operator. Moreover, the difference operator $\Delta$ is defined as follows: $\Delta X_t = X_t - X_{t-1}$ and hence $\Delta = 1 - L$. $\mathbb{R}^n$ and $\mathbb{C}^n$ are the $n$-dimensional spaces of real and complex numbers, respectively. $I_n$ denotes the $n \times n$ identity matrix. The orthogonal complement of an $n \times r$ matrix $\beta$ is an $n \times (n-r)$ full column rank matrix $\beta_\perp$ such that $\beta'\beta_\perp = 0$, and it holds that the $n \times n$ matrix $(\beta, \beta_\perp)$ is of full rank. With $A(z)$ we denote a polynomial matrix, that is, a matrix whose entries are polynomials $a_{ij}(z)$ on the set $\mathbb{C}$ of complex numbers.

2 Related Forms to the VAR

This section introduces the vector autoregressive model and its companion form, whilst it also provides most of the technical tools from matrix algebra needed for the following section: the Jordan Canonical Form applied to the companion matrix of the stacked VAR(1), and the Diagonal Form which will be used in the following section.

2.1 The Companion Form of the VAR

In this subsection we introduce the Companion Form; see Johansen (1996) for more details. Consider the vector autoregression with $k$ lags, VAR($k$), in $p$ dimensions:
$$X_t = \Pi_1 X_{t-1} + \Pi_2 X_{t-2} + \dots + \Pi_k X_{t-k} + \varepsilon_t, \quad \text{for all } t = 1, 2, \dots, T, \tag{1}$$
where $\Pi_i$, $i = 1, 2, \dots, k$, are $p \times p$ parameter matrices and the disturbances $\varepsilon_t$ are assumed to be independently, identically distributed (i.i.d.) with expected value $E(\varepsilon_t) = 0$ and variance $\mathrm{Var}(\varepsilon_t) = \Omega$, where $\Omega$ is a $p \times p$ positive-definite matrix. The $X_{-k+1}, \dots, X_{-1}, X_0$ are $p \times 1$ vectors in $\mathbb{R}^p$ of fixed initial values, and for simplicity we do not consider any deterministic terms. Then (1) can be written in the companion form representation in $n = pk$ dimensions:
$$\tilde{X}_t = A\tilde{X}_{t-1} + \tilde{\varepsilon}_t, \tag{2}$$
where $\tilde{X}_t = (X_t', X_{t-1}', \dots, X_{t-k+1}')'$, $\tilde{\varepsilon}_t = (\varepsilon_t', 0', \dots, 0')'$ and
$$A = \begin{pmatrix} \Pi_1 & \Pi_2 & \cdots & \Pi_{k-1} & \Pi_k \\ I_p & 0 & \cdots & 0 & 0 \\ 0 & I_p & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & I_p & 0 \end{pmatrix},$$
which is the companion matrix. The disturbances are still i.i.d. with $E(\tilde{\varepsilon}_t) = 0$ and $\mathrm{Var}(\tilde{\varepsilon}_t) = \Sigma = \mathrm{diag}\{\Omega, 0, \dots, 0\}$. It can be proved, see Anderson (1971), that the necessary and sufficient condition for the system to be stationary is that
$$|A(z)| = 0 \implies |z| > 1,$$
where $A(z) = I_n - Az$ is the characteristic polynomial associated with equation (2). The condition is equivalent to
$$|\lambda I_n - A| = 0 \implies |\lambda| < 1,$$
i.e. all the eigenvalues of the companion matrix lie inside the unit circle, and hence for finding the roots of $A(z)$ it is enough to calculate the eigenvalues of $A$. Note though that while each root $z$ corresponds to one eigenvalue $\lambda = 1/z$, the converse does not hold: whenever $\lambda = 0$ there is no root $z$ corresponding to this zero eigenvalue. Under this condition it holds that there exists $\delta > 0$ such that for all $z \in S$, with $S = \{z : |z| < 1 + \delta\}$, the matrix $A(z)^{-1}$ is well-defined as a convergent power series within a neighborhood of the origin, and thus the autoregressive system (1) can easily be transformed to the respective moving-average representation $X_t = C(L)\varepsilon_t$, with $C(z) = A(z)^{-1}$. We now introduce an important tool from matrix theory, the Jordan Canonical Form, which we shall use in the subsequent section.

2.2 The Jordan Canonical Form

Here the idea of using Jordan matrices in the development of the theory of integrated variables is presented. For more details on the mathematics see the Appendix at the end of the paper.

2.2.1 Why are Jordan matrices interesting in the study of integrated processes?

Let us consider the model (2) with the matrix $A$ given by $A = PJP^{-1}$, where $J$ is the Jordan matrix associated with the companion matrix $A$; see equations (10) and (11) in the Appendix. We have
$$\tilde{X}_t = A\tilde{X}_{t-1} + \tilde{\varepsilon}_t \implies \tilde{X}_t = PJP^{-1}\tilde{X}_{t-1} + \tilde{\varepsilon}_t \implies P^{-1}\tilde{X}_t = JP^{-1}\tilde{X}_{t-1} + P^{-1}\tilde{\varepsilon}_t,$$
and by defining the random variable $Y_t = P^{-1}\tilde{X}_t$ we finally deduce that
$$Y_t = JY_{t-1} + u_t, \quad u_t = P^{-1}\tilde{\varepsilon}_t.$$
In order to illustrate this representation consider the four-dimensional system with $J = \mathrm{diag}\{J_3(\lambda), J_1(\lambda)\}$. In this case the full system can be decomposed into two subsystems of the form:
$$\begin{pmatrix} Z_{1t} \\ Z_{2t} \end{pmatrix} = \begin{pmatrix} \lambda & 1 & 0 & 0 \\ 0 & \lambda & 1 & 0 \\ 0 & 0 & \lambda & 0 \\ 0 & 0 & 0 & \lambda \end{pmatrix} \begin{pmatrix} Z_{1,t-1} \\ Z_{2,t-1} \end{pmatrix} + u_t,$$
where $Z_{1t} = (Y_{1t}, Y_{2t}, Y_{3t})'$ and $Z_{2t} = Y_{4t}$. In particular, if $\lambda = 1$, i.e. the unit root case, we find $Y_{1t} \sim I(3)$, $Y_{2t} \sim I(2)$, $Y_{3t} \sim I(1)$ and $Y_{4t} \sim I(1)$. Thus, a Jordan block $J_d(1)$ as the matrix of coefficients for the AR(1) model generates an $I(d)$ variable. Some further applications of the Jordan form to cointegration analysis can be found in d'Autume (1990).
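The claim that a Jordan block $J_d(1)$ generates an $I(d)$ variable can be illustrated with the deterministic skeleton of the recursion $Y_t = JY_{t-1} + u_t$. This is a sketch with a made-up constant shock, not taken from the paper: one cumulation (linear growth) appears in the last component and two cumulations (quadratic growth) in the first.

```python
import numpy as np

# J_2(1) Jordan block as the AR(1) coefficient matrix.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
u = np.array([0.0, 1.0])  # constant "shock" feeding the bottom equation

Y = np.zeros(2)
T = 10
for _ in range(T):
    Y = J @ Y + u  # Y_t = J Y_{t-1} + u

# Second component cumulates once (I(1)-type linear trend);
# first component cumulates twice (I(2)-type quadratic trend).
print(Y)  # [45. 10.], i.e. T*(T-1)/2 and T
```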

2.2.2 Main Result established by d'Autume

The general approach of d'Autume consists of coping first with the theory of a deterministic system in Companion Form and then applying the derived results to the stochastic framework. After transforming the VAR($k$) to a VAR(1) model $\tilde{A}(L)\tilde{X}_t = \tilde{\varepsilon}_t$, see equation (2), d'Autume proves a proposition stating that the degree of integration is equal to the maximum size of the Jordan blocks. As a special case assume that the time series $\tilde{X}_t$ is integrated of order one. According to d'Autume the I(1) case is equivalent to all Jordan blocks being of size one, or equivalently, to the companion matrix being diagonalizable with respect to the eigenvalues equal to unity.

2.3 Jordan and Diagonal Form

From the Jordan Canonical Form $J$ of a matrix one can derive relatively easily a Diagonal Form for the polynomial matrix $(I_n - Jz)$. The Diagonal Form will be used in the next section in order to derive the moving average representation of the VAR in the case of I(1) variables in the system.

2.3.1 The Diagonal Form of a Polynomial Matrix

For the definitions of the elementary operations, unimodular matrices and the Diagonal Form of a polynomial matrix the reader is referred to the Appendix. However, we briefly note that for a polynomial matrix $B(z)$ there is a diagonal polynomial matrix $\Lambda(z)$ which contains all the roots of $B(z)$, i.e. the equations $|B(z)| = 0$ and $|\Lambda(z)| = 0$ have exactly the same roots.

2.3.2 Deriving the Diagonal Form from the Jordan Canonical Form for a value of $\lambda$

In this subsection we provide a way of finding the Diagonal Form of a polynomial matrix $(I_n - Jz)$ when the Jordan matrix $J$ is known to contain a single eigenvalue $\lambda$. We consider the single-eigenvalue Jordan matrix because further on we are going to separate the eigenvalues different from one and focus on the unit roots of the model; that is, later we will set $\lambda = 1$ and use the Diagonal Form developed here. We first provide a proposition which will be used for proving Theorem 2.

Proposition 1 For the polynomials $f(z)$, with $f(0) = 1$, and $g(z)$ defined on $\mathbb{C}$, the matrix
$$B(z) = \begin{pmatrix} f(z) & -z \\ 0 & g(z) \end{pmatrix}$$
is equivalent, under the elementary operations procedure, to
$$\Lambda(z) = \begin{pmatrix} 1 & 0 \\ 0 & f(z)g(z) \end{pmatrix}.$$

Proof. The main difficulty here is to apply the elementary operations in such a way that the applied transformation matrices will be unimodular. Following Søren Johansen's suggestion we apply the following series of elementary operations. Define first $h(z) = z^{-1}[f(z) - 1]$, which we note is a polynomial, because by the Taylor expansion of $f(z)$ we have
$$f(z) = \sum_{n=0}^{\deg[f]} \frac{f^{(n)}(0)}{n!} z^n \overset{f(0)=1}{\implies} h(z) = \sum_{n=1}^{\deg[f]} \frac{f^{(n)}(0)}{n!} z^{n-1}.$$
Here $\deg[f]$ is a positive integer and denotes the degree of the polynomial $f(z)$. Then we have:
$$B(z) = \begin{pmatrix} f(z) & -z \\ 0 & g(z) \end{pmatrix} \sim \begin{pmatrix} 1 & -z \\ g(z)h(z) & g(z) \end{pmatrix} \sim \begin{pmatrix} 1 & 0 \\ g(z)h(z) & f(z)g(z) \end{pmatrix} \sim \mathrm{diag}\{1, f(z)g(z)\} = \Lambda(z).$$
This is because the unimodular matrices
$$L(z) = \begin{pmatrix} 1 & 0 \\ -g(z)h(z) & 1 \end{pmatrix}, \quad R(z) = \begin{pmatrix} 1 & z \\ h(z) & f(z) \end{pmatrix}$$
are such that $L(z)B(z)R(z) = \Lambda(z)$. $\square$

We want to show that a polynomial matrix of the form $I_k - J_k(\lambda)z$ is equivalent to a diagonal polynomial matrix $\Lambda_k(z)$.
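Proposition 1 can be checked symbolically; the polynomials below are made-up choices satisfying $f(0) = 1$ (an illustrative sketch, not part of the paper):

```python
import sympy as sp

z = sp.symbols('z')
f = (1 - 2*z)**2            # f(0) = 1, as the proposition requires
g = 1 - 3*z
h = sp.expand((f - 1) / z)  # h(z) = z^{-1}[f(z) - 1] is a polynomial

B = sp.Matrix([[f, -z], [0, g]])
L = sp.Matrix([[1, 0], [-g*h, 1]])  # det = 1: unimodular
R = sp.Matrix([[1, z], [h, f]])     # det = f - z*h = 1: unimodular
Lam = sp.Matrix([[1, 0], [0, sp.expand(f*g)]])

assert sp.simplify(L.det()) == 1 and sp.simplify(R.det()) == 1
assert sp.simplify(L * B * R - Lam) == sp.zeros(2, 2)
print("L(z) B(z) R(z) = diag{1, f(z)g(z)} verified")
```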

Theorem 2 Consider the $k \times k$ Jordan block matrix $J_k(\lambda)$ of the single eigenvalue $\lambda$, see also (10) in the Appendix. The matrix polynomial in $z$
$$B_k(z) = I_k - J_k(\lambda)z$$
is equivalent, under elementary operations, to the $k \times k$ diagonal polynomial matrix in $z$:
$$\Lambda_k(z) = \mathrm{diag}\{\underbrace{1, \dots, 1}_{(k-1)\text{-times}}, (1 - \lambda z)^k\},$$
where
$$U_k(z)B_k(z)V_k(z) = \Lambda_k(z) \iff B_k(z) = U_k^{-1}(z)\Lambda_k(z)V_k^{-1}(z),$$
with $U_k(z)$ and $V_k(z)$ unimodular matrices.

Proof. We consider first the cases $k = 1$ and $k = 2$. For the $J_1(\lambda)$ block, we have trivially $\Lambda_1(z) = 1 - \lambda z$ and
$$B_1(z) = 1 - J_1(\lambda)z = 1 - \lambda z = U_1^{-1}(z)\Lambda_1(z)V_1^{-1}(z),$$
i.e. $U_1(z) = 1$, $V_1(z) = 1$. For the $J_2(\lambda)$ block it holds that
$$B_2(z) = I_2 - J_2(\lambda)z = \begin{pmatrix} 1 - \lambda z & -z \\ 0 & 1 - \lambda z \end{pmatrix} = U_2^{-1}(z)\Lambda_2(z)V_2^{-1}(z),$$
with $\Lambda_2(z) = \mathrm{diag}\{1, (1 - \lambda z)^2\}$, which can be found via the elementary operations procedure. The matrices $U_2(z)$ and $V_2(z)$ are
$$U_2(z) = \begin{pmatrix} 1 & 0 \\ \lambda(1 - \lambda z) & 1 \end{pmatrix}, \quad V_2(z) = \begin{pmatrix} 1 & z \\ -\lambda & 1 - \lambda z \end{pmatrix},$$
which are clearly unimodular matrices, since they are polynomial matrices with constant determinant equal to one. Instead of proceeding with a $J_3(\lambda)$ block, we consider the polynomial matrix
$$B_*(z) = \begin{pmatrix} (1 - \lambda z)^{k-1} & -z \\ 0 & 1 - \lambda z \end{pmatrix},$$
which is the typical block to appear in the reduction of $B_k(z)$ for all $k \geq 2$. We only have to apply Proposition 1 for $f(z) = (1 - \lambda z)^{k-1}$ and $g(z) = 1 - \lambda z$. Then we have:
$$B_*(z) = \begin{pmatrix} (1 - \lambda z)^{k-1} & -z \\ 0 & 1 - \lambda z \end{pmatrix} \sim \mathrm{diag}\{1, (1 - \lambda z)^k\} = \Lambda_*(z).$$
Hence, with $h(z) = z^{-1}[(1 - \lambda z)^{k-1} - 1]$, there are unimodular matrices
$$U_*(z) = \begin{pmatrix} 1 & 0 \\ -(1 - \lambda z)h(z) & 1 \end{pmatrix}, \quad V_*(z) = \begin{pmatrix} 1 & z \\ h(z) & (1 - \lambda z)^{k-1} \end{pmatrix},$$
and it holds that $\Lambda_*(z) = U_*(z)B_*(z)V_*(z)$. In a similar fashion one can generalize the result for any positive integer $k$. $\square$

After having proved the specific case, i.e. for a single Jordan block, the general result, for any Jordan matrix, comes naturally.

Theorem 3 Consider the $n \times n$ matrix in Jordan Form $J$, of a VAR's companion matrix $A$, as in equation (2), with $s$ Jordan blocks $J_{k_i}(\lambda_i)$, for $i = 1, \dots, s$. Then there is a Diagonal Form of the polynomial matrix $(I_n - Jz)$ of the form
$$\Lambda(z)_{n \times n} = \mathrm{diag}\{1, \dots, (1 - \lambda_1 z)^{k_1}, 1, \dots, (1 - \lambda_2 z)^{k_2}, \dots, 1, \dots, (1 - \lambda_s z)^{k_s}\}.$$
Especially, for $\lambda_i = 1$, it holds that the order $d$ of integration of the autoregressive process is given by $d = \max_{1 \leq i \leq s}\{k_i \mid \lambda_i = 1\}$.

Proof. Due to the structure of $J$, i.e. it is a block-diagonal matrix, the elementary operations on a Jordan block do not affect in any way the other Jordan blocks. Hence, by applying Theorem 2, we have that the matrix
$$I_n - Jz = \mathrm{diag}\{I_{k_1} - J_{k_1}(\lambda_1)z, I_{k_2} - J_{k_2}(\lambda_2)z, \dots, I_{k_s} - J_{k_s}(\lambda_s)z\}$$
is equivalent, under elementary operations, to $\Lambda(z)_{n \times n}$, where $\Lambda(z)$ is as defined above, because after transforming each submatrix $[I_{k_i} - J_{k_i}(\lambda_i)z]$ to $\mathrm{diag}\{1, \dots, (1 - \lambda_i z)^{k_i}\}$, for all $i = 1, \dots, s$, we derive the $\Lambda(z)_{n \times n}$ matrix. When for some $\lambda_i$ it holds that $\lambda_i = 1$ we have the unit root case. It is clear, following d'Autume's argument, that the order of integration should be equal to $d = \max_{1 \leq i \leq s}\{k_i \mid \lambda_i = 1\}$. $\square$
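For $k = 2$ the matrices $U_2$, $V_2$ given in the proof of Theorem 2 can be verified symbolically (an illustrative check, not part of the paper):

```python
import sympy as sp

z, lam = sp.symbols('z lambda')
B2 = sp.Matrix([[1 - lam*z, -z],
                [0, 1 - lam*z]])        # I_2 - J_2(lambda) z
U2 = sp.Matrix([[1, 0],
                [lam*(1 - lam*z), 1]])
V2 = sp.Matrix([[1, z],
                [-lam, 1 - lam*z]])
Lam2 = sp.Matrix([[1, 0],
                  [0, (1 - lam*z)**2]])  # diag{1, (1 - lambda z)^2}

assert sp.simplify(U2.det()) == 1 and sp.simplify(V2.det()) == 1  # unimodular
assert sp.simplify(U2 * B2 * V2 - Lam2) == sp.zeros(2, 2)
print("U2(z) B2(z) V2(z) = Lambda_2(z) verified")
```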

2.4 Background Cointegration Theory

This subsection provides the intermediate results that we shall use for the proof of Theorem 7, to appear in the next section.

Proposition 4 Consider the $n \times r$ matrices $\alpha$ and $\beta$ which are of full column rank $r$, $r < n$. The matrix $\alpha'\beta$ has full rank if and only if $\alpha_\perp'\beta_\perp$ has full rank.

Proof. See exercise 3.7 in Hansen and Johansen (1998). $\square$

We now wish to prove that Granger's Representation Theorem holds for one lag if and only if it holds for $k$ lags. We state first, for completeness, Granger's Representation Theorem (GRT).

Theorem 5 Consider a $p$-dimensional autoregressive variable $X_t$ with $k$ lags, with the usual assumptions for the disturbance terms and the initial values, in the error correction form:
$$\Delta X_t = \Pi X_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta X_{t-i} + \varepsilon_t,$$
with characteristic polynomial
$$A(z) = (1 - z)I_p - \Pi z - \sum_{i=1}^{k-1} \Gamma_i (1 - z)z^i$$
satisfying the assumption
$$|A(z)| = 0 \implies |z| > 1 \text{ or } z = 1, \tag{3}$$
and $\Pi = -A(1) = \alpha\beta'$, with $\alpha$, $\beta$ full rank $p \times r$ parameter matrices and $\Gamma_i$, $i = 1, 2, \dots, k-1$, being $p \times p$ parameter matrices. Then
$$|\alpha_\perp'\Gamma\beta_\perp| \neq 0 \iff X_t \sim I(1),$$
where $\Gamma = I_p - \sum_{i=1}^{k-1} \Gamma_i$. The variable $X_t$ is given as
$$X_t = \beta_\perp(\alpha_\perp'\Gamma\beta_\perp)^{-1}\alpha_\perp' \sum_{i=1}^{t} \varepsilon_i + Y_t,$$
with $Y_t$ a stationary process.

Proof. For a proof see Theorem 4.2 in Johansen (1996). $\square$

Let now the assumptions of GRT hold and consider a VAR($k$) model $A(L)X_t = \varepsilon_t$. For convenience let $k = 3$:
$$X_t = \Pi_1 X_{t-1} + \Pi_2 X_{t-2} + \Pi_3 X_{t-3} + \varepsilon_t,$$
which in the ECM form becomes:
$$\Delta X_t = \alpha\beta' X_{t-1} + \Gamma_1 \Delta X_{t-1} + \Gamma_2 \Delta X_{t-2} + \varepsilon_t,$$
where
$$\alpha\beta' = \Pi_1 + \Pi_2 + \Pi_3 - I_p, \quad \Gamma_1 = -\Pi_2 - \Pi_3 \quad \text{and} \quad \Gamma_2 = -\Pi_3.$$
Then
$$|\alpha_\perp'\Gamma\beta_\perp| \neq 0 \iff X_t \sim I(1),$$
where $\Gamma = I_p - \Gamma_1 - \Gamma_2$. The Companion Form representation is:
$$\begin{pmatrix} X_t \\ X_{t-1} \\ X_{t-2} \end{pmatrix} = \begin{pmatrix} \Pi_1 & \Pi_2 & \Pi_3 \\ I_p & 0 & 0 \\ 0 & I_p & 0 \end{pmatrix}\begin{pmatrix} X_{t-1} \\ X_{t-2} \\ X_{t-3} \end{pmatrix} + \begin{pmatrix} \varepsilon_t \\ 0 \\ 0 \end{pmatrix} \implies$$
$$\tilde{X}_t = A\tilde{X}_{t-1} + \tilde{\varepsilon}_t \implies \Delta\tilde{X}_t = (A - I)\tilde{X}_{t-1} + \tilde{\varepsilon}_t, \tag{4}$$
which is the ECM in the companion form. We shall now prove that GRT holds for a VAR($k$) if and only if it holds for a VAR(1). See also exercise 4.7 in Hansen and Johansen (1998).

Proposition 6 The GRT holds for a VAR($k$) if and only if it holds for a VAR(1).

Proof. For notational convenience we set $k = 3$. Consider equation (4). It holds that $A - I = \tilde{\alpha}\tilde{\beta}'$, for the full column rank $3p \times (r + 2p)$ matrices
$$\tilde{\alpha} = \begin{pmatrix} \alpha & \Gamma_1 & \Gamma_2 \\ 0 & I_p & 0 \\ 0 & 0 & I_p \end{pmatrix} \quad \text{and} \quad \tilde{\beta} = \begin{pmatrix} \beta & I_p & 0 \\ 0 & -I_p & I_p \\ 0 & 0 & -I_p \end{pmatrix},$$
because
$$A - I = \begin{pmatrix} \Pi_1 - I_p & \Pi_2 & \Pi_3 \\ I_p & -I_p & 0 \\ 0 & I_p & -I_p \end{pmatrix} = \begin{pmatrix} \alpha & \Gamma_1 & \Gamma_2 \\ 0 & I_p & 0 \\ 0 & 0 & I_p \end{pmatrix}\begin{pmatrix} \beta' & 0 & 0 \\ I_p & -I_p & 0 \\ 0 & I_p & -I_p \end{pmatrix} = \tilde{\alpha}\tilde{\beta}',$$
since $\Pi_1 - I_p = \alpha\beta' + \Gamma_1$, $\Pi_2 = \Gamma_2 - \Gamma_1$ and $\Pi_3 = -\Gamma_2$. Moreover, we find that
$$\tilde{\alpha}_\perp = \begin{pmatrix} \alpha_\perp \\ -\Gamma_1'\alpha_\perp \\ -\Gamma_2'\alpha_\perp \end{pmatrix}, \quad \tilde{\beta}_\perp = \begin{pmatrix} \beta_\perp \\ \beta_\perp \\ \beta_\perp \end{pmatrix}.$$
Hence
$$\tilde{\alpha}_\perp'\tilde{\beta}_\perp = \alpha_\perp'(I_p - \Gamma_1 - \Gamma_2)\beta_\perp = \alpha_\perp'\Gamma\beta_\perp,$$
which yields immediately that $|\tilde{\alpha}_\perp'\tilde{\beta}_\perp| = |\alpha_\perp'\Gamma\beta_\perp|$. This shows that the GRT holds for one lag if and only if it holds for three lags. In a similar fashion one can prove the result for a general VAR($k$) model. $\square$
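The block formulas for $\tilde{\alpha}$, $\tilde{\beta}$ and their orthogonal complements used in the proof of Proposition 6 can be checked numerically; the parameter values below are made-up (an illustrative sketch, not taken from the paper):

```python
import numpy as np

p = 2
# Made-up ECM parameters with Pi = alpha beta' of rank 1.
alpha = np.array([[0.5], [-0.25]])
beta  = np.array([[1.0], [-1.0]])
G1 = np.array([[0.2, 0.0], [0.1, 0.1]])
G2 = np.array([[0.1, 0.05], [0.0, 0.2]])
I = np.eye(p)
Z = np.zeros((p, p))
z1 = np.zeros((p, 1))

# Levels coefficients: Pi1 = I + alpha beta' + Gamma1, Pi2 = Gamma2 - Gamma1,
# Pi3 = -Gamma2; then the companion matrix A of the stacked VAR(1).
Pi1 = I + alpha @ beta.T + G1
Pi2 = G2 - G1
Pi3 = -G2
A = np.block([[Pi1, Pi2, Pi3], [I, Z, Z], [Z, I, Z]])

# Stacked factorization A - I = alpha_tilde beta_tilde'.
alpha_t = np.block([[alpha, G1, G2], [z1, I, Z], [z1, Z, I]])
beta_t  = np.block([[beta, I, Z], [z1, -I, I], [z1, Z, -I]])
ok_factor = np.allclose(A - np.eye(3 * p), alpha_t @ beta_t.T)

# Hard-coded orthogonal complements of alpha and beta.
a_perp = np.array([[1.0], [2.0]])   # alpha' a_perp = 0
b_perp = np.array([[1.0], [1.0]])   # beta'  b_perp = 0
at_perp = np.vstack([a_perp, -G1.T @ a_perp, -G2.T @ a_perp])
bt_perp = np.vstack([b_perp, b_perp, b_perp])

Gamma = I - G1 - G2
ok_gamma = np.allclose(at_perp.T @ bt_perp, a_perp.T @ Gamma @ b_perp)
print(ok_factor, ok_gamma)  # True True
```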

3 Granger's Representation Theorem

In this section we shall use the material developed so far in order to unify GRT with the Companion Form and the Jordan Canonical Form. We focus on the theory applied to variables integrated of order one. In the first subsection we prove that the condition that there is no Jordan block of size greater than one corresponding to a unit root is equivalent to $|\alpha_\perp'\Gamma\beta_\perp| \neq 0$. In the second subsection we derive the Wold representation from the VAR representation by using both the Jordan and the Diagonal forms.

3.1 Jordan Blocks and the GRT I(1)-conditions

For the general VAR($k$) model $A(L)X_t = \varepsilon_t$, d'Autume (1990) has proved that after transformation to the Companion Form, see equation (2), it holds that the system is I(1) if and only if there is no Jordan block of size two or higher, corresponding to a unit root, in the Jordan matrix $J$ of $A$. Our aim is to show that this result is equivalent to the conditions underlying GRT, see Theorem 4.2 in Johansen (1996).

Theorem 7 Consider the VAR($k$) model $A(L)X_t = \varepsilon_t$. It holds that $|\alpha_\perp'\Gamma\beta_\perp| \neq 0$, i.e. $X_t \sim I(1)$, if and only if there is no $J_k(1)$ block, with $k \geq 2$, in the Jordan Form $J$ of the companion matrix $A$ of $\tilde{X}_t$: $Y_t = JY_{t-1} + u_t$, with $Y_t = P^{-1}\tilde{X}_t$. The matrices $\alpha$, $\beta$ and $\Gamma$ are defined as in Theorem 5.

Proof. In the proof we consider only Jordan blocks of order one and two for convenience. We allow for the intermediate step:
$$Y_t = JY_{t-1} + u_t \text{ has no } J_2(1) \text{ block} \overset{(ii)}{\iff} |\hat{\alpha}'\hat{\beta}| \neq 0 \overset{(i)}{\iff} |\alpha_\perp'\Gamma\beta_\perp| \neq 0,$$
where $\hat{\alpha}$, $\hat{\beta}$ are such that
$$J - I = P^{-1}(A - I)P = P^{-1}\tilde{\alpha}\tilde{\beta}'P = \hat{\alpha}\hat{\beta}',$$
and $\tilde{\alpha}$, $\tilde{\beta}$ are as in Proposition 6.

Proof of (i): It holds from Proposition 6 that $|\alpha_\perp'\Gamma\beta_\perp| = |\tilde{\alpha}_\perp'\tilde{\beta}_\perp|$, while for $\hat{\alpha} = P^{-1}\tilde{\alpha}$ and $\hat{\beta} = P'\tilde{\beta}$ we have $\hat{\alpha}_\perp = P'\tilde{\alpha}_\perp$ and $\hat{\beta}_\perp = P^{-1}\tilde{\beta}_\perp$, so that
$$\hat{\alpha}_\perp'\hat{\beta}_\perp = (P'\tilde{\alpha}_\perp)'P^{-1}\tilde{\beta}_\perp = \tilde{\alpha}_\perp'PP^{-1}\tilde{\beta}_\perp = \tilde{\alpha}_\perp'\tilde{\beta}_\perp.$$
Hence $|\hat{\alpha}_\perp'\hat{\beta}_\perp| = |\tilde{\alpha}_\perp'\tilde{\beta}_\perp| = |\alpha_\perp'\Gamma\beta_\perp|$. From Proposition 4 we obtain
$$|\hat{\alpha}'\hat{\beta}| \neq 0 \iff |\hat{\alpha}_\perp'\hat{\beta}_\perp| \neq 0,$$
and thus
$$|\hat{\alpha}'\hat{\beta}| \neq 0 \iff |\hat{\alpha}_\perp'\hat{\beta}_\perp| = |\tilde{\alpha}_\perp'\tilde{\beta}_\perp| = |\alpha_\perp'\Gamma\beta_\perp| \neq 0.$$
This proves equivalence relationship (i). It only remains to show (ii):
$$|\hat{\alpha}'\hat{\beta}| \neq 0 \iff \text{no } J_2(1) \text{ block in } J,$$
that is, only $1 \times 1$ blocks $J_1(1) = 1$ appear in $J$.

Proof of (ii): Suppose first that $J$ has a $J_2(1)$ block as an entry. Without loss of generality we can assume that the $J_2(1)$ block is placed at the north-west corner of the $n \times n$ block-diagonal matrix $J$. We can achieve this structure by suitably interchanging the columns of the transformation matrix $P$. The matrix $J - I_n$ is of reduced rank, say $s < n$, that depends on the number of $J_1(\lambda)$ and $J_2(\lambda)$ blocks in $J$ with $\lambda = 1$. Now, due to the facts that (a) the unity is at the off-diagonal position (1,2) of the $[J_2(1) - I_2] = J_2(0)$ submatrix, and (b) the block-diagonal form of the Jordan matrix $J$ yields that $J - I_n$ is also a block-diagonal matrix, so that if $e_{i,j}$ is the $(i,j)$-element of $J - I_n$ we have $e_{i,1} = 0$ for all $i$ and $e_{i,2} = 0$ for all $i$ except $i = 1$, where $e_{1,2} = 1$, we have:
$$J = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & F \end{pmatrix} \implies J - I_n = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & G \end{pmatrix} = \hat{\alpha}\hat{\beta}',$$
for some Jordan matrix $F$ and $G = F - I \neq 0$. We can partition suitably the $n \times s$ matrices $\hat{\alpha}$ and $\hat{\beta}$ as $\hat{\alpha} = [\hat{\alpha}_1, \bar{\alpha}]$ and $\hat{\beta} = [\hat{\beta}_1, \bar{\beta}]$ and obtain
$$J - I_n = \hat{\alpha}\hat{\beta}' = \hat{\alpha}_1\hat{\beta}_1' + \bar{\alpha}\bar{\beta}',$$
where $\hat{\alpha}_1 = (1, 0, \dots, 0)'$ and $\hat{\beta}_1 = (0, 1, 0, \dots, 0)'$ are $n \times 1$ vectors with $\hat{\alpha}_1'\hat{\beta}_1 = 0$, while $\bar{\alpha}$ and $\bar{\beta}$ are $n \times (s-1)$ matrices such that
$$\begin{pmatrix} 0 & 0 \\ 0 & G \end{pmatrix} = \bar{\alpha}\bar{\beta}'.$$
Now multiply from the right by $\hat{\alpha}_1$ to find
$$\bar{\alpha}\bar{\beta}'\hat{\alpha}_1 = \begin{pmatrix} 0 & 0 \\ 0 & G \end{pmatrix}\hat{\alpha}_1 = 0 \implies (\bar{\alpha}'\bar{\alpha})\bar{\beta}'\hat{\alpha}_1 = 0 \implies \bar{\beta}'\hat{\alpha}_1 = 0 \iff \hat{\alpha}_1'\bar{\beta} = 0.$$
This is because the $n \times (s-1)$ matrix $\bar{\alpha}$ has full column rank, so that $\bar{\alpha}'\bar{\alpha}$ is invertible. As a consequence we have the rank reduction of $\hat{\alpha}'\hat{\beta}$:
$$\hat{\alpha}'\hat{\beta} = \begin{pmatrix} \hat{\alpha}_1' \\ \bar{\alpha}' \end{pmatrix}(\hat{\beta}_1, \bar{\beta}) = \begin{pmatrix} \hat{\alpha}_1'\hat{\beta}_1 & \hat{\alpha}_1'\bar{\beta} \\ \bar{\alpha}'\hat{\beta}_1 & \bar{\alpha}'\bar{\beta} \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ \bar{\alpha}'\hat{\beta}_1 & \bar{\alpha}'\bar{\beta} \end{pmatrix} \implies |\hat{\alpha}'\hat{\beta}| = 0.$$
Hence we have proved that if there is a $J_2(1)$ block in $J$ then $|\hat{\alpha}'\hat{\beta}| = 0$, or equivalently, if $|\hat{\alpha}'\hat{\beta}| \neq 0$ it follows that there is no $J_2(1)$ block in $J$. We shall now show the converse, namely that if there are no $J_2(1)$ blocks, it implies $|\hat{\alpha}'\hat{\beta}| \neq 0$.

In the case that there are only single unit roots in the system, that is, there are only $J_1(1)$ Jordan blocks in $J$, we can think of $J - I_n$ as
$$J - I_n = \begin{pmatrix} 0 & 0 \\ 0 & D \end{pmatrix} = \hat{\alpha}\hat{\beta}',$$
where $D$ is an $s \times s$ full-rank matrix with $|D| \neq 0$. A possible choice of $\hat{\alpha}$ and $\hat{\beta}$ is $\hat{\alpha} = (0, D')'$, $\hat{\beta} = (0, I_s)'$. Thus we have
$$\hat{\alpha}'\hat{\beta} = (0, D')\begin{pmatrix} 0 \\ I_s \end{pmatrix} = D' \implies |\hat{\alpha}'\hat{\beta}| = |D| \neq 0.$$
This allows us to claim that:
$$\text{There is no } J_2(1) \text{ block in } J \iff |\hat{\alpha}'\hat{\beta}| \neq 0,$$
which proves equivalence relationship (ii). We have shown that the I(1)condition of GRT is equivalent to \no Jordan block J (1) in J". The proof of Theorem 7 is complete.2 One should note at this point that a similar Theorem can be proved for an I(2) variable. That is, for a process XJ ? I(2) it should hold that there is no J! (1) block in J for the VAR(1) in Companion Form if and only if the complicated I(2)-conditions of GRT are satis?ed.

3.2 The Moving-Average Representation

We shall now combine the concepts we have already seen, i.e. the Jordan Form and the Diagonal Form of a polynomial matrix, in order to find the moving-average representation of the system and complete the proof of GRT. We point out that GRT as stated by Engle and Granger (1987) starts from the moving-average representation and arrives at a cointegrated VAR via a reduced rank assumption on the moving-average representation.

The $n \times n$ block-diagonal matrix $J$ can be partitioned suitably as
$$J = \begin{pmatrix} J(1) & 0 \\ 0 & \hat{J} \end{pmatrix},$$
where we denote by $J(1) = \mathrm{diag}\{J_2(1), \dots, J_2(1), J_1(1), \dots, J_1(1)\}$ the Jordan matrix collecting the blocks with eigenvalue one, and $\hat{J}$ is the Jordan matrix formed by the rest of the eigenvalues, smaller than one in absolute value, due to the usual assumption on the roots of the characteristic polynomial. (This formulation has been adopted from d'Autume, 1990.) We consider again integrated variables of order one or two, hence the form of $J(1)$: since we have considered $I(d)$ variables with $d \leq 2$, there are no Jordan blocks of order higher than two.

The polynomial matrix $(I_n - Jz)$ can be written as the product of two matrices, where one contains all the unit roots of the system while the other has the stationary roots with modulus greater than one. Hence, consider the matrix $(I_n - Jz)$ as
$$I_n - Jz = \begin{pmatrix} I - J(1)z & 0 \\ 0 & I - \hat{J}z \end{pmatrix} = \begin{pmatrix} I - J(1)z & 0 \\ 0 & I \end{pmatrix}\begin{pmatrix} I & 0 \\ 0 & I - \hat{J}z \end{pmatrix} = B(z)C(z),$$
say, with conformable identity blocks. We have that $|B(1)| = 0$ and $|C(1)| \neq 0$, as a result of the partition discussed above. By Theorem 2 it follows that $B(z) = R(z)\Lambda(z)Q(z)$, with $|\Lambda(1)| = 0$, where $R(z)$ and $Q(z)$ are unimodular matrices. This, by the way, is the Smith-McMillan Form of $B(z)$; see the Appendix for more details and Theorem 2 for a proof in this particular case. It follows that
$$I_n - Jz = B(z)C(z) = R(z)\Lambda(z)[Q(z)C(z)], \tag{5}$$
where $Q(z)C(z)$ is invertible for $|z| \leq 1 + \delta$, for some $\delta > 0$, since $|Q(1)C(1)| \neq 0$. Now, recall equation (2):
$$\tilde{X}_t = A\tilde{X}_{t-1} + \tilde{\varepsilon}_t \iff \tilde{A}(L)\tilde{X}_t = \tilde{\varepsilon}_t, \quad \tilde{A}(z) = I_n - Az.$$
By using $J = P^{-1}AP \iff A = PJP^{-1}$ we can derive the following expression for $\tilde{A}(z)$:
$$\tilde{A}(z) = I_n - PJP^{-1}z = P(I_n - Jz)P^{-1} \iff \tilde{A}(z) = [PR(z)]\Lambda(z)[Q(z)C(z)P^{-1}] = \Psi(z)\Lambda(z)\Phi(z), \tag{6}$$
where $\Psi(z)$ is a unimodular matrix and $\Phi(z)$ is an invertible matrix for $|z| \leq 1 + \delta$, since $|\Phi(z)| \neq 0$ for $|z| \leq 1 + \delta$. Note here that the last expression (6) for $\tilde{A}(z)$ is not a Smith-McMillan Form. This is because the polynomial matrices $\Psi(z)$ and $\Phi(z)$ are not both unimodular, as they should be in a Smith-McMillan Form. It may happen that the polynomial matrix
$$\Phi(z) = Q(z)C(z)P^{-1}$$
is unimodular, but in general it will not be, and its determinant $|\Phi(z)|$ will not be a constant for every $z \in \mathbb{C}$. The matrix $\Lambda(z)$ has the structure (by applying Theorem 3 for $\lambda = 1$):
$$\Lambda(z) = \mathrm{diag}\{1, \dots, 1, (1 - z), \dots, (1 - z), (1 - z)^2, \dots, (1 - z)^2\}.$$
Consequently, if there are $n - r$ (single) unit roots, i.e. if there are no $J_2(1)$ blocks, then
$$\Lambda(z) = \Lambda_1(z) = \mathrm{diag}\{I_r, (1 - z)I_{n-r}\}. \tag{7}$$
Hence the system is I(1) and the respective theory can be applied; see Hylleberg and Mizon (1989) and, for I(2) systems, Engle and Yoo (1991) and Haldrup and Salmon (1998). (One has to pay attention to the fact, however, that the references mentioned use the Smith-McMillan Form.)

Now we shall prove that the conditions underlying GRT are equivalent to the structure of $\Lambda(z)$ in the Diagonal Form of equation (7). We make the usual assumption (3) on the roots of the characteristic equation.

Theorem 8 Let $A(z)$ be an $n \times n$ polynomial matrix, with $r = \mathrm{rank}\, A(1)$, and
$$A(z) = \Psi(z)\Lambda(z)\Phi(z), \tag{8}$$
where $\Psi(z)$ and $\Phi(z)$ are invertible polynomial matrices, that is $|\Psi(1)| \neq 0$ and $|\Phi(1)| \neq 0$, and the matrix $\Lambda(z)$ contains all the unit roots of $A(z)$. If $\Pi = \alpha\beta'$, where $\alpha$, $\beta$ are $n \times r$ full rank matrices for some $0 \leq r < n$, and if $\alpha_\perp'\Gamma\beta_\perp$ has full rank, then the structure of the diagonal matrix $\Lambda(z)$ is as in (7). For $r = n$, i.e. when $A(1)$ is of full rank, no condition such as $|\alpha_\perp'\Gamma\beta_\perp| \neq 0$ is required and $\Lambda(z) = I_n$. The converse of the result also holds, that is, for $A(z)$ as in (8), if $\Lambda(z)$ has the form of (7) then $\alpha_\perp'\Gamma\beta_\perp$ is of full rank.

The matrices $\Psi(z)$, $\Phi(z)$ have finite order polynomials as elements and, since they are invertible, their inverses are given as infinite order power series convergent in a circle which contains the unit circle, because of assumption (3).

Proof. In the VAR($k$) system the condition $|\alpha_\perp'\Gamma\beta_\perp| \neq 0$ is equivalent to $|\tilde{\alpha}_\perp'\tilde{\beta}_\perp| \neq 0$ in the stacked VAR(1) Companion Form, or $|\hat{\alpha}_\perp'\hat{\beta}_\perp| \neq 0$, i.e. there is no $J_2(1)$ block in $J$. Hence from Theorem 3 and the discussion preceding equation (7) we can conclude that
$$\Lambda(z) = \mathrm{diag}\{I_r, (1 - z)I_{n-r}\}.$$
To prove the converse we just have to assume that $\Lambda(z)$ is as in equation (7) and then refer to (6) and the discussion following it. In this case we can derive analytically the moving average representation from the autoregressive representation. We have from equation (6):
$$\tilde{A}(L)\tilde{X}_t = \tilde{\varepsilon}_t \implies \Psi(L)\Lambda(L)\Phi(L)\tilde{X}_t = \tilde{\varepsilon}_t,$$
with $\Lambda(z) = \mathrm{diag}\{I_r, (1 - z)I_{n-r}\}$, which gives
$$\Lambda(L)\Phi(L)\tilde{X}_t = \Psi^{-1}(L)\tilde{\varepsilon}_t \implies \Delta\Phi(L)\tilde{X}_t = \mathrm{diag}\{\Delta I_r, I_{n-r}\}\Psi^{-1}(L)\tilde{\varepsilon}_t$$
$$\implies \Delta\tilde{X}_t = \Phi^{-1}(L)\,\mathrm{diag}\{\Delta I_r, I_{n-r}\}\,\Psi^{-1}(L)\tilde{\varepsilon}_t, \tag{9}$$
where we have used the diagonal structure of $\Lambda(z)$ and the fact that $\Phi(z)$ is an invertible matrix for $|z| \leq 1 + \delta$. Thus we have derived the Wold representation of $\Delta\tilde{X}_t$. The right hand side is a stationary process because we know that the elements of the matrices $\Psi^{-1}(z)$ and $\Phi^{-1}(z)$ are given as convergent power series. Consequently equation (9) implies that $\tilde{X}_t$ is an I(1) process. It follows that the system is I(1) and hence from the GRT we deduce that if $\Pi = \alpha\beta'$ then $|\alpha_\perp'\Gamma\beta_\perp| \neq 0$. $\square$

As a simple illustration of the results of this section we consider the following example.

Example: Consider the model $\Delta X_t = \alpha\beta' X_{t-1} + \varepsilon_t$, with
$$\alpha = \begin{pmatrix} 1/4 \\ 1/2 \end{pmatrix}, \quad \beta = \begin{pmatrix} 1 \\ -1 \end{pmatrix},$$
where the $\varepsilon_t$ are i.i.d. $N(0, I_2)$. We have
$$A(z) = (1 - z)I_2 - \alpha\beta' z \implies A(z) = \begin{pmatrix} 1 - \frac{5}{4}z & \frac{1}{4}z \\ -\frac{1}{2}z & 1 - \frac{1}{2}z \end{pmatrix},$$
with characteristic equation $|A(z)| = 0 \iff z = \frac{4}{3}$ or $z = 1$, so the assumption on the roots being $z = 1$ or outside the unit circle is satisfied. It also holds that
$$\alpha_\perp = (2, -1)' \text{ and } \beta_\perp = (1, 1)' \implies |\alpha_\perp'\beta_\perp| \neq 0 \iff X_t \sim I(1),$$
since here $\Gamma = I_2$ ($k = 1$). Thence, we can find that
$$\Psi^{-1}(z)A(z)\Phi^{-1}(z) = \mathrm{diag}\{1, (1 - z)\} = \Lambda(z) \iff A(z) = \Psi(z)\Lambda(z)\Phi(z),$$
with, as one valid choice of the (non-unique) factors,
$$\Psi(z) = \begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} \implies \Psi^{-1}(z) = \begin{pmatrix} 1 & 0 \\ -2 & 1 \end{pmatrix}$$
and
$$\Phi(z) = \begin{pmatrix} 1 - \frac{5}{4}z & \frac{1}{4}z \\ -2 & 1 \end{pmatrix} \implies \Phi^{-1}(z) = \frac{1}{1 - \frac{3}{4}z}\begin{pmatrix} 1 & -\frac{1}{4}z \\ 2 & 1 - \frac{5}{4}z \end{pmatrix}.$$
Note that $\Phi^{-1}(z)$ is not a unimodular matrix but has a representation as a power series convergent for $|z| < \frac{4}{3}$, due to the assumption on the roots of $|A(z)| = 0$.
4 Conclusion

The analysis of the paper has focused on the mathematical properties of nonstationary vector autoregressive models. We have used the Companion Form of the VAR and the Jordan Canonical Form of a matrix, and introduced a Diagonal Form for a polynomial matrix. The latter can be found more easily once the Jordan Form is known. An alternative proof of Granger's Representation Theorem has been given, based on the structure of the Jordan matrix of the Companion matrix. The condition underlying Granger's Representation Theorem for a series integrated of order one is shown to be equivalent to the preclusion of any Jordan blocks of order two or higher. We have shown how the moving average representation can be derived from the VAR representation using the Diagonal Form. Finally, the strong connection between the GRT and the structure of the Diagonal Form containing all the unit roots of the system has been proved.

5 Appendix

In this appendix we gather the most important mathematical results already established in the literature.

5.1 The Jordan Canonical Form

For an extensive treatment of the Jordan canonical form see, for example, Gantmacher (1959, vol. I).

Definition 1 Let λ be a complex number and let i be a positive integer. The i × i upper triangular matrix

    J_i(λ) = [ λ  1
                  λ  1
                     ⋱  ⋱
                        λ  1
                           λ ]                    (10)

(with λ on the diagonal, ones on the superdiagonal and zeros elsewhere) is called a Jordan block of order i. For i = 1, J_1(λ) = λ.

Theorem 9 Let A ∈ C^{n×n}, i.e. in the set of n × n complex matrices, and suppose that the distinct eigenvalues of A are {λ_1, λ_2, ..., λ_m}, m ≤ n. Then there exists a nonsingular matrix P ∈ C^{n×n} such that

    J = P⁻¹AP = diag{Q_1, Q_2, ..., Q_m},          (11)

where each Q_i is a block-diagonal matrix with Jordan blocks of the same eigenvalue λ_i. The matrix A is said to have the Jordan Canonical Form J, with A = PJP⁻¹.

Proof. See Gantmacher (1959, vol. I).
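As a numerical companion to Theorem 9 (a sketch only, using sympy's jordan_form; the matrices are taken from the earlier example): the levels companion matrix I + Π = I + αβ' of the example has distinct eigenvalues 3/4 and 1, so its Jordan form is diagonal and the unit root sits in a block of order one — the I(1) case. A Jordan block of order two at the unit root would instead signal I(2).

```python
import sympy as sp

# Pi = alpha beta' from the earlier example; B = I + Pi governs the levels
Pi = sp.Matrix([[sp.Rational(1, 4), -sp.Rational(1, 4)],
                [sp.Rational(1, 2), -sp.Rational(1, 2)]])
B = sp.eye(2) + Pi

P, J = B.jordan_form()          # B = P J P^{-1}, as in (11)
assert sp.simplify(P * J * P.inv() - B) == sp.zeros(2, 2)
assert J.is_diagonal()          # unit-root Jordan block of order 1 => I(1)
assert set(J.diagonal()) == {1, sp.Rational(3, 4)}

# By contrast, a Jordan block of order 2 at the unit root (the I(2) case):
C = sp.Matrix([[1, 1], [0, 1]])
_, J2 = C.jordan_form()
assert J2 == sp.Matrix([[1, 1], [0, 1]])
```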

5.2 The Diagonal Form of a Polynomial Matrix

We start by providing the necessary background material. The interested reader is referred to Gohberg et al (1982).

Definition 2 The elementary row and column operations (E.O.) on a polynomial matrix A(z) are:
(I) multiply any row (column) by a nonzero number;
(II) interchange any two rows (columns);
(III) add to any row (column) any other row (column) multiplied by an arbitrary polynomial b(z).

Performing elementary row (column) operations is equivalent to premultiplying (postmultiplying) A(z) by appropriate matrices which are called elementary matrices. Specifically, for dimension of A(z) equal to n = 3, we have:

    E_1 = [ 1  0  0          E_2 = [ 0  1  0
            0  c  0    and           1  0  0
            0  0  1 ]                0  0  1 ],

corresponding to elementary operations of type I and type II, respectively;

    E_3 = [ 1  b(z)  0          E_4 = [ 1     0  0
            0  1     0    or            b(z)  1  0
            0  0     1 ]                0     0  1 ],

corresponding to elementary operations of type III.

Definition 3 A polynomial matrix U(z) is called unimodular if its determinant is a nonzero constant. Note that elementary matrices are unimodular and their inverses are also elementary matrices.

Definition 4 The Diagonal Form of a polynomial matrix B(z) is a diagonal polynomial matrix Λ(z) such that B(z) = U(z)Λ(z)V(z), where U(z) and V(z) are unimodular matrices.

Theorem 10 The Diagonal Form can be derived via the elementary operations.

Proof. See Gohberg et al (1982). □
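The unimodularity of the elementary matrices noted after Definition 3 can be checked directly; a small sketch (sympy, with the matrices named E_1, E_2, E_3 as above and an arbitrary b(z) of our choosing) follows.

```python
import sympy as sp

z, c = sp.symbols('z c')        # c: the nonzero scalar of a type I operation
b = 2 * z + 1                   # an arbitrary polynomial b(z) for type III

E1 = sp.Matrix([[1, 0, 0], [0, c, 0], [0, 0, 1]])          # type I
E2 = sp.Matrix([[0, 1, 0], [1, 0, 0], [0, 0, 1]])          # type II
E3 = sp.Matrix([[1, b, 0], [0, 1, 0], [0, 0, 1]])          # type III

# Determinants are nonzero constants (free of z), so each is unimodular
assert E1.det() == c and E2.det() == -1 and E3.det() == 1
assert not E3.det().has(z)

# The inverse of an elementary matrix is again elementary, e.g. for E3:
expected = sp.Matrix([[1, -b, 0], [0, 1, 0], [0, 0, 1]])
assert sp.simplify(E3.inv() - expected) == sp.zeros(3, 3)
```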

5.3 The Smith-McMillan Form of a Polynomial Matrix

A form more general than the Diagonal Form is the Smith-McMillan Form. We give two definitions first.

Definition 5 A monic polynomial is a polynomial whose highest-degree term has coefficient 1.

Definition 6 The (normal) rank of a polynomial matrix C(z) is the highest rank that C(z) attains over all z.

Theorem 11 A polynomial matrix C(z) can be written in the Smith-McMillan decomposition as

    C(z) = V⁻¹(z)M(z)U⁻¹(z),

where V(z), U(z) are unimodular matrices and M(z) is the Smith canonical form of C(z), i.e.

    M(z) = [ diag{s_1(z), ..., s_r(z)}, 0 ; 0, 0 ].

Here r is the (normal) rank of C(z) and the {s_i(z)} are unique monic polynomials obeying the division property that s_i(z) is divisible by s_{i−1}(z), for all i = 2, ..., r. Moreover, by defining Δ_i(z) as the greatest common divisor of all i × i minors of C(z), we have

    s_i(z) = Δ_i(z)/Δ_{i−1}(z),   Δ_0(z) = 1.

Proof. See Kailath (1980). □

Remark 1 We are mainly interested in polynomial matrices which will be assumed to be of full (normal) rank. The polynomials s_i(z) are the invariant polynomials of C(z); the invariance refers to elementary equivalence transformations. The Smith form of a polynomial matrix can be found either by performing elementary operations or by finding its invariant polynomials.

Remark 2 Note that since U(z) and V(z) are unimodular matrices, all the roots of the equation |C(z)| = 0 are to be found in M(z).

Remark 3 We are interested in unit roots and will factor from M(z) all the unit roots as M(z) = Λ(z)N(z), where Λ(z) contains all the unit roots of the system and N(z) is such that |N(1)| ≠ 0 and contains the rest of the roots of the system. The Smith-McMillan decomposition that we shall use is the following: C(z) = V⁻¹(z)Λ(z)N(z)U⁻¹(z). Note that now N(z)U⁻¹(z) is, in general, no longer a unimodular matrix.
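For the example A(z) of the previous section, the invariant polynomials can be obtained from the gcd formula of Theorem 11; the following sketch (sympy, illustrative only) computes s_1 and s_2 and confirms that M(z) collects both roots z = 1 and z = 4/3.

```python
import sympy as sp
from functools import reduce

z = sp.symbols('z')
A = sp.Matrix([[1 - sp.Rational(5, 4) * z, sp.Rational(1, 4) * z],
               [-sp.Rational(1, 2) * z, 1 - sp.Rational(1, 2) * z]])

# Delta_1(z): gcd of all 1 x 1 minors (the entries); Delta_0(z) = 1
delta1 = reduce(sp.gcd, [A[i, j] for i in range(2) for j in range(2)])
# Delta_2(z): the single 2 x 2 minor, i.e. det A(z), made monic
delta2 = sp.Poly(A.det(), z).monic().as_expr()

s1 = sp.cancel(delta1)                     # s_1 = Delta_1 / Delta_0 = 1
s2 = sp.cancel(delta2 / s1)                # s_2 = Delta_2 / Delta_1

assert s1 == 1
assert set(sp.solve(s2, z)) == {1, sp.Rational(4, 3)}
# So M(z) = diag{1, (z - 1)(z - 4/3)}; factoring out the unit root as in
# Remark 3 leaves, up to nonzero constants, the Lambda(z) = diag{1, 1 - z}
# of the example, with |N(1)| != 0
```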


References

[1] Anderson, T. W. (1971) The Statistical Analysis of Time Series. New York: John Wiley & Sons.
[2] Banerjee A., Dolado J., Galbraith J. W. and Hendry D. F. (1993) Co-integration, Error-Correction and the Econometric Analysis of Non-Stationary Data. Oxford: Oxford University Press.
[3] d'Autume, A. (1990) Cointegration of Higher Orders: A Clarification, DELTA, Working Paper 90-22.
[4] Engle R. F. and Granger C. W. J. (1987) Co-integration and Error Correction: Representation, Estimation and Testing, Econometrica, 55, 251-276.
[5] Engle R. F. and Yoo B. S. (1991) Cointegrated Economic Time Series: An Overview with New Results. In Engle R. F. and Granger C. W. J. (eds.), Long-Run Economic Relationships, Readings in Cointegration, 237-266. Oxford: Oxford University Press.
[6] Gantmacher, F. R. (1959) The Theory of Matrices, vols. I and II. New York: Chelsea.
[7] Gohberg I., Lancaster P. and Rodman L. (1982) Matrix Polynomials. New York: Academic Press.
[8] Granger C. W. J. and Lee T.-H. (1990) Multicointegration. In Rhodes G. F. and Fomby T. B. (eds.), Advances in Econometrics: Cointegration, Spurious Regressions and Unit Roots, 71-84. JAI Press.
[9] Gregoir S. and Laroque G. (1993) Multivariate Time Series: A Polynomial Error Correction Theory, Econometric Theory, 9, 329-342.
[10] Haldrup N. and Salmon M. (1998) Representations of I(2) Cointegrated Systems Using the Smith-McMillan Form, Journal of Econometrics, 84, 303-325.
[11] Hansen P. R. and Johansen S. (1998) Workbook on Cointegration. Oxford: Oxford University Press.
[12] Hendry D. F. (1995) Dynamic Econometrics. Oxford: Oxford University Press.
[13] Hylleberg S. and Mizon G. (1989) Cointegration and Error Correction Mechanisms, The Economic Journal, 99, 113-125.
[14] Johansen S. (1991) Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models, Econometrica, 59, 1551-1580.
[15] Johansen S. (1992) A Representation of Vector Autoregressive Processes Integrated of Order 2, Econometric Theory, 8, 188-202.
[16] Johansen S. (1995) A Statistical Analysis of Cointegration for I(2) Variables, Econometric Theory, 11, 25-59.
[17] Johansen S. (1996) Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, 2nd edition. Oxford: Oxford University Press.
[18] Johansen S. (1997) Likelihood Analysis of the I(2) Model, Scandinavian Journal of Statistics, 24, 433-462.
[19] Kailath T. (1980) Linear Systems. Englewood Cliffs, NJ: Prentice-Hall.
[20] Kitamura Y. (1995) Estimation of Cointegrated Systems with I(2) Processes, Econometric Theory, 11, 1-24.
[21] Paruolo, P. (1996) On the Determination of Integration Indices in I(2) Systems, Journal of Econometrics, 72, 313-356.
