
Renormalization Group and the Ginzburg-Landau Equation

J. Bricmont, UCL, Physique Théorique, B-1348 Louvain-la-Neuve, Belgium
A. Kupiainen, Rutgers University, Mathematics Department, New Brunswick NJ 08903, USA

Abstract

We use Renormalization Group methods to prove detailed long time asymptotics for the solutions of the complex Ginzburg-Landau equation with initial data approaching, as x → ±∞, different spiralling stationary solutions. A universal pattern is formed, depending only on this asymptotics at spatial infinity.

1. Introduction

Parabolic PDE's often exhibit universal scaling behaviour at long times: the solution behaves as $u(x,t) \sim t^{-\alpha/2} f(t^{-\beta/2}x)$ as $t \to \infty$, where the exponents $\alpha$ and $\beta$ and the function $f$ are universal, i.e. independent of the initial data and the equation, within given classes. This fact has an explanation in terms of the Renormalization Group (RG) [5, 6, 1], very much like the similar phenomenon in statistical mechanics. In [1] a mathematical theory of this RG was developed, and here we would like to apply these ideas to a concrete situation, namely the complex Ginzburg-Landau equation

$$\dot u = \partial^2 u + u - |u|^2 u \tag{1}$$

where $u : \mathbf{R} \times \mathbf{R}_+ \to \mathbf{C}$, $\partial = \partial_x$ and the dot denotes the time derivative. (1) has an infinite set of stationary solutions

$$u_q(x) = \sqrt{1-q^2}\, e^{iqx} \tag{2}$$

and a natural question is to inquire about the time development of initial data $u(x)$ that approach two such solutions at $\pm\infty$:

$$\lim_{x \to \pm\infty} |u(x) - u_{q_\pm}(x)| = 0. \tag{3}$$

Supported by NSF grant DMS-8903041
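That the $u_q$ of (2) are indeed stationary is immediate: $\partial^2 u_q = -q^2 u_q$ and $u_q - |u_q|^2 u_q = q^2 u_q$, so the right-hand side of (1) vanishes. A minimal numerical sketch of this cancellation (the wavenumber $q = 0.3$ and the sample points are arbitrary choices, not from the paper):

```python
import cmath
import math

def gl_rhs(u, d2u):
    # right-hand side of (1): u'' + u - |u|^2 u
    return d2u + u - abs(u) ** 2 * u

q = 0.3                                   # arbitrary wavenumber, |q| < 1
amp = math.sqrt(1.0 - q ** 2)             # sqrt(1 - q^2)
for x in (-2.0, 0.0, 1.5):
    u = amp * cmath.exp(1j * q * x)       # u_q(x) = sqrt(1-q^2) e^{iqx}
    d2u = -q ** 2 * u                     # exact second derivative of u_q
    assert abs(gl_rhs(u, d2u)) < 1e-12    # u_q is a stationary solution
print("u_q stationary: OK")
```

The cancellation is exact, so the residual is at the level of floating-point rounding.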

Draft version: 4/25/1997, 22:07

We prove in this paper a detailed long time asymptotics for such data. As a consequence, we shall show that, for any interval $I$,

$$\sup_{x \in I} \big|u(x,t) - e^{i(\gamma\sqrt t + \alpha)}\, u_q(x)\big| \le \frac{C_I}{\sqrt t} \tag{4}$$

where the constants $q$, $\gamma$ and $\alpha$ depend only on the boundary conditions (3). For the detailed asymptotics, see the Theorem in Section 3. This result amplifies those proved in [3].

2. The Renormalisation Group idea

Following [3], we write

$$u = (1-s)\, e^{i\varphi}. \tag{1}$$

The equation (1.1) becomes in these variables

$$\dot s = \partial^2 s - 2s + 3s^2 - s^3 + (\partial\varphi)^2 - s(\partial\varphi)^2 \equiv \partial^2 s - 2s + F(s, \partial\varphi)$$
$$\dot\varphi = \partial^2\varphi - \frac{2\,\partial\varphi\,\partial s}{1-s} \equiv \partial^2\varphi + G(s, \partial s, \partial\varphi) \tag{2}$$

with the initial data (it will be convenient to take the initial time as $t = 1$)

$$\lim_{x\to\pm\infty} s(x,1) = s_\pm, \qquad \lim_{x\to\pm\infty} |\varphi(x,1) - q_\pm x - \theta_\pm| = 0, \tag{3}$$

where $2s_\pm = F(s_\pm, q_\pm)$. We will specify below the precise space of initial data (3). We will prove the following asymptotics as $t \to \infty$:

$$\varphi(x,t) = \sqrt t\, \varphi^*\Big(\frac{x}{\sqrt t}\Big) + \psi^*\Big(\frac{x}{\sqrt t}\Big) + O\Big(\frac{1}{\sqrt t}\Big) \tag{4}$$
$$s(x,t) = s^*\Big(\frac{x}{\sqrt t}\Big) + \frac{1}{\sqrt t}\, r^*\Big(\frac{x}{\sqrt t}\Big) + O\Big(\frac{1}{t}\Big) \tag{5}$$

again in a certain Banach space. The functions $\varphi^*$, $\psi^*$, $s^*$, $r^*$ are universal, depending on the initial data only through the boundary conditions (3). They are smooth and therefore $u$ will have the asymptotics (1.4), with

$$\gamma = \varphi^*(0), \qquad q = \varphi^{*\prime}(0), \qquad \alpha = \psi^*(0).$$
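The polar form (2) can be verified directly: for $u = (1-s)e^{i\varphi}$ one has $\dot u = (-\dot s + i(1-s)\dot\varphi)e^{i\varphi}$, and matching this against $\partial^2 u + u - |u|^2 u$ gives exactly the stated equations for $\dot s$ and $\dot\varphi$. A small numerical sketch of this identity (the profiles $s$, $\varphi$ below are arbitrary smooth test functions with hand-computed derivatives, not solutions of the equation):

```python
import cmath
import math

def fields(x):
    # arbitrary smooth test profiles with analytic derivatives
    s = 0.1 * math.exp(-x * x)            # s(x)
    s1 = -2.0 * x * s                     # s'
    s2 = (4.0 * x * x - 2.0) * s          # s''
    p = 0.2 * math.sin(x)                 # phi(x)
    p1 = 0.2 * math.cos(x)                # phi'
    p2 = -0.2 * math.sin(x)               # phi''
    return s, s1, s2, p, p1, p2

def residual(x):
    s, s1, s2, p, p1, p2 = fields(x)
    e = cmath.exp(1j * p)
    u = (1.0 - s) * e
    # analytic u'' for u = (1-s) e^{i phi}
    u2 = (-s2 - 2j * s1 * p1 + 1j * (1.0 - s) * p2 - (1.0 - s) * p1 ** 2) * e
    udot = u2 + u - abs(u) ** 2 * u       # u-dot prescribed by (1.1)
    # u-dot induced by the claimed equations (2) for s-dot and phi-dot
    sdot = s2 - 2.0 * s + 3.0 * s ** 2 - s ** 3 + p1 ** 2 - s * p1 ** 2
    pdot = p2 - 2.0 * p1 * s1 / (1.0 - s)
    return abs(udot - (-sdot + 1j * (1.0 - s) * pdot) * e)

for x in (-1.3, 0.0, 0.4, 2.1):
    assert residual(x) < 1e-12
print("polar form of (1.1): OK")
```

Both sides are evaluated from the same analytic derivatives, so the identity holds to rounding accuracy.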

Before going to the proofs, we will explain in a heuristic fashion the RG idea behind the proof. There are two mechanisms giving rise to the asymptotics (4), (5): the diffusive approach of $\varphi$ to the scaling form (4), and the "slaving" of $s$ to follow whatever $\varphi$ is doing, due to the linear $-2s$ term in (2). We will now explain the first mechanism in the context of an equation for $\varphi$ only. We will see in the next Section that this is precisely what the slaving mechanism produces: $s$ in (2) will be effectively "slaved" to a function of $\partial\varphi$ only.


Thus, we consider the equation

$$\dot\varphi = \big(1 + a(\partial\varphi, \partial^2\varphi)\big)\,\partial^2\varphi, \qquad \varphi(x,1) = f(x) \tag{6}$$

with the boundary conditions (3) for the initial data $f$. We assume that $a$ is analytic around the origin. The RG analysis of (6) proceeds as follows. We fix some (Banach) space of initial data $S$. Next, we pick a number $L > 1$ and set

$$\varphi_L(x,t) = L^\alpha\, \varphi(Lx, L^2 t) \tag{7}$$

where $\alpha$ will be chosen later and $\varphi$ solves (6) with the initial data $f \in S$. The RG map $R : S \to S$ (this has to be proven!) is then

$$(Rf)(x) = \varphi_L(x,1). \tag{8}$$

Note that $\varphi_L$ satisfies

$$\dot\varphi_L = \big(1 + a_L(\partial\varphi_L, \partial^2\varphi_L)\big)\,\partial^2\varphi_L \tag{9}$$

with $a_L(x,y) = a(L^{-1-\alpha}x,\, L^{-2-\alpha}y)$. We may now iterate $R$ to study the asymptotics of (6). $R$ depends, besides $\alpha$, on $L$ and $a$. Let us denote this by $R_{L,a}$. We have then the "semigroup property"

$$R_{L^n,a} = R_{L,a_{L^{n-1}}} \circ \cdots \circ R_{L,a_L} \circ R_{L,a}. \tag{10}$$

Each $R$ on the RHS involves the solution of a fixed time problem, and the long time problem on the LHS is reduced to an iteration of these. Letting $t = L^{2n}$, we have

$$\varphi(x,t) = t^{-\alpha/2}\, (R_{L^n,a} f)(x t^{-1/2}). \tag{11}$$

Now one tries to show that there exists an $\alpha$ such that

$$a_{L^n} \to a^*, \qquad R_{L^n,a} f \to f^* \tag{12}$$

where

$$R_{L,a^*} f^* = f^* \tag{13}$$

is the fixed point of the RG, corresponding to the scale-invariant equation $\dot\varphi = (1 + a^*)\partial^2\varphi$. Then, rescaling $x$, the asymptotics of the original problem is given by

$$\varphi(x t^{1/2}, t) \approx t^{-\alpha/2} f^*(x). \tag{14}$$

What are $\alpha$ and $f^*$? To understand this, consider first the trivial case $a = 0$. This is just the diffusion equation with initial conditions increasing linearly at infinity. This problem is of course exactly soluble. We have

$$t^{-1/2}\varphi(\sqrt t\, x, t+1) = \frac{1}{\sqrt{4\pi}} \int e^{-\frac14 (y-x)^2}\, \frac{1}{\sqrt t} f(\sqrt t\, y)\, dy \;\longrightarrow_{t\to\infty}\; q_- x + (q_+ - q_-)\Big(x + 2\frac{d}{dx}\Big) e(x) \equiv \varphi_0^*(x) \tag{15}$$
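The limit (15) can be illustrated numerically: for the exactly self-similar piecewise-linear data $f(y) = q_\pm y$ ($y \gtrless 0$), the rescaled heat evolution reproduces $\varphi_0^*$ at any time. In the sketch below the slopes $q_\pm$, the time $t$ and the quadrature parameters are arbitrary illustrative choices:

```python
import math

q_minus, q_plus = -0.3, 0.5               # arbitrary boundary slopes

def f(y):                                  # initial data, f(y) ~ q± y as y -> ±inf
    return (q_minus if y < 0 else q_plus) * y

def heat(x, t, h=0.005, w=40.0):
    # (e^{t d^2} f)(x): trapezoid quadrature against the heat kernel
    n = int(2 * w / h)
    tot = 0.0
    for i in range(n + 1):
        y = x - w + i * h
        wgt = 0.5 if i in (0, n) else 1.0
        tot += wgt * math.exp(-(x - y) ** 2 / (4.0 * t)) * f(y)
    return tot * h / math.sqrt(4.0 * math.pi * t)

def e_fn(z):                               # e(z) = int_{-inf}^z e^{-y^2/4} dy / sqrt(4 pi)
    return 0.5 * (1.0 + math.erf(z / 2.0))

def phi0(z):                               # q_- z + (q_+ - q_-)(z + 2 d/dz) e(z)
    return q_minus * z + (q_plus - q_minus) * (
        z * e_fn(z) + math.exp(-z * z / 4.0) / math.sqrt(math.pi))

t = 4.0
for z in (-1.0, 0.0, 0.7, 2.0):
    assert abs(heat(z * math.sqrt(t), t) / math.sqrt(t) - phi0(z)) < 1e-5
print("Gaussian profile (15): OK")
```

Here $(z + 2\,d/dz)e(z) = z\,e(z) + 2e'(z)$ was written out using $e(z) = \tfrac12(1 + \mathrm{erf}(z/2))$.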


where $e(x) = \int_{-\infty}^x e^{-y^2/4}\, \frac{dy}{\sqrt{4\pi}}$. In RG terminology, we have the "Gaussian" fixed point corresponding to this boundary condition problem. It is easy to check that $\varphi_0^*$ is a fixed point for the map (7) with $\alpha = -1$. How about $a \ne 0$? We take again $\alpha = -1$, whence the fixed point equation is

$$\dot\varphi = \big(1 + a_0(\partial\varphi)\big)\,\partial^2\varphi \tag{16}$$

where $a_0(\cdot) = a(\cdot, 0)$ and the fixed point is the scale invariant solution

$$\varphi(x,t) = \sqrt t\, \varphi^*\Big(\frac{x}{\sqrt t}\Big). \tag{17}$$

We get (replacing $x/\sqrt t$ by $z$)

$$(1 + a^*)\,\frac{d^2\varphi^*}{dz^2} + \frac12\, z\, \frac{d\varphi^*}{dz} - \frac12\, \varphi^* = 0 \tag{18}$$

with $a^* = a_0(d\varphi^*/dz)$ and, for small $q_\pm$, we look for a solution

$$\varphi^* = \varphi_0^* + \chi \tag{19}$$

where $\chi(\pm\infty) = 0$ and $\varphi_0^*$ is the Gaussian solution (15), which solves (18) with $a = 0$. This is easy to solve (see Proposition 1 below, or the Proposition in Section 4 of [1]).

Consider then $\psi$. We set

$$\varphi(x,t) = \varphi^*(x,t) + \psi(x,t) \tag{20}$$

where, with an abuse of notation, $\varphi^*(x,t) = \sqrt t\, \varphi^*(x/\sqrt t)$ and $\varphi^*$ is given by (19); $\varphi$ solves (6). Then,

$$\dot\psi = \partial^2\psi + \big(a\,\partial^2\varphi - a^*\,\partial^2\varphi^*\big) \tag{21}$$

with $\psi(\cdot,1) = \psi_1$. Now we set

$$\psi_L(x,t) = \psi(Lx, L^2 t) \tag{22}$$

which, using

$$\partial^l \varphi^*(Lx, L^2 t) = L^{1-l}\, \partial^l \varphi^*(x,t), \tag{23}$$

satisfies the equation

$$\dot\psi_L = \partial^2\psi_L + L\Big(a\big(\partial\varphi^* + L^{-1}\partial\psi_L,\; L^{-1}\partial^2\varphi^* + L^{-2}\partial^2\psi_L\big)\big(\partial^2\varphi^* + L^{-1}\partial^2\psi_L\big) - a_0(\partial\varphi^*)\,\partial^2\varphi^*\Big). \tag{24}$$
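The claim above that $\varphi_0^*$ is a fixed point of the map (7), (8) (with $\alpha = -1$, $a = 0$) can also be illustrated numerically: applying $(Rf)(x) = L^{-1}\big(e^{(L^2-1)\partial^2} f\big)(Lx)$ to $\varphi_0^*$ returns $\varphi_0^*$. In the sketch below, $L = 2$, the slopes $q_\pm$ and the quadrature parameters are arbitrary illustrative choices:

```python
import math

q_minus, q_plus = -0.3, 0.5               # arbitrary boundary slopes

def e_fn(z):
    return 0.5 * (1.0 + math.erf(z / 2.0))

def phi0(z):                               # Gaussian profile (15)
    return q_minus * z + (q_plus - q_minus) * (
        z * e_fn(z) + math.exp(-z * z / 4.0) / math.sqrt(math.pi))

def rg(f, L, x, h=0.005, w=40.0):
    # (R f)(x) = L^{-1} (e^{(L^2-1) d^2} f)(L x): the map (7), (8) with alpha = -1, a = 0
    t = L * L - 1.0
    n = int(2 * w / h)
    tot = 0.0
    for i in range(n + 1):
        y = L * x - w + i * h
        wgt = 0.5 if i in (0, n) else 1.0
        tot += wgt * math.exp(-(L * x - y) ** 2 / (4.0 * t)) * f(y)
    return tot * h / (L * math.sqrt(4.0 * math.pi * t))

for x in (-1.0, 0.0, 0.8):
    assert abs(rg(phi0, 2.0, x) - phi0(x)) < 1e-5
print("phi0 is an RG fixed point: OK")
```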

Thus, reasoning as above, we expect

$$\psi_{L^n} \to \psi^*$$

where $\psi^*(x,t) = \psi^*(x/\sqrt t)$ satisfies the $L \to \infty$ form of (24):

$$\dot\psi = \partial^2\psi + a_0(\partial\varphi^*)\,\partial^2\psi + \big(a_x(\partial\varphi^*, 0)\,\partial\psi + a_y(\partial\varphi^*, 0)\,\partial^2\varphi^*\big)\,\partial^2\varphi^*. \tag{25}$$

This is a linear equation, easy to solve, whose solution is a small perturbation of the "Gaussian" solution (which solves (25) with $a = 0$),

$$\psi_0^*(x) = \theta_- + (\theta_+ - \theta_-)\, e(x). \tag{26}$$

Note that $\psi_0^*$ has the same form as $\varphi_0^{*\prime}$, with $\varphi_0^*$ given by (15) and $q_\pm$ replaced by $\theta_\pm$, and $\psi_0^*$ is a fixed point of the map

$$(R_L \psi)(x) = \big(e^{(L^2-1)\partial^2}\psi\big)(Lx), \tag{27}$$

i.e. of (7), (8) with $\alpha = 0$ and $a = 0$. Finally, one sets

$$\varphi(x,t) = \varphi^*(x,t) + \psi^*(x,t) + \eta(x,t) = \sqrt t\, \varphi^*\Big(\frac{x}{\sqrt t}\Big) + \psi^*\Big(\frac{x}{\sqrt t}\Big) + \eta(x,t) \tag{28}$$

and one studies the asymptotics of $\eta$ in the same manner. The last two steps ($\psi$ and $\eta$) were carried out in [1] in great detail. We will now combine this RG approach with the slaving principle to arrive at (4), (5).

3. The full model

We may summarize the previous discussion in a more conventional language: we look for solutions of (2.2) of the form

$$\varphi(x,t) = \varphi^*(x,t) + \psi^*(x,t) + \eta(x,t)$$
$$s(x,t) = s^*(x,t) + r^*(x,t) + v(x,t) \tag{1}$$

with

$$\varphi^*(x,t) = \sqrt t\, \varphi^*\Big(\frac{x}{\sqrt t}\Big), \qquad \psi^*(x,t) = \psi^*\Big(\frac{x}{\sqrt t}\Big),$$
$$s^*(x,t) = s^*\Big(\frac{x}{\sqrt t}\Big), \qquad r^*(x,t) = \frac{1}{\sqrt t}\, r^*\Big(\frac{x}{\sqrt t}\Big) \tag{2}$$

(we stick to this abuse of notation in order not to proliferate symbols, but we shall use $\frac{d}{dz}$ or a prime for the derivative with respect to the single argument of $\varphi^*$, $\psi^*$, etc., as opposed to $\partial$ for the partial derivative with respect to $x$). The boundary conditions are

$$\lim_{x\to\pm\infty} |\varphi^*(x) - q_\pm x| = 0, \qquad \lim_{x\to\pm\infty} |\psi^*(x) - \theta_\pm| = 0,$$
$$\lim_{x\to\pm\infty} |s^*(x) - s_\pm| = 0, \qquad \lim_{x\to\pm\infty} r^*(x) = 0, \tag{3}$$

and are solved by a fixed point method near the Gaussian solutions

$$\varphi_0^*(z) = q_- z + (q_+ - q_-)\Big(z + 2\frac{d}{dz}\Big) e(z)$$
$$\psi_0^*(z) = \theta_- + (\theta_+ - \theta_-)\, e(z) \tag{4}$$

and $s^*$ and $r^*$ are "slaved" to them.

Thus, let us set

$$s^* = s^*(\partial\varphi^*), \tag{5}$$

where $s^*$ solves the algebraic equation $-2s^* + F(s^*, \partial\varphi^*) = 0$, which is equivalent to $1 - s^* = \sqrt{1 - (\partial\varphi^*)^2}$, so that $s^*$ is of order $(\partial\varphi^*)^2$ for $(\partial\varphi^*)$ small. Note that, by (2), $\partial\varphi^*(x,t) = \varphi^{*\prime}(x/\sqrt t)$, so $s^*(x/\sqrt t) = s^*(\varphi^{*\prime})$. Then $\varphi^*$ is the solution of $\dot\varphi = \partial^2\varphi + G(s^*, \partial s^*, \partial\varphi)$, i.e., using (2),

$$B\varphi^* \equiv \Big(-\frac{d^2}{dz^2} - \frac12\, z\, \frac{d}{dz} + \frac12\Big)\varphi^* = -2\, \frac{\frac{ds^*}{dz}\,\frac{d\varphi^*}{dz}}{1-s^*} = -2\, \frac{\big(\frac{d\varphi^*}{dz}\big)^2\, \frac{d^2\varphi^*}{dz^2}}{1 - \big(\frac{d\varphi^*}{dz}\big)^2}. \tag{6}$$

We set

$$\varphi^* = \varphi_0^* + \chi \tag{7}$$

and solve the fixed point equation (6) in the space of $C^N$ functions equipped with the norm

$$\|\chi\|_N = \max_{m \le N}\, \sup_z\, \Big|\frac{d^m \chi(z)}{dz^m}\Big|\, e^{z^2/8}. \tag{8}$$

Then we have

Proposition 1. For any $N$, there exists an $\epsilon > 0$ such that, for $|q_\pm| \le \epsilon$ in (3), (6) has a unique solution with

$$\|\chi\|_N \le C\epsilon^3 \tag{9}$$

and then

$$\|s^*\|_N \le C\epsilon^2. \tag{10}$$
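The algebraic slaving equation in (5) can be checked directly: with $F(s,\kappa) = 3s^2 - s^3 + \kappa^2 - s\kappa^2$, the root of $-2s + F(s,\kappa) = 0$ near $s = 0$ is $s = 1 - \sqrt{1-\kappa^2} = \kappa^2/2 + O(\kappa^4)$. A minimal sketch (the $\kappa$ values are arbitrary test points):

```python
import math

def F(s, kappa):
    # F(s, dphi) = 3 s^2 - s^3 + (dphi)^2 - s (dphi)^2, from (2.2)
    return 3.0 * s ** 2 - s ** 3 + kappa ** 2 - s * kappa ** 2

for kappa in (0.0, 0.05, 0.2, 0.4):
    s = 1.0 - math.sqrt(1.0 - kappa ** 2)   # claimed root of -2 s + F(s, kappa) = 0
    assert abs(-2.0 * s + F(s, kappa)) < 1e-12
    assert abs(s - kappa ** 2 / 2.0) <= kappa ** 4   # s = kappa^2/2 + O(kappa^4)
print("slaving equation (5): OK")
```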

Next, we turn to $\psi^*$ and $r^*$. Set

$$\varphi = \varphi^* + \psi, \qquad s = s^* + r, \tag{11}$$

whereby

$$\dot\psi = \partial^2\psi + G(s, \partial s, \partial\varphi) - G(s^*, \partial s^*, \partial\varphi^*) \tag{12}$$

and

$$\dot r = \partial^2 r - 2r + F(s, \partial\varphi) - F(s^*, \partial\varphi^*) + H_1 \tag{13}$$

where

$$H_1 = \partial^2 s^* - \partial_t s^* = t^{-1}\Big(s^{*\prime\prime}\Big(\frac{x}{\sqrt t}\Big) + \frac{x}{2\sqrt t}\, s^{*\prime}\Big(\frac{x}{\sqrt t}\Big)\Big). \tag{14}$$

Note that $H_1(x,t) = t^{-1} H_1(x/\sqrt t)$. $r^*$ is now taken (recall (2)) to be the part of (13) proportional to $t^{-1/2}$: we solve

$$2 r^* = DF_{s^*,\partial\varphi^*}(r^*, \partial\psi^*) \tag{15}$$

for $r^*$, with $r^*(x,t) = \frac{1}{\sqrt t}\, r^*(\varphi^{*\prime}(x/\sqrt t))$, where $DF$ is the derivative of $F$, i.e.

$$DF_{s^*,\partial\varphi^*}(r, \partial\psi) = 6 s^* r - 3 s^{*2} r + 2\,\partial\varphi^*\,\partial\psi - 2 s^*\,\partial\varphi^*\,\partial\psi - (\partial\varphi^*)^2\, r.$$

Thus $r^*$ is of order $\epsilon\,\partial\psi^*$, and we let $\psi^*$ solve

$$\dot\psi = \partial^2\psi + DG_{s^*,\partial s^*,\partial\varphi^*}(r^*, \partial r^*, \partial\psi) \tag{16}$$

where $DG$ is the derivative of $G$, i.e.

$$DG_{s^*,\partial s^*,\partial\varphi^*}(r, \partial r, \partial\psi) = -\frac{2\,\partial s^*\,\partial\varphi^*}{(1-s^*)^2}\, r - \frac{2\,\partial s^*}{1-s^*}\,\partial\psi - \frac{2\,\partial\varphi^*}{1-s^*}\,\partial r,$$

and we get

$$\Big(-\frac{d^2}{dz^2} - \frac12\, z\, \frac{d}{dz}\Big)\psi^* = -\frac{2 s^{*\prime}\varphi^{*\prime}}{(1-s^*)^2}\, r^* - \frac{2 s^{*\prime}}{1-s^*}\,\psi^{*\prime} - \frac{2\varphi^{*\prime}}{1-s^*}\, r^{*\prime}. \tag{17}$$

This is solved by setting

$$\psi^* = \psi_0^* + \omega \tag{18}$$

and we have

Proposition 2. For any $N$, there exists an $\epsilon > 0$ such that, for $|q_\pm|, |\theta_\pm| \le \epsilon$ in (3), and $s^*$, $\varphi^*$ as in Proposition 1, (16) has a unique solution with

$$\|\omega\|_N \le C\epsilon^3 \tag{19}$$

and then

$$\|r^*\|_N \le C\epsilon^2. \tag{20}$$
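The expression for $DF$ above is simply the directional derivative of the cubic polynomial $F$; a finite-difference sketch confirms it (the evaluation point and direction are arbitrary illustrative values):

```python
import math

def F(s, k):
    # F(s, dphi) from (2.2)
    return 3.0 * s ** 2 - s ** 3 + k ** 2 - s * k ** 2

def DF(s, k, r, mu):
    # claimed derivative of F at (s, k) in the direction (r, mu)
    return 6.0 * s * r - 3.0 * s ** 2 * r + 2.0 * k * mu - 2.0 * s * k * mu - k ** 2 * r

s, k, r, mu = 0.07, 0.2, 0.3, -0.5        # arbitrary point and direction
tau = 1e-6
num = (F(s + tau * r, k + tau * mu) - F(s - tau * r, k - tau * mu)) / (2.0 * tau)
assert abs(num - DF(s, k, r, mu)) < 1e-7
print("DF is the derivative of F: OK")
```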

We can now write the final equations that we will study. In accordance with (1), we set

$$\psi = \psi^* + \eta, \qquad r = r^* + v, \tag{21}$$

whence, from (12) and (13), we get the equations

$$\dot\eta = \partial^2\eta + \big[G(s,\partial s,\partial\varphi) - G(s^*+r^*, \partial(s^*+r^*), \partial(\varphi^*+\psi^*))\big] + \big[G(s^*+r^*, \partial(s^*+r^*), \partial(\varphi^*+\psi^*)) - G(s^*,\partial s^*,\partial\varphi^*) - DG_{s^*,\partial s^*,\partial\varphi^*}(r^*,\partial r^*,\partial\psi^*)\big] \equiv \partial^2\eta + A(v,\partial v,\partial\eta) + D \tag{22}$$

$$\dot v = \partial^2 v - 2v + \big[F(s,\partial\varphi) - F(s^*+r^*, \partial(\varphi^*+\psi^*))\big] + \big[F(s^*+r^*, \partial(\varphi^*+\psi^*)) - F(s^*,\partial\varphi^*) - DF_{s^*,\partial\varphi^*}(r^*,\partial\psi^*)\big] + H \equiv \partial^2 v - 2v + B(v,\partial\eta) + E \tag{23}$$

where $H = H_1 + H_2$,

$$E = F(s^*+r^*, \partial(\varphi^*+\psi^*)) - F(s^*,\partial\varphi^*) - DF_{s^*,\partial\varphi^*}(r^*,\partial\psi^*) + H,$$

$$H_2 = \partial^2 r^* - \partial_t r^* = t^{-3/2}\Big(r^{*\prime\prime}\Big(\frac{x}{\sqrt t}\Big) + \frac{x}{2\sqrt t}\, r^{*\prime}\Big(\frac{x}{\sqrt t}\Big) + \frac12\, r^*\Big(\frac{x}{\sqrt t}\Big)\Big), \tag{24}$$

and we have separated the inhomogeneous terms in (22), (23) for future convenience.

We will now specify the initial data for (2.2) that we will consider. We take a perturbation of "Gaussian" data satisfying the boundary conditions,

$$\varphi(x) \equiv \varphi(x,1) = \varphi^*(x) + \psi^*(x) + \eta(x) \tag{25}$$
$$s(x) \equiv s(x,1) = s^*(\partial\varphi^*(x)) + r^*(\partial\varphi^*(x)) + v(x), \tag{26}$$

and describe now a Banach space for $\eta$ and $v$. Because the equations (22), (23) involve second derivatives in the nonlinear terms ($v$ is coupled to $\partial\eta$ and $\eta$ to $\partial v$), it is useful to use the Fourier transform to control smoothness. Because they involve functions, like $\psi^*$, that do not decay at infinity, we need to be careful about the spatial behaviour at infinity. We need norms that combine these two aspects, i.e. the phase space behaviour of $\eta$ and $v$. Thus, let $\bar\chi$ be a non-negative $C^\infty$ function on $\mathbf{R}$ with compact support in the interval $(-1,1)$, such that its translates by $\mathbf{Z}$, $\chi_n = \bar\chi(\cdot - n)$, form a partition of unity on $\mathbf{R}$. For $f \in C^2$, we then introduce the norms, for $j = 1, 2$,

$$\|f\|^{(j)} = \sup_{n\in\mathbf{Z},\, k\in\mathbf{R},\, i\le j}\; (1+n^4)(1+k^2)\, \big|\widehat{\chi_n\, \partial^i f}(k)\big|. \tag{27}$$

Roughly, $\|f\|^{(j)} < \infty$ means that $f$ falls off at least as $x^{-4}$ at infinity and $\hat f(k)$ at least as $k^{-2}$. Note that $r^*$, the derivatives of $\varphi^*$, $\psi^*$ and $s^*$, and $\partial^2\varphi^*$ have norm bounded by $C\epsilon$ (actually, by $C\epsilon^2$ for $s^*$ and $r^*$). To check this, use the explicit form (2.15), (2.26) of $\varphi_0^*$, $\psi_0^*$ and the bound $\|f\|^{(j)} \le C\,\|f\|_N$ for $N \ge j+2$. The $n^4$ could be changed to anything increasing faster than $n^\alpha$, $\alpha > 1$, but not faster than $e^{n^2/8}$ (coming from (8)). These norms for different choices of $\bar\chi$ may be shown to be equivalent. The reason we need different norms for $v$ and $\eta$ has to do with the slaving and will be explained below. Our main theorem may now be stated.

Theorem. For any $\delta > 0$ there is an $\epsilon > 0$ such that, for $|q_\pm|$, $|\theta_\pm|$, $\|\eta\|^{(2)}$, $\|v\|^{(1)} \le \epsilon$, the equations (2.2) have a unique solution satisfying

$$\lim_{t\to\infty} t^{1/2-\delta}\, \big\|\varphi(\sqrt t\,\cdot, t) - \sqrt t\, \varphi^*(\cdot) - \psi^*(\cdot)\big\|^{(2)} = 0$$
$$\lim_{t\to\infty} t^{1-\delta}\, \Big\|s(\sqrt t\,\cdot, t) - s^*(\varphi^{*\prime}(\cdot)) - \frac{1}{\sqrt t}\, r^*(\varphi^{*\prime}(\cdot))\Big\|^{(1)} = 0 \tag{28}$$

where $\varphi^*$ and $\psi^*$ are given in Propositions 1 and 2, and $s^*$ and $r^*$ in (5) and (15).

Remark 1. The convergence in the norm (27) implies convergence in $L^\infty$ and in $L^1_{\rm loc}$, see equations (47) and (48) below, applied to $i = 0$. Thus, using $1 - s = \sqrt{1 - (\partial\varphi)^2}$, we have, for any interval $I$,

$$\sup_{x\in I}\, \Big|u(x,t) - \sqrt{1-q^2}\; e^{i(xq + \sqrt t\,\varphi^*(0) + \psi^*(0))}\Big| \le C_I\, t^{-1/2}. \tag{29}$$

The "anomalous" term $\varphi^*(0)$ equals $\pi^{-1/2}(q_+ - q_-) + O(\epsilon^3)$ and

$$q = \varphi^{*\prime}(0) = \tfrac12\,(q_+ + q_-) + O(\epsilon^3).$$

Remark 2. The Theorem does not depend much on the specific form of $F$ and $G$ in (2.2). For a more general application of the slaving principle, see [2]. One obtains easily more refined asymptotics, as in [1]; namely, the error in the first equation in (28) is $t^{-1/2}\psi_1^*(\cdot) + O(t^{-1})$.

Proof. Using the Propositions (where we always assume that we have $N$ as large as needed), we can replace in (25), (26) $\varphi^*$, $\psi^*$ by the true fixed points, and still denote by $\eta$, $v$ the remainder. We consider the equations (22), (23) and solve them by the RG method. Thus, let

$$\eta_n(x,t) = \eta(L^n x, L^{2n} t), \qquad \Gamma_n(x) = \eta_n(x,1),$$
$$v_n(x,t) = L^n\, v(L^n x, L^{2n} t), \qquad V_n(x) = v_n(x,1). \tag{30}$$

We have

$$\Gamma_{n+1}(x) = \eta_n(Lx, L^2), \qquad V_{n+1}(x) = L\, v_n(Lx, L^2) \tag{31}$$

and $\eta_n$, $v_n$ are the solutions of the equations

$$\dot\eta_n = \partial^2\eta_n + A_n(v_n, \partial v_n, \partial\eta_n) + D_n \tag{32}$$
$$\dot v_n = \partial^2 v_n - 2L^{2n} v_n + B_n(v_n, \partial\eta_n) + E_n \tag{33}$$

obtained from (22), (23) upon the scaling (30). Concretely, let $s_n \equiv s^* + L^{-n} r^* + L^{-n} v_n$ and $\partial\varphi_n \equiv \partial\varphi^* + L^{-n}\partial(\psi^* + \eta_n)$. Then, using $s^*(L^n x, L^{2n} t) = s^*(x,t)$ and the corresponding scaling properties of $r^*$, $\partial\psi^*$, $\partial\varphi^*$, we get

$$A_n = L^{2n}\big[G(s_n, L^{-n}\partial s_n, \partial\varphi_n) - (\dots)\big|_{\eta_n = v_n = 0}\big] = \sum_{|p|>0} L^{n d_p}\, a_p\, v_n^{p_1} (\partial v_n)^{p_2} (\partial\eta_n)^{p_3} \tag{34}$$

where $p_2, p_3 = 0, 1$, $|p| = p_1 + p_2 + p_3$, $d_p = 1 - p_1 - p_2 - p_3$, and

$$a_p = -2\, \frac{(\partial s^* + L^{-n}\partial r^*)^{1-p_2}\, (\partial\varphi^* + L^{-n}\partial\psi^*)^{1-p_3}}{(1 - s^* - L^{-n} r^*)^{1+p_1}}. \tag{35}$$
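The leading-order values of the constants in Remark 1 come directly from the Gaussian profile (2.15): $\varphi_0^*(0) = (q_+ - q_-)/\sqrt\pi$ and $\varphi_0^{*\prime}(0) = (q_+ + q_-)/2$. A minimal numerical check (the slopes $q_\pm$ are arbitrary illustrative values):

```python
import math

q_minus, q_plus = -0.2, 0.4               # arbitrary boundary slopes

def e_fn(z):
    return 0.5 * (1.0 + math.erf(z / 2.0))

def phi0(z):                               # Gaussian profile (2.15)
    return q_minus * z + (q_plus - q_minus) * (
        z * e_fn(z) + math.exp(-z * z / 4.0) / math.sqrt(math.pi))

# gamma = phi0(0) = (q_+ - q_-)/sqrt(pi)
assert abs(phi0(0.0) - (q_plus - q_minus) / math.sqrt(math.pi)) < 1e-12
# q = phi0'(0) = (q_+ + q_-)/2, via a central difference
h = 1e-5
d = (phi0(h) - phi0(-h)) / (2.0 * h)
assert abs(d - (q_plus + q_minus) / 2.0) < 1e-8
print("leading-order constants: OK")
```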

In the same way,

$$D_n = \sum_{|p|>1} L^{n d_p}\, \tilde a_p\, (r^*)^{p_1} (\partial r^*)^{p_2} (\partial\psi^*)^{p_3}, \qquad \tilde a_p = a_p\big|_{r^* = 0}. \tag{36}$$

Note that in (34) each term has $d_p \le 0$, while in (36) we have $d_p \le -1$. $B_n$ has a similar expansion,

$$B_n = L^{3n}\big[F(s_n, \partial\varphi_n) - (\dots)\big|_{v_n = \eta_n = 0}\big] = \sum_{|p|>0} L^{(3-p_1-p_2)n}\, b_p\, v_n^{p_1} (\partial\eta_n)^{p_2} \tag{37}$$


(where in fact $|p| \le 3$, see (2.2)). Finally,

$$E_n = \sum_{|p|>1} L^{(3-p_1-p_2)n}\, \tilde b_p\, (r^*)^{p_1} (\partial\psi^*)^{p_2} + L^n H_1 + H_2. \tag{38}$$

Note that here we have $3 - p_1 - p_2 \le 2$ in (37) and $3 - p_1 - p_2 \le 1$ in (38). We will show inductively in $n$ that

$$\|\Gamma_n\|^{(2)},\; \|V_n\|^{(1)} \le \Big(\frac{C}{L}\Big)^n \epsilon \equiv \epsilon_n. \tag{39}$$

Here and below, $C$ denotes a generic constant, which may change from place to place. Thus, let us assume (39) for $n$ and prove it for $n+1$. We need to solve (32), (33) with initial data $\Gamma_n(x)$, $V_n(x)$ up to time $L^2$. This is done by writing (32), (33) as integral equations:

$$\eta_n(t) = e^{(t-1)\partial^2}\,\Gamma_n + \int_0^{t-1} ds\, e^{s\partial^2} D_n(t-s) + \int_0^{t-1} ds\, e^{s\partial^2} A_n(t-s) \equiv \eta_n^0 + A_n(\eta_n, v_n)$$
$$v_n(t) = e^{(t-1)(\partial^2 - 2L^{2n})}\, V_n + \int_0^{t-1} ds\, e^{s(\partial^2 - 2L^{2n})} E_n(t-s) + \int_0^{t-1} ds\, e^{s(\partial^2 - 2L^{2n})} B_n(t-s) \equiv v_n^0 + B_n(\eta_n, v_n) \tag{40}$$

with obvious notations, and where $\eta_n^0$, $v_n^0$ regroup the first two terms in each line; as in [4], we use the contraction mapping principle with the norms

$$\|w\|_L^{(j)} = \sup_{t \in [1, L^2]} \|w(\cdot, t)\|^{(j)}. \tag{41}$$

We shall show that $T(\eta, v) = (\eta_n^0, v_n^0) + (A_n(\eta,v), B_n(\eta,v))$ maps the ball $\mathcal{B} = \{(\eta,v) \mid \|\eta - \eta_n^0\|_L^{(2)} + \|v - v_n^0\|_L^{(1)} \le \epsilon_n\}$ into itself and is a contraction there.

We need to estimate the norms of $A_n$ and $B_n$. Consider $A_n$ first, and a generic term in (34):

$$\Gamma(x,p) = \int_0^{t-1} ds \int dy\; e^{s\partial^2}(x-y)\, a_p(y)\, F_p(y) \tag{42}$$

where $F_p(y) = \big(v^{p_1} (\partial v)^{p_2} (\partial\eta)^{p_3}\big)(y, t-s)$. We localize the $y$ variable:

$$\Gamma(x,p) = \sum_{m\in\mathbf{Z}} \Gamma_m(x,p), \tag{43}$$

$$\Gamma_m(x,p) = \int_0^{t-1} ds \int dy\; e^{s\partial^2}(x-y)\, \chi_m(y)\, a_p(y)\, F_p(y). \tag{44}$$

We want to bound, for $i \le 2$,

$$\gamma_{ml}(p) = \sup_k\, \big|(1+k^2)\, \widehat{\chi_l\, \partial^i \Gamma_m}(k,p)\big|.$$

We distinguish between $|l-m| \ge 2$ and $|l-m| < 2$.

(A) Let first $|m-l| \ge 2$. Then $\chi_m$ and $\chi_l$ have disjoint supports and $e^{s\partial^2}(x-y)$ is smooth uniformly in $s$. We write

$$\gamma_{ml}(p) = \sup_k\, \Big| \int_0^{t-1} ds \int dx\, e^{-ikx} \int dy\, (1-\partial_x^2)\big(\chi_l(x)\,\partial_x^i\, e^{s\partial^2}(x-y)\big)\, \chi_m(y)\, a_p(y)\, F_p(y) \Big| \tag{45}$$

and estimate the various factors on the RHS. First, we use, for $j \le 4$, $s \le L^2 - 1$,

$$\big|\partial^j e^{s\partial^2}(x-y)\big| \le C_L\, e^{-|m-l|} \tag{46}$$

where $C_L$ denotes a constant depending on $L$; (46) holds on the supports of $\chi_m$ and $\chi_l$. To bound $F_p(y)$, note first that our norms (27) imply the following $L^\infty$ and $L^1$ bounds. First, for any function $w$,

$$|\partial^i w(x)| \le \sum_l |\chi_l(x)\,\partial^i w(x)| \le \sum_l \int dk\, \big|\widehat{\chi_l \partial^i w}(k)\big| \le \sum_l (1+l^4)^{-1} \int dk\, (1+k^2)^{-1}\, \|w\|^{(j)} \le C\, \|w\|^{(j)} \tag{47}$$

for $i \le j$, and secondly, by Schwarz,

$$\int |\chi_m\, \partial^i w| \le C \Big(\int dk\, \big|\widehat{\chi_m \partial^i w}(k)\big|^2\Big)^{1/2} \le \frac{C\, \|w\|^{(j)}}{1+m^4}. \tag{48}$$

We will see below that

$$\|\eta_n^0\|_L^{(2)},\; \|v_n^0\|_L^{(1)} \le C\epsilon_n \tag{49}$$

so that, for $(\eta, v)$ in $\mathcal{B}$,

$$\|\eta\|_L^{(2)} + \|v\|_L^{(1)} \le (2C+1)\,\epsilon_n. \tag{50}$$

Finally, from (35), (2.15), (2.26) and the Propositions, we have

$$\|a_p\|_\infty \le C\, \epsilon^{3-2p_2-p_3}. \tag{51}$$

Now we bound (45): the $x$ integral is controlled by the support of $\chi_l(x)$ or its derivatives, the $s$ integral is less than $L^2$, and we use (47) with $w = v$ or $\eta$ for all factors in $F_p(y)$ except one, for which we use (48):

$$\sum_{|p|>0} L^{n d_p}\, \gamma_{ml}(p) \le C_L\, \epsilon\, \big(\|v\|_L^{(1)} + \|\eta\|_L^{(2)}\big)\, e^{-|m-l|}\, (1+m^4)^{-1}. \tag{52}$$

We get the factor $\epsilon$ from (51) if $p_2$ or $p_3 = 0$ or, otherwise, from (50), because we then have a nonlinear term in $\|v\|_L^{(1)}$, $\|\eta\|_L^{(2)}$.

(B) Let now $|m-l| < 2$. The difficulty is that we do not have (46) for $s$ close to zero. We will use Fourier transforms. Let $\tilde\chi_m \in C_0^\infty(\mathbf{R})$ be such that $\tilde\chi_m \chi_m = \chi_m$, and denote $\widehat{\tilde\chi_m a_p}$ by $\hat f_m$. Then

$$\widehat{\chi_l\,\partial^i \Gamma_m}(k,p) = \int ds\, dp\, dq\; \hat\chi_l(k-p)\, (ip)^i\, e^{-s p^2}\, \hat f_m(p-q)\, \widehat{\chi_m F_p}(q). \tag{53}$$

Let us consider the various factors on the RHS. Since $\bar\chi$ is $C^\infty$ with compact support, we have

$$|\hat\chi_l(k-p)| = \big|e^{-il(k-p)}\, \hat{\bar\chi}(k-p)\big| \le C_r\, (1 + |k-p|^r)^{-1} \tag{54}$$

for any $r$. For $\hat f_m$, note that, due to (2.15), (2.26) and the Propositions,

$$\int \big|(1 + (-\partial^2)^r)\, \tilde\chi_m\, a_p(x)\big|\, dx \le C\, \epsilon^{3-2p_2-p_3} \tag{55}$$

for all $r \le \frac{N-1}{2}$, whence, taking $N$ large enough in Propositions 1, 2,

$$|\hat f_m(p-q)| \le C\, C_r\, \epsilon^{3-2p_2-p_3}\, (1 + (p-q)^{2r})^{-1} \tag{56}$$

for any $r$. Also, $\int ds\, |p|^i\, e^{-sp^2} \le C_L$ if $i \le 2$, so, provided we can show

$$\big|\widehat{\chi_m F_p}(q)\big| \le C\, \big(\|v\|_L^{(1)}\big)^{p_1+p_2}\, \big(\|\eta\|_L^{(2)}\big)^{p_3}\, (1+m^4)^{-1}\, (1+q^2)^{-1}, \tag{57}$$

we can perform the convolutions in (53) to get

$$\sum_{|p|>0} L^{n d_p}\, \gamma_{ml}(p) \le C_L\, \epsilon\, \big(\|v\|_L^{(1)} + \|\eta\|_L^{(2)}\big)\, (1+m^4)^{-1}. \tag{58}$$

(52) and (58) yield, combined with (50),

$$\|A_n(\eta, v)\|_L^{(2)} \le C_L\, \epsilon\, \epsilon_n. \tag{59}$$

To prove (57), use

$$\big|\widehat{\chi_m \partial^i w}(k)\big| \le (1+m^4)^{-1}\, (1+k^2)^{-1}\, \|w\|^{(j)}$$

and

$$\big|\widehat{\partial^i w}(k)\big| = \Big|\sum_n \widehat{\chi_n \partial^i w}(k)\Big| \le \frac{C\, \|w\|^{(j)}}{1+k^2}$$

for $w = v$ or $\eta$, and perform the convolutions using

$$\int dp\, (1 + (k-p)^2)^{-1}\, (1 + p^2)^{-1} \le C\, (1 + k^2)^{-1}. \tag{60}$$

For $B_n$ the analysis is similar, but we have to take advantage of the "mass" term $-2L^{2n}$ in (40), in order to control the terms with $3 - p_1 - p_2 = 2$ in (37) or $3 - p_1 - p_2 = 1$ in (38). It is also here that the difference between the norms in (27) enters. The only change in (A) above is in (46):

$$\big|\partial^j e^{s(\partial^2 - 2L^{2n})}(x-y)\big| \le C_L\, e^{-sL^{2n}}\, e^{-|m-l|} \tag{61}$$

(for $|l-m| \ge 2$). For (B), we integrate by parts $\partial_x = -\partial_y$ in (45) (recall $i \le 1$ here), and we get $\partial_y\big(\chi_m(y)\, b_p(y)\, F_p(y)\big)$. After going to the Fourier transform, as in (53), we see that we can use (54), (56) when $\chi_m(y)$ or $b_p(y)$ is replaced by its derivative. We also have (57) for $F_p(y)$ replaced by its derivative, because $F$ does not involve derivatives of $v$ and contains only the first derivative of $\eta$. It is here that we use a different norm on $v$ and $\eta$. Hence, proceeding as before, we end up estimating

$$\sup_k\, (1+k^2) \int_0^{t-1} ds\; e^{-s(k^2 + 2L^{2n})}\, (1+k^2)^{-1} \le C\, L^{-2n}. \tag{62}$$

We also get $L^{-2n}$ from the integral over $s$ of (61). Therefore,

$$\|B_n(\eta, v)\|_L^{(1)} \le C_L\, \epsilon\, \epsilon_n. \tag{63}$$

The bounds (49) for the inhomogeneous terms are proved in the same way, remembering that $d_p \le -1$ in (36) and $3 - p_1 - p_2 \le 1$ in (38). For $H_1$, $H_2$, it is easy to see that their norm is bounded by $C\epsilon^2$, because the derivatives of $s^*$, $r^*$ are bounded using (10), (20). Thus, we have shown that $T$ maps $\mathcal{B}$ into itself (first, choose $L$ so that, in (39), $C \le L$; then, take $\epsilon$ small enough, depending on $L$). The contraction is proved in the same way:

$$\|A_n(\eta_1, v_1) - A_n(\eta_2, v_2)\|_L^{(2)} \le C_L\, \epsilon\, \big(\|\eta_1 - \eta_2\|_L^{(2)} + \|v_1 - v_2\|_L^{(1)}\big)$$
$$\|B_n(\eta_1, v_1) - B_n(\eta_2, v_2)\|_L^{(1)} \le C_L\, \epsilon\, \big(\|\eta_1 - \eta_2\|_L^{(2)} + \|v_1 - v_2\|_L^{(1)}\big). \tag{64}$$

To complete the induction for (39), we need to study the inhomogeneous terms in (40). The main task is to show

$$\|R\,\Gamma_n\|^{(2)} \equiv \big\|\big(e^{(L^2-1)\partial^2}\,\Gamma_n\big)(L\,\cdot)\big\|^{(2)} \le \frac{C}{L}\, \|\Gamma_n\|^{(2)}. \tag{65}$$

It is now easier to work in the $x$ representation. We have

$$R\, g = \int G(x,y)\, g(y)\, dy \tag{66}$$

with

$$G(x,y) = L^{-1}\, \big(4\pi(1-L^{-2})\big)^{-1/2}\, e^{-\frac{(x - L^{-1}y)^2}{4(1-L^{-2})}}. \tag{67}$$

We need to bound

$$\|R\, g\|^{(2)} = \sup_{k,\,l,\,i\le 2}\, (1+l^4)\, \Big| \int dx\, e^{ikx} \int dy\, (1-\partial_x^2)\big(\chi_l(x)\,\partial_x^i G(x,y)\big)\, g(y) \Big| \le C\, L^{-1}\, \|g\|^{(2)}. \tag{68}$$

We consider first $|l| \ge C \log L$. Then

$$|\dots| \le \sum_m \int dx\, dy\, \big|(1-\partial_x^2)\big(\chi_l(x)\,\partial_x^i G(x,y)\big)\, \chi_m(y)\, g(y)\big| \le C L^{-1} \sum_m e^{-|l - L^{-1}m|}\, (1+m^4)^{-1}\, \|g\|^{(2)} \le \frac{C L^{-1}\, \|g\|^{(2)}}{1+l^4} \tag{69}$$

for $L$ large enough. We used (48) (with $\partial^i w$ replaced by $g$) and (67).

For $|l| \le C \log L$, and the terms with $|m| \le L$, we use, for $j \le 4$,

$$|\partial_x^j G(x,y)| \le C\, L^{-1}\, e^{-|l|} \tag{70}$$

and thus, using (48) again,

$$\sum_{|m| \le L} \dots \le C L^{-1} \sum_{|m| \le L} e^{-|l|}\, \frac{\|g\|^{(2)}}{1+|m|^4} \le \frac{C L^{-1}\, \|g\|^{(2)}}{1+l^4}. \tag{71}$$

For the terms with $|m| \ge L$, by (48) and $|\partial_x^j G(x,y)| \le C L^{-1}$,

$$\sum_{|m| \ge L} \dots \le C L^{-1} \sum_{|m| \ge L} \frac{\|g\|^{(2)}}{1+m^4} \le C L^{-4}\, \|g\|^{(2)} \le \frac{C L^{-1}\, \|g\|^{(2)}}{1+l^4} \tag{72}$$

for $L$ large, since $|l| \le C \log L$. Hence, (68) follows. The corresponding term for $v$ in (40) is estimated similarly (it is of course superexponentially small), and the other contributions to $\Gamma_{n+1}$ and $V_{n+1}$ are $O(\epsilon\,\epsilon_n)$, as are those of $A_n$ and $B_n$, see (59), (63). (39) is proved. □

Proof of the Propositions. Consider first (9). Using (6), (7), and $B\varphi_0^* = 0$, we have the equation

$$B\chi = G\Big(s^*\Big(\frac{d}{dz}(\varphi_0^* + \chi)\Big),\; \frac{d}{dz}\, s^*\Big(\frac{d}{dz}(\varphi_0^* + \chi)\Big),\; \frac{d}{dz}(\varphi_0^* + \chi)\Big) \equiv g\Big(\frac{d\chi}{dz}, \frac{d^2\chi}{dz^2}\Big). \tag{73}$$

Put

$$h = B\chi \tag{74}$$

and solve

$$h = g\Big(\frac{d}{dz} B^{-1} h,\; \frac{d^2}{dz^2} B^{-1} h\Big) \equiv h_0 + N(h) \tag{75}$$

in the Banach space defined by (8). $h_0$ is the value of $g$ at $h = 0$, i.e.

$$h_0 = G\Big(s^*\Big(\frac{d\varphi_0^*}{dz}\Big),\; \frac{d}{dz}\, s^*\Big(\frac{d\varphi_0^*}{dz}\Big),\; \frac{d\varphi_0^*}{dz}\Big).$$

In $N(h)$ we encounter terms like $\frac{d}{dz} B^{-1} h$ and $\frac{d^2}{dz^2} B^{-1} h$. We write the latter as

$$\frac{d^2}{dz^2}\, B^{-1} h = \Big(-B - \frac12\, z\, \frac{d}{dz} + \frac12\Big) B^{-1} h = -h + \frac12\Big(1 - z\, \frac{d}{dz}\Big) B^{-1} h.$$

Thus

$$N(h) = R\Big(B^{-1} h,\; \frac{d}{dz}\, B^{-1} h,\; z\, \frac{d}{dz}\, B^{-1} h\Big) \tag{76}$$

with $R : \mathbf{C}^3 \to \mathbf{C}$ analytic near zero. Thus all we now need is the

Lemma. The operators $B^{-1}$, $\frac{d}{dz} B^{-1}$ and $z\, \frac{d}{dz}\, B^{-1}$ are continuous in the norm (8).

This Lemma was proved in [1] for the operator $A = -\frac{d^2}{dz^2} - \frac12\, z\, \frac{d}{dz}$, but the proof holds for $B$ as well. From the explicit form of $G$, of $\varphi_0^*$ in (2.15), and of $s^*(d\varphi^*/dz)$ in (5), we have

$$\|h_0\|_N \le C\epsilon^3$$

while $\|N(h)\|_N \le C\big(\epsilon^2\, \|h\|_N + \|h\|_N^2\big)$. So, we can use the contraction mapping principle in a ball whose radius is of order $\epsilon^3$ around $h_0$. This proves (9). The bound (10) holds because $s^*$ is of order $(d\varphi^*/dz)^2$ and $d\varphi^*/dz$ is of order $\epsilon$. Proposition 2 is proven in the same way. It is enough to note that $\psi_0^*$ solves (17) with the RHS equal to zero and, setting $h = A\omega$, with $A$ as in [1], we get an equation like (75), with $N(h)$ linear in $h$ and where $h_0$ is the RHS of (17) with $\psi^*$ replaced by $\psi_0^*$; we have the bounds $\|h_0\|_N \le C\epsilon^3$ for $N = N' - 1$, where $N'$ is the $N$ of Proposition 1, and $\|N(h)\|_N \le C\epsilon^2\, \|h\|_N$. □

Acknowledgments

We thank P. Collet for explaining to us the problems discussed in this paper. J.B. benefited from the hospitality of Rutgers University and A.K. from that of the University of Louvain. This work was supported in part by NSF Grant DMS-8903041.

References

[1] Bricmont, J., Kupiainen, A., Lin, G.: Renormalization group and asymptotics of solutions of nonlinear parabolic equations, preprint.
[2] Bricmont, J., Kupiainen, A.: in preparation.
[3] Collet, P., Eckmann, J.-P.: Solutions without phase-slip for the Ginzburg-Landau equation, preprint (1991).
[4] Collet, P., Eckmann, J.-P., Epstein, H.: Diffusive repair for the Ginzburg-Landau equation, preprint (1991).
[5] Goldenfeld, N., Martin, O., Oono, Y., Liu, F.: Anomalous dimensions and the renormalization group in a nonlinear diffusion process, Phys. Rev. Lett. 64, 1361-1364 (1990).
[6] Goldenfeld, N., Martin, O., Oono, Y.: Asymptotics of partial differential equations and the renormalization group, to appear in Proc. NATO ARW "Asymptotics Beyond All Orders", ed. S. Tanveer (Plenum Press).
