
Foundations of Quantum Decoherence

arXiv:0805.3178v1 [quant-ph] 20 May 2008

Presented in Partial Fulfillment of the Requirements for the Degree Bachelor of Arts in Physics and Mathematics

John Gamble

The College of Wooster 2008

Advised by: Dr. John Lindner, The Moore Professor of Astronomy & Professor of Physics

Dr. Derek Newland, Assistant Professor of Mathematics

© Copyright by John Gamble, May 20, 2008

I gratefully acknowledge the loving help and support of my parents, John and Clare Gamble, and of my fiancée, Katherine Kelley. I extend sincere thanks to my advisors, John Lindner and Derek Newland, for their long hours and dedication to this project. I also thank Jon Breitenbucher for painstakingly assembling and maintaining this LaTeX template, which made the writing process significantly more enjoyable than it would have been otherwise. Finally, I am grateful to The Gallows program for providing me an environment in which I could grow, learn, and succeed.

Abstract
The conventional interpretation of quantum mechanics, though it permits a correspondence to classical physics, leaves the exact mechanism of the transition unclear. While this was of mainly philosophical importance throughout the twentieth century, over the past decade new technological developments, such as quantum computing, have required a more thorough understanding of not just the result of quantum emergence, but also its mechanism. Quantum decoherence theory is the model that developed out of necessity to deal with the quantum-classical transition explicitly, and without external observers. In this thesis, we present a self-contained and rigorously argued full derivation of the master equation for quantum Brownian motion, one of the key results in quantum decoherence theory. We accomplish this from a foundational perspective, assuming only a few basic axioms of quantum mechanics and deriving their consequences. We then consider a physical example of the master equation and show that quantum decoherence successfully represents the transition from a quantum to a classical system.


Contents

Abstract
Contents
List of Figures
Preface
  0.1 Decoherence and the Measurement Problem
  0.2 Notational Conventions
1 Mathematical background
  1.1 Linear Vector Spaces
  1.2 Linear Operators
  1.3 The Tensor Product
  1.4 Infinite Dimensional Spaces
2 Formal Structure of Quantum Mechanics
  2.1 Fundamental Correspondence Rules of Quantum Mechanics
  2.2 The State Operator
  2.3 Composite Systems
  2.4 Quantum Superposition
  2.5 Example: The Bell State
  2.6 Projection Onto a Basis
3 Dynamics
  3.1 The Galilei Group
  3.2 Commutator Relationships
  3.3 The Schrödinger wave equation
  3.4 The Free Particle
4 The Wigner Distribution
  4.1 Definition and Fundamental Properties
    4.1.1 Inverse Distribution
    4.1.2 Reality of the Wigner Distribution
    4.1.3 Marginal Distributions
  4.2 Wigner distributions of combined systems
  4.3 Equation of Motion for a Free Particle
  4.4 Associated Transform and Inversion Properties
    4.4.1 The Wigner transform of (i/2)(∂²ₓ − ∂²ᵧ) ρ(x, y)
    4.4.2 The Wigner transform of (x − y)(∂ₓ − ∂ᵧ) ρ(x, y)
    4.4.3 The Wigner transform of (x − y) ρ(x, y)
  4.5 Example: The Wigner Distribution of a Harmonic Oscillator
5 The Master Equation for Quantum Brownian Motion
  5.1 The System and Environment
  5.2 Collisions Between Systems and Environment Particles
  5.3 Effect of Collision on a Wigner Distribution
  5.4 The Master Equation for Quantum Brownian Motion
6 Consequences of the Master Equation
  6.1 Physical Significance of the first two terms
  6.2 The Decoherence Term
  6.3 Example: The Harmonic Oscillator in a Thermal Bath
  6.4 Concluding Remarks
References
Index

List of Figures

1 Graphical representation of decoherence
2 H. J. Bernstein's simple model of decoherence
1.1 Venn diagram of a rigged Hilbert space triplet
2.1 The Bloch sphere representation
4.1 Wavefunctions of the quantum harmonic oscillator
4.2 State operators of the quantum harmonic oscillator
4.3 Wigner distributions of the quantum harmonic oscillator
4.4 Classical correspondence of the quantum harmonic oscillator
5.1 System particle interacting with environment particles
6.1 Decoherence in the ground state of the quantum harmonic oscillator
6.2 Decoherence in the third excited state of the quantum harmonic oscillator
6.3 Decoherence of the quantum harmonic oscillator as t → ∞

CHAPTER 0

Preface

This thesis is designed to serve a dual purpose. First, it is a stand-alone treatment of contemporary decoherence theory, accomplishing this mostly within a rigorous framework more detailed than is used in typical undergraduate quantum mechanics courses. It assumes no prior knowledge of quantum mechanics, although a basic understanding obtained through a standard introductory quantum mechanics or modern physics course would be helpful for depth of meaning. Although the mathematics used is introduced thoroughly in chapter 1, the linear algebra can get quite complicated. Readers who have not had a formal course in linear algebra would benefit from having ref. [1] on-hand during some components, especially chapters 2 and 3. The bulk of the work specifically related to decoherence is found in the last three chapters, and readers familiar with quantum mechanics desiring a better grasp of decoherence theory should proceed to the discussion of quantum mechanics in phase-space, found in chapter 4.

Second, this thesis is an introduction to the rigorous study of the foundations of quantum mechanics, and is again stand-alone in this respect. It develops the bulk of quantum mechanics from several standard postulates and the invariance of physics under the Galilei group of transformations, outlined in sections 2.1 and 3.1, respectively. Readers interested in this part of the thesis should study the first three chapters, where many fundamental results of quantum mechanics are developed.

We now begin with a motivating discussion of quantum decoherence. One of the fundamental issues in physics today is the emergence of the familiar macroscopic physics that governs everyday objects from the strange, underlying microscopic laws for the motion of atoms and molecules. This collection of laws governing small bodies is called quantum mechanics, and operates entirely differently than classical Newtonian physics.


However, since all macroscopic objects are made from microscopic particles, which obey quantum mechanics, there should be some way to link the two worlds: the macro and the micro. The conventional interpretation of quantum mechanics answers questions about the transition from quantum to classical mechanics, known as quantum emergence, through a special measurement process, which is distinct from the other rules of quantum mechanics [2].¹ However, when this measurement concept is used, problems arise. The most famous of these problems is known as Schrödinger's cat, which asks about the nature of measurement through a paradox [3]. The problem creates ambiguity about (1) when a measurement occurs, and (2) who (or what) performs it. When all is said and done, the conventional interpretation leaves a bitter taste in the mouths of many physicists; what they want is a theory of quantum measurement that does not depend on subjectively defined observation. If no external observers are permitted, how can classical mechanics ever emerge from quantum mechanics? The answer is that complex systems, in essence, measure themselves, which leads us to decoherence.

0.1 Decoherence and the Measurement Problem

Quantum decoherence theory is a quantitative model of how this transition from quantum to classical mechanics occurs, which involves systems performing local measurements on themselves. More precisely, we divide our universe into two pieces: a simple system component, which is treated quantum mechanically, and a complex environmental component, which is treated statistically.² Since the environment is treated statistically, it obeys the rules of classical (statistical) mechanics, and we call it a mixture [4]. When the environment is coupled to the system, any quantum mechanical information that the system transfers to the environment is effectively lost; hence the system becomes a mixture over time, as indicated in figure 1.

¹ In fact, the motion of a system not being measured is considered unitary, and hence reversible, while the measurement process is conventionally considered discontinuous, and hence irreversible. So, not only are they treated separately, but they are considered fundamentally different processes!

² The words statistical and classical are being tossed around here a bit. What we mean is statistical in the thermodynamic sense, for example probability distributions prepared by random coin-tosses. These random, statistical distributions are contrasted against quantum states, which may appear to be random when observed, but actually carry quantum interference information.


Figure 1: A graphical representation of decoherence. Here, the environment, which is treated statistically, can be thought of as an information reservoir. It serves to absorb the quantum interference properties of the system, making the system appear as a classical, statistically prepared state.

In the macroscopic world, ordinary forces are huge compared to the subtle effects of quantum mechanics, and thus large systems are very difficult to isolate from their environments. Hence, the time it takes large objects to turn to mixtures, called the decoherence time, is very short.

It is important to keep in mind that decoherence is inherently local. That is, if we consider our entire universe, the system plus the environment, quantum mechanically, classical effects do not emerge. Rather, we need to "focus" on a particular component, and throw away the quantum mechanical information having to do with the environment [3].

In order to clarify this notion of decoherence, we examine the following unpublished example originally devised by Herbert J. Bernstein [5]. To start, consider an electron gun, as shown in figure 2. Electrons are an example of a two-state system, and as such they possess a quantum-mechanical property called spin [6]. As we develop in detail later in section 2.4, the spin of a two-state system can be represented as a vector pointing on the unit two-sphere. Further, any possible spin can be formed as a linear combination of a spin pointing up in the ẑ direction and a spin pointing down in the −ẑ direction.³

Figure 2: A sketch of Bernstein's thought experiment. The electrons with initial random spin are set to a certain angle in the xy-plane at the first angular control. A switch determines whether or not an additional phase factor is added using a roulette wheel. Then, a Stern-Gerlach analyzer is used to measure the angle of electron spin.

We suppose that our electron gun fires electrons of random spin, and then we use some angular control device to fix the electron's spin to some angle (that we set) in the xy-plane. Then, we use a Stern-Gerlach analyzer adjusted to some angle to measure the resulting electron. The Stern-Gerlach analyzer measures how close its control angle is to the spins of the electrons in the beam passing through it [5]. It reads out a number on a digital display, with 1 corresponding to perfect alignment and 0 corresponding to anti-alignment. So far, we can always use the analyzer to measure the quantum-mechanical spin of each electron in our beam. We simply turn the analyzer's angular control until its digital display reads one, and then read the value of the angular control. Similarly, if we were to turn the analyzer's control to the angle opposite from the beam's angle, the display would read zero. The fact that these two special angles always exist is fundamental to quantum mechanics, resulting from a purely non-classical phenomenon called superposition.⁴

We next insert another component into the path of the electron beam. By turning on a switch, we activate a second device that adjusts the angle of our beam in the xy-plane by adding θ. The trick is that this device is actually attached to a modified roulette wheel, which we spin every time an electron passes. The roulette wheel is labeled in radians, and determines the value of θ [5]. We now frantically spin the angular control attached to our analyzer, attempting to find the initial angle of our electron beam. However, much to our surprise, the display appears to be stuck on 0.5 [5]. This reading turns out to be no mistake, since the angles of the electrons that the analyzer is measuring are now randomly distributed (thanks to the randomness of the roulette wheel) throughout the xy-plane. No matter how steadfastly we attempt to measure the spin of the electrons in our beam, we cannot while the roulette wheel is active. Essentially, the roulette wheel is absorbing the spin information of the electrons, as we apparently no longer have access to it.

This absorption of quantum information is the exact process that the environment performs in quantum decoherence theory. In both cases, the information is lost due to statistical randomness, and forces a quantum system to be classically random as well. The roulette wheel in this simplified example, just like the environment in reality, is blocking our access to quantum properties of a system. In chapter 6, we return to a more physical example of decoherence using the quantitative tools we develop in this thesis. First, we need to discuss the mathematical underpinnings of quantum mechanics.

³ In linear algebra terminology, we call the spin vectors pointing in +ẑ and −ẑ a basis for the linear vector space of all possible states. We deal with bases precisely in section 1.1.

⁴ The precise nature of quantum superposition is rather subtle, and we discuss it at length in section 2.4.
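To make the roulette-wheel effect concrete, here is a small numerical sketch (our addition, not part of the thesis). It assumes the analyzer display for a single electron reads (1 + cos Δ)/2, where Δ is the angle between the analyzer setting and the electron's spin direction in the xy-plane; that readout model is an illustrative assumption consistent with the 1-for-aligned, 0-for-anti-aligned behavior described above.

```python
import numpy as np

rng = np.random.default_rng(0)

beam_angle = 0.7         # spin angle set by the first angular control (radians)
analyzer_angle = 0.7     # analyzer pointed straight at the beam

def reading(extra_phase):
    """Assumed analyzer display for one electron: 1 = aligned, 0 = anti-aligned."""
    delta = beam_angle + extra_phase - analyzer_angle
    return 0.5 * (1.0 + np.cos(delta))

# Roulette wheel off: no extra phase, so the display sits at 1.
print(reading(0.0))                                   # -> 1.0

# Roulette wheel on: each electron picks up a uniformly random phase, and the
# averaged display gets stuck near 0.5, just as in Bernstein's example.
phases = rng.uniform(0.0, 2.0 * np.pi, size=100_000)
print(np.mean([reading(p) for p in phases]))          # -> about 0.5
```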

0.2 Notational Conventions

Throughout this thesis, we adopt a variety of notational conventions, some more common than others. Here, we list them for clarity.

- The symbol (≡) will always be used in the case of a definition. It indicates that the equality does not follow from previous work. The (=) sign indicates equality that logically follows from previous work.
- An integral symbol without bounds, ∫, is a definite integral from −∞ to +∞, rather than the antiderivative, unless otherwise noted.
- Usually, the differential term in an integrand will be grouped with the integral symbol and separated by (·). This is standard multiplication, and is only included for notational clarity.
- Vectors are always given in Dirac kets, ( |·⟩ ), operators on abstract vector or Hilbert spaces are always given with hats, ( ·̂ ), linear functionals over vector spaces are given in Dirac bras, ( ⟨·| ), and operators on function spaces are given with checks, ( ·̌ ).
- Both partial and total derivatives are given either in standard Leibniz notation or in a contracted form d_x, where d_x ≡ d/dx.
- The symbol (≐) is used to denote a special representation of a particular structure. Its precise definition is made clear by context.
- The symbol (*) is used to denote the complex conjugate of a complex number.


CHAPTER 1

Mathematical background

Before we begin our discussion of quantum mechanics, we take this chapter to review the mathematical concepts that might be unfamiliar to the average undergraduate physics major wishing a more detailed understanding of quantum mechanics. We begin with a discussion of linear vector spaces and linear operators. We next generalize these basic concepts to product spaces, and finally consider spaces of infinite dimension. Quantum mechanics is much more abstract than other areas of physics, such as classical mechanics, and so the immediate utility of the techniques introduced here is not evident. However, for the treatment in this thesis to be mostly self-contained, we proceed slowly and carefully.

1.1 Linear Vector Spaces

In this section, we introduce linear vector spaces, which will be the stages for all of our subsequent work.¹ We begin with the elementary topic of vector spaces [1].

¹ Well, actually we will work in a triplet of abstract spaces called a rigged Hilbert space, which is a special type of linear vector space. However, most textbooks on quantum mechanics, and even most physicists, do not bother much with the distinction. We will look at this issue in more detail in section 1.4.


Definition 1.1 (Vector space). Let F be a field with addition (+) and multiplication (·). A set V is a vector space under the operation (⊕) over F if for all |u⟩, |v⟩, |w⟩ ∈ V and a, b ∈ F:

1. |u⟩ ⊕ |v⟩ = |v⟩ ⊕ |u⟩.
2. (|u⟩ ⊕ |v⟩) ⊕ |w⟩ = |u⟩ ⊕ (|v⟩ ⊕ |w⟩).
3. There exists |0⟩ ∈ V such that |0⟩ ⊕ |u⟩ = |u⟩.
4. There exists −|u⟩ ∈ V such that −|u⟩ ⊕ |u⟩ = |0⟩.
5. a · (b |u⟩) = (a · b) |u⟩.
6. (a + b) |u⟩ = a |u⟩ + b |u⟩.
7. a(|u⟩ + |v⟩) = a |u⟩ + a |v⟩.
8. For the unity of F, 1, 1 |u⟩ = |u⟩.

If V satisfies the criteria for a vector space, the members |u⟩ ∈ V are called vectors, and the members a ∈ F are called scalars. For the purposes of quantum mechanics, the field F we are concerned with is almost always ℂ, the field of complex numbers, and V has the usual (Euclidean) topology.² Since the operation (⊕) is by definition interchangeable with the field operation (+), it is conventional to use the symbol (+) for both, and we do so henceforth [7].³

² The fields we refer to here are those from abstract algebra, and should not be confused with force fields (such as the electric and magnetic fields) used in physics. Loosely speaking, most of the sets of numbers we deal with in physics are algebraic fields, such as the real and complex numbers. For more details, see ref. [7].

³ In definition 1.2, we use the notation α ∈ Λ, which might be foreign to some readers. Λ is considered an index set, or a set of all possible allowed values for α. Then, by α ∈ Λ, we are letting α run over the entire index set. Using this powerful notation, we can treat almost any type of general sum or integral. For more information, see ref. [8].

Definition 1.2 (Linear dependence). A collection of vectors {|vα⟩}_{α∈Λ}, where Λ is some index set, belonging to a vector space V over F is linearly dependent if there exists a set {aα}_{α∈Λ} such that

Σ_{α∈Λ} aα |vα⟩ = |0⟩,   (1.1)

given that at least one aᵢ ∈ {aα} is nonzero.

This means that, if a set of vectors is linearly dependent, we can express one of the member vectors in terms of the others. If a set of vectors is not linearly dependent, we call it linearly independent, in which case we would not be able to express one of the member vectors in terms of the others [1].

Definition 1.3 (Dimension). Consider the vector space V and let {|vα⟩}_{α∈Λ} ⊂ V be an arbitrary set of linearly independent vectors. Then, if Λ is always finite, the dimension of V is the maximum number of elements in Λ. If Λ is not always finite, then V is said to have infinite dimension.

Definition 1.4 (Basis). Let B = {|vα⟩}_{α∈Λ} ⊂ V, where V is a vector space over the field F. If |vα⟩ and |vβ⟩ are linearly independent when α ≠ β, and an arbitrary vector |u⟩ ∈ V can be written as a linear combination of the |vα⟩'s, i.e.

|u⟩ = Σ_{α∈Λ} cα |vα⟩,   (1.2)

with cα ∈ F, we say {|vα⟩}_{α∈Λ} is a basis set or basis for V.

It follows directly from this definition that, in any vector space with finite dimension D, any basis set will have precisely D members. Because quantum mechanics deals with a Euclidean vector space over the complex numbers, it is advantageous to precisely define the inner product of two vectors within that special case [4].

Definition 1.5 (Inner product). Let V be a vector space over the field of complex numbers ℂ. Then, g : V × V → ℂ is an inner product if, for all |u⟩, |v⟩, |w⟩ ∈ V and a, b ∈ ℂ,

1. g(|u⟩, |v⟩) = g(|v⟩, |u⟩)*,
2. g(|u⟩, a|v⟩ + b|w⟩) = a · g(|u⟩, |v⟩) + b · g(|u⟩, |w⟩),
3. g(|u⟩, |u⟩) ≥ 0, with g(|u⟩, |u⟩) = 0 ⟺ |u⟩ = |0⟩.

Although it is not immediately clear, the inner product is closely related to the space of linear functionals on V, called the dual space of V and denoted V*. Below, we define these concepts precisely and then show their connection through the Riesz representation theorem [4].


Definition 1.6 (Linear functional). A linear functional on a vector space V over ℂ is any function F : V → ℂ such that for all a, b ∈ ℂ and for all |u⟩, |v⟩ ∈ V,

F(a|u⟩ + b|v⟩) = a · F(|u⟩) + b · F(|v⟩).   (1.3)

We say that the space occupied by the linear functionals on V is the dual space of V, and we denote it by V*.

We connect the inner product with the dual space V* using the Riesz representation theorem [4].

Theorem 1.7 (Riesz representation). Let V be a finite-dimensional vector space and V* be its dual space. Then, there exists a bijection h : V* → V defined by h(F) = |f⟩ for F ∈ V* and |f⟩ ∈ V such that F(|u⟩) = g(|f⟩, |u⟩) for all |u⟩ ∈ V, where g is an inner product of V [4].

The proof of this theorem is straightforward, but too lengthy for our present discussion, so we will reference a simple proof for the interested reader [4]. The consequences of this theorem are quite drastic. It is obviously true that the inner product of two vectors, which maps them to a scalar, is a linear functional. However, the Riesz theorem asserts that any linear functional can be represented as an inner product. This means that every linear functional in the dual space corresponds to precisely one vector in the vector space. For this reason, we call the linear functional associated with |u⟩ a dual vector and write it as

⟨u| ∈ V*,   (1.4)

and we contract our notation for the inner product of two vectors |u⟩ and |v⟩ to

g(|u⟩, |v⟩) ≡ ⟨u|v⟩,   (1.5)

a notational convention first established by P. A. M. Dirac. The vectors in V are called kets and the dual vectors, or linear functionals in V*, are called bras. Hence, when we adjoin a bra and a ket, we get a bra-ket, or bracket, which is an inner product. Note that by the definition of the inner product, we have

⟨u|v⟩ = ⟨v|u⟩*,   (1.6)


so if we multiply some vector |v⟩ by a (complex) scalar α, the corresponding dual vector is α*⟨v|. When we form dual vectors from vectors, we must always remember to conjugate such scalars. As another note, when choosing a basis, we frequently pick it as orthonormal, which we define below [1].

Definition 1.8 (Orthonormality of a Basis). A basis B for some vector space V is orthonormal if any two vectors |φi⟩ and |φj⟩ in B satisfy

⟨φi|φj⟩ = 1 if i = j, and ⟨φi|φj⟩ = 0 if i ≠ j.   (1.7)

For any vector space, we can always find such a basis, so we do not lose any generality by always choosing to use one.⁴ A useful example that illustrates the use of vectors and dual vectors can be found by constraining our vector space to a finite number of dimensions. Working in such a space, we represent vectors as column matrices and dual vectors as row matrices [6]. For example, in three dimensions we might have

|e1⟩ ≐ (1, 0, 0)ᵀ   (1.8)

and

|e2⟩ ≐ (0, i, 0)ᵀ,   (1.9)

where |e1⟩ and |e2⟩ are the unit vectors from basic physics [9]. Then, the linear functional corresponding to |e2⟩ is⁵

⟨e2| ≐ (0, −i, 0).   (1.10)

⁴ The process for finding an orthonormal basis is called the Gram-Schmidt algorithm, and allows us to construct an orthonormal basis from any basis. For details, see ref. [1].

⁵ Here, notice that to generate the representation for ⟨e2| from |e2⟩, we must take the complex conjugate. This is necessary due to the complex symmetry of the inner product established in eqn. 1.6.

We represent the inner product as matrix multiplication, so we write

⟨e2|e1⟩ = (0, −i, 0) (1, 0, 0)ᵀ = 0,   (1.11)

which indicates that |e1⟩ and |e2⟩ are orthogonal, as we expect.
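As a quick numerical check of this bra-ket arithmetic (our own aside, not part of the thesis), the sketch below builds |e1⟩ and |e2⟩ as NumPy column vectors and forms ⟨e2| by conjugate transposition, mirroring the conjugation required by eqn. 1.6.

```python
import numpy as np

# Kets as column vectors in a three-dimensional complex space (eqns. 1.8 and 1.9).
e1 = np.array([[1.0], [0.0], [0.0]], dtype=complex)
e2 = np.array([[0.0], [1j], [0.0]], dtype=complex)

# The bra <e2| is the conjugate transpose of the ket |e2> (eqn. 1.10).
bra_e2 = e2.conj().T

# Inner products as matrix multiplication (eqn. 1.11).
print(bra_e2 @ e1)   # [[0.+0.j]] -> orthogonal
print(bra_e2 @ e2)   # [[1.+0.j]] -> unit norm
```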

1.2 Linear Operators

So far, we have looked at two main types of objects in a vector space: vectors and linear functionals. In this section, we focus on a third: the linear operator. Recall that linear functionals take vectors to numbers. Similarly, linear operators are objects that take vectors to other vectors. Formally, this is the following definition [10].

Definition 1.9 (Linear Operator). Let |u⟩, |v⟩ ∈ V be vectors and α, β be scalars in the field associated with V. Then, we say Â is a linear operator on V if

Â|v⟩ ∈ V   (1.12)

and

Â(α|u⟩ + β|v⟩) = αÂ|u⟩ + βÂ|v⟩.   (1.13)

Throughout the rest of this thesis, whenever we discuss an operator on a vector space, we will always use a hat to avoid confusion with a scalar. In a finite dimensional vector space, as indicated previously, we often represent vectors by column matrices and dual vectors by row matrices.

Similarly, we represent operators by square matrices [6]. For example, if

Â ≐ [ 0  0  0 ]
    [ 1  0  0 ]
    [ 0  0  0 ],   (1.14)

then

Â|e1⟩ ≐ [ 0  0  0 ] [ 1 ]   [ 0 ]
        [ 1  0  0 ] [ 0 ] = [ 1 ] ≐ |e2⟩.   (1.15)
        [ 0  0  0 ] [ 0 ]   [ 0 ]

We can also use our formalism to access individual elements of an operator in its matrix representation. Working in the three-dimensional standard, orthonormal basis from the example above, we specify B̂ by

B̂|u⟩ = |v⟩,   (1.16)

where

|u⟩ = u1|e1⟩ + u2|e2⟩ + u3|e3⟩   (1.17)

and

|v⟩ = v1|e1⟩ + v2|e2⟩ + v3|e3⟩.   (1.18)

Then,

⟨ei|B̂|u⟩ = ⟨ei|B̂ (u1|e1⟩ + u2|e2⟩ + u3|e3⟩)
         = ⟨ei|B̂ Σ_{j=1}^{3} uj |ej⟩
         = ⟨ei|v⟩
         = Σ_{j=1}^{3} vj ⟨ei|ej⟩
         = vi,   (1.19)

which is just the matrix equation [4]

Σ_{j=1}^{3} B(i, j) uj = vi,   (1.20)

where we made the definition

B_{ij} = B(i, j) ≡ ⟨ei|B̂|ej⟩.   (1.21)

We call B(i, j) the matrix element corresponding to the operator B̂. Note that the matrix elements of an operator depend on our choice of basis set. Using this expression for a matrix element, we define the trace of an operator. This definition is very similar to the elementary notion of the trace of a matrix as the sum of the elements in the main diagonal.⁶

Definition 1.10 (Trace). Let Â be an operator on the vector space V and let B = {|vα⟩}_{α∈Λ} ⊂ V be an orthonormal basis for V. Then, the trace of Â is

Tr Â ≡ Σ_{α∈Λ} ⟨vα|Â|vα⟩.   (1.22)
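As a brief numerical aside (ours, not the author's), the trace of definition 1.10 can be evaluated in any orthonormal basis; the sketch below checks that switching to a different orthonormal basis leaves it unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random operator on a 3-dimensional complex space.
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# Trace in the standard basis: sum of <e_i| A |e_i>.
tr_standard = sum(A[i, i] for i in range(3))

# A second orthonormal basis, taken from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
tr_rotated = sum((Q[:, [i]].conj().T @ A @ Q[:, [i]]).item() for i in range(3))

print(np.isclose(tr_standard, tr_rotated))   # True: the trace is basis independent
```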

So far, we have defined operators as acting to the right on vectors. However, since the Riesz theorem guarantees a bijection between vectors and dual vectors (linear functionals in the dual space), we expect operators to also act to the left on dual vectors. To make this concept precise, we write a definition.

Definition 1.11 (Adjoint). Suppose |u⟩, |v⟩ ∈ V are such that an operator on V, Â, follows

Â|u⟩ = |v⟩.   (1.23)

Then, we define the adjoint of Â, Â†, by

⟨u|Â† ≡ ⟨v|.   (1.24)

From this definition, it follows that

(⟨u|Â†|w⟩)* = ⟨v|w⟩* = ⟨w|v⟩ = ⟨w|Â|u⟩,   (1.25)

⁶ Since the individual matrix elements of an operator depend on the basis chosen, it might seem as if the trace would vary with basis as well. However, the trace turns out to be independent of basis choice [4].


which is an important result involving the adjoint, and is sometimes even used as its definition. This correctly suggests that the adjoint for operators is very similar to the conjugate transpose for square matrices, with the two operations equivalent for the matrix representations of finite vector spaces.⁷ Although the matrix representation of an operator is useful, we need to express operators using Dirac's bra-ket notation. To do this, we define the outer product [6].

Definition 1.12 (Outer Product). Let |u⟩, |v⟩ ∈ V be vectors. We define the outer product of |u⟩ and |v⟩ as the operator Â such that

Â ≡ |u⟩⟨v|.   (1.26)

Note that this is clearly linear, and is an operator, as

(|u⟩⟨v|) |w⟩ = |u⟩⟨v|w⟩ = ⟨v|w⟩ |u⟩ ∈ V   (1.27)

for |u⟩, |v⟩, |w⟩ ∈ V, a vector space. Further, if an operator is constructed in such a way, eqn. 1.25 tells us that its adjoint is

(|u⟩⟨v|)† = |v⟩⟨u|.   (1.28)

Self-adjoint operators, i.e. operators such that

Â† = Â,   (1.29)

are especially important in quantum mechanics. The main properties that make self-adjoint operators useful concern their eigenvectors and eigenvalues.⁸ We summarize them formally in the following theorem [4].
⁷ Many physicists, seeing that linear functionals are represented as row matrices and vectors are represented as column matrices, will write ⟨v| = |v⟩†. This is not technically correct, as the formal definition 1.11 only defined the adjoint operation for an operator, not a functional. However, though it is an abuse of notation, it turns out that nothing breaks as a result [4]. For clarity, we will be careful not to use the adjoint in this way.

⁸ We assume that the reader has seen eigenvalues and eigenvectors. However, if not, see ref. [1] or any other linear algebra text for a thorough introduction.


Theorem 1.13 (Eigenvectors and Eigenvalues of Self-adjoint Operators). Let Â be a self-adjoint operator. Then, all its eigenvalues are real, and any two eigenvectors corresponding to two distinct eigenvalues are orthogonal.

Proof. Let Â|u⟩ = u|u⟩ and Â|v⟩ = v|v⟩, so that |u⟩ and |v⟩ are arbitrary (nonzero) eigenvectors of Â corresponding to the eigenvalues u and v. Then, using eqn. 1.25, we deduce [4]

u⟨u|u⟩ = ⟨u| u|u⟩
       = ⟨u|Â|u⟩
       = (⟨u|Â†|u⟩)*
       = (⟨u|Â|u⟩)*
       = (⟨u| u|u⟩)*
       = u*⟨u|u⟩.   (1.30)

Since |u⟩ ≠ |0⟩, we get u = u*, so u is real. Hence, any arbitrary eigenvalue of a self-adjoint operator is real. Next, we consider combinations of two eigenvectors. That is,

0 = ⟨u|Â|v⟩ − ⟨u|Â|v⟩
  = ⟨u|Â|v⟩ − (⟨v|Â†|u⟩)*
  = ⟨u|Â|v⟩ − (⟨v|Â|u⟩)*
  = ⟨u| v|v⟩ − (⟨v| u|u⟩)*
  = (v − u) ⟨u|v⟩.   (1.31)

Thus, if v ≠ u, then ⟨u|v⟩ = 0, so |u⟩ and |v⟩ are orthogonal as claimed.

Now that we have shown this orthogonality of distinct eigenvectors of an operator, we would like to claim that these eigenvectors form a basis for the vector space in which the operator works. For finite dimensional spaces, this turns out to be the case, although the proof is quite technical, so we omit it with reference [4]. However, infinite dimensional cases produce problems mathematically, hence the eigenvectors of an operator in such a space need not form a basis for that space [4]. For the moment, we will proceed anyway, returning to this issue in section 1.4.
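As an illustration (our own, in the finite-dimensional setting), the following sketch verifies theorem 1.13 numerically for a randomly generated Hermitian matrix: the eigenvalues come out real and the eigenvectors orthonormal.

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a random self-adjoint (Hermitian) operator: A = M + M^dagger.
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T

# eigh is specialized to Hermitian matrices.
eigenvalues, eigenvectors = np.linalg.eigh(A)

print(np.allclose(eigenvalues.imag, 0.0))                             # eigenvalues are real
print(np.allclose(eigenvectors.conj().T @ eigenvectors, np.eye(4)))   # eigenvectors are orthonormal
```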


Suppose that {|vα⟩}_{α∈Λ} is the set of all eigenvectors of the self-adjoint operator Â. Since eigenvectors are only determinable up to a scaling factor, as long as our vectors are of finite magnitude, we may rescale all of these vectors to be an orthonormal set of basis vectors [1]. By our assumption, this set forms a basis for our vector space, V. Thus, for any |u⟩ ∈ V, we can write

|u⟩ = Σ_{α∈Λ} uα |vα⟩ = Σ_{α∈Λ} |vα⟩ uα.   (1.32)

Noting that, since the basis vectors are orthonormal,

⟨vi|u⟩ = Σ_{α∈Λ} uα ⟨vi|vα⟩ = ui,   (1.33)

we get

|u⟩ = Σ_{α∈Λ} |vα⟩⟨vα|u⟩ = ( Σ_{α∈Λ} |vα⟩⟨vα| ) |u⟩.   (1.34)

It follows immediately that

Σ_{α∈Λ} |vα⟩⟨vα| = 1̂,   (1.35)

which is called the resolution of the identity. This leads us to a result that allows us to represent self-adjoint operators in terms of their eigenvector bases, the spectral theorem [4].

Theorem 1.14 (Spectral Theorem). Let Â be an operator on the vector space V. Assuming that the spectrum of eigenvectors of Â, {|vα⟩}_{α∈Λ}, forms a basis for V, Â can be expressed as

Â = Σ_{α∈Λ} aα |vα⟩⟨vα|,   (1.36)

where {aα}_{α∈Λ} are the eigenvalues of Â.

Proof. Let |u⟩ ∈ V be an arbitrary vector. Then, since {|vα⟩}_{α∈Λ} is a basis for V, we can write

|u⟩ = Σ_{α∈Λ} uα |vα⟩.   (1.37)

Hence,

Â|u⟩ = Σ_{α∈Λ} uα Â|vα⟩ = Σ_{α∈Λ} uα aα |vα⟩.   (1.38)

Now, we consider the other side of the equation. We get [4]

( Σ_{α∈Λ} aα |vα⟩⟨vα| ) |u⟩ = ( Σ_{α∈Λ} aα |vα⟩⟨vα| ) ( Σ_{β∈Λ} uβ |vβ⟩ )
                            = Σ_{α∈Λ} Σ_{β∈Λ} aα uβ |vα⟩ ⟨vα|vβ⟩
                            = Σ_{α∈Λ} aα uα |vα⟩
                            = Â|u⟩,   (1.39)

where we used the orthonormality of our basis vectors. This holds for arbitrary |u⟩ ∈ V, so [4]

Â = Σ_{α∈Λ} aα |vα⟩⟨vα|,   (1.40)

as desired.

Since we assumed that the eigenvectors of any self-adjoint operator form a basis for the operator's space, we may use the spectral theorem to decompose self-adjoint operators into basis elements, which we make use of later.
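The spectral decomposition is easy to check numerically; the sketch below (an illustration we have added) rebuilds a Hermitian matrix from the sum in eqn. 1.36.

```python
import numpy as np

rng = np.random.default_rng(3)

M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M + M.conj().T                         # a self-adjoint operator

a, V = np.linalg.eigh(A)                   # eigenvalues a_k, eigenvectors as columns of V

# Sum of a_k |v_k><v_k| over the orthonormal eigenbasis, as in eqn. 1.36.
A_rebuilt = sum(a[k] * np.outer(V[:, k], V[:, k].conj()) for k in range(4))

print(np.allclose(A, A_rebuilt))           # True: A equals its spectral decomposition
```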

1.3 The Tensor Product

So far, we have discussed two types of products in vector spaces: inner and outer. The tensor product falls into the same category as the outer product in that it involves arraying all possible combinations of two sets, and is sometimes referred to as the Cartesian or direct product [7]. We formally define the tensor product operation (⊗) below [6].


Definition 1.15 (Tensor Product). Suppose V and W are two vector spaces spanned by the orthonormal bases {|vα⟩}_{α∈Λ} and {|wβ⟩}_{β∈Γ}, respectively. Then, we define the tensor product space, or product space, as the space spanned by the basis set

{ |x, y⟩ : |x⟩ ∈ {|vα⟩}_{α∈Λ}, |y⟩ ∈ {|wβ⟩}_{β∈Γ} },   (1.41)

and denote the space as V ⊗ W. We call each ordered pair of vectors a tensor product of the two vectors and denote it as |x⟩ ⊗ |y⟩. We require

(⟨x1| ⊗ ⟨y1|) (|x2⟩ ⊗ |y2⟩) ≡ ⟨x1|x2⟩ ⟨y1|y2⟩.   (1.42)

The tensor product is linear in the normal sense, in that it is distributive and can absorb scalar constants [6]. Further, we define linear operators on a product space by

(Â ⊗ B̂) (|v⟩ ⊗ |w⟩) ≡ (Â|v⟩) ⊗ (B̂|w⟩).   (1.43)

The definition for the tensor product is quite abstract, so we now consider a special case in a matrix representation for clarity. Consider a two-dimensional vector space, V, and a three-dimensional vector space W. We let the operator

Â ≐ [ 1  −i ]
    [ 0   2 ]   (1.44)

act over V, and the operator

B̂ ≐ [ i    2  −1 ]
    [ 0    1  −2 ]
    [ 2i  −1   0 ]   (1.45)

act over W. Then, operating on arbitrary vectors, we find

Â|v⟩ ≐ [ 1  −i ] [ v1 ]   [ v1 − iv2 ]
       [ 0   2 ] [ v2 ] = [ 2v2      ]   (1.46)

and

B̂|w⟩ ≐ [ i    2  −1 ] [ w1 ]   [ iw1 + 2w2 − w3 ]
       [ 0    1  −2 ] [ w2 ] = [ w2 − 2w3       ]
       [ 2i  −1   0 ] [ w3 ]   [ 2iw1 − w2      ].   (1.47)

The representation of the tensor product as a matrix operation is called the Kronecker product, and is formed by nesting matrices from right to left and distributing via standard multiplication [6]. We now illustrate it by working our example:

Â|v⟩ ⊗ B̂|w⟩ ≐ [ v1 − iv2 ] ⊗ [ iw1 + 2w2 − w3 ]   [ (v1 − iv2)(iw1 + 2w2 − w3) ]
              [ 2v2      ]   [ w2 − 2w3       ] = [ (v1 − iv2)(w2 − 2w3)       ]
                             [ 2iw1 − w2      ]   [ (v1 − iv2)(2iw1 − w2)      ]
                                                  [ 2v2 (iw1 + 2w2 − w3)       ]
                                                  [ 2v2 (w2 − 2w3)             ]
                                                  [ 2v2 (2iw1 − w2)            ].   (1.48)

But by eqn. 1.43, we should be able to first construct the tensor product of the operators Â and B̂ and apply the resulting operator to the tensor product of |v⟩ and |w⟩. Working this out using the Kronecker product, we have

Â ⊗ B̂ ≐ [ 1·B̂   −i·B̂ ]   [ i    2   −1    1   −2i    i ]
         [ 0·B̂    2·B̂ ] = [ 0    1   −2    0   −i    2i ]
                          [ 2i  −1    0    2    i     0 ]
                          [ 0    0    0   2i    4    −2 ]
                          [ 0    0    0    0    2    −4 ]
                          [ 0    0    0   4i   −2     0 ]   (1.49)

and

|v⟩ ⊗ |w⟩ ≐ (v1w1, v1w2, v1w3, v2w1, v2w2, v2w3)ᵀ,   (1.50)

so

(Â ⊗ B̂)(|v⟩ ⊗ |w⟩) ≐ [ iv1w1 + 2v1w2 − v1w3 + v2w1 − 2iv2w2 + iv2w3 ]
                      [ v1w2 − 2v1w3 − iv2w2 + 2iv2w3               ]
                      [ 2iv1w1 − v1w2 + 2v2w1 + iv2w2               ]
                      [ 2iv2w1 + 4v2w2 − 2v2w3                      ]
                      [ 2v2w2 − 4v2w3                               ]
                      [ 4iv2w1 − 2v2w2                              ]

                    = [ (v1 − iv2)(iw1 + 2w2 − w3) ]
                      [ (v1 − iv2)(w2 − 2w3)       ]
                      [ (v1 − iv2)(2iw1 − w2)      ]
                      [ 2v2 (iw1 + 2w2 − w3)       ]
                      [ 2v2 (w2 − 2w3)             ]
                      [ 2v2 (2iw1 − w2)            ]

                    ≐ Â|v⟩ ⊗ B̂|w⟩,   (1.51)

and we confirm that this example follows

(Â ⊗ B̂)(|v⟩ ⊗ |w⟩) = (Â|v⟩) ⊗ (B̂|w⟩)   (1.52)

when we use the Kronecker product representation for the tensor product. Since the matrix representation is very convenient for finite dimensional vector spaces, we frequently use the Kronecker product to calculate the tensor product and then shift back to the abstract Dirac notation.
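The worked example above is what numpy.kron computes under the same nesting convention; the sketch below (our addition) checks eqn. 1.52 for the operators of eqns. 1.44 and 1.45 and for arbitrarily chosen vectors.

```python
import numpy as np

# The operators A (2x2) and B (3x3) from eqns. 1.44 and 1.45.
A = np.array([[1, -1j],
              [0,  2]], dtype=complex)
B = np.array([[1j, 2, -1],
              [0,  1, -2],
              [2j, -1, 0]], dtype=complex)

v = np.array([1 + 1j, 2 - 1j], dtype=complex)   # an arbitrary |v> in V
w = np.array([0.5, -1j, 3.0], dtype=complex)    # an arbitrary |w> in W

left = np.kron(A, B) @ np.kron(v, w)            # (A (x) B)(|v> (x) |w>)
right = np.kron(A @ v, B @ w)                   # (A|v>) (x) (B|w>)

print(np.allclose(left, right))                 # True, confirming eqn. 1.52
```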


1.4 Infinite Dimensional Spaces

So far, we have largely ignored the main complication that arises when we move from a finite dimensional space to an infinite one: the spectrum of eigenvectors for a self-adjoint operator is no longer guaranteed to form a basis for the space. To deal with this problem, we will have to work in a slightly more specific kind of vector space, called a Hilbert space, denoted H. A Hilbert space is defined below [4].

Definition 1.16 (Hilbert Space). Let W be a general linear vector space and suppose that V ⊂ W is a vector space formed by any finite linear combinations of the basis set {|vα⟩}_{α∈Λ}. That is, if

|u⟩ = Σ_{i=1}^{n} u_{αi} |v_{αi}⟩,   (1.53)

for some finite n, then |u⟩ ∈ V. We say the Hilbert space H formed by completing V contains any vector that can be written as

|u⟩ = lim_{n→∞} Σ_{i=1}^{n} u_{αi} |v_{αi}⟩,   (1.54)

provided

Σ_{i=1}^{∞} |u_{αi}|²   (1.55)

exists and is finite.

Note that for the vector spaces described in the above definition, the Hilbert space associated with them always follows V ⊂ H ⊂ W, and that W = H = V holds if (but not only if) W has finite dimension. Without spending too much time on the technicalities, there is a generalized spectral theorem that applies to spaces very closely related to, but larger than, Hilbert spaces [4]. To determine precisely what this space should be, we must first develop a certain subspace of a Hilbert space, which we define by including all vectors |u⟩ subject to

⟨u|u⟩_m = Σ_{n=1}^{∞} |u_{αn} nᵐ|²   (1.56)

converging for all m ∈ ℕ. For a Hilbert space, we require a much weaker condition, as we do not have the rapidly increasing nᵐ in each term of the summand. We define this space as Ω, and note that always Ω ⊂ H [4]. The ramifications of the extra normalization requirement for a vector to be in Ω can be thought of as a requirement for an extremely fast decay as n → ∞.


Figure 1.1: The spaces V ⊂ Ω ⊂ H ⊂ Ω× ⊂ W. The area shaded blue is the rigged Hilbert space triplet.

We now define the space of interest, called the conjugate space of Ω and written as Ω×, in terms of its member vectors [8]. Any vector |w⟩ belongs to Ω× if

⟨w|u⟩ = Σ_{n=1}^{∞} w*_n u_n   (1.57)

converges for all |u⟩ ∈ Ω and ⟨w| is continuous on Ω. Since we noted that for a vector |u⟩ to be in Ω it must vanish very quickly at infinity, |w⟩ is not nearly as restricted as a vector in H. Thus, we have the triplet

Ω ⊂ H ⊂ Ω×,   (1.58)

which is called a rigged Hilbert space triplet, and is shown in figure 1.1 [4].⁹ We noted earlier that the set of eigenvectors of a self-adjoint operator need not form a basis for that operator's space if the space has infinite dimension. This means that the spectral theorem would break down, which is what we wish to avoid. Fortunately, a generalized spectral theorem has been proven for rigged Hilbert space triplets, which states that any self-adjoint operator in H has eigenvectors in Ω× that form a basis for H [4].

⁹ The argument used here is rather subtle. If the reader is not clear on the details, it will not impair the comprehension of later sections. To thoroughly understand this material, we recommend first reading the treatment of normed linear spaces in ref. [8], and then the discussion of rigged Hilbert spaces in refs. [4] and [11].


Due to this, we will work in a rigged Hilbert space triplet, which we will normally denote by the corresponding Hilbert space, H. We do this with the understanding that, to be completely rigorous, it might be necessary to switch between the component sets of the triplet on a case-by-case basis.

Now that we have outlined the space in which we will be working, there is an important special case of an infinite dimensional basis that we need to examine. If our basis is continuous, then we can convert all of our abstract summation formulas into integral forms, which are used very frequently in quantum mechanics, since the two most popular bases (position and momentum) are usually continuous.¹⁰ Specifically, suppose we have a continuous, orthonormal basis for a rigged Hilbert space H given by {|φ⟩}_{φ∈Φ}, where Φ is a real interval. Then, if we have [4]

|u⟩ = Σ_{φ∈Φ} u_φ |φ⟩,   |v⟩ = Σ_{φ∈Φ} v_φ |φ⟩,   (1.59)

we find a special case of eqn. 1.6. This is

⟨u|v⟩ = ∫_Φ dφ · u*_φ v_φ,   (1.60)

where the integral is taken over the real interval Φ. Similarly, for an operator Â, definition 1.10 becomes [4]

Tr Â = ∫_Φ dφ · ⟨φ|Â|φ⟩,   (1.61)

and for self-adjoint Â, theorem 1.14 is

Â = ∫_Φ dφ · a_φ |φ⟩⟨φ|.   (1.62)

When working in a continuous basis, these integral forms of the inner product, trace, and spectral theorem will often be more useful in calculations than their abstract sum counterparts, and we make extensive use of them in chapter 3.

¹⁰ A common form of confusion when first studying quantum mechanics is the abstract notion of vectors. In classical mechanics, a vector might point to a particular spot in a physical space. However, in quantum mechanics, a vector can have infinite dimensionality, and so can effectively point to every point in a configuration space simultaneously, with varying magnitude. For this reason, a very clear distinction must be drawn between the vectors used in the formalism of quantum mechanics and the everyday vectors used in classical mechanics.
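As a simple numerical illustration (ours, with arbitrarily chosen Gaussian component functions), a continuous basis can be approximated on a grid, and the integral inner product of eqn. 1.60 becomes a quadrature sum.

```python
import numpy as np

# Discretize the real interval Phi.
phi = np.linspace(-10.0, 10.0, 4001)
dphi = phi[1] - phi[0]

# Two illustrative square-integrable component functions u(phi) and v(phi).
u = np.exp(-phi**2 / 2.0) * np.exp(1j * phi)
v = np.exp(-(phi - 1.0)**2 / 2.0)

# <u|v> = integral over Phi of dphi * u*(phi) v(phi), approximated on the grid.
inner = np.sum(u.conj() * v) * dphi
print(inner)
```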

CHAPTER 2

Formal Structure of Quantum Mechanics

We now use the mathematical tools developed last chapter to set the stage for quantum mechanics. We begin by listing the correspondence rules that tell us how to represent physical objects mathematically. Then, we develop the fundamental quantum mechanical concept of the state and its associated operator. Next, we investigate the treatment of composite quantum mechanical systems. Throughout this chapter, we work in discrete bases to simplify our calculations and improve clarity. However, following the rigged Hilbert space formalism developed in section 1.4, translating the definitions in this section to an infinite-dimensional space is straightforward both mathematically and physically.

2.1 Fundamental Correspondence Rules of Quantum Mechanics

At the core of the foundation of quantum mechanics are three rules. The first two tell us how to represent a physical object and describe its physical properties mathematically, and the third tells us how the object and properties are connected. These three rules permit us to state a physical problem mathematically, work the problem mathematically, and then interpret the mathematical result physically [4]. The first physical object of concern is the state, which completely describes the physical aspects of some system [4]. For instance, we might speak of the state of a hydrogen atom, the state of a photon, or a state of thermal equilibrium between two thermal baths.


Axiom 2.1 (State Operator). We represent each physical state as a unique linear operator that is self-adjoint, nonnegative, and of unit trace, which acts on a rigged Hilbert space H. We write this operator ρ̂ and call it the state operator.

Now that we have introduced the state, we can discuss the physical concepts used to describe states. These concepts include momentum, energy, and position, and are collectively known as dynamical variables [4].

Axiom 2.2 (Observable). We represent each dynamical variable as a Hermitian linear operator acting on a rigged Hilbert space H whose eigenvalues represent all possible values of the dynamical variable. We write this operator using our hat ( ·̂ ) notation, and call it an observable.

We now link the first two axioms with the third [4].

Axiom 2.3 (Expectation Value). The expectation value, or average measurement of the value of an observable Ô over infinitely many identically prepared states (called a virtual ensemble of states), is written as ⟨Ô⟩ and given by

⟨Ô⟩ ≡ Tr(ρ̂Ô).   (2.1)

Though we claimed that these three axioms form the fundamental framework of modern quantum mechanics, they most likely seem foreign to the reader who has seen undergraduate material. In the next section, we work with the state operator and show that, in a special case, the formalism following from the correspondence rules outlined above is identical to that used in introductory quantum mechanics courses.
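To make axiom 2.3 concrete, here is a small numerical sketch (our own, with an arbitrarily chosen qubit state) that evaluates an expectation value as a trace.

```python
import numpy as np

# An illustrative qubit state operator: self-adjoint, nonnegative, unit trace.
rho = np.array([[0.75, 0.25],
                [0.25, 0.25]], dtype=complex)

# An observable (Hermitian operator); here the Pauli-x matrix.
O = np.array([[0, 1],
              [1, 0]], dtype=complex)

expectation = np.trace(rho @ O).real   # <O> = Tr(rho O), eqn. 2.1
print(expectation)                     # 0.5 for this particular rho
```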

2.2 The State Operator

In axiom 2.1, we defined ρ̂, the state operator. However, the formal definition is very abstract, so in this section we investigate some of the properties of the state operator in an attempt to solidify its meaning.


Physicists divide quantum mechanical states, and thus state operators, into two broad categories. Any given state is either called pure or impure. Sometimes, impure states are also referred to as mixtures or mixed states. We now precisely define a pure state [4].

Definition 2.4 (Pure State). A given state is called pure if its corresponding unique state operator, ρ̂, can be written as

ρ̂ ≡ |ψ⟩⟨ψ|,   (2.2)

where |ψ⟩ ∈ H is called the state vector in a rigged Hilbert space H, ⟨ψ| ∈ H* is the linear functional corresponding to |ψ⟩, and ⟨ψ|ψ⟩ = 1. If a state cannot be so represented, it is called impure.

Although the importance of pure and impure states is not yet evident, we will eventually need an efficient method of distinguishing between them. The definition, which is phrased as an existence argument, is not well-suited to this purpose. To generate a more useful relationship, consider a pure state. We have

ρ̂² = ρ̂ρ̂ = |ψ⟩⟨ψ|ψ⟩⟨ψ| = |ψ⟩(1)⟨ψ| = |ψ⟩⟨ψ| = ρ̂.   (2.3)

Thus, if a state is pure, it necessarily follows [4]

ρ̂² = ρ̂.   (2.4)

Although seemingly a weaker condition, this result turns out to also be sufficient to describe a pure state. To show this, we suppose that our state space is discrete and has dimension D.¹ Invoking the spectral theorem, theorem 1.14, we write

ρ̂ = Σ_{n=1}^{D} ρn |φn⟩⟨φn|,   (2.5)

where {ρn}_{n=1}^{D} is the spectrum of eigenvalues for ρ̂, corresponding to the unit-normed eigenvectors of ρ̂, {|φn⟩}_{n=1}^{D}. If we consider some 1 ≤ j ≤ D with j, D ∈ ℤ and let ρ̂ = ρ̂², we have

ρ̂|φj⟩ = ρ̂²|φj⟩,   (2.6)

which is

ρj |φj⟩ = ρj² |φj⟩,   (2.7)

so

ρj = ρj²   (2.8)

or

ρj (1 − ρj) = 0.   (2.9)

Since all of the eigenvalues of ρ̂ must also follow this relationship, they must all either be one or zero. But by axiom 2.1, Tr ρ̂ = 1, so exactly one of the eigenvalues must be one, while all the others are zero. Thus, eqn. 2.5 becomes

ρ̂ = |φ_{q1}⟩⟨φ_{q1}|,   (2.10)

where q1 labels the single eigenvalue equal to one. Evidently, ρ̂ is a pure state, and we have shown sufficiency [4].

At this point, it is logical to inquire about the necessity of the state operator, as opposed to a state vector alone. After all, most states treated in introductory quantum mechanics are readily represented as state vectors. However, there are many states that are prepared statistically, and so cannot be represented as a state vector. An example of one of these cases is found in section 2.5. These impure states or mixtures turn out to be of the utmost importance when we begin to discuss quantum decoherence, the main focus of this thesis [12]. We now turn our attention to the properties of pure states, and illustrate that the state vectors defining pure state operators behave as expected under our correspondence rules. By axiom 2.3, we know that the expectation value of the dynamical variable (observable) Â of a state ρ̂ is

⟨Â⟩ = Tr(ρ̂Â).   (2.11)

If ρ̂ is a pure state, then we can write

ρ̂ = |ψ⟩⟨ψ|.   (2.12)

Hence, ⟨Â⟩ becomes

⟨Â⟩ = Tr(|ψ⟩⟨ψ|Â),   (2.13)

which, by definition 1.10, is [4]

⟨Â⟩ = Σ_{n=1}^{D} ⟨φn|ψ⟩⟨ψ|Â|φn⟩
    = Σ_{n=1}^{D} ⟨ψ|Â|φn⟩⟨φn|ψ⟩
    = ⟨ψ|Â|ψ⟩,   (2.14)

where we have used definition 1.8 to pick the basis {|φn⟩} to be orthonormal and to contain the vector |ψ⟩.² This is the standard definition for an expectation value in introductory quantum mechanics, which we recover by letting ρ̂ be pure [2, 13].

¹ This is mainly for our convenience. The argument for an infinite-dimensional space is similar, but involves the generalized spectral theorem on our rigged Hilbert space.

² This works since |ψ⟩ is guaranteed to have unit magnitude by definition 2.4.
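A numerical aside (ours): the purity criterion of eqn. 2.4 is easy to test. The sketch below compares a pure qubit state built from a state vector with a 50/50 statistical mixture, for which ρ̂² ≠ ρ̂.

```python
import numpy as np

# A pure state rho = |psi><psi| built from a normalized state vector.
psi = np.array([1.0, 1.0j], dtype=complex) / np.sqrt(2.0)
rho_pure = np.outer(psi, psi.conj())

# An impure state: an equal statistical mixture of |0> and |1>.
rho_mixed = 0.5 * np.eye(2, dtype=complex)

for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    print(name,
          np.allclose(rho @ rho, rho),     # rho^2 == rho only for the pure state
          np.trace(rho @ rho).real)        # purity Tr(rho^2): 1.0 versus 0.5
```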

2.3 Composite Systems

In order to model complex physical situations, we will often have to consider multiple, non-isolated states. To facilitate this, we need to develop a method for calculating the state operator of a composite, or combined, quantum system [4].

Axiom 2.5 (Composite State). Suppose we had a pure composite system composed of n substates, {ρ̂i}_{i=1}^{n}. Then, the composite state operator ρ̂ of this combined system is given by

ρ̂ ≡ ρ̂1 ⊗ ρ̂2 ⊗ ··· ⊗ ρ̂n,   (2.15)

where (⊗) is the tensor product, given in definition 1.15.

Note that if ρ̂ is pure, there exists some characteristic state vector |ψ⟩ of ρ̂ where

|ψ⟩ = |ψ1⟩ ⊗ |ψ2⟩ ⊗ ··· ⊗ |ψn⟩   (2.16)

and each |ψi⟩ corresponds to ρ̂i. As an important notational aside, eqn. 2.16 is frequently shortened to [6]

|ψ⟩ = |ψ1 ψ2 ... ψn⟩,   (2.17)

where the tensor products are taken as implicit in the notation. Just as we discussed dynamical variables associated with certain states, so can we associate dynamical variables with composite systems. In general, an observable of a composite system with n substates is formed by [6]

Ô ≡ Ô1 ⊗ Ô2 ⊗ ··· ⊗ Ôn,   (2.18)

where each Ôi is an observable of the i-th substate. We have now extended the concepts of state and dynamical variable to composite systems, so it is logical to treat an expectation value of a composite system. Of course, since a composite system is a state, axiom 2.3 applies, so we have

⟨Ô⟩ = Tr(ρ̂Ô).   (2.19)

However, composite systems afford us opportunities that single systems do not. Namely, just as we trace over the degrees of freedom of a system to calculate expectation values on that system, we can trace over some of the degrees of freedom of a composite state to focus on a specific subsystem.³ We call this operation the partial trace over a composite system, and we define it precisely below [6].

Definition 2.6 (Partial Trace). Suppose we have an operator

Q̂ = Q̂1 ⊗ Q̂2 ⊗ ··· ⊗ Q̂n.   (2.20)

The partial trace of Q̂ over Q̂i is defined by

Tr_i Q̂ ≡ Q̂1 ⊗ Q̂2 ⊗ ··· ⊗ Q̂_{i−1} · (Tr Q̂i) · Q̂_{i+1} ⊗ ··· ⊗ Q̂n.   (2.21)

If the partial trace is applied to a composite system repeatedly, such that all but one of the subsystem state operators are traced out, the remaining operator is called a reduced state operator [6].

³ Here, a degree of freedom of a state can be thought of as its dimensionality. It is used analogously with the notion in a general system in classical mechanics, where the dimensionality of a system's configuration space corresponds to the number of degrees of freedom it possesses. For more on this, see ref. [14].

2. Formal Structure of Quantum Mechanics

27

? with n De?nition 2.7 (Reduced State Operator). Suppose we have a composite system ρ subsystems. The reduced state operator for subsystem i is de?ned by ? (i) = Tr1 ? Tr2 ? · · · ? Tri?1 ? Tri+1 ? · · · ? Trn ρ ? . ρ (2.22)

The partial trace and reduced state operator turn out to be essential in the analysis of composite systems, although that fact is not immediately obvious. To illustrate this, we consider ? m that acts only on the km th subsystem of a composite system. We choose a basis some observable O |Φk
n k =1

, where each element is formed by the Kronecker product of the basis elements of the

corresponding subsystems. That is, each basis vector has the form |Φk = φ1 φ2 ...φn , where each φl is one of the orthonormal basis vectors of the lth substate space. Then, from axiom 2.3, we have ?m O ?m ?O = Tr ρ
n

=
k=1

? m |Φk ?O Φk | ρ ? m φk φk ...φk . ?O φk1 φk2 ...φkn ρ 1 2 n
k1 ,k2 ,...,kn

=

(2.23)

We use the resolution of the identity, eqn. 1.35, to write our expectation value as ? ? ? ? ?? ρ ? ? ? ? ? ? ?? φ j1 φ j2 ...φ jn ? O φ φ ...φkn , ? ? ? m k1 k2

φk1 φk2 ...φkn
k1 ,k2 ,...,kn

φ j1 φ j2 ...φ jn
j1 , j2 ,..., jn

(2.24)

where φ j1 φ j2 ...φ jn corresponds to a basis vector. This becomes ? φ j1 φ j2 ...φ jn φk1 φk2 ...φkn ρ
k, j

? m φk φk ...φk . φ j1 φ j2 ...φ jn O 1 2 n

(2.25)

? acts as identity on all but the mth subsystem, by eqn. 1.43, we have If the observable O ? φ j1 φ j2 ...φ jn φk1 φk2 ...φkn ρ
k, j

? m φk φ jm O m

φ j1 ...φ jm?1 φ jm+1 ...φ jn φk1 ...φkm?1 φkm+1 ...φkn . (2.26)


Since our chosen basis is orthonormal, for any non-zero term in the sum, we must have j = k (except for jm and km ), in which case the ?nal inner produce is unity. Hence, we get ? φk1 ...φkm ?1 φ jm φkm +1 ...φkn φk1 φk2 ...φkn ρ
k1 ,k2 ,...,kn , jm

? m φk . φ jm O m

(2.27)

?=ρ ?1 ? ρ ?2 ? · · · ? ρ ? n , we have If we apply eqn. 1.43, letting ρ ? 1 φk1 φk1 ρ
k1 ,k2 ,...,kn , jm

? m φ jm · · · φkn ρ ? n φkn ? 2 φk2 · · · φkm ρ φk2 ρ

? m φk , φ jm O m

(2.28)

or ? 1 Tr ρ ? 2 · · · φkm ρ ? m φ jm · · · Tr ρ ?n Tr ρ
km , jm

? m φk . φ jm O m

(2.29)

Since each trace is just a scalar, we can write ? ? ? ? ? 1 Tr ρ ?2 · · · ρ ? m · · · Tr ρ ?n ? Tr ρ ? ? ? ? ? ? ?? φ jm ? O φ . ? ? ? m km

φkm
km

φ jm
jm

(2.30)

Recognizing the de?nition 2.7 for the reduced state operator and the resolution of the identity from eqn. 1.35, we ?nd [4]

?m = O
km

? m φk = Tr ρ ?m . ? O ? (m) 1 ? (m) O φkm ρ m

(2.31)

Due to this remarkable result, we know that the reduced state operator for a particular subsystem is enough to tell us about any observable that only depends on the subsystem. Further, we end up with a formula for the expectation value of a component observable very similar to axiom 2.3 for observables of the full system.

2.4

Quantum Superposition

Though we have introduced some of the basic formalism of the state, we are still missing one of the key facets of quantum mechanics. This piece is the superposition principle, which, at the time of this writing, is one of the core aspects of quantum mechanics that no one fully understands. However, due to repeated experimental evidence, we take it as an axiom.


Axiom 2.8 (Superposition Principle). Suppose that a system can be in two possible states, represented by the state vectors $|0\rangle$ and $|1\rangle$. Then
$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad (2.32)$$
where $\alpha, \beta \in \mathbb{C}$, is also a valid state of the system, provided that $|\alpha|^2 + |\beta|^2 = 1$.
The superposition principle allows us to create new and intriguing states that we would not have access to otherwise. In fact, if we have $n$ linearly independent states of a system, any point on the unit $n$-sphere corresponds to a valid state of the system.⁴ If we consider a two-state system with an orthonormal basis $\{|0\rangle, |1\rangle\}$, the 2-sphere of possible states guaranteed by the superposition principle is conveniently visualized embedded in 3-space. This visualization of a two-state system is called the Bloch sphere representation, and is pictured in figure 2.1 [6]. To calculate the position of a system in Bloch space, we use the formula
$$\hat\rho = r_0\,\hat 1 + \vec r\cdot\vec\sigma, \qquad (2.33)$$
where $\vec r$ is the 3-vector
$$\vec r \equiv r_1\,\hat e_1 + r_2\,\hat e_2 + r_3\,\hat e_3 \qquad (2.34)$$
and $\vec\sigma$ is the vector of Pauli spin matrices,
$$\vec\sigma \equiv \sigma_x\,\hat e_1 + \sigma_y\,\hat e_2 + \sigma_z\,\hat e_3. \qquad (2.35)$$

The Pauli matrices are
$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad (2.36)$$

The reader might wonder why the superposition principle is necessary, after all, we know that state vectors exist in a Hilbert space, and Hilbert spaces act linearly. However, we were not guaranteed until now that any vector of unit norm in Hilbert space represents a valid physical situation. The superposition principle gives us this, which allows us great freedom in constructing states.

4


Figure 2.1: Two-state systems can be visualized as vectors on a two-sphere, known in quantum physics as the Bloch sphere. The angles $\phi$ and $\theta$ are defined in eqn. 2.48 for pure states, and the axes $x$, $y$, and $z$ are defined in eqn. 2.35 for all states.

$$\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad (2.37)$$
and
$$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (2.38)$$
Writing eqn. 2.33 explicitly, we find
$$\hat\rho = \begin{pmatrix} r_0 + r_3 & r_1 - i r_2 \\ r_1 + i r_2 & r_0 - r_3 \end{pmatrix}. \qquad (2.39)$$

This is trivially a basis for all two-by-two matrices, so we can indeed represent any $\hat\rho$ by eqn. 2.33. Further, if we use the fact that $\mathrm{Tr}\,\hat\rho = 1$, we know
$$\mathrm{Tr}\,\hat\rho = (r_0 + r_3) + (r_0 - r_3) = 2 r_0 = 1, \qquad (2.40)$$

so $r_0 = 1/2$. With this constraint in mind, it is conventional to write eqn. 2.33 as [6]
$$\hat\rho = \frac{\hat 1 + \vec r\cdot\vec\sigma}{2}. \qquad (2.41)$$

Also, since $\hat\rho$ is self-adjoint, the diagonal entries must all be real, so $r_3 \in \mathbb{R}$. By the same reasoning,
$$r_1 + i r_2 = \left(r_1 - i r_2\right)^*. \qquad (2.42)$$
Since $r_1$ and $r_2$ are arbitrary, we can choose either of them to be zero, and the resulting equation must hold for all values of the other. Hence, $r_1 = r_1^*$ and $r_2 = r_2^*$, so both $r_1$ and $r_2$ are real, and $\vec r$ is a real-valued vector. Since $\vec r$ is real, we use it as a position vector that tells us the location of the system in Bloch space and call it the Bloch vector. If we have a pure state
$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad (2.43)$$
we can express the location of the state in terms of the familiar polar and azimuthal angles of polar-spherical coordinates. Taking into account our redefined, conventional $\vec r$, eqn. 2.41 is
$$\hat\rho = \frac{\hat 1 + \vec r\cdot\vec\sigma}{2} = \frac{1}{2}\begin{pmatrix} 1 + r_z & r_x - i r_y \\ r_x + i r_y & 1 - r_z \end{pmatrix}. \qquad (2.44)$$

We use the polar-spherical coordinate identities for unit vectors = = = sin θ cos φ, sin θ sin φ, cos θ, (2.45)

rx ry rz

2. Formal Structure of Quantum Mechanics to determine ? ? 1 + rz 1? ? ? ? ? 2? ? r + ir x y ? ? rx ? ir y ? ? ? ? ? ? 1 ? rz ? ? ? 1 + cos θ 1? ? ? ? ? 2? ? sin θ cos φ + i sin θ sin φ ? ? ? 1 + cos θ sin θe?iφ ? ? 1? ? ? ? ? ? ? ? ? 2? ? sin θeiφ 1 ? cos θ ? ? ? ? sin θ cos φ ? i sin θ sin φ ? ? ? ? ? ? ? 1 ? cos θ

32

=

=

? θ ? 2 sin θ 2 cos2 θ 1? ? 2 2 cos 2 ? ? = ? 2? ? 2 sin θ cos θ eiφ 2 sin2 θ 2 2 2 ? θ ?iφ ? ? sin θ cos2 θ ? 2 2 cos 2 e ? = ? ? ? ? sin θ cos θ eiφ sin2 θ 2 2 2

e?iφ

? ? ? ? ? ? ? ? ?

? ? ? ? ? ? . ? ? ?

(2.46)

If we let $\alpha \equiv \cos(\theta/2)$ and $\beta \equiv e^{i\phi}\sin(\theta/2)$, the right side of eqn. 2.33 becomes
$$\begin{pmatrix} |\alpha|^2 & \alpha\beta^* \\ \beta\alpha^* & |\beta|^2 \end{pmatrix} = |\psi\rangle\langle\psi| = \hat\rho. \qquad (2.47)$$

Hence, the state vector of the pure state is [6]
$$|\psi\rangle = \cos\frac{\theta}{2}\,|0\rangle + e^{i\phi}\sin\frac{\theta}{2}\,|1\rangle. \qquad (2.48)$$

We note that the coefficient on $|0\rangle$ is apparently restricted to be real. However, unlike state operators, state vectors are not unique; physically identical state vectors may differ by a phase factor $e^{i\gamma}$ [4]. The notion of superposition also enables us to refine our classification of composite systems. Besides distinguishing between pure and impure states, physicists subdivide composite pure states into two categories: entangled states and product states.

Definition 2.9 (Product State). Suppose $\hat\rho$ is a pure composite quantum system with associated state vector $|\psi\rangle$. If there exist state vectors $|\phi_1\rangle$ and $|\phi_2\rangle$ such that
$$|\psi\rangle = |\phi_1\rangle \otimes |\phi_2\rangle, \qquad (2.49)$$
then we call $|\psi\rangle$ a product state. If no such vectors exist, then we say $|\psi\rangle$ is entangled.


To construct entangled states, we take product states and put them into superposition. In illustration of this concept, we consider the following example.

2.5

Example: The Bell State

An important example of an entangled state of two two-state systems is called the Bell state. Before we define this system, we need to develop some machinery to work with two-state systems. We use the orthonormal basis set introduced previously for a single, pure, two-state system, $\{|0\rangle, |1\rangle\}$, which we represent as column matrices by
$$|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad |1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \qquad (2.50)$$

In this representation, we define an orthonormal basis for two of these two-state systems as [6]
$$\left\{|0\rangle\otimes|0\rangle,\ |0\rangle\otimes|1\rangle,\ |1\rangle\otimes|0\rangle,\ |1\rangle\otimes|1\rangle\right\} = \left\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\right\}, \qquad (2.51)$$
which have matrix representations
$$|00\rangle = \begin{pmatrix}1\\0\\0\\0\end{pmatrix}, \qquad |01\rangle = \begin{pmatrix}0\\1\\0\\0\end{pmatrix}, \qquad |10\rangle = \begin{pmatrix}0\\0\\1\\0\end{pmatrix}, \qquad |11\rangle = \begin{pmatrix}0\\0\\0\\1\end{pmatrix}. \qquad (2.52)$$

By the superposition principle, we define the state
$$|\psi_B\rangle \equiv \frac{|00\rangle + |11\rangle}{\sqrt 2} = \begin{pmatrix} 1/\sqrt 2 \\ 0 \\ 0 \\ 1/\sqrt 2 \end{pmatrix}, \qquad (2.53)$$
which is the Bell state. To check if this state is entangled, we see if we can write $|\psi_B\rangle = |\phi_A\rangle \otimes |\phi_B\rangle$ for some vectors $|\phi_A\rangle$ and $|\phi_B\rangle$. As matrices, this equation is
$$\begin{pmatrix} 1/\sqrt 2 \\ 0 \\ 0 \\ 1/\sqrt 2 \end{pmatrix} = \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} \otimes \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} a_1 b_1 \\ a_1 b_2 \\ a_2 b_1 \\ a_2 b_2 \end{pmatrix}. \qquad (2.54)$$

This is a system of four simultaneous equations: $\tfrac{1}{\sqrt 2} = a_1 b_1$, $0 = a_1 b_2$, $0 = a_2 b_1$, and $\tfrac{1}{\sqrt 2} = a_2 b_2$. Since $\tfrac{1}{\sqrt 2} = a_1 b_1 \neq 0$, we have $a_1 \neq 0$ and $b_1 \neq 0$. Then, since $a_1 b_2 = 0$, $b_2 = 0$. But $\tfrac{1}{\sqrt 2} = a_2 b_2$, so $b_2 \neq 0$, which is a contradiction. Hence, $|\phi_A\rangle$ and $|\phi_B\rangle$ do not exist, so $|\psi_B\rangle$ is entangled [6].

Next, we compute the state operator corresponding to $|\psi_B\rangle$. By definition 2.4, since the Bell state is pure by construction, its state operator is
$$\hat\rho = |\psi_B\rangle\langle\psi_B| = \frac{|00\rangle + |11\rangle}{\sqrt 2}\,\frac{\langle 00| + \langle 11|}{\sqrt 2} = \frac{|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|}{2} = \begin{pmatrix} \tfrac12 & 0 & 0 & \tfrac12 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ \tfrac12 & 0 & 0 & \tfrac12 \end{pmatrix}. \qquad (2.55)$$

Even though we constructed the Bell state from a state vector, we will explicitly verify its purity as an example. We find
$$\hat\rho^2 = \frac{|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|}{2}\cdot\frac{|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|}{2} = \frac{2\left(|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|\right)}{4} = \frac{|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|}{2} = \hat\rho, \qquad (2.56)$$
which confirms that the Bell state is pure. Next, suppose we want to measure some particular facet of the first subsystem. Since the Bell state is entangled, we cannot "eyeball" the result, but rather we need to use the reduced state machinery we developed in definition 2.7. The reduced state operator for the first subsystem is

$$\hat\rho^{(1)} = \mathrm{Tr}_2\,\hat\rho = \mathrm{Tr}_2\!\left(\frac{|00\rangle\langle 00| + |00\rangle\langle 11| + |11\rangle\langle 00| + |11\rangle\langle 11|}{2}\right) = \frac{1}{2}\Big(|0\rangle\langle 0|\,\mathrm{Tr}\left(|0\rangle\langle 0|\right) + |1\rangle\langle 0|\,\mathrm{Tr}\left(|1\rangle\langle 0|\right) + |0\rangle\langle 1|\,\mathrm{Tr}\left(|0\rangle\langle 1|\right) + |1\rangle\langle 1|\,\mathrm{Tr}\left(|1\rangle\langle 1|\right)\Big) = \frac{1}{2}\Big(|0\rangle\langle 0|\cdot 1 + |1\rangle\langle 0|\cdot 0 + |0\rangle\langle 1|\cdot 0 + |1\rangle\langle 1|\cdot 1\Big) = \frac{|0\rangle\langle 0| + |1\rangle\langle 1|}{2} = \begin{pmatrix} \tfrac12 & 0 \\ 0 & \tfrac12 \end{pmatrix}. \qquad (2.57)$$

Oddly enough,
$$\left(\hat\rho^{(1)}\right)^2 = \frac{|0\rangle\langle 0| + |1\rangle\langle 1|}{4} \neq \hat\rho^{(1)},$$
so $\hat\rho^{(1)}$ is impure [6]. Surprisingly, a pure composite system does not necessarily contain pure subsystems. If we express $\hat\rho^{(1)}$ in terms of the Pauli matrices and the identity as in eqn. 2.33, we find that the Bloch vector corresponding to $\hat\rho^{(1)}$ is $\vec r = 0$. We already noted that in Bloch space, the unit two-sphere represents all the possible pure state configurations for a two-state system. However, the unit ball represents all state configurations; the impure states have $\vec r\cdot\vec r < 1$ [6]. The Bell state, with $\vec r\cdot\vec r = 0$, is a special case of a totally mixed or impure state, meaning that the


subsystem is entirely statistical (classical). By symmetry, if we had traced out the first subsystem rather than the second, we would find $\hat\rho^{(1)} = \hat\rho^{(2)}$, so we actually have an entangled state composed of totally classical subsystems.
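These Bell-state results are easy to check numerically. The short sketch below (an illustration only, not part of the derivation) builds the state operator of eqn. 2.55, confirms its purity, and verifies that the reduced state operator is the maximally mixed state with vanishing Bloch vector.

```python
# Numerical check of the Bell-state example (eqns. 2.53-2.57).
import numpy as np

ket00 = np.kron([1.0, 0.0], [1.0, 0.0])
ket11 = np.kron([0.0, 1.0], [0.0, 1.0])
psi_B = (ket00 + ket11) / np.sqrt(2)                     # eqn. 2.53
rho = np.outer(psi_B, psi_B)                             # eqn. 2.55

print(np.allclose(rho @ rho, rho))                       # True: rho is pure (eqn. 2.56)

# Reduced state operator of subsystem 1 (partial trace over subsystem 2)
rho1 = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))    # eqn. 2.57
print(rho1)                                              # [[0.5, 0], [0, 0.5]]
print(np.allclose(rho1 @ rho1, rho1))                    # False: rho^(1) is impure

# Bloch vector components r_i = Tr(rho1 sigma_i) all vanish for the Bell state.
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])
print([np.trace(rho1 @ s).real for s in (sx, sy, sz)])   # [0.0, 0.0, 0.0]
```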

2.6

Projection Onto a Basis

So far, we have worked mostly in an abstract Hilbert space, although we have taken brief forays into matrix representations of states and observables. In this section, we formalize the notion of a representation of an operator in a basis. We are mainly interested in infinite and continuous bases, which we use to define a very useful structure [13].

Definition 2.10 (Wavefunction). Suppose that we have an infinite and continuous basis for $\mathcal H$, $\{|x\rangle\}_{x\in\mathbb R}$. Then, for some pure state vector $|\psi\rangle \in \mathcal H$, we form the wavefunction
$$\psi : \mathbb R \to \mathbb C \qquad (2.58)$$
defined by
$$\psi(x) \equiv \langle x|\psi\rangle. \qquad (2.59)$$

We note that if
$$|\psi\rangle = \sum_{x\in\mathbb R} a_x\,|x\rangle, \qquad (2.60)$$
then
$$\langle\psi| = \sum_{x\in\mathbb R} a_x^*\,\langle x|, \qquad (2.61)$$
so
$$\mathrm{Tr}\,\hat\rho = \sum_{x\in\mathbb R} \langle x|\psi\rangle\langle\psi|x\rangle = \sum_{x\in\mathbb R} \psi^*(x)\,\psi(x), \qquad (2.62)$$
where we have used the complex symmetry of the inner product given by eqn. 1.6. But since this is a sum over a continuous interval, it can be written as an integral. We obtain
$$\mathrm{Tr}\,\hat\rho = \int dx\;\psi^*(x)\,\psi(x) = \int dx\;\left|\psi(x)\right|^2 = 1, \qquad (2.63)$$
as the state operator has unit trace. Since our sum is infinite, it must be that $\psi(x)$ decays at infinity sufficiently fast such that the integral converges. This special class of functions is known as the set


of square-normalizable functions, and is often denoted $L^2$. Physically, this means that the wavefunction must be localized in some sense, so that at extreme distances it is effectively zero.

Just as we projected a vector into a basis and obtained a function, we can project a linear operator acting in Hilbert space onto a basis to obtain a linear operator in function space. We denote such an operator with a check ($\check{\ }$), and define it by [13]
$$\check O\,\psi(x) \equiv \langle x|\,\hat O\,|\psi\rangle, \qquad (2.64)$$
where $\hat O$ is an operator on a Hilbert space. An interesting application of this considers the matrix elements, given by eqn. 1.21, of a state operator $\hat\rho$ in the position basis. If $\hat\rho$ is pure, then
$$\rho(x, y) = \langle x|\,\hat\rho\,|y\rangle = \langle x|\psi\rangle\langle\psi|y\rangle = \psi(x)\,\psi^*(y). \qquad (2.65)$$

Since we previously established that every valid wavefunction must vanish quickly at infinity, it follows that sufficiently off-diagonal elements of the state operator must vanish quickly, as must distant points along the diagonal.

CHAPTER

3

Dynamics

Quantum dynamics is the framework that evolves a quantum state forward in time. We begin by considering the Galilei group of transformations, under which all non-relativistic physics is believed to be invariant. We show that this group leads to the fundamental commutator relations that govern quantum dynamics, and then use them to derive the famous Schrödinger equation. Finally, we consider the free particle in the position basis.

3.1

The Galilei Group

Fundamental to the notion of dynamics is the physical assumption that certain transformations will not change the physics of a situation [4]. All known experimental evidence supports this assumption, and it seems reasonable mathematically. This set of transformations forms a group, called the Poincaré group of space translations, time translations, and Lorentz transformations.¹ However, for our purposes we take $v \ll c$, so the Poincaré group becomes the classical Galilei group, which we take as an axiom. For clarity, we assume a pure state in one temporal and one spatial dimension, but this treatment can be readily extended to impure states in three-dimensional space [16].
1

The term group here is used in the formal, mathematical sense. We will not dwell on many of the subtleties that arise due to this here, and the interested reader is directed to ref. [15].


Axiom 3.1 (Invariance Under the Galilei Group). Let G be the Galilei group, which contains elements generated by the composition of the operators ˇ ψ(x, t) S ˇ T ψ(x, t) ˇ ψ(x, t) L = ψ( x + , t ) = ψ(x, t + ) = ψ(x + t, t), (3.1)

where ψ(x, t), given by de?nition 2.10, is a function of position and time, and (ˇ) represents ? be ˇ ∈ G and let A an operator on the space of such functions, as de?ned in eqn. 2.64. Let g an observable of the state ψ with eigenvectors φn and eigenvalues {an }n∈R . n∈R ? ˇ = v , we assert Then, if A φn = an φn and for all wavefunctions v(x, t), gv ? φn ≡ an φn A and φn ψ
2

(3.2)
2

≡ φn ψ

.

(3.3)

In essence, eqns. 3.2 and 3.3 refer to the invariance of possible measurement and invariance of probable outcome, and thus the invariance of all physics, under the Galilei group. We now write a motivating identity using the Galilei group. Considering the state wavefunction ψ(x, t), we ?nd [16] ˇ ?1 T ˇ ?1 L ˇ T ˇ ψ(x, t) = L = = = = = = ˇ? T ˇ? L ˇ T ˇ ψ(x, t) L ˇ? T ˇ? L ˇ ψ(x, t + ) L ˇ? T ˇ ? ψ(x + (t + ), t + ) L ˇ ? ψ(x + (t + ), t) L ψ(x + (t + ) ? t, t) ψ(x +
2

, t) (3.4)

ˇ 2 ψ(x, t). S

We conclude that these transformations do not commute, which will play a major role in the dynamics of quantum mechanics. Before we move to a Hilbert space, we need to convert our Galilei group into a more useful form. Due to eqn. 3.3, we can make use of Wigner’s theorem, ? on a Hilbert which guarantees that any Galilei transformation corresponds to a unitary operator U

3. Dynamics space that obeys2 ?. ?U ?? = U ? ?U ? =1 U

40

(3.5)

? is an observable. If we take ? is a unitary representative of a Galilei group member and A Thus, if U that ? |u |u ≡ U for all |u ∈ H , we have ?U ? φn = an φn ? A ? φn = an U ? φn , A (3.6)

(3.7)

so ?U ? an φn = an φn . ? ?U ? φn = U ? ?A U Hence, we get ?U ? φn ? U ? φn = an φn ? an φn = 0 ? ?A ?. A ? , we have [4] Since this equation holds for all eigenvectors of A ??U ?U ?=U ?U ? =U ?U ? ?A ? =0?A ? ?A ? ?A ?A ? ?. A (3.9) (3.8)

(3.10)

We now take our unitary transformation to be a function of a single parameter, t, subject to ? Then, for small t, we take the Taylor expansion of U ? (t1 + t2 ) = U ? (t1 )U ? (t2 ) and U ? (0) = 1. ? about U t = 0 to get3 ? ? + t dU ? (t) = 1 U dt
2 t=0

+ ... .

(3.11)

Wigner’s theorem is complicated to prove. See ref. [17] for a thorough treatment.

We will be making frequent use of the Taylor expansion. Readers unfamiliar with it are advised to see ref. [10].

3

3. Dynamics Similarly, we know that [16] ? 1 ?U ?? = U ? ?? ? + t dU U + ... = 1 dt t=0 ? ? ?? ? + t dU U ? +U ? dU = 1 + ... dt dt t=0 ? ?? ? + t dU + dU + ... , ? 1 dt dt t=0

41

(3.12) ? Since 1 ? =U ? (0) = U ? ? (0) = 1. ?U ? ? for all t, it must be that as t ? 0 and U ? ?? dU dU + dt dt We now let ? dU dt
t=0

?. =0
t=0

(3.13)

?, ≡ iK

(3.14)

? to ? is self-adjoint. We impose the boundary condition U ? (0) = 1 which is well-de?ned so long as K ?nd the solution to this ?rst order di?erential equation,
? ? (s) = eiKt U .

(3.15)

Since any unitary operator can be represented in this form, we now de?ne the three generating operators of the Galilei group. They are [16] ˇ x ψ(x) = S ˇ t ψ(x) = T ˇ v ψ(x) = L
? ? x ψ ≡ x| e?ixp x| S ψ

? t ψ ≡ x| e?ith ψ x| T ? v ψ ≡ x| eiv f? ψ , x| L (3.16)

?

? , and p ? are self-adjoint, and the particular signs and parameters associated with the where f?, h transformations are matters of convention.


3.2

Commutator Relationships

? , obeys the eigenvalue We next introduce three particular observables. First, the position operator, Q equation ? |x = x |x , Q (3.17)

where |x is an eigenvector of the position, i.e. a state of de?nite position. Second, the momentum ? , follows operator, P ? p =p p . P (3.18)

We require that the expectation values of these operators follow the classical relationship [2] ? d Q dt

? ≡ P

.

(3.19)

? , also known as the Hamiltonian, in analogy to the Further, we de?ne the energy operator H classical total energy of a system, which is the kinetic energy P2 /(2m) plus some potential energy V . It is ? 2 + V. ? ≡ 1 P H 2m First, note that ?P ?= H ?,P ? = 0. Next, recall that so H ? ψ(t) . ψ(t + ) = T By the de?nition of the derivative, we have [16] d ψ(t) dt ψ(t + ) ? ψ(t) e?i
? h

(3.20)

1 ?2 ? =P ? 1 P ?2 + V = P ?H ?, P +V P 2m 2m

(3.21)

(3.22)

= = = = =

lim
→0

lim
→0

ψ(t) ? ψ(t)
2

? + ?i h ? /2 + ... ψ(t) ? ψ(t) 1?i h lim
→0 →0

? ψ(t) ? h ? 2 ψ(t) + ... lim ?ih ? ψ(t) . ?ih (3.23)

3. Dynamics Following identical logic, we ?nd [16] d ?. ψ(t) = +i ψ(t) h dt Since ψ(t) is pure, we use eqn. 2.14 to write d ? Q (t) = dt = = = = d ? ψ( t ) ψ(t) Q dt d ? ψ(t) + ψ(t) Q ? d ψ(t) ψ(t) Q dt dt ?Q ? ψ(t) ? ψ(t) ? ψ(t) Qi ? h i ψ(t) h ?Q ? ψ(t) ? ?Q ?h ψ(t) i h ?, Q ? ψ(t) , ψ(t) i h

43

(3.24)

(3.25)

so d ? ?, Q ? . Q (t) = i h dt Then, by eqn. 3.19, we have 1 ? ?, Q ? P = i h m ? ψ(t) 1 ? ?, Q ? ψ(t) . P ψ(t) = ψ(t) i h m (3.26)

(3.27)

Since this result holds for arbitrary ψ(t) , we get 1 ? ?, Q ? , P=i h m or [16] ? =i1P ?,h ?. Q m (3.29)

(3.28)

We next continue working with the position operator to derive a second relation. Recall that from eqn. 3.10, a unitary transformation de?ned by ψ ? ψ =U

(3.30)

transforms an operator as ? =U ?U ?A ? ?. A (3.31)

3. Dynamics

44

? ? x0 = e?ix0 p ? to Q ? according So, if our unitary operator is S , we can transform the position operator Q

to
? ? +ix0 p ?ix0 p ? =S ? x0 Q ?S ?? Q Qe ? . x0 = e

(3.32)

? , we know4 By our de?nition of Q ? |x = x |x ? Q ? |x = x |x . Q

(3.33)

Further, eqn. 3.2 tells us ? |x = x |x . Q Thus, ? ?Q ? |x = (x ? x ) |x = (x ? (x + x0 )) |x = ?x0 |x .5 Q Note that this relationship holds for arbitrary x0 , and hence for all |x . This implies [16] ? =Q ? ? x0 . Q ? , we have Recalling our de?nition for Q
? ? +ix0 p ? ? x0 . e?ix0 p Qe ? = Q

(3.34)

(3.35)

(3.36)

(3.37)

As before, we expand the exponential terms in a Taylor series to obtain ? ? ? ? ? ? ?


n=1

? ?ix0 p n!

n?

? ? ? ? ? ? ?? ? ? ? ?Q? ?



n=1

? ix0 p n!

n?

? ? ? ? ? ?

=

? 1 + ix0 p ? + ... Q ? + ... 1 ? ix0 p

? ? ix0 p ? + ix0 Q ?p ?Q ? + ... = Q ? + ix0 Q ?p ? + ... ??p ?Q = Q ? + ix0 Q ?,p ? + ... = Q ? ? x0 . = Q
4

(3.38)

? , as the spectrum of allowed positions This is because |x and |x are valid eigenvectors of Q ? (the eigenvalues for Q) is the entire real line.
5

ˇ x0 ψ(x) = ψ(x0 + x) = ψ(x ). Note that x = x + x0 , since S

3. Dynamics Hence, in the limit as x0 → 0, eqn. 3.37 is [16] ?,p ?,p ? = ?1 ? Q ? =i i Q

45

(3.39)

? v0 = e+iv f?, we get Next, we examine the momentum operator. Taking our unitary operator to be L ? = e+iv0 f?Pe ? ?iv0 f?. P

(3.40)

If we operate on states of de?nite momentum, we know ? p = p p = mv p . P

(3.41)

By direct analogy with the states of de?nite position above, we ?nd [16] ? ?iv0 f? = P ? ? mv0 . ? = e+iv0 f?Pe P

(3.42)

As above, we ?nd the Taylor expansion of the exponentials to obtain ? ? ? ? ? ? ? ?
∞ n? ? ? +iv0 f? ? ? ? ? ? ? ?? P ? ? ? ? ? ? n! ∞ n? iv0 f? ? ? ? ? ? ? n! ?

=

? 1 ? iv0 f? + ... 1 + iv0 f? + ... P

n=1

n=1

? + iv0 f?P ? ? iv0 P ? f? + ... = P ? + iv0 f?P ? ?P ? f? + ... = P ? + iv0 f?, P ? + ... = P ? ? mv0 . = P (3.43)

In the limit as v0 → 0, we have ? = ?m ? f?, P ? = im. i f?, P ?, in which case we have [16] It is a convention to de?ne f? ≡ mq ? = i. ?, P q

(3.44)

(3.45)

We now have
$$\left[\hat P, \hat H\right] = 0, \qquad \left[\hat Q, \hat h\right] = i\,\frac{1}{m}\,\hat P, \qquad \left[\hat Q, \hat p\right] = i, \qquad \left[\hat q, \hat P\right] = i. \qquad (3.46)$$
We make the standard definition for the position, momentum, and energy operators in terms of the Galilei group generators. It is [16]
$$\hat Q \equiv \hbar\,\hat q, \qquad \hat P \equiv \hbar\,\hat p, \qquad \hat H \equiv \hbar\,\hat h, \qquad (3.47)$$
where $\hbar$ is a proportionality constant known as Planck's reduced constant, and is experimentally determined to be $\approx 10^{-34}$ joule-seconds in SI units. Then, eqn. 3.46 reads [16]
$$\left[\hat P, \hat H\right] = 0, \qquad (3.48)$$
$$\left[\hat Q, \hat H\right] = i\hbar\,\frac{1}{m}\,\hat P, \qquad (3.49)$$
where
$$\left[\hat Q, \hat P\right] = i\hbar \qquad (3.50)$$

is especially important, and is called the canonical commutator. As a consequence of our work so far this chapter, we now are in the position to evolve a state ? in time. From eqn. 3.10, we have operator ρ ? =U ?U ?A ?? A

(3.51)

3. Dynamics
?

47

? . Letting A ?=ρ ? =T ? t = e?itH/ , we have ? , the state operator, and U for an arbitrary observable A ? = e?itH/ ρ ? e+itH/ . ρ
? ?

(3.52)

Thus, by the de?nition of the derivative, ? ?ρ ? ? eitH/ ? ρ ? ρ e?itH/ ρ ?t ρ ? = lim ? = lim . t→0 t→0 t t
? ?

(3.53)

Expanding the exponential terms in a Taylor series, we get 1?
? itH

?t ρ ? ?

= lim
t→0

? 1+ + ... ρ t ? iH + ...

? itH

? + ... ? ρ

= lim ?
t→0

? iH

?+ρ ? ρ ? iH

= ? = i

? iH

?+ρ ? ρ

? , ? H ρ,

(3.54)

so the equation of motion for the state operator is
$$\partial_t\,\hat\rho = \frac{i}{\hbar}\left[\hat\rho, \hat H\right]. \qquad (3.55)$$
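As a quick numerical illustration of this equation of motion (a sketch of our own, not part of the derivation; the two-level Hamiltonian below is an arbitrary choice and we set $\hbar = 1$), we can evolve a state operator by eqn. 3.52 and confirm that the result satisfies eqn. 3.55.

```python
# Check that rho(t) = exp(-iHt) rho exp(+iHt) obeys d(rho)/dt = (i/hbar)[rho, H].
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -1.0]])          # arbitrary self-adjoint Hamiltonian
psi0 = np.array([1.0, 0.0])
rho0 = np.outer(psi0, psi0)

def rho_t(t):
    U = expm(-1j * H * t / hbar)                 # unitary time-translation operator
    return U @ rho0 @ U.conj().T                 # eqn. 3.52

t, dt = 0.7, 1e-6
lhs = (rho_t(t + dt) - rho_t(t - dt)) / (2 * dt)           # numerical d(rho)/dt
rhs = (1j / hbar) * (rho_t(t) @ H - H @ rho_t(t))          # (i/hbar)[rho, H]
print(np.allclose(lhs, rhs, atol=1e-6))                    # True
```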

3.3

The Schrödinger wave equation

Now that we have the commutator relations in eqn. 3.49, we can touch base with elementary quantum mechanics by deriving the Schr?dinger wave equation. We work in the position basis, where our basis vectors follow ? |x = x |x . Q Considering some state vector ψ =
x∈R

(3.56)

ax |x ,

(3.57)

3. Dynamics its wavefunction, given by de?nition 2.10, is ? ? ? ? ψ(x) = x ψ = x| ? ? ? ? , we ?nd by eqn. 2.64 that Considering Q ˇ ψ(x) = x| Q ? ψ = x| x ψ = x x ψ = xψ(x). Q ? ? ? ? = ax . ax |x ? ? ?

48

(3.58)

x∈R

(3.59)

ˇ turns out to be multiplication by x. Using this result with eqn. 2.14, we So, in the position basis, Q ?nd [2] ? Q = ? ? ? ? = ? ? ? = ? ψ ψ Q ? ? ? ? ? ? ? ? ? ? ? ?Q ? ?? a x|? y ? ? y ? ? ? ? ? ? x∈R y∈R ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? a x a Q y | ? ? y ? ? x ? ? ?? ? ? x∈R y∈R ? ?? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? a y y a x | ? ? y ? ? x ? ? ? ?? ? a? x
x∈R y∈R

= =

a? x xax
x∈R

= =

dx · ψ(x)? xψ(x) ˇ ψ(x). dx · ψ(x)? Q (3.60)

We would like to ?nd a similar expression for momentum within the position basis. To do this, we ?,P ? = i , which corresponds to consider the canonical commutator from eqn. 3.46, Q ?,P ? =i x| Q ˇ,P ˇ ψ(x) = i ψ(x) ? Q

(3.61)

3. Dynamics Considering some dummy function f (x), we have [2] i f (x) = = = = = df df ?x +i f i dx i dx df df x ?x ? f i dx i dx i df d ? x· f x i dx i dx d d x ? x f i dx i dx x ˇP ˇ f. ˇ ?P ˇQ Q

49

(3.62)

ˇ f = x f in the position basis, Since we know Q ˇf = P d f. i dx

(3.63)

We now drop our test function to obtain the famous operator relationship
$$\check P = \frac{\hbar}{i}\,\frac{d}{dx}. \qquad (3.64)$$
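The defining property of this representation is that, acting on functions, $\check Q$ (multiplication by $x$) and $\check P$ reproduce the canonical commutator. The symbolic check below is an illustrative sketch (assuming only an arbitrary smooth test function), not part of the derivation.

```python
# Verify [Q, P] f = i*hbar*f with Q = x and P = (hbar/i) d/dx acting on a test function.
import sympy as sp

x = sp.symbols('x')
hbar = sp.Symbol('hbar', positive=True)
f = sp.Function('f')(x)

Q = lambda g: x * g                            # multiplication by x
P = lambda g: (hbar / sp.I) * sp.diff(g, x)    # (hbar/i) d/dx

commutator = Q(P(f)) - P(Q(f))
print(sp.simplify(commutator - sp.I * hbar * f))   # 0, i.e. [Q, P] f = i*hbar*f
```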

Now, recall from eqn. 3.22, ψt= It follows that ?t ψt = ? ?t e?i ? so ? ˇt ψt (x) = x| ? ?t ψt = x| ? iH ψt = ? i x| H ? ψt = ? i H ˇ ψt (x). ? But by eqn. 3.20, ˇ ψt (x) = x| H ? ψt = x | H 1 ?2 1 2 ˇ ψt (x) + V ψt (x). P + V ψt = p 2m 2m (3.67)
?/ H

? ψt=0 = e?i =T

?/ H

ψ0 .

(3.65)

ψ0

=?

? iH

e?i

?/ H

ψ0 = ?

? iH

ψt ,

(3.66)

(3.68)

Hence, we have ˇt ψt (x) = ? i 1 P ˇx ˇ 2 ψt (x) ? V ψt (x) = ? i 1 ? ? 2m 2m i
2

ψt (x) ? V ψt (x).

(3.69)

3. Dynamics This is rewritten as

50

i

2 ?2 ψ(x, t) ?ψ(x, t) =? + V ψ(x, t), 2m ?x2 ?t

(3.70)

and is the time-dependent Schr?dinger equation [2]. Remarkably, so long as V is time-independent, this equation turns out to be separable, so we can e?ectively pull o? the time-dependence. To do this, we suppose [2] ψ(x, t) ≡ ψ(x)?(t),

(3.71)

and substitute into the Schr?dinger equation. We have
2 ?2 ψ(x)?(t) ?ψ(x)?(t) =? + V ψ(x)?(t), 2m ?t ?x2

i

(3.72)

which is i ψ(x) or i provided ?(t), ψ(x)
2 1 ??(t) 1 ?2 ψ(x) =? + V, ?(t) ?t 2m ψ(x) ?x2 2 ?2 ψ(x) ??(t) + ?(t) = V ψ(x)?(t), 2m ?t ?x2

(3.73)

(3.74)

0. We now have two independent, single-variable functions set equal , so we

know each of the functions must be equal to some constant, which we name E. That is, we have [2] E = E = 1 ˇ dt ?(t), ?(t) 1 ˇ2 d ψ(x) + V, 2m ψ(x) x
2

i ?

(3.75)

where we have let the partial derivatives go to normal derivatives, since we now have single-variable functions. The time-dependent piece has the solution ?(t) = e?iEt/ ,

(3.76)

and the time-independent piece is usually written as [2]

3. Dynamics
2

51

?

2m

ˇ2 ψ(x) + V ψ(x) = Eψ(x), d x

(3.77)

which is the time-independent Schr?dinger equation. Although this result cannot be reduced further without specifying V , we can use eqn. 3.68 to ?nd ˇ ψ(x) = Eψ(x). H

(3.78)

ˇ, This means that the values for the separation constant E are actually the possible eigenvalues for H the position representation of the Hamiltonian (energy operator). Further, if we ?nd ψ(x), we can construct ψ(x, t) by ψ(x, t) = ?(t)ψ(x) = e?iEt/ ψ(x). If we compare this to eqn. 3.22,
?/ ? t ψt=0 = e?itH ψt = T ψ0 ,

(3.79)

(3.80)

we ?nd a distinct similarity between the form of time evolution in Hilbert space using the unitary ? t operator and time evolution in position space using the complex exponential of the eigenvalues T ˇ operator on function space. of the associated H

3.4

The Free Particle

Now that we have derived the Schr?dinger equation, we will put it to use by treating the case of a free particle, when the potential V = 0. In this case, the time-independent Schr?dinger equation (eqn. 3.77) reads
2

? which we write as

2m

ˇ2 ψ(x) = Eψ(x), d x

(3.81)

ˇ2 ψ(x) = ?k2 ψ(x), d x where k≡ √ 2E

(3.82)

.

(3.83)

3. Dynamics This equation has a solution [13] ψ(x) = Aeikx ,

52

(3.84)

which is sinusoidal with amplitude A. Note that we identi?ed the constants in our equation as k with some foresight, as it turns out to be the wave number, k = 2π/λ, of the solution. However, this solution does not decay at in?nity, so the condition imposed by de?nition 2.4 is violated. That is [2], ? ? ? ? ? ? ? ?? ? ? ? ? ?? ? ? a? ? x x|? ?? ? a? x ay
x∈R y∈R

ψψ

= = =

x∈ R

y∈R

? ? ? ? ay y ? ? ? ?

xy

a? x ax
x∈R

= = = =

dx · ψ? (x)ψ(x) dx · A? e?ikx Aeikx |A|2 ∞, dx (3.85)

so we cannot pick appropriate A such that ψ ψ = 1. Hence, ψ must not be a physically realizable state. The resolution to this problem is to use a linear combination of states with di?erent values for A. The general formula for this linear combination is [2]

ψ(x) =

dk · φ(k)eikx ,

(3.86)

where φ(k) is the coe?cient that replaces A in our linear combination. Each of the component states of this integral are called plane waves, while the linear combination is called a wave packet. We will make use of the plane wave components for free particles later, so we need to investigate their form further. Consider the eigenvalue problem ˇ fp (x) = p fp (x), P

(3.87)

3. Dynamics

53

where fp is an eigenfunction and p is an eigenvalue of the momentum operator in the position basis. Using eqn. 3.63, we write i This has a solution [2] fp (x) = Aeipx/ , (3.89) ˇx fp (x) = p fp (x). d (3.88)

which is of identical form to eqn. 3.84. If we identify the eigenfunctions of the position operator with the plane wave states, we get the famous de Broglie relation [2, 4, 11, 13], p = k.

(3.90)

Recall that plane wave states are not normalizable, and thus cannot be physically realizable states. This means that in the position basis, states of de?nite momentum are not permissible, which is a famous consequence of the Heisenberg uncertainty principle.6 The wave packet, then, can be thought of as a superposition of states of de?nite momentum, giving rise to a state of de?nite position. That is [2], ψ(x) = where we have switched to units in which dp · φ(p)eipx , (3.91)

≡ 1, as we will do for the remainder of this thesis.

The uncertainty principle reads ?x?p ≥ /2 [2]. If we have a state of de?nite position, ?p = 0, so, roughly, ?x = ∞. This is the result that we have already seen; states of de?nite momentum are not square-normalizable in the position basis.
6

CHAPTER

4

The Wigner Distribution

The Wigner distribution was the first quasi-probability distribution used in physics. Invented in 1932 by E.P. Wigner, it remains in wide use today in many areas, especially quantum mechanics and signal analysis [18]. The Wigner distribution has been used to develop an entirely new formalism of quantum mechanics in phase-space, a space of position vs. momentum, which we touch on briefly in section 4.5 [19]. In the following chapter, we first define the Wigner distribution and derive some of its fundamental properties. Next, we discuss the Wigner distribution of a combined system and treat a free particle. Following that, we extend the distribution to its associated transform. We then create a table of useful inverse relationships between the state operator and Wigner distribution required in subsequent sections. Finally, we construct the Wigner distribution for a simple harmonic oscillator as an example and observe its correspondence to a classical phase-space probability distribution.

4.1

Definition and Fundamental Properties

In this section, we explore the basic properties of the Wigner distribution, starting with its de?nition, which is stated below [20]

54


Definition 4.1 (The Wigner distribution). Consider the matrix elements of some state operator, given by $\rho(x,y) = \langle x|\,\hat\rho\,|y\rangle$. Then, the Wigner distribution $W$ associated with $\hat\rho$ is given by
$$W(\bar x, p, t) \equiv \frac{1}{2\pi}\int d\delta\; e^{-ip\delta}\,\rho(x, y, t), \qquad (4.1)$$
where $\bar x = (x+y)/2$ and $\delta = x - y$. This is also usefully written
$$W(\bar x, p) = \frac{1}{2\pi}\int d\delta\; e^{-ip\delta}\,\rho\!\left(\bar x + \tfrac{1}{2}\delta,\ \bar x - \tfrac{1}{2}\delta\right), \qquad (4.2)$$
where time dependence is understood and not written explicitly.
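For a pure state, $\rho(x,y) = \psi(x)\psi^*(y)$ by eqn. 2.65, so definition 4.1 can be evaluated directly from a wavefunction. The following numerical sketch (our own illustration; the Gaussian test wavefunction and the quadrature grid are arbitrary choices, with $\hbar = 1$) does exactly that.

```python
# Evaluate W(xbar, p) = (1/2pi) \int d(delta) e^{-ip delta} psi(xbar+delta/2) psi*(xbar-delta/2).
import numpy as np

def wigner(psi, xbar, p, delta):
    d_delta = delta[1] - delta[0]
    integrand = (np.exp(-1j * p * delta)
                 * psi(xbar + delta / 2)
                 * np.conj(psi(xbar - delta / 2)))
    return (integrand.sum() * d_delta / (2 * np.pi)).real

psi0 = lambda x: np.pi**(-0.25) * np.exp(-x**2 / 2)     # Gaussian test wavefunction
delta = np.linspace(-20, 20, 4001)

print(wigner(psi0, 0.0, 0.0, delta))   # ~ 1/pi ~ 0.3183
print(wigner(psi0, 2.0, 1.0, delta))   # ~ (1/pi) exp(-5), cf. the ground-state result in section 4.5
```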

Note that the Wigner distribution is given by a special case of the Fourier transform of the state operator with respect to the mean ($\bar x$) and difference ($\delta$) coordinates.¹ Using this definition, we now list and verify some of the well-known properties of the Wigner distribution.

4.1.1

Inverse Distribution

As one might guess, just as the Wigner distribution is defined in terms of the state operator, it is possible to define the state operator in terms of the Wigner distribution. This distinction is arbitrary: valid formulations of quantum mechanics have been made with the Wigner distribution as the primary object, while the state operator takes a secondary seat. However, historically the state operator and its associated vector have been the objects of primary importance in the development of quantum mechanics [21]. If we wish to express a state operator in terms of an associated Wigner distribution, we can make use of the relation [22]
$$\rho(x, y) = \int dp\; e^{ip\delta}\,W(\bar x, p). \qquad (4.3)$$
In order to show that this is well-defined, we note that the Plancherel theorem states [2]
$$f(p) = \frac{1}{2\pi}\int d\delta\; e^{-ip\delta}\,\mathcal F[f](\delta) \;\Leftrightarrow\; \mathcal F[f](\delta) = \int dp\; e^{ip\delta}\,f(p), \qquad (4.4)$$
for some function $f$ and its Fourier transform $\mathcal F[f]$, so long as the functions decay sufficiently fast at infinity. From this, it is evident that the state operator is a kind of Fourier transform of the
1

We assume some basic familiarity with the Fourier transform. If this topic is unfamiliar, the reader is advised to see ref. [10].

4. The Wigner Distribution

56

Wigner distribution, as we claimed in the previous section, so our inverse relationship is indeed appropriate.

4.1.2

Reality of the Wigner Distribution

One of the most important of the basic properties we will cover is that the Wigner distribution is always real-valued.2 That is,

?, p, t) ∈ R. W (x

(4.5)

?, p). This gives us [23] In order to show this, we will take the complex conjugate of W (x ?, p) W ? (x = = = = 1 2π 1 2π 1 2π 1 2π 1 1 ? + δ, x ?? δ dδ · eipδ ρ? x 2 2 δ=?∞ 1 1 (?dδ) · e?ipδ ρ? x ? ? δ, x ?+ δ 2 2 δ= ∞ ∞ 1 1 ? + δ, x ?? δ dδ · e?ipδ ρ? x 2 2 δ=?∞ ∞ 1 1 ? + δ, x ?? δ dδ · e?ipδ ρ x 2 2 δ=?∞ (4.6)
?∞ ∞

?, p), = W (x

?, p) = W (x ?, p), we ? . Since we found W ? (x where we used eqn. 1.25 for the self-adjoint operator ρ ?, p) ∈ R, as we claimed in eqn. 4.5. have W (x

4.1.3

Marginal Distributions

Based on our definition of the Wigner distribution, we note two important marginal distributions. They are [24]
$$\int dp\; W(\bar x, p) = \langle\bar x|\,\hat\rho\,|\bar x\rangle \qquad (4.7)$$
and
$$\int d\bar x\; W(\bar x, p) = \langle p|\,\hat\rho\,|p\rangle. \qquad (4.8)$$

²Although it is real-valued, the Wigner distribution is not always positive. It is called a quasi-probability distribution since it is analogous to a true probability distribution, but has negative regions. We will deal with these apparent negative probabilities more in section 4.5.

To show these results, we recall the de?nition of the Wigner distribution. We have

? , p) = dp · W (x = = =

1 1 ? + δ, x ?? δ dδ · e?ipδ ρ x 2 2 1 1 1 ? + δ, x ?? δ dδ · ρ x dp · e?ipδ 2π 2 2 1 1 1 ? + δ, x ? ? δ 2πδD (δ) dδ · ρ x 2π 2 2 1 1 ? + δ, x ?? δ dδ · δD (δ)ρ x 2 2 dp · 1 2π

?, x ?) = ρ (x = ?| ρ ? , ? |x x (4.9)

where δD is called the Dirac delta,3 and has the important properties [10] dx · δD ( y) f (x + y) ≡ f (x)

(4.10)

and dx · e?ixy ≡ 2πδD ( y). (4.11)

In preparation for calculating the postion marginal distribution, it is useful to discuss the momentum representation of the state operator. Analogous to the position representation, we de?ne

? p , ? (p, p ) ≡ p p ρ

(4.12)

The Dirac delta is roughly a sharp spike at a point, and zero elsewhere. Technically, it is not quite a function, but it is a very useful construct in theoretical physics. For more information, see ref. [10].

3

4. The Wigner Distribution

58

where the (?) is used to distinguish position matrix elements from momentum matrix elements. In terms of the momentum representation, the Wigner distribution is

?, p) ? WP (x, p ?) ≡ W (x

1 2π

1 1 ? + λ, p ?? λ , ? p dλ · e?ixλ ρ 2 2

(4.13)

?= where p

p+p 2

and λ = p ? p are the average and di?erence momentum coordinates, in direct

? and δ. We are now ready to calculate the position marginal distribution. We have analogy to x ? (x ?, p) ? dxW = = = = = ? = ? P (x, p ?) dxW 1 1 ? + λ, p ?? λ ? p dλ · e?ixλ ρ 2 2 1 1 1 ? + λ, p ?? λ ? p dλ · ρ dxe?ixλ 2π 2 2 1 1 1 ? + λ, p ? ? λ 2πδD (δ) ? p dλ · ρ 2π 2 2 1 1 ? + λ, p ?? λ ? p dλ · δD (δ)ρ 2 2 dx ?, p ?) ? (p ρ ? (p, p) ρ ? p , p ρ (4.14) 1 2π

which is what we claimed.

4.2

Wigner distributions of combined systems

Recall that in section 2.3, we de?ned the state operator of a composite system as ? 1+ 2 = ρ ?1 ? ρ ?2 , ρ

(4.15)

where (?) is the tensor product. In analogy to this, we de?ne the Wigner distribution of a composite system to be [22] W1+2 (x1 , x2 , p1 , p2 ) ≡ W1 (x1 , p1 )W2 (x2 , p2 ).

(4.16)

4. The Wigner Distribution

59

In section 2.3, we also developed the partial trace, which was a method for extracting information about a single sub-state operator in a composite state operator. Not surprisingly, we de?ne an analogous operation, which e?ectively annihilates one of the sub-Wigner distributions in a composite distribution by integrating out the degrees of freedom of the sub-distribution. Formally, we call this the projection function A : W1+2 → W1 and de?ne it as dx2 dp2 W1+2 .

A (W1+2 ) ≡

(4.17)

To understand how it works, we evaluate it on the initial total Wigner distribution. This is A (W1+2 ) = = = = = A W1 (x1 , p1 )W2 (x2 , p2 ) dx2 dp2 W1 (x1 , p1 )W2 (x2 , p2 ) W1 (x1 , p1 ) W1 (x1 , p1 ) dx2 dp2 W2 (x2 , p2 ) dx2 ρ2 (x2 , x2 )

W1 (x1 , p1 )Tr ρ2 (4.18)

= W1 (x1 , p1 ),

where we have used eqns. 4.7 and de?nition 1.10 to integrate W2 and perform the full trace of ρ2 . Thus, A behaves as desired, in direct analogy to the partial trace on composite state operators.

4.3

Equation of Motion for a Free Particle

Now that we have laid out the basic properties of the Wigner distribution, we need to understand how to use it to describe a physical system. In this section, we investigate how a Wigner distribution evolves in time in the absence of a potential. Recall that in section 3.4, we established the Hamiltonian of a free system as
$$\hat H = \frac{\hat P^2}{2m}. \qquad (4.19)$$
Given the Hamiltonian, we can calculate the time evolution of the state operator of the system via the commutator relation
$$\partial_t\,\hat\rho = -i\left[\hat H, \hat\rho\right], \qquad (4.20)$$
developed in eqn. 3.55, to obtain
$$\partial_t\,\hat\rho = i\left(\hat\rho\,\hat H - \hat H\,\hat\rho\right) = \frac{i}{2m}\left(\hat\rho\,\hat P^2 - \hat P^2\,\hat\rho\right), \qquad (4.21)$$

noting that m is a scalar. So far, we have a general operator equation for the evolution of the system. If we want to know more speci?c information about its motion, we need to choose a basis onto which we may project our equation. Choosing momentum, we multiply both sides of the equation by p from the left and p from the right, where p and p are two arbitrary momentum states of our system. This gives us ? p p ?t ρ = p i ?2 ? P ? 2ρ ?P ? p . ρ 2m

(4.22)

? . Hence, P ? p =p p , Since p and p are states of de?nite momentum, they are eigenvalues of P ? = p p, and likewise for p . So, our equation of motion becomes p P ? p p ?t ρ = = = = = = i ?2 ? p ?2 ρ ?p ? p ρ 2m i i ?2 p ? ?2 ρ ?p ? p p ρ p p 2m 2m i i ? p ? ?ρ ? pp ? p p ρ p pp 2m 2m i i ?p p p ? ? p p p ρ p pρ 2m 2m i ? p p 2 ? p2 p ρ 2m i ? p p ?p p +p . p ρ 2m p

(4.23)

?, for p and p by de?ning λ ≡ p ? p and Next, we substitute di?erence and mean variables, λ and p ? ≡ p + p . This substitution is algebraically equivalent to p = p ? + λ/2 and p = p ? ? λ/2, so 2p ? + λ/2 ?t ρ ? ? λ/2 = ? p p i ? + λ/2 ρ ? ? λ/2 2p ?λ. ? p p 2m

(4.24)

4. The Wigner Distribution

61

We multiply both sides of the equation by dλ · e?iλx /(2π) (the kernel of the Fourier transform) and integrate from λ = ?∞ to λ = +∞. The left hand side is = = = = ? 1 ?iλx ? + λ/2 ?t ρ ? ? λ/2 ? p e p 2π 1 ? + λ/2 ρ ? ? λ/2 ? p dλ · ?t e?iλx p 2π 1 ? + λ/2 ρ ? ? λ/2 ? p ?t dλ · e?iλx p 2π dλ · ? , t) ?t WP (x, p ?, p, t), ?t W ( x (4.25)

LHS

? . We where we use the fact that the only explicitly time dependent piece of the integrand is ρ also assume that the integral converges and, in the last step, we use eqn. 4.13 for the Wigner distribution in the momentum basis. Proceeding in a similar fashion on the right hand side, we get = = = = = = ? dλ · 1 ?iλx i ? + λ/2 ρ ? ? λ/2 2p ?λ ? p e p 2π 2m λ ? + λ/2 ρ ? ? λ/2 ? p dλ · e?iλx p i

RHS

? i · ip m2π ? 1 p ? + λ/2 ρ ? ? λ/2 ? p dλ · ?ie?iλx p ? m 2π ? 1 p ? + λ/2 ρ ? ? λ/2 ? p dλ · ?x e?iλx p ? m 2π ? p 1 ? + λ/2 ρ ? ? λ/2 ? p ? ?x dλ · e?iλx p m 2π ? p ? , t) ? ?x WP (x, p m ? p ?, p, t), ? ?x ? W (x m

(4.26)

where we use the fact that $e^{-i\lambda x}$ was the only factor in the integrand that explicitly depended on $x$. We again assume that the integral converges and use eqn. 4.13 for the Wigner distribution in the momentum basis. Thus, equating the right-hand and left-hand sides in the position representation leaves [24]
$$\partial_t\,W(\bar x, p, t) = -\frac{p}{m}\,\partial_{\bar x}\,W(\bar x, p, t), \qquad (4.27)$$
which is the equation of motion for a free system in terms of its Wigner distribution.
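Eqn. 4.27 simply shears the distribution along $\bar x$ at the rate $p/m$: $W(\bar x, p, t) = W(\bar x - pt/m,\, p,\, 0)$ solves it. The short sketch below (our own illustration; the Gaussian initial distribution, $m = 1$, and the test point are arbitrary choices) verifies this solution against the equation numerically.

```python
# Check that the sheared distribution satisfies dW/dt = -(p/m) dW/dxbar.
import numpy as np

m = 1.0
W0 = lambda xb, p: np.exp(-xb**2 - p**2) / np.pi     # initial distribution
W = lambda xb, p, t: W0(xb - p * t / m, p)           # free evolution as a shear

xb, p, t, h = 0.4, -0.8, 1.3, 1e-5
dWdt = (W(xb, p, t + h) - W(xb, p, t - h)) / (2 * h)
dWdx = (W(xb + h, p, t) - W(xb - h, p, t)) / (2 * h)
print(np.isclose(dWdt, -(p / m) * dWdx))             # True
```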


Although it might seem convoluted to introduce the Wigner form of this equation rather than using the evolution of a free particle in terms of its state operator, the power of the Wigner distribution is that it allows us to treat position and momentum simultaneously.

4.4

Associated Transform and Inversion Properties

Now that we have determined some of the properties of the Wigner distribution, it is useful to define the Wigner transform of an arbitrary distribution of two variables.

Definition 4.2 (The Wigner transform). Let $D(x,y)$ be an arbitrary distribution of two variables, $x$ and $y$, possibly with an implicit temporal dependence. Then, the Wigner transform $\mathcal W$ of $D$, a special case of the Fourier transform, is defined as
$$\mathcal W\left[D(x,y)\right] \equiv \frac{1}{2\pi}\int d\delta\; e^{-ip\delta}\,D(x,y), \qquad (4.28)$$
where $\delta = x - y$, as identified in definition 4.1.

By de?nition, we know the Wigner transform of ρ(x, y) immediately. It is W ρ(x, y) = 1 2π ?, p). dδ · e?ipδ ρ(x, y) = W (x

(4.29)

We arrive at a more interesting result by considering W ?t ρ(x, y) = dδ · e?ipδ ?t ρ(x, y).

(4.30)

Clearly, neither δ nor e?ipδ depend explicitly on time. Assuming that the integral converges, we have dδ · e?ipδ ?t ρ(x, y) = ?t where we have applied de?nition 4.2. So, ?, p), W ?t ρ(x, y) = ?t W (x ?, p), dδ · e?ipδ ρ(x, y) = ?t W (x (4.31)

(4.32)

as desired.


In the following sections, we work out some of the Wigner transforms of functions that we will need later. The results of these derivations are summarized in table 4.1 below.

Table 4.1: Wigner transforms of important quantities, where $\bar x = \frac{x+y}{2}$ and $\delta = x - y$.

Expression → Transform
$\rho(x,y)$ → $W(\bar x, p)$
$\partial_t\,\rho(x,y)$ → $\partial_t\,W(\bar x, p)$
$\frac{i}{2}\left(\partial_x^2 - \partial_y^2\right)\rho(x,y)$ → $-p\,\partial_{\bar x}\,W(\bar x, p)$
$(x-y)\left(\partial_x - \partial_y\right)\rho(x,y)$ → $-2\,\partial_p\left(p\,W(\bar x, p)\right)$
$(x-y)^2\,\rho(x,y)$ → $-\partial_p^2\,W(\bar x, p)$

4.4.1

2 The Wigner transform of (i/2) ?2 x ? ? y ρ(x, y)

By de?nition 4.2, i 2 ? ? ?2 y ρ(x, y) = 2 x dδ · e?ipδ i 2 ? ? ?2 y ρ(x, y), 2 x

V≡W

(4.33)

where we have de?ned V for our convenience. Then, we know from Clairaut’s theorem that since all partial derivatives of the state operator are continuous, ?x ρ(x, y), ? y ρ(x, y) = 0 [25]. V then expands to V= i 2
2 dδ · e?ipδ ?2 x ? ? y + ?x ? y ? ? y ?x ρ(x, y).

(4.34)

? + 1/2 · δ and y = x ? ? 1/2 · δ, which implies ?δ x = 1/2 and We next note that by de?nition 4.1, x = x ?δ y = ?1/2. Hence, 2?δ x = 1 and 2?δ y = ?1. We rework V to be = i ·2 2
2 dδ · e?ipδ (?δ x) ?2 x + ?δ y ? y + (?δ x) ?x ? y + ?δ y ? y ?x ρ(x, y)

V

= i = i

dδ · e?ipδ

?ρ(x, y)/?x ?x ?ρ(x, y)/? y ? y ?ρ(x, y)/? y ?x ?ρ(x, y)/?x ? y + + + ?x ?δ ?y ?δ ?x ?δ ?y ?δ (4.35)

dδ · e?ipδ ?δ ?x ρ(x, y) + ? y ρ(x, y) .

Now, by de?nition 4.1, ?x ? x = ?x ? y = 1, so ?ρ(x, y) ?ρ(x, y) ?x ?ρ(x, y) ? y ?ρ(x, y) ?ρ(x, y) = + = + = ?x ρ(x, y) + ? y ρ(x, y). ? ? ? ?x ?x ?x ? y ?x ?x ?y

(4.36)

4. The Wigner Distribution Thus, we have V=i Next, we integrate by parts to get V = e?ipδ ?x ? ρ(x, y)
∞ δ=?∞

64

dδ · e?ipδ ?δ ?x ? ρ(x, y).

(4.37)

?i

dδ · ?δ e?ipδ ?x ? ρ(x, y).

(4.38)

Noting that the state operator is continuous in the position basis, we ?nd lim ?x ? ρ(x, y) 1 1 ? + δ, x ?? δ lim ?x ?ρ x 2 2 1 1 ?? δ ? + δ, x = ?x ? lim ρ x δ→±∞ 2 2 1 1 = ?x δ, ? δ ? lim ρ δ→±∞ 2 2 =
δ→±∞

δ→±∞

= 0.

(4.39)

Further, 0 ≤ e?ipδ ≤ 1 ?δ, so e?ipδ ?x ? ρ(x, y) Hence, V = ?i dδ · ?δ e?ipδ ?x ? ρ(x, y) = ?i dδ · ?ipe?ipδ ?x ? ρ(x, y) = ?p?x ? dδ · e?ipδ ρ(x, y). (4.42)
∞ δ=?∞

(4.40)

= 0.

(4.41)

That is, by de?nition 4.1,

W

i 2 ? ? ?2 ? y ρ(x, y) = ?p?x 2 x

?, p , dδ · e?ipδ ρ(x, y) = ?p?x ?W x

(4.43)

which is what we wanted to show.

4. The Wigner Distribution

65

4.4.2

The Wigner transform of (x ? y) ?x ? ? y ρ(x, y)

By de?nition 4.2, V ≡ W (x ? y) ?x ? ? y ρ(x, y) = dδ · e?ipδ (x ? y) ?x ? ? y ρ(x, y),

(4.44)

where we have again de?ned V for our convenience. Since δ = x ? y, we have V= dδ · δe?ipδ ?x ? ? y ρ(x, y).

(4.45)

As we did in the previous section, we note that ?δ x = 1/2 and ?δ y = ?1/2, so 2?δ x = 1 and 2?δ y = ?1. Thus, V=2 dδ · δe?ipδ ?ρ(x, y) ?x ?ρ(x, y) ? y + . ?x ?δ ? y ?δ

(4.46)

Next, we use the chain rule to ?nd V=2 dδ · δe?ipδ ?δ ρ(x, y).

(4.47)

After that, we use integration by parts to get V = 2 δe?ipδ ρ(x, y)
∞ δ=?∞

?2

dδ · ρ(x, y)?δ δe?ipδ .

(4.48)

As before, we investigate the boundary term. The non-oscillatory component follows lim δρ(x, y) = 0,

δ→±∞

(4.49)

since ρ(x, y) goes to zero rapidly o? the diagonal, as we noted in section 2.6. Since 0 ≤ e?ipδ ≤ 1 ?δ,

(4.50)

we have
δ→±∞

lim e?ipδ δρ(x, y) = 0,

(4.51)

so V = ?2 dδ · ρ(x, y)?δ δe?ipδ . (4.52)

4. The Wigner Distribution Finally, note that ?δ δe?ipδ = ?p pe?ipδ , hence V = ?2 dδ · ρ(x, y)?p pe?ipδ = ?2?p p ? , p) . dδ · e?ipδ ρ(x, y) = ?2?p p · W (x

66

(4.53)

(4.54)

That is,

? , p) , W (x ? y) ?x ? ? y ρ(x, y) = ?2?p p · W (x

(4.55)

as desired.

4.4.3

The Wigner transform of x ? y ρ(x, y)

2

By de?nition 4.2, V ≡ W x ? y ρ(x, y) =
2

dδ · e?ipδ x ? y ρ(x, y),
2

(4.56)

where, as in the past two sections, we have de?ned V for our convenience. By de?nition 4.1, δ = x ? y, so V= Now, since
?ipδ δ2 e?ipδ = ? i2 δ2 e?ipδ = ??2 , pe

dδ · e?ipδ δ2 ρ(x, y) =

dδ · δ2 e?ipδ ρ(x, y).

(4.57)

(4.58)

we have V= or
?ipδ dδ · ??2 ρ(x, y) = ??2 pe p

?, p), dδ · e?ipδ ρ(x, y) = ??2 p W (x

(4.59)

?, p), W x ? y ρ(x, y) = ??2 p W (x

2

(4.60)

which is what we wanted to show.


4.5

Example: The Wigner Distribution of a Harmonic Oscillator

We will next develop the Wigner distribution for the quantum harmonic oscillator. The Hamiltonian is [2]
$$\hat H = \frac{\hat P^2}{2m} + \frac{1}{2}k x^2, \qquad (4.61)$$
where $k$ is the spring constant, and the angular frequency is
$$\omega = \sqrt{\frac{k}{m}}. \qquad (4.62)$$
From the time-independent Schrödinger equation, eqn. 3.77, we have
$$-\frac{1}{2m}\,\frac{d^2\psi(x)}{dx^2} + \frac{1}{2}k x^2\,\psi(x) = E\,\psi(x). \qquad (4.63)$$
This equation is readily solved using power series, and has the well-known family of solutions [4, 2, 13]
$$\psi_n(x) = \frac{1}{\sqrt{n!}}\left(\frac{1}{\sqrt{2m\omega}}\right)^n \left(m\omega x - \partial_x\right)^n \left(\frac{m\omega}{\pi}\right)^{1/4} e^{-\frac{m\omega}{2}x^2}, \qquad (4.64)$$
which correspond to states of constant energy [2]
$$E_n = \left(n + \frac{1}{2}\right)\omega. \qquad (4.65)$$

For the purposes of this example, we will concentrate on the ground state (n = 0) and the ?rst three excited states, shown in ?gure 4.1. In order to calculate the Wigner distribution of these states, we must use eqn. 4.2, so we need explicit forms for the matrix elements of the state operators in the position basis. Fortunately, since the harmonic oscillator is pure, we easily obtain these by ? n y = x ψn ρn (x, y) = x| ρ ψn y = ψ? n (x)ψn ( y),

(4.66)

where we used eqn. 3.58 to identify the wavefunction ψn ( y) and its complex conjugate ψ? n (x). Since ψ(x) is real we have,

ρn (x, y) =

1 mω n! π

1/ 2

1 2 mω

n

(mωx ? ?x )n mω y ? ? y

n

e?

mω 2 2 x

e?

mω 2

y2

.

(4.67)


Figure 4.1: The ?rst four energy states, ψn (x), of the harmonic oscillator. Particularly, for n = 0 through n = 3, in units where m = ω = ρ0 (x, y) = = 1, we have

x2 + y2 1 √ e? 2 , π x2 + y2 1 ρ1 (x, y) = 2xy √ e? 2 , π 1 x2 +y2 1 2 2 2 2 ?2e?x + 4e?x x2 ?2e? y + 4e? y y2 √ e 2 , ρ2 (x, y) = 8 π x2 + y2 1 1 2 2 2 2 ρ3 (x, y) = 12e?x x ? 8e?x x3 12e? y y ? 8e? y y3 √ e+ 2 , 48 π

(4.68)

which we plot in ?gure 4.2. Now that we have the general form of the state operator matrix elements, it is just a matter of evaluating eqn. 4.2 to get the corresponding Wigner distributions.


Figure 4.2: The position representation of the state operator, $\rho_n(x,y)$, for the first four energy states of the harmonic oscillator. In the density plots, yellow indicates maximum values, while blue indicates minimum.

Starting with n = 0, we have ? , p) = W0 (x = = = 1 2π 1 2π 1 1 ? + δ ψ0 x ?? δ dδ · e?ipδ ψ0 x 2 2 2 ωm 1/4 ? 1 ωm(x ?+ 1 2 δ) dδ · e?ipδ e 2 π
? ? 2 δ) dδ · e?ipδ e? 4 ωm(x
1 1 2

2 ωm 1/4 ? 1 ωm(x ?? 1 2 δ) e 2 π

1 ωm 1/2 2π π 1 ? p2 ? x ?2 e , π

(4.69)

which is just a three-dimensional Gaussian distribution. The calculations involved for the excited states are similar, but the algebra is significantly less trivial. They are easily performed using a computer algebra system, so we state the result. The Wigner distributions are
$$W_0(\bar x, p) = \frac{1}{\pi}\,e^{-p^2-\bar x^2},$$
$$W_1(\bar x, p) = \frac{2p^2 + 2\bar x^2 - 1}{\pi}\,e^{-p^2-\bar x^2},$$
$$W_2(\bar x, p) = \frac{2p^4 + 2\bar x^4 + 4p^2\bar x^2 - 4p^2 - 4\bar x^2 + 1}{\pi}\,e^{-p^2-\bar x^2}, \qquad (4.70)$$
$$W_3(\bar x, p) = \frac{4\bar x^6 + 12p^2\bar x^4 - 18\bar x^4 + 12p^4\bar x^2 - 36p^2\bar x^2 + 18\bar x^2 + 4p^6 - 18p^4 + 18p^2 - 3}{3\pi}\,e^{-p^2-\bar x^2},$$
which are plotted in figure 4.3. Note how $W_0 > 0$ for all values of $\bar x$ and $p$, but the higher energy states are sometimes negative. As we mentioned briefly before, the Wigner distribution is


Figure 4.3: The Wigner distribution, $W_n(\bar x, p)$, of the first four energy states of the harmonic oscillator with their well-known shape [19]. In the density plots, yellow indicates maximum values, while blue indicates minimum.

motivated by classical phase-space probability distributions, but is permitted to have negative values. These "negative probabilities" are a weird signature of a quantum mechanical system. To make this analogy more concrete, we consider $W_{10}(\bar x, p)$, shown in figure 4.4. At the high energy of $n = 10$, the oscillations inside the Wigner distribution become increasingly rapid. In the classical limit, as $n \to \infty$, we expect the negative portions to overlap and cancel with the positive components, giving us a positive-definite, classical probability distribution. In order to force this for $n = 10$, we perform a careful function smoothing, known as a convolution, of $W_{10}$ with a simple Gaussian. Mathematically, this is [26]
$$W_{10}^c(X, P) \equiv \int d\bar x\,dp\; W_{10}(\bar x, p)\,e^{-(X-\bar x)^2}\,e^{-(P-p)^2}. \qquad (4.71)$$
As shown in figure 4.4, this averages the inner oscillations to zero, but retains a large, outer, positive ring. This is what we expect, since a classical simple harmonic oscillator has elliptical orbits in phase-space.
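The smoothing of eqn. 4.71 is straightforward to reproduce numerically. The sketch below is our own illustration, not the thesis's computation: it assumes the standard closed form $W_n(\bar x, p) = \frac{(-1)^n}{\pi}\,L_n\!\left(2\bar x^2 + 2p^2\right)e^{-\bar x^2 - p^2}$ for harmonic-oscillator Wigner functions (consistent with eqn. 4.70 for $n \leq 3$, though not derived in the text), and all grid choices are arbitrary.

```python
# Smooth W_10 with a Gaussian kernel (eqn. 4.71) and sample the result.
import numpy as np
from scipy.special import eval_laguerre

def W_n(n, xb, p):
    r2 = xb**2 + p**2
    return ((-1)**n / np.pi) * eval_laguerre(n, 2 * r2) * np.exp(-r2)

grid = np.linspace(-6, 6, 241)
dxdp = (grid[1] - grid[0])**2
XB, P = np.meshgrid(grid, grid, indexing='ij')
W10 = W_n(10, XB, P)
print(W10.min() < 0)                        # True: the raw W_10 has negative regions

def W10_smoothed(X, Pc):
    kernel = np.exp(-(X - XB)**2) * np.exp(-(Pc - P)**2)
    return np.sum(W10 * kernel) * dxdp      # quadrature of eqn. 4.71

samples = [W10_smoothed(X, Pc) for X in (-3, 0, 3) for Pc in (-3, 0, 3)]
print(min(samples) > -1e-12)                # True: the smoothed distribution is non-negative
```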


Figure 4.4: The position representation of the state operator, $\rho_{10}(x,y)$, at $n = 10$, its associated Wigner distribution, $W_{10}(\bar x, p)$, and the smoothed Wigner distribution generated by convolution with a Gaussian, $W_{10}^c(X, P)$.

CHAPTER

5

The Master Equation for Quantum Brownian Motion

In this chapter, we develop the fundamental equation of quantum decoherence, the master equation for quantum Brownian motion. The master equation dictates the time evolution of a system and an environment with which the system interacts (these terms will be precisely defined later). To facilitate this, we use the formalism of the Wigner distribution developed in chapter 4, since it incorporates both position and momentum simultaneously, and consider how the system's Wigner distribution changes with time. Then, we invert the Wigner transformation to get the master equation, written in terms of the system's state operator. After the equation is developed in this chapter, we examine its physical meaning and work through an example in chapter 6.

5.1

The System and Environment

The idea of collisions between a system and environment can be represented intuitively in a physical picture, as shown in figure 5.1. However, before we begin, we need to define precisely the notion of system and environment. Further, we need to specify what we mean by an interaction or collision between the system and environment.

Definition 5.1 (System). A system, or system particle, denoted as S, is a single, one-dimensional point particle. It has momentum $p_S$, mass $m_S$, and position $x_S$.


Figure 5.1: A graphic representation of the system and environment. Note that one of the environment particles is undergoing a collision with the system. For simplicity we consider the corresponding one-dimensional problem.

Definition 5.2 (Environment). An environment of a system S is denoted $E_S$, and consists of an ideal one-dimensional gas of light particles. Each of these particles has momentum $p_E$, mass $m_E$, and position $x_E$. We will often abbreviate $E_S$ to $E$ if it is clear to what system S the environment belongs.

Definition 5.3 (Collision). A collision between a particle of an environment E and a system S is defined as an instantaneous transfer of momentum that conserves both kinetic energy and momentum.

It is important to note that the system we are considering is very large (massive) when compared to the individual environmental particles. Precisely, we take [22]
$$\frac{m_E}{m_S} \ll 1, \qquad (5.1)$$
and we will typically neglect terms of second or higher order in this factor. Now that we have defined the key objects treated by the master equation, we begin to


investigate its structure. As stated above, we first want to consider how the Wigner distribution of the system, $W_S$, changes with time. Quantum mechanically, we separate this change into two pieces. First, $W_S$ undergoes standard unitary time evolution, with the system treated as a free particle. Second, S collides with environment particles, and the collisions alter the system's energy and momentum. In section 4.3, we considered the change in the Wigner distribution of a particle due to its free evolution, which we will make use of later. Now, we begin to consider the influence of an environment on a system.

5.2

Collisions Between Systems and Environment Particles

Before we begin to examine how a system behaves in the presence of an environment, we first consider the collision between a system particle and one particle from an environment. For each collision, we derive equations for momentum and position change. First, we address momentum change. Let $p_S$ and $p_E$ denote the initial momenta of a system and an environment particle. By definition 5.3, the interaction between the two particles is totally elastic. That is, both kinetic energy and momentum are conserved. We write kinetic energy conservation as [9]
$$\frac{p_S^2}{2m_S} + \frac{p_E^2}{2m_E} = \frac{\tilde p_S^2}{2m_S} + \frac{\tilde p_E^2}{2m_E}, \qquad (5.2)$$
which is equivalent to
$$m_S\left(p_E - \tilde p_E\right)\left(p_E + \tilde p_E\right) = -m_E\left(p_S - \tilde p_S\right)\left(p_S + \tilde p_S\right), \qquad (5.3)$$
and momentum conservation as [9]
$$p_S + p_E = \tilde p_S + \tilde p_E, \qquad (5.4)$$
which is also written as
$$p_E - \tilde p_E = -\left(p_S - \tilde p_S\right). \qquad (5.5)$$

We then assume that, since a collision occurred, the momenta of both the system and environment particle have changed, i.e. $p_E - \tilde p_E \neq 0$ and $p_S - \tilde p_S \neq 0$. So, we divide eqn. 5.3 by eqn. 5.5 to get
$$\frac{m_S\left(p_E - \tilde p_E\right)\left(p_E + \tilde p_E\right)}{p_E - \tilde p_E} = \frac{-m_E\left(p_S - \tilde p_S\right)\left(p_S + \tilde p_S\right)}{-\left(p_S - \tilde p_S\right)}, \qquad (5.6)$$
which implies
$$m_S\left(p_E + \tilde p_E\right) = m_E\left(p_S + \tilde p_S\right). \qquad (5.7)$$
Then, we solve eqns. 5.5 and 5.7 simultaneously for both $\tilde p_S$ and $\tilde p_E$. We have [9]
$$m_S\left(p_E + \tilde p_E\right) = m_E\left(p_S + p_S + p_E - \tilde p_E\right) \;\Rightarrow\; (m_E + m_S)\,\tilde p_E = -(m_S - m_E)\,p_E + 2m_E\,p_S \;\Rightarrow\; \tilde p_E = -\frac{m_S - m_E}{m_S + m_E}\,p_E + \frac{2m_E}{m_S + m_E}\,p_S \qquad (5.8)$$
and
$$m_S\left(p_E + p_E + p_S - \tilde p_S\right) = m_E\left(p_S + \tilde p_S\right) \;\Rightarrow\; (m_S + m_E)\,\tilde p_S = (m_S - m_E)\,p_S + 2m_S\,p_E \;\Rightarrow\; \tilde p_S = \frac{m_S - m_E}{m_S + m_E}\,p_S + \frac{2m_S}{m_S + m_E}\,p_E, \qquad (5.9)$$

which are the changes in the momenta of the environment particle and the system.

Now that we have investigated the momentum change that results from a collision, we will develop the corresponding position change. To do this, we use the plane wave treatment for the total system we developed in section 3.4 and note how changes in momentum imply changes in position. The wavefunction of the composite system containing both the system and environment particle, a product state, is given by¹
$$\phi = \phi_S\,\phi_E. \tag{5.10}$$

Using equation 3.89, we can form the composite plane wave, φ_i, from the individual incident plane wave of each particle.² This is
$$\phi_i = e^{i p_S x_S}\, e^{i p_E x_E}. \tag{5.11}$$

¹ Since the product state vector is φ = φ_S ⊗ φ_E, the wavefunction form of the composite state takes ordinary multiplication.
² Remember that plane waves are states of definite momentum. We are using them in this case because we are conserving the momentum in the collision.

After the collision, using the momentum representation, the plane wave, φ_f, becomes
$$\phi_f = e^{i \tilde{p}_S x_S}\, e^{i \tilde{p}_E x_E}. \tag{5.12}$$
By eqns. 5.8 and 5.9, the exponent of φ_f can be written as
$$\begin{aligned}
\text{Exponent of } \phi_f &= i\left(\frac{m_S - m_E}{m_S + m_E}\,p_S + \frac{2m_S}{m_S + m_E}\,p_E\right)x_S + i\left(-\frac{m_S - m_E}{m_S + m_E}\,p_E + \frac{2m_E}{m_S + m_E}\,p_S\right)x_E \\
&= i\left(\frac{m_S - m_E}{m_S + m_E}\,p_S x_S + \frac{2m_S}{m_S + m_E}\,p_E x_S - \frac{m_S - m_E}{m_S + m_E}\,p_E x_E + \frac{2m_E}{m_S + m_E}\,p_S x_E\right) \\
&= i\,p_S\left(\frac{m_S - m_E}{m_S + m_E}\,x_S + \frac{2m_E}{m_S + m_E}\,x_E\right) + i\,p_E\left(\frac{2m_S}{m_S + m_E}\,x_S - \frac{m_S - m_E}{m_S + m_E}\,x_E\right).
\end{aligned}$$
We define
$$\phi_f = e^{i \tilde{p}_S x_S}\, e^{i \tilde{p}_E x_E} \equiv e^{i p_S \tilde{x}_S}\, e^{i p_E \tilde{x}_E}, \tag{5.13}$$
where [22]
$$\tilde{x}_S = \frac{m_S - m_E}{m_S + m_E}\,x_S + \frac{2m_E}{m_S + m_E}\,x_E \tag{5.14}$$
and
$$\tilde{x}_E = \frac{2m_S}{m_S + m_E}\,x_S - \frac{m_S - m_E}{m_S + m_E}\,x_E. \tag{5.15}$$

This way, we now have position and momentum representations of the collision. As is common in physics, we need to require that these collision interactions are local.³ Thus, throughout the collision, we take
$$\left|x_S - x_E\right| \ll \left|x_S\right| \tag{5.16}$$
and
$$\left|x_S - x_E\right| \ll \left|x_E\right|, \tag{5.17}$$
since the potential energy V(x_S − x_E) → 0 as |x_S − x_E| → ∞.

³ It is important to emphasize that locality is not an approximation, but is necessary to include in our treatment. Ideally, we would work this into our equations formally. However, for simplicity, we can achieve local interactions by requiring this condition.

Recalling from eqn. 5.1 that
$$\frac{m_E}{m_S} \ll 1, \tag{5.18}$$

it is reasonable to ignore contributions to distances of order
$$\frac{m_E}{m_S}\left(x_S - x_E\right) \tag{5.19}$$
compared to x_S and x_E. Enforcing the locality of the collision, we find
$$\begin{aligned}
\tilde{x}_S &= \frac{m_S - m_E}{m_S + m_E}\,x_S + \frac{2m_E}{m_S + m_E}\,x_E \\
&= \left(\frac{m_S + m_E}{m_S + m_E} - \frac{2m_E}{m_S + m_E}\right)x_S + \frac{2m_E}{m_S + m_E}\,x_E \\
&= \left(1 - \frac{2m_E}{m_S + m_E}\right)x_S + \frac{2m_E}{m_S + m_E}\,x_E \\
&= x_S - \frac{2m_E}{m_S + m_E}\,x_S + \frac{2m_E}{m_S + m_E}\,x_E \\
&= x_S + \frac{2m_E}{m_S + m_E}\left(x_E - x_S\right) \\
&= x_S + 2\,\frac{m_E}{m_S}\left(x_E - x_S\right) - 2\left(\frac{m_E}{m_S}\right)^{2}\left(x_E - x_S\right) + \ldots \\
&\approx x_S \tag{5.20}
\end{aligned}$$
and
$$\begin{aligned}
\tilde{x}_E &= \frac{2m_S}{m_S + m_E}\,x_S - \frac{m_S - m_E}{m_S + m_E}\,x_E \\
&= \left(\frac{2m_S + 2m_E}{m_S + m_E} - \frac{2m_E}{m_S + m_E}\right)x_S + \left(\frac{2m_E}{m_S + m_E} - \frac{m_S + m_E}{m_S + m_E}\right)x_E \\
&= \left(2 - \frac{2m_E}{m_S + m_E}\right)x_S + \left(\frac{2m_E}{m_S + m_E} - 1\right)x_E \\
&= 2x_S - x_E + \frac{2m_E}{m_S + m_E}\left(x_E - x_S\right) \\
&= 2x_S - x_E + 2\,\frac{m_E}{m_S}\left(x_E - x_S\right) - 2\left(\frac{m_E}{m_S}\right)^{2}\left(x_E - x_S\right) + \ldots \\
&\approx 2x_S - x_E, \tag{5.21}
\end{aligned}$$

which amounts to a phase shift in our plane wave state. We have now worked out all the position and momentum components we will need to treat the full case of a system coupled to an environment. In summary, we have
$$\begin{aligned}
\tilde{p}_S &= \frac{m_S - m_E}{m_S + m_E}\,p_S + \frac{2m_S}{m_S + m_E}\,p_E, \\
\tilde{p}_E &= -\frac{m_S - m_E}{m_S + m_E}\,p_E + \frac{2m_E}{m_S + m_E}\,p_S, \\
\tilde{x}_S &\approx x_S, \\
\tilde{x}_E &\approx 2x_S - x_E.
\end{aligned} \tag{5.22}$$
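As a quick consistency check of eqn. 5.22 (a sympy sketch added here; it is not part of the thesis's derivation), one can verify symbolically that the post-collision momenta conserve total momentum and kinetic energy, and that the post-collision positions reduce to eqns. 5.20 and 5.21 at first order in m_E/m_S:

```python
import sympy as sp

mS, mE = sp.symbols('m_S m_E', positive=True)
pS, pE, xS, xE = sp.symbols('p_S p_E x_S x_E', real=True)

# post-collision momenta, eqns. 5.8 and 5.9
pS_t = (mS - mE)/(mS + mE)*pS + 2*mS/(mS + mE)*pE
pE_t = -(mS - mE)/(mS + mE)*pE + 2*mE/(mS + mE)*pS

# momentum conservation (eqn. 5.4) and kinetic-energy conservation (eqn. 5.2)
print(sp.simplify(pS_t + pE_t - (pS + pE)))                                        # 0
print(sp.simplify(pS_t**2/(2*mS) + pE_t**2/(2*mE) - pS**2/(2*mS) - pE**2/(2*mE)))  # 0

# post-collision positions, eqns. 5.14 and 5.15, expanded in r = m_E/m_S
r = sp.symbols('r', positive=True)
xS_t = ((mS - mE)*xS + 2*mE*xE)/(mS + mE)
xE_t = (2*mS*xS - (mS - mE)*xE)/(mS + mE)
print(sp.series(xS_t.subs(mE, r*mS), r, 0, 2))   # x_S + 2 r (x_E - x_S) + O(r**2), cf. eqn. 5.20
print(sp.series(xE_t.subs(mE, r*mS), r, 0, 2))   # 2 x_S - x_E + 2 r (x_E - x_S) + O(r**2), cf. eqn. 5.21
```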

5.3 Effect of Collision on a Wigner Distribution

In this section, we consider the change in the Wigner distribution of the system, W_S, from one collision with an environment particle. Since we have a composite state of environment and system particle, we use equation 4.16 to write the Wigner distribution for the system and environment as
$$W_{S+E} = W_S\, W_E. \tag{5.23}$$
It follows that the change in the total Wigner distribution for the system and environment, ΔW_{S+E}, due to one collision is [22]
$$\Delta W_{S+E} = \widetilde{W}_{S+E} - W_{S+E} = \widetilde{W}_S\, \widetilde{W}_E - W_S\, W_E = W_S\!\left(\tilde{x}_S, \tilde{p}_S\right) W_E\!\left(\tilde{x}_E, \tilde{p}_E\right) - W_S(x_S, p_S)\, W_E(x_E, p_E). \tag{5.24}$$
Now that we have the change in the total (system and environment) Wigner distribution, we use eqn. 4.17 developed in section 4.2 to deduce ΔW, the change in the system's Wigner distribution, by summing (integrating) over all environmental configurations. We have [4]
$$\begin{aligned}
\Delta W = A\!\left(\Delta W_{S+E}\right) &= \int\! dp_E\, dx_E \left[W_S\!\left(\tilde{x}_S, \tilde{p}_S\right) W_E\!\left(\tilde{x}_E, \tilde{p}_E\right) - W_S(x_S, p_S)\, W_E(x_E, p_E)\right] \\
&= \int\! dp_E\, dx_E\; W_S\!\left(\tilde{x}_S, \tilde{p}_S\right) W_E\!\left(\tilde{x}_E, \tilde{p}_E\right) - \int\! dp_E\, dx_E\; W_S(x_S, p_S)\, W_E(x_E, p_E). \tag{5.25}
\end{aligned}$$


To evaluate these integrals, we need to perform some algebraic manipulations on the first term in eqn. 5.25. From eqn. 5.22, we know that the first term of eqn. 5.25 is (approximately) given by
$$\int\! dp_E\, dx_E\; W_S\!\left(x_S,\ \frac{m_S - m_E}{m_S + m_E}\,p_S + \frac{2m_S}{m_S + m_E}\,p_E\right) W_E\!\left(2x_S - x_E,\ -\frac{m_S - m_E}{m_S + m_E}\,p_E + \frac{2m_E}{m_S + m_E}\,p_S\right). \tag{5.26}$$
We make the substitution
$$u \equiv 2x_S - x_E, \qquad v \equiv -\frac{m_S - m_E}{m_S + m_E}\,p_E + \frac{2m_E}{m_S + m_E}\,p_S, \tag{5.27}$$
from which it follows that
$$dx_E = -du, \qquad dp_E = -\frac{m_S + m_E}{m_S - m_E}\,dv. \tag{5.28}$$
Further, since
$$p_E = \frac{m_S + m_E}{m_S - m_E}\left(\frac{2m_E}{m_S + m_E}\,p_S - v\right), \tag{5.29}$$
we have
$$\begin{aligned}
\frac{m_S - m_E}{m_S + m_E}\,p_S + \frac{2m_S}{m_S + m_E}\,p_E
&= \frac{m_S - m_E}{m_S + m_E}\,p_S + \frac{2m_S}{m_S + m_E}\cdot\frac{m_S + m_E}{m_S - m_E}\left(\frac{2m_E}{m_S + m_E}\,p_S - v\right) \\
&= \frac{m_S - m_E}{m_S + m_E}\,p_S + \frac{4m_E m_S}{\left(m_S - m_E\right)\left(m_E + m_S\right)}\,p_S - \frac{2m_S}{m_S - m_E}\,v \\
&= \frac{m_E + m_S}{m_S - m_E}\,p_S - \frac{2m_S}{m_S - m_E}\,v \\
&= p_S + \frac{2\left(m_E\,p_S - m_S\,v\right)}{m_S - m_E}. \tag{5.30}
\end{aligned}$$
Thus, substituting eqns. 5.27, 5.28, and 5.30 into eqn. 5.26 gives
$$\frac{m_S + m_E}{m_S - m_E}\int\! dv\, du\; W_S\!\left(x_S,\ p_S + \frac{2\left(m_E\,p_S - m_S\,v\right)}{m_S - m_E}\right) W_E(u, v). \tag{5.31}$$
Next, we relabel u ≡ x_E and v ≡ p_E, so eqn. 5.26 becomes
$$\frac{m_S + m_E}{m_S - m_E}\int\! dp_E\, dx_E\; W_S\!\left(x_S,\ p_S + \frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}\right) W_E\!\left(x_E, p_E\right). \tag{5.32}$$
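The algebra in eqns. 5.29 and 5.30 is easy to mis-copy, so the following sympy sketch (added here as a check, not taken from the original) confirms that, under the substitution of eqn. 5.27, the shifted momentum argument of W_S is exactly p_S + 2(m_E p_S − m_S v)/(m_S − m_E):

```python
import sympy as sp

mS, mE = sp.symbols('m_S m_E', positive=True)
pS, pE, v = sp.symbols('p_S p_E v', real=True)

# eqn. 5.27 defines v in terms of p_E; invert it to recover eqn. 5.29
v_def = -(mS - mE)/(mS + mE)*pE + 2*mE/(mS + mE)*pS
pE_of_v = sp.solve(sp.Eq(v, v_def), pE)[0]
print(sp.simplify(pE_of_v - (mS + mE)/(mS - mE)*(2*mE/(mS + mE)*pS - v)))   # 0, eqn. 5.29

# momentum argument of W_S (from eqn. 5.22) rewritten in terms of v: eqn. 5.30
pS_t = (mS - mE)/(mS + mE)*pS + 2*mS/(mS + mE)*pE
lhs = pS_t.subs(pE, pE_of_v)
rhs = pS + 2*(mE*pS - mS*v)/(mS - mE)
print(sp.simplify(lhs - rhs))                                               # 0, eqn. 5.30
```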

Now, eqn. 5.25 is
$$\begin{aligned}
\Delta W &= \frac{m_S + m_E}{m_S - m_E}\int\! dp_E\, dx_E\; W_S\!\left(x_S,\ p_S + \frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}\right) W_E\!\left(x_E, p_E\right) - \int\! dp_E\, dx_E\; W_S(x_S, p_S)\, W_E(x_E, p_E) \\
&= \int\! dp_E\, dx_E \left[\frac{m_S + m_E}{m_S - m_E}\, W_S\!\left(x_S,\ p_S + \frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}\right) - W_S(x_S, p_S)\right] W_E\!\left(x_E, p_E\right). \tag{5.33}
\end{aligned}$$
Next, we expand
$$W_S\!\left(x_S,\ p_S + \frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}\right) \tag{5.34}$$
using a Taylor series expansion in momentum about p = p_S. This is
$$W_S\!\left(x_S,\ p_S + \frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}\right) = W_S(x_S, p_S) + \frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}\,\frac{\partial W_S}{\partial p_S}(x_S, p_S) + \frac{1}{2}\left(\frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}\right)^{2}\frac{\partial^2 W_S}{\partial p_S^2}(x_S, p_S) + \ldots \tag{5.35}$$

In order to justify dropping the high-order terms of the expansion, we need to show that
$$\frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E} \ll p_S, \tag{5.36}$$
which is not readily apparent. If we expand the term in m_E/m_S, we have
$$\frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E} = -2p_E + 2\,\frac{m_E}{m_S}\left(p_S - p_E\right) + 2\left(\frac{m_E}{m_S}\right)^{2}\left(p_S - p_E\right) + \ldots \tag{5.37}$$
Recalling eqn. 5.1, it is obvious that while the terms of first order and higher in m_E/m_S are small compared to p_S, the first term, −2p_E, is not necessarily small with respect to p_S. Fortunately, since we are expanding in an integrand and the average value of p_E is zero, we can neglect this term, and so we are justified in dropping high-order terms in our Taylor expansion.⁴ Simplifying coefficients and dropping terms of third order or higher, eqn. 5.35 is approximately
$$W_S(x_S, p_S) + \frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}\,\frac{\partial W_S}{\partial p_S}(x_S, p_S) + \frac{2m_S^2 p_E^2 - 4m_E m_S\,p_E p_S + 2m_E^2 p_S^2}{\left(m_E - m_S\right)^{2}}\,\frac{\partial^2 W_S}{\partial p_S^2}(x_S, p_S). \tag{5.38}$$

⁴ The fact that the average value of p_E is zero is dealt with explicitly in eqn. 5.59.
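The expansion in eqn. 5.37 can be reproduced mechanically; the following sympy sketch (an added check, not part of the thesis) expands the momentum shift in the small ratio r = m_E/m_S:

```python
import sympy as sp

mS, r = sp.symbols('m_S r', positive=True)     # r = m_E/m_S
pS, pE = sp.symbols('p_S p_E', real=True)
mE = r*mS

shift = 2*(mE*pS - mS*pE)/(mS - mE)            # the momentum shift appearing in eqn. 5.35
print(sp.series(shift, r, 0, 3))
# -2*p_E + r*(2*p_S - 2*p_E) + r**2*(2*p_S - 2*p_E) + O(r**3), as in eqn. 5.37
```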

Hence, we write ΔW as [22]
$$\Delta W \approx \int\! dp_E\, dx_E \left[A\, W_S(x_S, p_S)\, W_E\!\left(x_E, p_E\right) + B\, \partial_{p_S} W_S(x_S, p_S)\, W_E\!\left(x_E, p_E\right) + C\, \partial^2_{p_S} W_S(x_S, p_S)\, W_E\!\left(x_E, p_E\right)\right], \tag{5.39}$$
for some A, B, and C. We now work out the values of these coefficients, starting with A, which is
$$\begin{aligned}
A &= \frac{m_S + m_E}{m_S - m_E} - 1 \\
&= \frac{2m_E}{m_S - m_E} \\
&= \frac{2m_E}{m_S - m_E}\cdot\frac{1/m_S}{1/m_S} \\
&= \frac{2m_E}{m_S}\cdot\frac{1}{1 - m_E/m_S} \\
&= \frac{2m_E}{m_S}\left(1 + \frac{m_E}{m_S} + \left(\frac{m_E}{m_S}\right)^{2} + \ldots\right) \\
&= 2\,\frac{m_E}{m_S} + 2\left(\frac{m_E}{m_S}\right)^{2} + \ldots \\
&\approx 2\,\frac{m_E}{m_S}, \tag{5.40}
\end{aligned}$$

where we used the approximation in eqn. 5.1 to neglect the terms of order two or higher in m_E/m_S. We now turn to B, given by
$$B = \frac{m_S + m_E}{m_S - m_E}\cdot\frac{2\left(m_E\,p_S - m_S\,p_E\right)}{m_S - m_E}. \tag{5.41}$$
Anticipating a series expansion, we change variables to r = m_E/m_S, so that m_E = r m_S. B is then
$$B = \frac{m_S + r m_S}{m_S - r m_S}\cdot\frac{2\left(r m_S\,p_S - m_S\,p_E\right)}{m_S - r m_S} = \frac{1 + r}{1 - r}\cdot\frac{2\left(r\,p_S - p_E\right)}{1 - r}. \tag{5.42}$$
We also calculate the first and second derivatives of B with respect to r. They are
$$\frac{dB}{dr} = \frac{2p_S\left(1 + 3r\right) - 2p_E\left(3 + r\right)}{\left(1 - r\right)^{3}} \tag{5.43}$$
and
$$\frac{d^2 B}{dr^2} = \frac{12p_S\left(1 + r\right) - 4p_E\left(5 + r\right)}{\left(1 - r\right)^{4}}. \tag{5.44}$$
Taking the Taylor series expansion of B in r about r = 0, we find
$$\begin{aligned}
B &= B\big|_{r=0} + \frac{dB}{dr}\bigg|_{r=0}\cdot r + \frac{d^2 B}{dr^2}\bigg|_{r=0}\cdot\frac{r^2}{2} + \ldots \\
&= -2p_E + \left(2p_S - 6p_E\right)\cdot r + \left(12p_S - 20p_E\right)\cdot\frac{r^2}{2} + \ldots \\
&= -2p_E + \left(2p_S - 6p_E\right)\cdot\frac{m_E}{m_S} + \left(6p_S - 10p_E\right)\cdot\left(\frac{m_E}{m_S}\right)^{2} + \ldots \\
&\approx -2p_E + \left(2p_S - 6p_E\right)\frac{m_E}{m_S} \\
&= 2p_S\,\frac{m_E}{m_S} - \left(2 + 6\,\frac{m_E}{m_S}\right)p_E, \tag{5.45}
\end{aligned}$$

where we used eqn. 5.1 to neglect the terms of order two or higher in m_E/m_S. Finally, we consider the coefficient C, given by
$$C = \frac{m_S + m_E}{m_S - m_E}\cdot\frac{2m_S^2 p_E^2 - 4m_E m_S\,p_E p_S + 2m_E^2 p_S^2}{\left(m_E - m_S\right)^{2}}. \tag{5.46}$$
In the same way we worked out coefficient B, we make the substitution m_E = r m_S, which gives us
$$C = \frac{1 + r}{1 - r}\cdot\frac{2p_E^2 - 4r\,p_E p_S + 2r^2 p_S^2}{\left(1 - r\right)^{2}}, \tag{5.47}$$
$$\frac{dC}{dr} = \frac{4\left[p_E^2\left(2 + r\right) + p_S^2\, r\left(1 + 2r\right) - p_E p_S\left(1 + 4r + r^2\right)\right]}{\left(r - 1\right)^{4}}, \tag{5.48}$$
and
$$\frac{d^2 C}{dr^2} = \frac{4\left[2p_E p_S\left(4 + 7r + r^2\right) - 3p_E^2\left(3 + r\right) - p_S^2\left(1 + 7r + 4r^2\right)\right]}{\left(r - 1\right)^{5}}. \tag{5.49}$$
When we take the Taylor series expansion of C in r about r = 0, we have
$$\begin{aligned}
C &= C\big|_{r=0} + \frac{dC}{dr}\bigg|_{r=0}\cdot r + \frac{d^2 C}{dr^2}\bigg|_{r=0}\cdot\frac{r^2}{2} + \ldots \\
&= 2p_E^2 + \left(8p_E^2 - 4p_E p_S\right)\cdot r + \left(36p_E^2 - 32p_E p_S + 4p_S^2\right)\cdot\frac{r^2}{2} + \ldots \\
&= 2p_E^2 + \left(8p_E^2 - 4p_E p_S\right)\cdot\frac{m_E}{m_S} + \left(18p_E^2 - 16p_E p_S + 2p_S^2\right)\cdot\left(\frac{m_E}{m_S}\right)^{2} + \ldots \\
&\approx 2p_E^2 + \left(8p_E^2 - 4p_E p_S\right)\cdot\frac{m_E}{m_S} \\
&= \left(2 + 8\,\frac{m_E}{m_S}\right)p_E^2 - 4p_E p_S\,\frac{m_E}{m_S}. \tag{5.50}
\end{aligned}$$
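The leading-order forms quoted in eqns. 5.40, 5.45, and 5.50 can also be obtained mechanically; this sympy sketch (added as a check and not part of the original derivation) expands all three coefficients in r = m_E/m_S:

```python
import sympy as sp

r = sp.symbols('r', positive=True)             # r = m_E/m_S
pS, pE = sp.symbols('p_S p_E', real=True)

A = (1 + r)/(1 - r) - 1
B = (1 + r)/(1 - r) * 2*(r*pS - pE)/(1 - r)
C = (1 + r)/(1 - r) * (2*pE**2 - 4*r*pE*pS + 2*r**2*pS**2)/(1 - r)**2

for name, coeff in [('A', A), ('B', B), ('C', C)]:
    print(name, '=', sp.series(coeff, r, 0, 2))
# A = 2*r + O(r**2)
# B = -2*p_E + r*(2*p_S - 6*p_E) + O(r**2)
# C = 2*p_E**2 + r*(8*p_E**2 - 4*p_E*p_S) + O(r**2)
```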

Thus, using eqn. 5.39, we can write ΔW as
$$\Delta W \approx X + Y + Z, \tag{5.51}$$
where
$$X = 2\,\frac{m_E}{m_S}\int\! dp_E\, dx_E\; W_E(x_E, p_E)\, W_S(x_S, p_S), \tag{5.52}$$
$$Y = 2p_S\,\frac{m_E}{m_S}\int\! dp_E\, dx_E\; W_E(x_E, p_E)\, \partial_{p_S} W_S(x_S, p_S) - \left(2 + 6\,\frac{m_E}{m_S}\right)\int\! dp_E\, dx_E\; p_E\, W_E(x_E, p_E)\, \partial_{p_S} W_S(x_S, p_S), \tag{5.53}$$
and
$$Z = \left(2 + 8\,\frac{m_E}{m_S}\right)\int\! dp_E\, dx_E\; p_E^2\, W_E(x_E, p_E)\, \partial^2_{p_S} W_S(x_S, p_S) - 4p_S\,\frac{m_E}{m_S}\int\! dp_E\, dx_E\; p_E\, W_E(x_E, p_E)\, \partial^2_{p_S} W_S(x_S, p_S). \tag{5.54}$$

Now, we recall from our preliminary discussion on the marginal distributions of the Wigner distribution in section 4.1.3 that
$$\begin{aligned}
\int\! dp_E\, dx_E\; O(p_E)\, W_E(x_E, p_E)\, W_S(x_S, p_S)
&= W_S(x_S, p_S)\int\! dp_E\; O(p_E)\int\! dx_E\; W_E(x_E, p_E) \\
&= W_S(x_S, p_S)\int\! dp_E\; O(p_E)\,\tilde{\rho}\!\left(p_E, p_E\right) \\
&= W_S(x_S, p_S)\,\mathrm{Tr}\!\left(\hat{O}\hat{\rho}\right) \\
&= W_S(x_S, p_S)\,\big\langle \hat{O} \big\rangle, \tag{5.55}
\end{aligned}$$
where O is an observable. Hence, our previous calculations yield
$$X = 2\,\frac{m_E}{m_S}\,\langle 1 \rangle\, W_S(x_S, p_S) = 2\,\frac{m_E}{m_S}\, W_S(x_S, p_S), \tag{5.56}$$
$$Y = 2p_S\,\frac{m_E}{m_S}\,\partial_{p_S} W_S(x_S, p_S) - \left(2 + 6\,\frac{m_E}{m_S}\right)\big\langle p_E \big\rangle\,\partial_{p_S} W_S(x_S, p_S), \tag{5.57}$$
and
$$Z = \left(2 + 8\,\frac{m_E}{m_S}\right)\big\langle p_E^2 \big\rangle\,\partial^2_{p_S} W_S(x_S, p_S) - 4p_S\,\frac{m_E}{m_S}\,\big\langle p_E \big\rangle\,\partial^2_{p_S} W_S(x_S, p_S). \tag{5.58}$$
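To make the replacement of the environment integrals by expectation values concrete, here is a short numerical sketch (an illustration added here, assuming a Maxwell-Boltzmann form for W_E that the thesis does not spell out) checking that the environment averages entering eqns. 5.56 through 5.58 come out as expected: the normalization is one, ⟨p_E⟩ vanishes, and ⟨p_E²⟩ equals m_E kT (the value used later in eqn. 5.67):

```python
import numpy as np

# assumed environment parameters (illustrative only): particle mass, kT at ~300 K, box length
mE, kT, L = 1.0e-26, 4.1e-21, 1.0e-6

pE = np.linspace(-12, 12, 4001) * np.sqrt(mE*kT)
xE = np.linspace(0.0, L, 201)
dp, dx = pE[1] - pE[0], xE[1] - xE[0]

# W_E(x_E, p_E): uniform in the box, Maxwell-Boltzmann in momentum, normalized
wp = np.exp(-pE**2/(2*mE*kT)) / np.sqrt(2*np.pi*mE*kT)
WE = np.outer(np.full(xE.size, 1.0/L), wp)

print(np.sum(WE) * dp * dx)                    # ~ 1   (normalization)
print(np.sum(pE * WE) * dp * dx)               # ~ 0   (<p_E> = 0, used in eqn. 5.59)
print(np.sum(pE**2 * WE) * dp * dx / (mE*kT))  # ~ 1   (<p_E^2> = m_E kT, cf. eqn. 5.67)
```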

However, originally we considered the environment as an ideal (one-dimensional) gas of environment particles, so it is reasonable to assume that any measurement of an environment particle's momentum is equally likely to point in either direction, i.e. ⟨p_E⟩ = 0. Eqn. 5.51 then becomes
$$\Delta W \approx 2\,\frac{m_E}{m_S}\, W_S(x_S, p_S) + 2p_S\,\frac{m_E}{m_S}\,\partial_{p_S} W_S(x_S, p_S) + \left(2 + 8\,\frac{m_E}{m_S}\right)\big\langle p_E^2 \big\rangle\,\partial^2_{p_S} W_S(x_S, p_S). \tag{5.59}$$
We notice that
$$2\,\frac{m_E}{m_S}\, W_S(x_S, p_S) + 2p_S\,\frac{m_E}{m_S}\,\partial_{p_S} W_S(x_S, p_S) = 2\,\frac{m_E}{m_S}\left[W_S(x_S, p_S) + p_S\,\partial_{p_S} W_S(x_S, p_S)\right] = 2\,\frac{m_E}{m_S}\,\partial_{p_S}\!\left[p_S\, W_S(x_S, p_S)\right], \tag{5.60}$$

so we write the change in the Wigner distribution of the system due to one environmental collision as
$$\Delta W \approx 2\,\frac{m_E}{m_S}\,\partial_{p_S}\!\left[p_S\, W_S(x_S, p_S)\right] + \left(2 + 8\,\frac{m_E}{m_S}\right)\big\langle p_E^2 \big\rangle\,\partial^2_{p_S} W_S(x_S, p_S). \tag{5.61}$$

5.4 The Master Equation for Quantum Brownian Motion

In our simple model, the system is only under the influence of environmental particles, and is free otherwise. Thus, the total change in the system's Wigner distribution with time is given by its free-particle term added to some contribution due to the environment. Since the environment acts on the system through collisions, if we define Γ to be the statistical number of collisions per unit time between the system and environmental particles, we combine eqns. 4.27 and 5.61 to get [22]
$$\frac{\partial W_S}{\partial t} = -\frac{p_S}{m_S}\,\partial_{x_S} W_S(x_S, p_S, t) + \Gamma\left[2\,\frac{m_E}{m_S}\,\partial_{p_S}\!\left(p_S\, W_S(x_S, p_S)\right) + \left(2 + 8\,\frac{m_E}{m_S}\right)\big\langle p_E^2 \big\rangle\,\partial^2_{p_S} W_S(x_S, p_S)\right], \tag{5.62}$$
an expression for the total change in the system's Wigner distribution with time.
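A reassuring property of eqn. 5.62, checked below with a sympy sketch that is an addition to the text (it uses ⟨p_E²⟩ = m_E kT, the relation adopted later in eqn. 5.67), is that the collision terms vanish for a thermal system distribution W_S ∝ exp(−p_S²/2m_S kT), up to a remainder of second order in m_E/m_S, which is exactly the order neglected throughout this derivation. In other words, the environment drives the system toward thermal equilibrium rather than away from it.

```python
import sympy as sp

pS = sp.symbols('p_S', real=True)
mS, mE, kT, Gamma = sp.symbols('m_S m_E kT Gamma', positive=True)

WS = sp.exp(-pS**2/(2*mS*kT))          # thermal W_S; the normalization constant is irrelevant here
pE2 = mE*kT                            # <p_E^2>, cf. eqn. 5.67

collision = Gamma*(2*(mE/mS)*sp.diff(pS*WS, pS)
                   + (2 + 8*mE/mS)*pE2*sp.diff(WS, pS, 2))

# the collision term reduces to an O((m_E/m_S)^2) remainder for the thermal distribution
print(sp.simplify(collision - 8*Gamma*(mE/mS)**2*(pS**2/(mS*kT) - 1)*WS))   # 0
```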

We use table 4.1 to convert our equation for the Wigner distribution to an equation for the state operator of the system. This is
$$\left[\partial_t \rho_S(x, y)\right]^{W} = \frac{i}{2m_S}\left[\left(\partial^2_x - \partial^2_y\right)\rho_S(x, y)\right]^{W} - \Gamma\,\frac{m_E}{m_S}\left[(x - y)\left(\partial_x - \partial_y\right)\rho_S(x, y)\right]^{W} - \Gamma\left(2 + 8\,\frac{m_E}{m_S}\right)\big\langle p_E^2 \big\rangle\left[(x - y)^2\,\rho_S(x, y)\right]^{W}. \tag{5.63}$$

Noting that this is true for all ρ_S(x, y), we have
$$\partial_t \rho_S(x, y) = \frac{i}{2m_S}\left(\partial^2_x - \partial^2_y\right)\rho_S(x, y) - \Gamma\,\frac{m_E}{m_S}\,(x - y)\left(\partial_x - \partial_y\right)\rho_S(x, y) - \Gamma\left(2 + 8\,\frac{m_E}{m_S}\right)\big\langle p_E^2 \big\rangle\,(x - y)^2\,\rho_S(x, y). \tag{5.64}$$

We take the standard definition of the dissipation rate γ to be [22]
$$\gamma \equiv \frac{m_E}{m_S}\,\Gamma, \tag{5.65}$$

so
$$\partial_t \rho_S(x, y) = \frac{i}{2m_S}\left(\partial^2_x - \partial^2_y\right)\rho_S(x, y) - \gamma\,(x - y)\left(\partial_x - \partial_y\right)\rho_S(x, y) - \gamma\,\frac{m_S}{m_E}\left(2 + 8\,\frac{m_E}{m_S}\right)\big\langle p_E^2 \big\rangle\,(x - y)^2\,\rho_S(x, y). \tag{5.66}$$
To express this result in standard form, we use the definition of temperature in one dimension from statistical mechanics, which is [27]
$$\frac{1}{2}\,kT \equiv \frac{\big\langle p_E^2 \big\rangle}{2m_E}, \tag{5.67}$$
where T is temperature and k is the Boltzmann constant. Using this definition, we examine the last term more closely and find
$$\gamma\,\frac{m_S}{m_E}\left(2 + 8\,\frac{m_E}{m_S}\right)\big\langle p_E^2 \big\rangle\,(x - y)^2\,\rho_S(x, y) = \gamma\,\frac{m_S}{m_E}\left(2 + 8\,\frac{m_E}{m_S}\right) m_E kT\,(x - y)^2\,\rho_S(x, y) = \gamma\left(2m_S + 8m_E\right) kT\,(x - y)^2\,\rho_S(x, y) \approx 2m_S \gamma kT\,(x - y)^2\,\rho_S(x, y), \tag{5.68}$$

where we have used the fact that m_E ≪ m_S. Thus, our final result is [22]
$$\partial_t \rho_S(x, y) = \frac{i}{2m_S}\left(\partial^2_x - \partial^2_y\right)\rho_S(x, y) - \gamma\,(x - y)\left(\partial_x - \partial_y\right)\rho_S(x, y) - 2m_S \gamma kT\,(x - y)^2\,\rho_S(x, y), \tag{5.69}$$
which is the accepted master equation for quantum Brownian motion [3, 12, 22]. Using dimensional analysis, we can reinsert ħ to bring the master equation into SI units. This is


$$\partial_t \rho_S(x, y) = \frac{i\hbar}{2m_S}\left(\partial^2_x - \partial^2_y\right)\rho_S(x, y) - \gamma\,(x - y)\left(\partial_x - \partial_y\right)\rho_S(x, y) - \frac{2m_S \gamma kT}{\hbar^2}\,(x - y)^2\,\rho_S(x, y). \tag{5.70}$$
The assumptions used to derive this equation are listed in table 5.1.

Table 5.1: Assumptions used for the derivation of eqn. 5.69

Assumption | Equation | Label
Small mass ratio | m_E/m_S ≪ 1 | 5.1
Locality | |x_S − x_E| ≪ |x_S| | 5.16
Statistical environment | ⟨p_E⟩ = 0 | 5.59
Dissipation | γ = (m_E/m_S)·Γ | 5.65
Temperature | (1/2)kT = ⟨p_E²⟩/(2m_E) | 5.67
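Readers who wish to integrate eqn. 5.70 numerically can assemble its right-hand side on a discretized (x, y) grid; the numpy sketch below is an addition to the text, and the grid sizes, the 10⁻¹⁸ kg mass, and the other parameter values are illustrative assumptions rather than numbers taken from the thesis. A simple explicit Euler or Runge-Kutta step on this right-hand side reproduces the qualitative behavior discussed in the next chapter.

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

def master_rhs(rho, x, mS, gamma, T):
    """Right-hand side of eqn. 5.70 for rho(x, y) sampled on the grid x (first index x, second y)."""
    dx = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    d_dx = lambda f: np.gradient(f, dx, axis=0)
    d_dy = lambda f: np.gradient(f, dx, axis=1)
    free = 1j*hbar/(2*mS) * (d_dx(d_dx(rho)) - d_dy(d_dy(rho)))
    damp = -gamma * (X - Y) * (d_dx(rho) - d_dy(rho))
    deco = -2*mS*gamma*kB*T/hbar**2 * (X - Y)**2 * rho
    return free + damp + deco

# example: a superposition of two Gaussian packets for a 1e-18 kg grain (illustrative numbers)
x = np.linspace(-2e-6, 2e-6, 256)
psi = np.exp(-(x - 5e-7)**2/(2*(1e-7)**2)) + np.exp(-(x + 5e-7)**2/(2*(1e-7)**2))
psi /= np.sqrt(np.trapz(abs(psi)**2, x))
rho = np.outer(psi, psi.conj())
drho_dt = master_rhs(rho, x, mS=1e-18, gamma=1.0, T=300.0)
print(abs(drho_dt).max())   # dominated by the decoherence term where |x - y| is large
```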

CHAPTER 6

Consequences of the Master Equation

We now explore the physical ramifications of the master equation for quantum Brownian motion, developed in the previous chapter. First, we investigate its physical meaning term by term. Next, we consider the simple example of a quantum harmonic oscillator undergoing decoherence. Finally, we offer some closing remarks on decoherence theory in general and suggestions for further reading.

6.1 Physical Significance of the First Two Terms

In the realm of master equations, eqn. 5.69 for quantum Brownian motion is actually simple [12]. Even so, the purpose of each term is not immediately obvious. In this section, we examine the physical meaning of the first and second terms. The first term is the free system evolution, as it is the transform of eqn. 4.27. It does not hurt to verify this explicitly, without employing the Wigner distribution. If we switch to SI units via eqn. 5.70, the first term is
$$\begin{aligned}
\frac{i\hbar}{2m_S}\left(\partial^2_x - \partial^2_y\right)\rho_S(x, y)
&= \frac{i\hbar}{2m_S}\left(\partial^2_x - \partial^2_y\right)\big\langle x \big|\,\hat{\rho}_S\,\big| y \big\rangle \\
&= \frac{i\hbar}{2m_S}\,\partial^2_x\,\big\langle x \big|\,\hat{\rho}_S\,\big| y \big\rangle - \frac{i\hbar}{2m_S}\,\partial^2_y\,\big\langle x \big|\,\hat{\rho}_S\,\big| y \big\rangle \\
&= -\frac{i}{2m_S\hbar}\,\big\langle x \big|\,\hat{P}^2\hat{\rho}_S\,\big| y \big\rangle + \frac{i}{2m_S\hbar}\,\big\langle x \big|\,\hat{\rho}_S\hat{P}^2\,\big| y \big\rangle \\
&= -\frac{i}{\hbar}\,\big\langle x \big|\,\frac{\hat{P}^2}{2m_S}\,\hat{\rho}_S - \hat{\rho}_S\,\frac{\hat{P}^2}{2m_S}\,\big| y \big\rangle. \tag{6.1}
\end{aligned}$$

By eqn. 3.20, the free system (for which V = 0) has a Hamiltonian of
$$\hat{H}_f = \frac{\hat{P}^2}{2m_S}, \tag{6.2}$$
so our equation becomes
$$-\frac{i}{\hbar}\,\big\langle x \big|\,\hat{H}_f\,\hat{\rho}_S - \hat{\rho}_S\,\hat{H}_f\,\big| y \big\rangle = -\frac{i}{\hbar}\,\big\langle x \big|\,\big[\hat{H}_f, \hat{\rho}_S\big]\,\big| y \big\rangle, \tag{6.3}$$
which is
$$\big\langle x \big|\,\partial_t\hat{\rho}_S\,\big| y \big\rangle = \partial_t\rho_S(x, y) \tag{6.4}$$
by eqn. 3.55. Thus, we confirm that the first term in the master equation is the free evolution of the state operator.

The second term is not so obvious, and turns out to be responsible for damping our system's motion. To explain this, we use the master equation to calculate the rate of change of the expectation value of momentum due to the second term. In the position basis, the second term

reduces to [3]
$$\begin{aligned}
\partial_t\big\langle \hat{P} \big\rangle_2 = \mathrm{Tr}\!\left(\hat{P}\,\partial_t\hat{\rho}\right)_2
&= -\gamma\,\mathrm{Tr}\!\left[\hat{P}\,(x - y)\left(\partial_x - \partial_y\right)\rho(x, y)\right] \\
&= -\gamma\int\! dx\; \frac{\hbar}{i}\,\Big[\partial_x\!\left[(x - y)\left(\partial_x - \partial_y\right)\rho(x, y)\right]\Big]_{y = x} \\
&= -\gamma\int\! dx\; \frac{\hbar}{i}\left[\left(\partial_x - \partial_y\right)\rho(x, y)\right]_{y = x} - \gamma\int\! dx\; \frac{\hbar}{i}\,(x - x)\left[\partial_x\left(\partial_x - \partial_y\right)\rho(x, y)\right]_{y = x} \\
&= -\gamma\int\! dx\; \frac{\hbar}{i}\left[\partial_x\rho(x, y)\right]_{y = x} - \gamma\int\! dx\; \frac{\hbar}{i}\left[-\partial_y\rho(x, y)\right]_{y = x} + 0 \\
&= -\gamma\,\mathrm{Tr}\!\left(\hat{P}\hat{\rho}\right) - \gamma\,\mathrm{Tr}\!\left(\hat{\rho}\hat{P}\right) \\
&= -2\gamma\,\big\langle \hat{P} \big\rangle, \tag{6.5}
\end{aligned}$$
which is
$$\partial_t\big\langle \hat{P} \big\rangle_2 = -2\gamma\,\big\langle \hat{P} \big\rangle. \tag{6.6}$$

Hence, the contribution of the second term to the rate of change of the momentum is proportional to the dissipation rate (a scalar) times the momentum, pointed in the opposite direction from the momentum. This is precisely a damping effect, which is what we wanted to show [14].
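Eqn. 6.6 is just a first-order linear decay law, so the expectation value of momentum relaxes exponentially. As a minimal numerical illustration (added here, with an arbitrary initial momentum and damping rate), an explicit Euler integration of this decay law matches the analytic exponential:

```python
import numpy as np

rate, p0, dt, steps = 1.0, 2.0, 1e-3, 2000   # assumed decay rate (2*gamma), initial <P>, step size
p = p0
for _ in range(steps):
    p += -rate*p*dt                          # explicit Euler step of eqn. 6.6
t = steps*dt
print(p, p0*np.exp(-rate*t))                 # ~0.2706 vs ~0.2707: numerical and analytic decay agree
```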

6.2 The Decoherence Term

The last term of the master equation turns out to cause decoherence of the system, so it is central to our discussion. To interpret it, we will make some reasonable approximations. To get a better idea of the relative size of the terms, we use the SI version of the master equation, eqn. 5.70. Notice that the last term contains a numerical factor of 1/ħ² ≈ 10⁶⁸, while the other terms are either first or zeroth order in 1/ħ. Thus, we surmise that for sufficiently large (x − y), the last term will dominate the equation.¹ Hence, our drastically simplified master equation is [3]
$$\partial_t \rho_S(x, y) \approx -\frac{2m_S \gamma kT}{\hbar^2}\,(x - y)^2\,\rho_S(x, y), \tag{6.7}$$
which has the standard solution
$$\rho_S(x, y, t) = \rho_S(x, y, 0)\, e^{-\frac{2m_S \gamma kT}{\hbar^2}(x - y)^2\, t}. \tag{6.8}$$

Since the argument of the exponential must be dimensionless, ħ²/[2m_S γ kT (x − y)²] has units of time. Customarily, we identify [3]
$$t_d \equiv \frac{\hbar^2}{2m_S \gamma kT\,(x - y)^2} \tag{6.9}$$

as the (characteristic) decoherence time of the system, which is its e-folding time.² Notice also that the decoherence time varies with location in state-space, as it depends on both x and y. Thus, we are not surprised to find that some regions decay faster than others. Further, since ħ² ≈ 10⁻⁶⁸ in SI units, the decoherence time for any reasonably large system is incredibly small.³ Next, we consider an example to show how decoherence operates in a simple situation.

¹ In the matrix representation of a state operator, this corresponds to the off-diagonal elements. Recall that the totally mixed state in eqn. 2.57 had a diagonal state operator. This confirms that decoherence works on the off-diagonal elements of the state operator.
² When t = t_d, ρ(x, y, t_d) = (1/e) ρ(x, y, 0).
³ For example, if we suppose our environment is an ideal, one-dimensional gas at room temperature with a mass of 10⁻²⁶ kg per particle and a collision rate with the system of Γ ≈ 10¹⁰ collisions per second (atmospheric conditions), we find the decoherence time of the system for length scales of nanometers to be of order t_d = ħ²/[2m_E Γ kT (x − y)²] ≈ 10⁻⁶⁸/(2·10⁻²⁶·10¹⁰·1.4·10⁻²³·300·10⁻¹⁸) ≈ 10⁻¹⁴ s.
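To get a feel for the scales involved, the following short sketch evaluates eqn. 6.9 directly; it is an addition to the text, and the grain mass, damping rate, temperature, and separation it uses are assumed illustrative values rather than numbers from the thesis:

```python
import numpy as np

hbar, kB = 1.054571817e-34, 1.380649e-23

def t_decoherence(mS, gamma, T, dx):
    """Characteristic decoherence time of eqn. 6.9 for a separation dx = |x - y| (SI units)."""
    return hbar**2 / (2*mS*gamma*kB*T*dx**2)

# illustrative numbers: a 1e-18 kg grain, gamma = 1 s^-1, T = 300 K, 1 micrometer separation
print(t_decoherence(1e-18, 1.0, 300.0, 1e-6))   # ~1e-18 s, eighteen orders of magnitude faster than 1/gamma
```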


Figure 6.1: The decoherence of the ground state of the quantum harmonic oscillator under the simplified master equation 6.11. In the density plots, yellow indicates maximum values, while blue indicates minimum.

Figure 6.2: The decoherence of the third excited state of the simple harmonic oscillator under the simplified master equation 6.11. In the density plots, yellow indicates maximum values, while blue indicates minimum.


6.3 Example: The Harmonic Oscillator in a Thermal Bath

So far, we have supposed that ρ_S is a free particle. However, note that our simplified master equation, eqn. 6.7, does not explicitly depend on the system's Hamiltonian (this was contained in the first term), so we are free to replace our initial state operator with some other state operator of a different system. We choose, due to its utility and familiarity, the harmonic oscillator. From our work in section 4.5, we know that the state operator for the harmonic oscillator, eqn. 4.67, is
$$\rho_n(x, y, t = 0) = \frac{1}{n!}\left(\frac{m\omega}{\pi}\right)^{1/2}\left(\frac{1}{2m\omega}\right)^{n}\left(m\omega x - \partial_x\right)^{n}\left(m\omega y - \partial_y\right)^{n}\, e^{-\frac{m\omega}{2}x^2}\, e^{-\frac{m\omega}{2}y^2}. \tag{6.10}$$

If we place this state operator in a thermal bath, we expect the system to evolve approximately according to eqn. 6.8, so the time-dependent state operator of the harmonic oscillator is
$$\rho_n(x, y, t) = \frac{1}{n!}\left(\frac{m\omega}{\pi}\right)^{1/2}\left(\frac{1}{2m\omega}\right)^{n}\left(m\omega x - \partial_x\right)^{n}\left(m\omega y - \partial_y\right)^{n}\, e^{-\frac{m\omega}{2}x^2}\, e^{-\frac{m\omega}{2}y^2}\, e^{-2m_S \gamma kT\,(x - y)^2\, t}. \tag{6.11}$$
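A direct way to see the off-diagonal decay in eqn. 6.11 is to evaluate the n = 0 case on a grid and watch an off-diagonal element shrink while a diagonal element stays fixed. The sketch below is an added illustration, written in natural units with ħ = m = ω = 1 and an arbitrary assumed value for the decoherence rate 2m_S γ kT:

```python
import numpy as np

Lambda = 2.0   # assumed value of the rate 2*m_S*gamma*k*T in natural units

def rho0(x, y, t):
    """Ground-state oscillator state operator of eqn. 6.11 with m = omega = hbar = 1 and n = 0."""
    return (1/np.sqrt(np.pi)) * np.exp(-x**2/2) * np.exp(-y**2/2) * np.exp(-Lambda*(x - y)**2 * t)

for t in [0.0, 0.5, 1.0, 2.0]:
    diag = rho0(1.0, 1.0, t)     # a diagonal element, x = y
    off = rho0(1.0, -1.0, t)     # an off-diagonal element, x != y
    print(f"t = {t:3.1f}   rho(1,1) = {diag:.4f}   rho(1,-1) = {off:.6f}")
# the diagonal element is untouched while rho(1,-1) decays like exp(-4*Lambda*t)
```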

In figures 6.1 and 6.2, we plot the state operators for n = 0 and n = 3. As is evident from the form of eqn. 6.11, the off-diagonal matrix elements (where x ≠ y) quickly vanish with time. Physically, the off-diagonal elements of the state operator represent the quantum interference terms, terms that can interact only with other quantum systems. These interference terms are what give the entangled states we explored in sections 2.4 and 2.5 their interesting qualities. By zeroing the off-diagonal elements, we take a quantum mechanical system and force it into a classical distribution. As it turns out, this interpretation becomes obvious as t → ∞. By eqn. 6.11, this is
$$\lim_{t \to \infty} \rho(x, y, t) =
\begin{cases}
0 & \text{if } x \neq y, \\
\psi^{*}(x)\,\psi(x) = \left|\psi(x)\right|^{2} & \text{if } x = y,
\end{cases} \tag{6.12}$$

as shown in figure 6.3. This quantity is a statistical probability distribution, and as we saw with the roulette wheel at the beginning of this thesis, decoherence has effectively blocked us from accessing any of the quantum mechanical information present in our initial system.

6.4 Concluding Remarks

We have now developed and applied the master equation for quantum Brownian motion, and used it to clarify how a macroscopic, classical object might emerge from quantum mechanics. We started by setting the stage with the mathematics and formalism we would need to develop quantum mechanics. Then, we used the tools we made to derive the Schrödinger equation and the equation of motion for the state operator. We then shifted and considered quantum mechanics in phase-space, where the central object is the Wigner distribution. Next, we explored some of its key properties and described an example of its application using the harmonic oscillator. After that, we used it to derive the simple master equation for one-dimensional quantum Brownian motion. We explained each of the terms physically, and finally considered an example of decoherence, where the master equation transformed a quantum harmonic oscillator into a classical probability distribution.

Figure 6.3: The final state reached by the first four energy states of the simple harmonic oscillator under the simplified master equation 6.11. The two-dimensional plots coincide with the diagonal of the density plots.


The debate still rages in the physics community: does decoherence theory solve the philosophical problems brought about by paradoxes like Schrödinger's cat, or does it merely postpone the problem, pushing the fundamental issue into an environmental black box [28, 29, 30]? Regardless, it provides a practical framework for performing objective measurements without an observer, which is of key importance to the emerging fields of quantum computation and quantum information. Current efforts are underway to probe decoherence directly, both experimentally and theoretically. Through the use of mesoscopic systems, scientists have been able to manufacture tiny oscillators that are getting very close to the quantum regime [31, 32]. Theoretical predictions of what should be observed at the quantum-classical barrier have also been made, with the promise of experimental feasibility within a few years [33]. Just last year, scientists performed experiments involving ultra-cold chlorophyll, confirming that even photosynthesis is a quantum-emergence phenomenon, and thus governed by decoherence theory [34]. The group went so far as to suggest that chloroplasts were actually performing quantum computation algorithms on themselves to speed up reaction times. This idea of selective self-measurement is intriguing, but largely undeveloped theoretically. It, along with the many other application areas of quantum decoherence theory, is sure to occupy physicists for years to come.

References

1. D. Poole, Linear Algebra: A Modern Introduction (Thomson Brooks/Cole, 2006), 2nd ed.
2. D. J. Griffiths, Introduction to Quantum Mechanics (Prentice Hall, 2005), 2nd ed.
3. R. Omnès, Understanding Quantum Mechanics (Princeton University Press, 1999).
4. L. E. Ballentine, Quantum Mechanics: A Modern Development (World Scientific, 1998), 2nd ed.
5. G. Greenstein and A. G. Zajonc, The Quantum Challenge (Jones and Bartlett Publishers, 2006), 2nd ed.
6. M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2000).
7. M. Anderson and T. Feil, A First Course in Abstract Algebra: Rings, Groups and Fields (Chapman and Hall, 2005), 2nd ed.
8. T. Gamelin and E. Greene, Introduction to Topology (Dover Publications, 1999), 2nd ed.
9. D. Halliday, R. Resnick, and J. Walker, Fundamentals of Physics (Wiley, 2004), 7th ed.
10. K. Riley, M. Hobson, and S. Bence, Mathematical Methods for Physics and Engineering: A Comprehensive Guide (Cambridge University Press, 1998).
11. A. Sudbery, Quantum Mechanics and the Particles of Nature: An Outline for Mathematicians (Cambridge University Press, 1986).
12. W. H. Zurek, Reviews of Modern Physics 75, 715 (2003).
13. C. Cohen-Tannoudji, B. Diu, and F. Laloë, Quantum Mechanics (John Wiley and Sons, 1977).
14. S. T. Thornton and J. B. Marion, Classical Dynamics of Particles and Systems (Brooks/Cole, 2004), 5th ed.
15. H. F. Jones, Groups, Representations and Physics (Institute of Physics Publishing, 1998), 2nd ed.
16. J. F. Lindner, Buckaroo Banzai in 1+1 Dimensions (1992), unpublished.
17. V. Bargmann, Journal of Mathematical Physics 5, 862 (1964).
18. E. Wigner, Physical Review 40, 749 (1932).
19. C. K. Zachos, D. B. Fairlie, and T. L. Curtright, eds., Quantum Mechanics in Phase Space (World Scientific, 2005).
20. C. Zachos, arXiv:hep-th/0110114v2 (2001).
21. D. Styer, M. Balkin, K. Becker, and M. Burns, American Journal of Physics 70, 288 (2002).
22. J. Halliwell, J. Phys. A: Math. Theor. 40, 3067 (2007).
23. L. Cohen, Time-Frequency Analysis (Prentice Hall, 1995).
24. M. Hillery, R. O'Connell, M. Scully, and E. Wigner, Physics Reports 106, 121 (1984).
25. J. Stewart, Calculus (Brooks Cole, 2002), 5th ed.
26. E. Hecht, Optics (Addison-Wesley, 1998), 3rd ed.
27. C. Kittel and H. Kroemer, Thermal Physics (W. H. Freeman and Company, 1980), 2nd ed.
28. P. Meuffels, American Journal of Physics 75, 1063 (2007).
29. A. Hobson, American Journal of Physics 75, 869 (2007).
30. M. Schlosshauer, Reviews of Modern Physics 76, 1267 (2005).
31. M. LaHaye, O. Buu, B. Camarota, and K. Schwab, Science 304, 74 (2004).
32. M. Blencowe, Contemporary Physics 46, 249 (2005).
33. I. Katz, A. Retzker, R. Straub, and R. Lifshitz, Physical Review Letters 99 (2007).
34. G. Engel, T. Calhoun, E. Read, T. Ahn, and T. Mancal, Nature 446, 782 (2007).

Index

A: Adjoint, 8.
B: Basis, 3; continuous basis, 19, 36; basis of eigenvectors, see Spectral theorem; orthonormality of a basis, 5. Bell state, 33. Bernstein, H. J., viii. Bloch sphere, 29; Bloch vector, 31. Bra, 4.
C: Canonical commutator, 46. Collision, 73; locality of a collision, 76. Composite observable, 26; composite state operator, 25; composite state vector, 25. Conjugate space, 18. Convolution, 70.
D: De Broglie relation, 53. Decoherence time, viii, 90. Dimension, 3. Dirac delta, 57. Dirac, P. A. M., 4. Dissipation, 85, 89. Dual space, 4. Dual vector, see Linear functional.
E: Energy operator, see Hamiltonian. Entangled state, 32. Environment, 73. Equation of motion of the state operator, 47. Expectation value, 22.
F: Field (algebraic), 2. Free particle: in position basis, 51; Wigner distribution, see Wigner.
G: Graham-Schmidt algorithm, 5. Group, 38; Galilei group, vi, 38, 39; generators, 46; unitary representatives, 41; Poincaré group, 38.
H: Hamiltonian, 42. Harmonic oscillator: decoherence of, 92; Wigner distribution solutions, 69. Hilbert space, 17.
I: Impure state, vii. Integral form: inner product, 19; spectral theorem, 19; trace, 19.
K: Ket, 4. Kronecker product, 15.
L: Linear dependence, 2; linear functional, 4; linear independence, 3. Linear operator: abstract, 6; on a function space, 37; self-adjoint, 9. Linear vector space, 2.
M: Master equation: state operator form, 85; Wigner form, 84. Matrix element, 8. Matrix representation: of linear functionals, 5; of linear operators, 7; of vectors, 5. Measurement, vii, 22, 39, 83, 94. Mixed state, see State, impure. Mixture, see State, impure. Momentum operator, 42; in position basis, 49.
O: Observable, 22. Operator, see Linear operator. Outer product, 9.
P: Partial trace, 26. Plane wave, 52. Position operator, 42. Product state, 32.
Q: Quantum emergence, vii.
R: Reduced state operator, 27. Resolution of the identity, 11. Riesz representation theorem, 4. Rigged Hilbert space triplet, 18.
S: Scalar, 2. Schrödinger equation: time dependent, 50; time independent, 51. Schrödinger's cat, vii, 94. Spectral theorem, 11; generalized, 18. State: impure, 23, 24; pure, 23; state vector, 23. State operator, 22; in momentum basis, 57. Stern-Gerlach analyzer, ix. Superposition, ix, 32, 53; superposition principle, 29. System, 72.
T: Temperature, 85. Tensor product, 13. Two-body collisions, 78. Two-state system, viii, 29, 33.
V: Vector, 2. Vector space, see Linear vector space.
W: Wave packet, 52. Wavefunction, 36. Wigner distribution: composite systems, 58; definition, 55; free evolution, 61; in momentum basis, 58; inverse, 55; marginal distributions, 56; projection function, 59; reality, 56. Wigner transform: definition, 62; of important quantities, 63. Wigner, E. P., 54.

