Quantum Information Theory

Seny Kamara*
Dept. of Computer Science, Purdue University
December 10, 2001

* Current address: seny@cs.jhu.edu

Abstract. Quantum information theory provides a foundation for such topics as quantum cryptography, quantum error-correction and quantum teleportation. This paper seeks to provide an introduction to quantum information theory for non-physicists at an undergraduate level. It covers basic concepts in quantum mechanics as well as in information theory, and proceeds to explore some results such as Von Neumann entropy, Schumacher coding and quantum error-correction.

1 Introduction

Quantum information theory is a new field that attempts to quantify and describe quantum mechanical resources and the processes that act on them. In a short amount of time its study has produced many results that have had a strong impact on both quantum computing and the foundations of quantum mechanics. In particular, the discovery of quantum error-correcting codes has made the feasibility of an experimental quantum computer much more likely, and the study of quantum communication channels and of quantum measurement has provided useful insight into the laws of quantum mechanics. Section 2 offers an introduction to quantum mechanics, section 3 presents some basic concepts in information theory, and finally section 4 goes over some results from quantum information theory.

2 Quantum Mechanics

Quantum mechanics is the biggest hurdle for non-physicists interested in quantum computing. This section provides a brief introduction to the elements of quantum mechanics that are needed to understand quantum information theory. For a more thorough treatment see [1, 2]. Quantum mechanics is a theoretical framework with which physicists are able to describe the world of sub-atomic particles. While the description that quantum mechanics offers is universally accepted as correct, the explanation it offers is the subject of many debates. The difficulty seems to lie in its understanding and interpretation rather than in its use. The controversy largely stems from the fact that to accept quantum theory as true, we need to accept properties about nature that are very unintuitive. However, we cannot dismiss it, as it consistently produces the most accurate experimental predictions.

2.1 Quantum States

In quantum mechanics, we study sub-atomic particles such as protons, neutrons, photons and electrons. These quantum systems are differentiated by their degrees of freedom, which are characteristics such as polarization, spin and energy levels. We describe a particle by its state. The state of a particle encompasses all of the information about it, such as position, polarization, spin and momentum. In the mathematical abstraction of quantum mechanics this state is represented as a vector in a complex vector space. This vector is called a state vector, and the complex vector space is a Hilbert space. A particle Ψ's state vector is usually written using Dirac's bra-ket notation as |Ψ⟩. Since a quantum system's state encompasses all of its information, the first thing we have to do in order to gather information about it is to define what it is we want to know. Mathematically, this is done by defining a set of mutually orthogonal vectors in the system's Hilbert space that will form a basis. We then use this basis to describe, or project, the system's state vector onto the basis vectors. For example, if we were interested in a system that can be in one of three positions, we would define a basis comprised of three vectors, each one describing one of the system's possible locations. The state vector would then be a linear combination of these three basis vectors. So the dimension of a particle's state space is equal to its number of basis states.
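As an illustration, the three-position example can be written out numerically. Here and in the sketches below we use Python with NumPy; the code and its variable names are our own illustrative assumptions, not something prescribed by the text.

    import numpy as np

    # A hypothetical three-position system: the basis states are the
    # standard unit vectors of a 3-dimensional complex Hilbert space.
    basis = [np.array([1, 0, 0], dtype=complex),
             np.array([0, 1, 0], dtype=complex),
             np.array([0, 0, 1], dtype=complex)]

    # A state vector is a normalized linear combination of basis vectors.
    amps = np.array([0.5, 0.5, 1/np.sqrt(2)], dtype=complex)
    psi = sum(a * b for a, b in zip(amps, basis))

    # The component along each basis vector recovers the amplitude.
    for i, b in enumerate(basis):
        print(i, np.vdot(b, psi))    # vdot conjugates its first argument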

2.2 Superposition

Superposition is one of the most puzzling properties of quantum systems. It is the property that a quantum system can be in two or more distinct states at once. Not somewhere in between states, but possibly in all states. As previously mentioned, a quantum state can represent anything from polarization to spin to position. So what this implies is that if we were interested in the position of a particle, and this particle was in a superposition of different position states, we would find that it was in multiple places at once. Using Dirac's bra-ket notation, a state |Ψ⟩ that is in a superposition of two states |φ⟩ and |ψ⟩ is expressed as:

|Ψ⟩ = α|φ⟩ + β|ψ⟩,  α, β ∈ ℂ    (1)

The terms α and β are called probability amplitudes and are further explained below.

2.3 Measurements

Measurements play a very important role in quantum mechanics. When dealing with a quantum system, we do not know what state it is in until we make a measurement. For example, measuring a system |Ψ⟩ in a two-dimensional state space spanned by two mutually orthogonal states |φ⟩ and |ψ⟩, as in state 1, we would find that |Ψ⟩ is either:

|Ψ⟩ = |φ⟩    (2)

or:

|Ψ⟩ = |ψ⟩    (3)

or in a superposition of the base states:

|Ψ⟩ = α|φ⟩ + β|ψ⟩    (4)

When a system is in a base state, as in state 2 or state 3, the act of measurement will always reveal |φ⟩ or |ψ⟩ respectively. However, if the particle is in a superposition of states, as in state 4, a measurement will result in either one of the base states |φ⟩ or |ψ⟩, with probability |α|² and |β|² respectively. Because the outcome has to be one or the other, the probabilities |α|² and |β|² have to sum to one.
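A minimal sketch of this measurement rule, assuming the amplitudes are stored in a NumPy vector: the outcome probabilities are the squared moduli of the amplitudes.

    import numpy as np

    rng = np.random.default_rng()

    def measure(state):
        """Simulate one measurement in the computational basis.

        `state` is a normalized complex vector; outcome i occurs with
        probability |state[i]|^2."""
        probs = np.abs(state) ** 2
        return rng.choice(len(state), p=probs)

    # |psi> = (1/sqrt(2))|0> + (1/sqrt(2))|1>: outcomes 0 and 1 each ~50%.
    psi = np.array([1, 1], dtype=complex) / np.sqrt(2)
    counts = np.bincount([measure(psi) for _ in range(10_000)], minlength=2)
    print(counts)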

2.4 Probabilities and Probability Amplitudes

Quantum mechanics is a statistical theory. This means that the information we have about a particle is stochastic. For a state such as state 4, all we know before measurement is that |Ψ⟩ is in state |φ⟩ with probability |α|² and in state |ψ⟩ with probability |β|². If α = 1 and β = 0 then we know with certainty that |Ψ⟩ = |φ⟩. The terms α and β, which are the components of the state vector, are the amounts of the base states that are present in |Ψ⟩. These terms are called probability amplitudes and they are complex numbers. Taking their squared moduli results in probabilities, and these probabilities have to sum to unity.

2.5 Pure and Mixed States

We say that a system is in a pure state if we know what its state is with certainty, and we say that it is in a mixed state if we do not. So you might ask what the difference is between a mixed state and a superposition of states. A superposition is a consequence of the basis we choose to describe a state in, while a mixed state is a consequence of our ignorance (much like a normal statistical scenario). This is a subtle difference that can sometimes lead to confusion. For example, let's take a pure state such as |Ψ⟩. We know |Ψ⟩'s state with certainty since the probability amplitude associated with it is equal to one. Now suppose we change our basis. You can think of this process in mathematical terms as a simple basis rotation, or in more intuitive terms as a change in what we want to know about our system. In our new basis, |Ψ⟩ might now look more like state 4. But |Ψ⟩'s state did not change, only the basis changed, so we still know |Ψ⟩'s state, which makes it a pure state even though it is a superposition. The fact that we are now expressing |Ψ⟩ as a superposition only tells us that if we measure it in this basis, we will get different results with certain probabilities; the state of |Ψ⟩ itself is still well defined. A mixed state, however, means that we do not know exactly what the state of a system is, regardless of the basis we choose to measure it in. Say we know that |Ψ⟩'s state is equal to |φ⟩ with probability P_φ and |ψ⟩ with probability P_ψ; we are simply ignorant of which it is. |φ⟩ and |ψ⟩ do not even have to be basis states and could be superpositions themselves.

2.6 Qubits

In quantum information and computing we use two-state quantum systems. We label one state |0⟩ and the other |1⟩ and call these systems qubits. For example, if we use a photon as a qubit, we label a vertical polarization as |1⟩ and a horizontal polarization as |0⟩. Qubits are analogous to classical bits except that they have the properties of superposition and entanglement.

2.7 Bra-Ket (Dirac) Notation

Dirac notation is the standard notation used in quantum mechanics and quantum computing, so it is important to understand and recognize its meanings. While confusing at first, it is quite useful and intuitive once understood. The first element of Dirac notation is the ket: |ψ⟩. This is used to represent vectors in Hilbert space, so we use it to represent our quantum states. Since a ket describes a vector, it is equivalent to all the other representations of vectors, namely the column vector or matrix representation. To every ket is associated a bra: ⟨ψ|. This is the vector's row-vector equivalent after complex conjugation. The purpose of all this is to be able to define an inner product, which in turn enables us to define a vector space. Using kets and bras, the inner product can now be defined as a bra-ket: ⟨φ|ψ⟩. To see why, suppose we have two systems:

|Ψ⟩ = α|0⟩ + β|1⟩    (5)
|Φ⟩ = γ|0⟩ + δ|1⟩    (6)

They can both be represented as column vectors:

|Ψ⟩ = (α, β)ᵀ    (7)
|Φ⟩ = (γ, δ)ᵀ    (8)

Now to compute their inner product we convert |Φ⟩ to its bra and multiply them:

⟨Φ|Ψ⟩ = (γ*, δ*)(α, β)ᵀ    (9)
⟨Φ|Ψ⟩ = γ* × α + δ* × β    (10)

As usual, if two vectors are orthogonal, their inner product will vanish, and if they are equal (and normalized) it will be equal to 1. In addition, note that bras and kets do not commute, so |Ψ⟩⟨Φ| is different from ⟨Ψ|Φ⟩.
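As a sketch, the bra-ket of equations 9 and 10 is one call in NumPy, since np.vdot conjugates its first argument:

    import numpy as np

    alpha, beta = 1/np.sqrt(2), 1j/np.sqrt(2)    # |Psi> amplitudes
    gamma, delta = 1.0, 0.0                      # |Phi> = |0>

    psi = np.array([alpha, beta])
    phi = np.array([gamma, delta])

    # <Phi|Psi> = gamma* alpha + delta* beta; here it equals alpha.
    print(np.vdot(phi, psi))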

2.8 Tensor Products and Qubit Registers

Until now we have dealt with single qubits. But suppose we have an ensemble of qubits - a register - and we wish to find the state of the entire register. To do this we use the tensor product (⊗), also referred to as the Kronecker product. What this enables us to do is build a new state out of many smaller states. When building a register of n qubits, each qubit lives in a two-dimensional space and the register lives in a 2ⁿ-dimensional space. So given two two-dimensional states |Ψ⟩ and |Φ⟩ defined as:

|Ψ⟩ = α|0⟩ + β|1⟩    (11)
|Φ⟩ = γ|0⟩ + δ|1⟩    (12)

their tensor product is:

|Ψ⟩ ⊗ |Φ⟩ = αγ|00⟩ + αδ|01⟩ + βγ|10⟩ + βδ|11⟩    (13)

which lives in a 2²-dimensional space.
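A sketch of equation 13 using NumPy's Kronecker product (the amplitudes chosen here are arbitrary but normalized):

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    psi = 0.6 * ket0 + 0.8 * ket1          # alpha = 0.6, beta = 0.8
    phi = (ket0 + ket1) / np.sqrt(2)       # gamma = delta = 1/sqrt(2)

    # np.kron implements the tensor (Kronecker) product; the result is a
    # 4-dimensional vector with amplitudes [alpha*gamma, alpha*delta,
    # beta*gamma, beta*delta] for |00>, |01>, |10>, |11>.
    register = np.kron(psi, phi)
    print(register)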

2.9 Linear Operators

The way we describe the evolution of a quantum system is with linear operators. These operators take a quantum system from one state to another. We express them as matrices - unitary matrices, to be precise. So for example, applying an operator P to a system |Ψ₁⟩ we get:

P|Ψ₁⟩ = |Ψ₂⟩    (14)

More explicitly, defining |Ψ⟩ and P as follows:

|Ψ⟩ = α|0⟩ + β|1⟩    (15)
P = [[0, 1], [1, 0]]    (16)

we get:

P|Ψ⟩ = [[0, 1], [1, 0]](α, β)ᵀ    (17)
P|Ψ⟩ = (β, α)ᵀ    (18)

The operator P, which swaps the components of a vector, is commonly referred to as the Pauli-X operator. In quantum computing we concern ourselves with linear operators. This means that they satisfy the following properties for both kets and bras:

P(a|Ψ⟩) = aP|Ψ⟩    (19)
P(α|Ψ⟩ + β|Φ⟩) = αP|Ψ⟩ + βP|Φ⟩    (20)

2.9.1 Pauli Operators

The Pauli operators are a set of four operators that are very common in quantum mechanics and quantum computing. They are defined as follows:

σx = [[0, 1], [1, 0]]    (21)
σy = [[0, −i], [i, 0]]    (22)
σz = [[1, 0], [0, −1]]    (23)
I = [[1, 0], [0, 1]]    (24)
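A sketch of the Pauli operators as NumPy matrices, together with a check that they are unitary, as claimed above:

    import numpy as np

    # The four Pauli operators as 2x2 complex matrices.
    I  = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    psi = np.array([0.6, 0.8], dtype=complex)    # alpha = 0.6, beta = 0.8

    print(sx @ psi)    # bit flip:   [0.8, 0.6]
    print(sz @ psi)    # phase flip: [0.6, -0.8]
    # Each Pauli matrix is unitary: U† U = I.
    print(np.allclose(sx.conj().T @ sx, I))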

2.9.2 Hadamard Gate

The Hadamard gate is usually used to change a system to a superposition of states. It is defined as follows:

H = (1/√2)[[1, 1], [1, −1]]    (25)

and it brings |0⟩ to (1/√2)(|0⟩ + |1⟩) and |1⟩ to (1/√2)(|0⟩ − |1⟩).
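A sketch of the Hadamard gate acting on the basis states:

    import numpy as np

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    print(H @ ket0)    # (|0> + |1>)/sqrt(2)
    print(H @ ket1)    # (|0> - |1>)/sqrt(2)
    # H is its own inverse: applying it twice returns the original state.
    print(np.allclose(H @ (H @ ket0), ket0))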

2.9.3 Projection Operators

Another useful type of operator is the projection operator. A projection operator simply projects a vector onto its base states. Say we have a state |Ψ⟩ that we want to project onto a basis defined by vectors |0⟩ and |1⟩. We define the projection operator as:

P = |0⟩⟨0| + |1⟩⟨1|    (26)

If we apply it to |Ψ⟩ we get:

P|Ψ⟩ = ⟨0|Ψ⟩|0⟩ + ⟨1|Ψ⟩|1⟩    (27)

Now assuming that |Ψ⟩ looks like state 5 in the basis defined by |0⟩ and |1⟩, the inner product of |Ψ⟩ and base state |0⟩ will yield:

⟨0|Ψ⟩ = α⟨0|0⟩ + β⟨0|1⟩    (28)
⟨0|Ψ⟩ = α    (29)

Since |0⟩ and |1⟩ are base states, they are mutually orthogonal, so their inner product vanishes. So the inner product of |Ψ⟩ and |0⟩ returns the probability amplitude that |Ψ⟩ associates with |0⟩. Obviously the same is true for the inner product of |Ψ⟩ and |1⟩. So equation 27 gives:

P|Ψ⟩ = α|0⟩ + β|1⟩    (30)

So we have taken a state |Ψ⟩ and projected it onto the basis defined by |0⟩ and |1⟩.
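A sketch of equations 26-30, building the projectors from outer products:

    import numpy as np

    ket0 = np.array([[1], [0]], dtype=complex)    # column vectors
    ket1 = np.array([[0], [1]], dtype=complex)

    # |0><0| + |1><1|: outer products built with the conjugate transpose.
    P = ket0 @ ket0.conj().T + ket1 @ ket1.conj().T

    psi = 0.6 * ket0 + 0.8 * ket1
    # Projecting onto the full basis returns the state itself (P is the
    # identity on this two-dimensional space).
    print(np.allclose(P @ psi, psi))

    # Projecting onto |0> alone extracts that component: alpha|0>.
    P0 = ket0 @ ket0.conj().T
    print(P0 @ psi)    # [[0.6], [0.0]]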


2.9.4 Density Operators

Density operators, also referred to as density matrices, are a way of representing ensembles of quantum states. By ensemble we mean a system comprised of other quantum systems. Given n qubits, each in a state |Ψᵢ⟩, i ∈ {1 . . . n}, and each with probability pᵢ of being selected from the ensemble, we define the density matrix as:

ρ = Σᵢ pᵢ |Ψᵢ⟩⟨Ψᵢ|    (31)

So if we have two systems |Ψ⟩ and |Φ⟩ defined as:

|Ψ⟩ = α|0⟩ + β|1⟩    (32)
|Φ⟩ = γ|0⟩ + δ|1⟩    (33)

and with a probability function P = {1/2, 1/2}, their density matrix is:

ρ = (1/2)(α, β)ᵀ(α*, β*) + (1/2)(γ, δ)ᵀ(γ*, δ*)    (34)
ρ = (1/2)[[αα*, αβ*], [βα*, ββ*]] + (1/2)[[γγ*, γδ*], [δγ*, δδ*]]    (35)
ρ = [[(|α|² + |γ|²)/2, (αβ* + γδ*)/2], [(βα* + δγ*)/2, (|β|² + |δ|²)/2]]    (36)

So what this matrix gives us is the probability of measuring a certain state if we were to blindly pick a qubit from the ensemble and measure it.
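A sketch of equation 31 for a two-state ensemble. The purity check at the end is our addition: the trace of ρ² distinguishes a mixed ensemble from a pure state, echoing section 2.5.

    import numpy as np

    def density(states, probs):
        """Density matrix of an ensemble: rho = sum_i p_i |psi_i><psi_i|."""
        d = len(states[0])
        rho = np.zeros((d, d), dtype=complex)
        for p, psi in zip(probs, states):
            rho += p * np.outer(psi, psi.conj())
        return rho

    ket0 = np.array([1, 0], dtype=complex)
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

    rho = density([ket0, plus], [0.5, 0.5])
    print(rho)
    # The diagonal gives the probabilities of measuring |0> and |1> when
    # drawing blindly from the ensemble; the trace is always 1.
    print(np.trace(rho).real)           # 1.0
    # Purity tr(rho^2) is 1 for a pure state, < 1 for a mixed one.
    print(np.trace(rho @ rho).real)     # 0.75 here: a mixed state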

2.10 Entanglement

Entanglement is a correlation that can exist between two quantum systems. This correlation has been at the center of one of the most important debates in modern physics. In 1935 Einstein, Podolsky and Rosen published a paper describing what they felt was a paradox in the theory of quantum mechanics [3]. This paradox, known as the EPR paradox, used entanglement to set quantum mechanics against special relativity. The outcome of this debate (which some still contend is not over) was the acceptance of the fact that our reality is non-local. This conclusion has ramifications not only in physics but in philosophy as well. More can be found in [4, 5]. Entanglement is one of, if not the, most important resource that quantum mechanics has to offer us in terms of computing, even more so than superposition. Besides being a major component of quantum algorithms such as Shor's factoring algorithm [6] and Grover's search algorithm [7], it is the main reason we are able to conduct quantum teleportation [8], quantum key distribution [9, 10, 11] and quantum error-correction [12]. We say that two states are entangled when the outcome of a measurement on one is dependent on the outcome of a measurement on the other. Given a two-qubit state |Ψ⟩ defined as:

|Ψ⟩ = α|01⟩ + β|10⟩    (37)

we can see that if we conduct a measurement on the first qubit and find that it is in state |0⟩, then the second qubit can only be in state |1⟩. Similarly, if we obtain |1⟩ on the first measurement, then the second qubit can only return |0⟩ when measured. This is called an anti-correlated entangled state, since the state that we will obtain upon measuring the second qubit will always be the opposite of what we obtained when measuring the first one. There are other kinds of entanglement such as:

|Ψ⟩ = α|00⟩ + β|11⟩    (38)

which are not anti-correlated, but correlated nonetheless. In addition, we can entangle more than two qubits together [13]. We can also quantify entanglement in order to find out how entangled two systems are. The way we describe an entangled state mathematically is that it cannot be constructed from a tensor product of the qubits we wish to entangle. For example, say we try to build state 37 out of two qubits |Ψ⟩ and |Φ⟩ defined as:

|Ψ⟩ = α|0⟩ + β|1⟩    (39)
|Φ⟩ = γ|0⟩ + δ|1⟩    (40)

Their tensor product yields:

|Ψ⟩ ⊗ |Φ⟩ = αγ|00⟩ + αδ|01⟩ + βγ|10⟩ + βδ|11⟩    (41)

An entangled state is missing two of these terms (the middle terms in a correlated pair, and the first and fourth terms in an anti-correlated pair), and no choice of amplitudes can make those two terms vanish while keeping the other two non-zero. Intuitively this implies that entanglement is more (or in this case less) than just the sum of its parts.
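One common way to make "cannot be constructed from a tensor product" checkable is the Schmidt rank: reshape the four amplitudes of a two-qubit state into a 2×2 matrix; the state is a product state exactly when that matrix has rank 1. A sketch (this criterion is a standard fact, not stated in the text above):

    import numpy as np

    def schmidt_rank(state):
        """Number of non-zero Schmidt coefficients of a two-qubit state.

        Reshape the 4-vector of amplitudes into a 2x2 matrix; the state
        is a tensor product iff this matrix has rank 1."""
        coeffs = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
        return int(np.sum(coeffs > 1e-12))

    # Product state |0> (x) |+>: rank 1, not entangled.
    product = np.array([1, 1, 0, 0], dtype=complex) / np.sqrt(2)
    # Anti-correlated state (|01> + |10>)/sqrt(2): rank 2, entangled.
    epr = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)

    print(schmidt_rank(product))    # 1
    print(schmidt_rank(epr))        # 2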

3 Information Theory

Information theory was developed in the late 1940s by Claude Shannon [14]. He found a mathematical model with which to quantify the notion of information. In addition to finding a way to measure information, he developed a framework with which we could use it. Today, information theory is used in many scientific and engineering fields including computer science, electrical engineering and mathematics. It is especially important in cryptography, data compression and communications. The following sections give a brief introduction to some basic topics in information theory. For a more advanced treatment see [15].

3.1 Shannon Entropy

The first concept in information theory is entropy. This is how we measure information. In information theory every event is represented as a random variable whose outcomes are associated with a certain probability. So we say that we want to find how much information is contained in a random variable X with a probability mass function P. The information in this variable is proportional to its uncertainty. A more intuitive understanding of the relationship between uncertainty and information can be acquired from the following scenario. Assume you were told that tomorrow it would not rain in the desert. This statement - not raining in the desert - contains very little information since you would expect as much. It is a very likely scenario so it has very little uncertainty. However, if you were told that it would rain tomorrow in the desert, the statement would contain a very large amount of information as that is a very rare and unlikely event. So given a random variable X and a probability function p(x) = Pr(X = x) we define the entropy H(X) of X as:

H(X) = −Σx p(x) log p(x)    (42)

Here we assume log is log₂. This is a measure of the uncertainty of an event as a function of the probability of its outcomes. It turns out that the entropy of a variable X is also the average number of bits it takes to encode it.
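A sketch of equation 42:

    import numpy as np

    def shannon_entropy(probs):
        """H(X) = -sum_x p(x) log2 p(x); zero-probability outcomes add 0."""
        p = np.asarray(probs, dtype=float)
        p = p[p > 0]
        return float(-np.sum(p * np.log2(p)))

    print(shannon_entropy([0.5, 0.5]))    # 1.0 bit: a fair coin
    print(shannon_entropy([0.9, 0.1]))    # ~0.469 bits: a biased coin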

3.2 Mutual Information

The mutual information is a measure of how much information one random variable possesses about another. Given two random variables X and Y, with individual probability distributions p(x) and p(y) and a joint probability distribution p(x, y), we define the mutual information as:

I(X; Y) = Σx Σy p(x, y) log [p(x, y) / (p(x)p(y))]    (43)

This tells us by how much our uncertainty about X is reduced once we know Y.
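A sketch of equation 43, computing the marginals from a joint distribution given as a 2-D array:

    import numpy as np

    def mutual_information(joint):
        """I(X;Y) from a joint distribution given as a 2-D array p[x, y]."""
        p = np.asarray(joint, dtype=float)
        px = p.sum(axis=1, keepdims=True)    # marginal p(x)
        py = p.sum(axis=0, keepdims=True)    # marginal p(y)
        mask = p > 0
        return float(np.sum(p[mask] * np.log2(p[mask] / (px * py)[mask])))

    # X = Y = a fair bit: knowing Y tells us everything about X, I = 1 bit.
    perfectly_correlated = np.array([[0.5, 0.0],
                                     [0.0, 0.5]])
    # Independent fair bits: knowing Y tells us nothing, I = 0.
    independent = np.array([[0.25, 0.25],
                            [0.25, 0.25]])

    print(mutual_information(perfectly_correlated))    # 1.0
    print(mutual_information(independent))             # 0.0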

3.3 Source Coding

One of the first uses of information theory was in devising ways to encode data that was to be transmitted through a communication channel. In this scenario, there are two problems we are interested in: using the least amount of resources, and making sure that the data is not corrupted at the other end. This implies that we need to find ways to compress and correct our message. Essentially, source coding is about compressing a message into the minimum number of bits. Assuming a resource is a bit, we want to find the smallest number of bits it will take to encode an arbitrary message. Say we have a source that generates a message M comprised of n bits. In addition, the source has no memory of what it has previously output, so the value of each new bit is independent of the ones that preceded it. Now say that each bit has a probability p of being a 0 and 1 − p of being a 1, and we model the source as a random variable X. Given that we have n bits, we can expect the number of 0's and 1's in M to be:

N₀ = np    (44)
N₁ = n(1 − p)    (45)

We call the messages that actually contain N₀ 0's and N₁ 1's typical sequences and group them in a set T. They are typical because they are the most likely sequences that the source could emit. As n grows large, the probability that the source emits a typical sequence (a sequence that belongs to T) grows closer to unity. Since all bits are independent of each other (the source is memoryless), the probability of a message M is:

Pr[M] = Πᵢ₌₁ⁿ Pr[Mᵢ]    (46)

Since the probability of a bit being a 0 is p and a 1 is 1 − p, for a typical message we have:

Pr[M] = p^(np) × (1 − p)^(n(1−p))    (47)

Taking the log and negating both sides we get:

−log Pr[M] = −np log p − n(1 − p) log(1 − p)    (48)
−log Pr[M] = nH(X)    (49)

so we know that the probability of a typical message M is 2^(−nH(X)). We also know that the sum of the probabilities of all the typical messages cannot exceed 1. Say n_T is the total number of typical sequences; we have:

n_T × Pr[M] ≤ 1    (50)
n_T ≤ 2^(nH(X))    (51)

So we have at most 2^(nH(X)) typical messages. Since we can assume that the source will emit a message that belongs to T, we index the sequences in T, and send the index instead of the message M. And since T has at most 2^(nH(X)) sequences, we can encode the index in nH(X) bits. So using this method, we have encoded a message M that was n bits long into a new message that is only nH(X) bits long.
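A rough numerical illustration of the counting argument (the figures are approximate, since equations 44-45 only hold on average):

    import numpy as np
    from math import comb

    def shannon_entropy(p):
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    n, p = 100, 0.9    # 100-bit messages, each bit is 0 with probability 0.9

    # Messages with exactly n*p zeros are the typical sequences.
    n_typical = comb(n, round(n * p))
    print(np.log2(n_typical))         # ~44.0 bits to index a typical sequence
    print(n * shannon_entropy(p))     # n*H(X) ~ 46.9 bits: the coding bound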

4 Quantum Information Theory

4.1 Von Neumann Entropy

While Shannon entropy H is used in classical information theory, quantum information theory uses the Von Neumann entropy S, defined as:

S(ρ) = −tr(ρ log ρ)    (52)
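Since ρ is Hermitian it can be diagonalized, and S(ρ) reduces to the Shannon entropy of its eigenvalues, S(ρ) = −Σᵢ λᵢ log λᵢ. A sketch:

    import numpy as np

    def von_neumann_entropy(rho):
        """S(rho) = -tr(rho log2 rho), computed from rho's eigenvalues."""
        eigs = np.linalg.eigvalsh(rho)
        eigs = eigs[eigs > 1e-12]
        return float(-np.sum(eigs * np.log2(eigs)))

    # A pure state has zero entropy; the maximally mixed qubit has 1 bit.
    pure = np.array([[1, 0], [0, 0]], dtype=complex)
    mixed = np.eye(2, dtype=complex) / 2

    print(von_neumann_entropy(pure))     # 0.0
    print(von_neumann_entropy(mixed))    # 1.0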

4.2 Schumacher Coding

Schumacher encoding is the quantum equivalent of noiseless source coding. The overall idea is similar to its classical counterpart; however, instead of using typical sequences to encode messages, quantum source coding uses typical subspaces. Given a quantum source emitting n quantum systems, each in a state |Ψᵢ⟩ with probability pᵢ, we wish to compress the sequence. The emitted systems are letters and the entire sequence is a message. In order to perform Schumacher encoding, however, the states emitted by the quantum source (i.e. the letters) must be pure and mutually orthogonal. For a sequence of n qubits, each with a probability pᵢ of being in a pure state |Ψᵢ⟩, we define a message M as:

M = |Ψ₁⟩ ⊗ |Ψ₂⟩ ⊗ |Ψ₃⟩ ⊗ · · · ⊗ |Ψn⟩    (53)

This message is in a space H⊗ that is derived from applying the tensor product to the individual spaces of the qubits that compose it:

H⊗ = H₁ ⊗ H₂ ⊗ H₃ ⊗ · · · ⊗ Hn    (54)

In addition, each letter of M is drawn from an ensemble with density matrix:

ρ = Σᵢ pᵢ ρᵢ    (55)

where ρᵢ = |Ψᵢ⟩⟨Ψᵢ| is the density matrix of |Ψᵢ⟩. We define a likely message as one that, for a total of n qubits, has mᵢ qubits in state |Ψᵢ⟩, with:

mᵢ = pᵢ n    (56)

We further define the set of likely messages λ as the set of all messages that have mᵢ qubits in state |Ψᵢ⟩, and the set of unlikely messages γ as its complement. Following a similar reasoning as in the classical source coding, we know that there are δ likely messages with:

δ = 2^(nS(ρ))    (57)

Since each letter can be described by at least S(ρ) qubits, and the message is n letters long, we need at least nS(ρ) qubits to describe a message. So we have a set λ that includes the δ messages that are most likely to be emitted by the quantum source and a set γ that includes all the other messages. We index each message in λ from 1 to δ. Now since the quantum source can only emit pure orthogonal states, all the messages are mutually orthogonal. This is because the message space H⊗ was built from smaller qubit spaces that were all mutually orthogonal. So we can use the messages to define two subspaces Λ and Γ of the message space H⊗: Λ is spanned by the likely messages and Γ by the unlikely ones. In addition:

dim(Λ) = δ    (58)
dim(Γ) = dim(H⊗) − dim(Λ)    (59)

So in order to compress the message emitted by our quantum source, we take the entire message as an ensemble and measure it in the message space H⊗. Since the subspace Λ is comprised of the most likely messages, a measurement on an unknown message will probably result in one of our likely messages. Since we have indexed this entire set, we simply send the index instead of the message, and since there are δ sequences in the likely set, we can encode any index in log₂ δ = nS(ρ) qubits. So we have effectively compressed our message of n states into nS(ρ) qubits. And we have seen that the maximum compression one can achieve on an ensemble of distinguishable states is given by the Von Neumann entropy of that sequence. So Schumacher encoding gives an information-theoretic meaning to the Von Neumann entropy: it is the information contained in a sequence.


4.3 Holevo Information (χ)

The amount of information in a sequence of distinguishable (i.e. orthogonal) states is equal to the Von Neumann entropy of that sequence. But what is the information content of a sequence of non-distinguishable states? This is given by the Holevo information χ:

χ = S(ρ) − Σᵢ pᵢ S(ρᵢ)    (60)

where ρ = Σᵢ pᵢ ρᵢ is the density matrix of the ensemble. There are two properties of χ that should be emphasized:

1) because distinguishability (orthogonality) is a special case of non-distinguishability (non-orthogonality), the Holevo information reduces to the Von Neumann entropy for distinguishable states;

2) the Holevo information of non-distinguishable states is less than the Holevo information of distinguishable states.

Besides giving us a more general measure of entropy, the Holevo information also provides an upper bound on the mutual information that is available about quantum systems. For a more detailed explanation see [16, 17].
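A sketch of equation 60 for an ensemble of pure-state letters (so that every S(ρᵢ) = 0 and χ = S(ρ)); the non-orthogonal example illustrates property 2:

    import numpy as np

    def von_neumann_entropy(rho):
        eigs = np.linalg.eigvalsh(rho)
        eigs = eigs[eigs > 1e-12]
        return float(-np.sum(eigs * np.log2(eigs)))

    def holevo(states, probs):
        """chi = S(sum_i p_i rho_i) - sum_i p_i S(rho_i)."""
        rhos = [np.outer(s, s.conj()) for s in states]
        rho = sum(p * r for p, r in zip(probs, rhos))
        return von_neumann_entropy(rho) - sum(
            p * von_neumann_entropy(r) for p, r in zip(probs, rhos))

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    plus = (ket0 + ket1) / np.sqrt(2)

    # Orthogonal letters: chi equals the Von Neumann entropy, 1 bit.
    print(holevo([ket0, ket1], [0.5, 0.5]))    # 1.0
    # Non-orthogonal letters carry strictly less accessible information.
    print(holevo([ket0, plus], [0.5, 0.5]))    # ~0.601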

4.4 Quantum Error Correction

Up until now everything we have covered was idealized and devoid of any practical limitations and imperfections. Unfortunately, over time quantum systems have a tendency to interact with their environment. This interaction leads to errors. So in order to perform non-trivial calculations we need to isolate the components of our quantum computer from its environment. Of course this is impossible, but there is a lot of research being done to find ways to circumvent this interaction. One such way is quantum error-correcting codes.

4.4.1 Error Models

The first step in building good error-correcting codes is to understand what kinds of errors we are faced with. In quantum computation we divide the possible errors into two groups: dissipation and decoherence. The first is due to a quantum system spontaneously emitting energy. If using an atom as a qubit, with the excited state as |1⟩ and the ground state as |0⟩, dissipation could lead the atom to spontaneously emit a quantum of energy and drop a level. This is equivalent to our qubit spontaneously changing from |1⟩ to |0⟩, which is a bit flip. Decoherence, on the other hand, causes a quantum system to become entangled with its environment and can lead to changes in the qubit's phase. So there are three possible kinds of errors: phase shifts, bit flips, and a combination of both. Any possible error can be decomposed into a combination of these basic error types. We model these basic error types mathematically using the Pauli matrices from section 2.9.1. For example, given a state |Ψ⟩:

|Ψ⟩ = α|0⟩ + β|1⟩    (61)

applying the Pauli-X operator to |Ψ⟩ we get a bit flip:

|Ψx⟩ = α|1⟩ + β|0⟩    (62)

and if we apply the Pauli-Z operator we get a phase shift:

|Ψz⟩ = α|0⟩ − β|1⟩    (63)

Similarly, we could apply the Pauli-Y operator to obtain both a bit flip and a phase shift, or the identity operator to get no error. So when we need to reproduce the effect of an error on a single qubit, we simply apply a linear combination of the error operators to that qubit. And in order to apply an error to a register of qubits, we take the tensor product of the error operator with identity matrices on the unaffected qubits. For example, say we want to apply a bit flip to the third qubit of a three-qubit register. We would first compute the corresponding bit-flip operator:

E_flip = I ⊗ I ⊗ σx    (64)

and then apply it to the register.
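A sketch of equation 64 applied to |000⟩:

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)

    # Bit flip on the third qubit of a three-qubit register: I (x) I (x) X.
    E_flip = np.kron(np.kron(I2, I2), sx)

    ket000 = np.zeros(8, dtype=complex)
    ket000[0] = 1                            # |000> in the computational basis
    print(np.nonzero(E_flip @ ket000)[0])    # [1]: the register is now |001>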

4.4.2 Three Qubit Bit Flip Error-Correcting Code

The three qubit code is the simplest quantum error-correcting code. Many more sophisticated ones have been designed, but this one illustrates the underlying concepts well. This scheme uses redundancy to protect one qubit against bit flip errors by encoding each logical qubit (subscript L) as three entangled physical qubits (subscript P):

|0⟩_L = |000⟩_P    (65)
|1⟩_L = |111⟩_P    (66)
α|0⟩_L + β|1⟩_L = α|000⟩_P + β|111⟩_P    (67)

Now whether we encoded our original qubit as state 65, 66 or 67, there are only four scenarios possible: either none, bit one, bit two or bit three gets flipped. Assuming our original qubit was a superposition, as in state 67, after a possible bit flip we would be left with one of the following states:

|ΨA⟩ = α|000⟩_P + β|111⟩_P    (68)
|ΨB⟩ = α|100⟩_P + β|011⟩_P    (69)
|ΨC⟩ = α|010⟩_P + β|101⟩_P    (70)
|ΨD⟩ = α|001⟩_P + β|110⟩_P    (71)

Now we define four projection operators as in section 2.9.3:

P₀ = |000⟩⟨000| + |111⟩⟨111|    (72)
P₁ = |100⟩⟨100| + |011⟩⟨011|    (73)
P₂ = |010⟩⟨010| + |101⟩⟨101|    (74)
P₃ = |001⟩⟨001| + |110⟩⟨110|    (75)

These projections are non-destructive and will not cause the superposition to collapse, so we can apply each one sequentially to the logical qubit. Assuming that the qubit was in state 68, meaning that there were no errors, applying P₀ results in:

⟨Ψ|P₀|Ψ⟩ = (α*⟨000| + β*⟨111|) · (|000⟩⟨000| + |111⟩⟨111|) · (α|000⟩ + β|111⟩)    (76)
⟨Ψ|P₀|Ψ⟩ = 1    (77)

The eight three-qubit basis states {|000⟩, |001⟩, . . . , |111⟩} are all mutually orthogonal, so their inner products vanish. This means that we will obtain 1 when we apply P₀ to a qubit that has no error, and 0 when we apply it to any other qubit; similarly, we will obtain 1 when we apply P₁ to a qubit whose first bit is flipped, and 0 when we apply it to any other, and so on and so forth. This result from the projectors is called the error syndrome, and it tells us where the error occurred. All we have to do now is flip the appropriate bit. Note that using this scheme, we only learn where the error occurred; we never learn the amplitudes α and β, which is why the measurement leaves the superposition intact.
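A sketch of the syndrome extraction, with a hypothetical bit flip on the second physical qubit (state 70):

    import numpy as np

    def basis3(bits):
        """Computational-basis vector |b0 b1 b2> of a 3-qubit register."""
        v = np.zeros(8, dtype=complex)
        v[int(bits, 2)] = 1
        return v

    # Encoded logical qubit alpha|000> + beta|111>, then a flip on qubit 2.
    alpha, beta = 0.6, 0.8
    corrupted = alpha * basis3('010') + beta * basis3('101')

    # Syndrome projectors P0..P3 from equations 72-75.
    pairs = [('000', '111'), ('100', '011'), ('010', '101'), ('001', '110')]
    projectors = [np.outer(basis3(a), basis3(a).conj())
                  + np.outer(basis3(b), basis3(b).conj()) for a, b in pairs]

    for i, P in enumerate(projectors):
        # Only syndrome 2 reads 1.0, locating the error without
        # revealing alpha or beta.
        print(i, np.vdot(corrupted, P @ corrupted).real)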

4.4.3 Three Qubit Phase Flip Error-Correcting Code

In this section we describe a way to correct a phase flip. A phase flip, represented by the Pauli-Z operator, flips the relative phase between the base states of a system. A phase flip is a purely quantum mechanical effect and has no classical equivalent. This means that it affects only qubits that are in a superposition. It acts as follows:

|0⟩ + |1⟩ → |0⟩ − |1⟩    (78)

To correct this we encode each logical qubit as a superposition of physical states using a Hadamard gate on each physical qubit. This gives:

|0⟩_L = (1/√2)(|0⟩ + |1⟩)    (79)
|1⟩_L = (1/√2)(|0⟩ − |1⟩)    (80)
(|0⟩ + |1⟩)_L = (1/√2)[(1/√2)(|0⟩ + |1⟩) + (1/√2)(|0⟩ − |1⟩)]    (81)

The effect of a phase flip (applying a Pauli-Z operator) on these new qubits is:

(1/√2)(|0⟩ + |1⟩) → (1/√2)(|0⟩ − |1⟩)    (82)
(1/√2)(|0⟩ − |1⟩) → (1/√2)(|0⟩ + |1⟩)    (83)
(1/√2)[(1/√2)(|0⟩ + |1⟩) + (1/√2)(|0⟩ − |1⟩)] → (1/√2)[(1/√2)(|0⟩ + |1⟩) − (1/√2)(|0⟩ − |1⟩)]    (84)

Now if we label state 79 as |+++⟩ (each of the three physical qubits carrying the encoding), state 80 as |−−−⟩ and state 81 as (1/√2)(|+++⟩ + |−−−⟩), we find that the phase flip simply flipped |+++⟩ to |−−−⟩ and vice versa. This means that we can now apply the three qubit bit flip code to |+++⟩ and |−−−⟩.
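The reason the bit flip code carries over can be stated compactly: conjugating a phase flip by Hadamard gates turns it into a bit flip, H σz H = σx. A one-line check:

    import numpy as np

    H  = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    # A phase flip in the {|+>, |->} basis looks like a bit flip, which is
    # why the bit-flip code applied in that basis corrects phase errors.
    print(np.allclose(H @ sz @ H, sx))    # True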

References

[1] R. Shankar. Principles of Quantum Mechanics. Plenum Press, New York, 1994.
[2] A. Peres. Quantum Theory: Concepts and Methods. Kluwer Academic Publishers, 1995.
[3] A. Einstein, B. Podolsky, and N. Rosen. Can quantum-mechanical description of physical reality be considered complete? Physical Review, 47:777, 1935.
[4] J. Bell. On the Einstein Podolsky Rosen paradox. Physics, 1(3):195–200, 1964.
[5] J. Bell. Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press, 1993.
[6] P. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5):1484–1509, 1997.
[7] L. Grover. Quantum mechanics helps in searching for a needle in a haystack. Physical Review Letters, 79(2):325–328, July 1997.
[8] C. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. Wootters. Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Physical Review Letters, 70:1895–1899, 1993.
[9] C. Bennett and G. Brassard. Quantum cryptography: Public key distribution and coin tossing. In Proceedings of the IEEE International Conference on Computers, Systems and Signal Processing, pages 175–179, 1984.
[10] C. Bennett. Quantum cryptography using any two nonorthogonal states. Physical Review Letters, 68(21):3121–3124, May 1992.
[11] A. Ekert. Quantum cryptography based on Bell's theorem. Physical Review Letters, 67(6):661–663, August 1991.
[12] P. Shor. Scheme for reducing decoherence in quantum computer memory. Physical Review A, 52:2493–2496, 1995.
[13] D. Greenberger, M. Horne, A. Shimony, and A. Zeilinger. Bell's theorem without inequalities. American Journal of Physics, 58:1131, 1990.
[14] C. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 1948.
[15] T. Cover and J. Thomas. Elements of Information Theory. Wiley & Sons, 1991.
[16] J. Preskill. Caltech Physics 229 Lecture Notes. http://www.theory.caltech.edu/people/preskill/ph229/.
[17] M. Nielsen and I. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, Cambridge, 2000.


