

Submitted to ApJ. Available from http://www.mpa-garching.mpg.de/~max/karhunen.html (faster from Europe) and from http://www.sns.ias.edu/~max/karhunen.html (faster from the US).

Karhunen-Loève Eigenvalue Problems in Cosmology: How should we Tackle Large Data Sets?
Max Tegmark, Andy N. Taylor, & Alan F. Heavens
10 September 1996
Max-Planck-Institut für Physik, Föhringer Ring, D-80805 München; Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, U.K. max@ias.edu, ant@roe.ac.uk, afh@roe.ac.uk

ABSTRACT
Since cosmology is no longer "the data-starved science", the problem of how best to analyze large data sets has recently received considerable attention, and Karhunen-Loève eigenvalue methods have been applied to both galaxy redshift surveys and Cosmic Microwave Background (CMB) maps. We present a comprehensive discussion of methods for estimating cosmological parameters from large data sets, which includes the previously published techniques as special cases. We show that both the problem of estimating several parameters jointly and the problem of not knowing the parameters a priori can be readily solved by adding an extra singular value decomposition step. It has recently been argued that the information content in a sky map from a next-generation CMB satellite is sufficient to measure key cosmological parameters ($h$, $\Omega$, $\Lambda$, etc.) to an accuracy of a few percent or better, at least in principle. In practice, the data set is so large that both a brute-force likelihood analysis and a direct expansion in signal-to-noise eigenmodes will be computationally unfeasible. We argue that it is likely that a Karhunen-Loève approach can nonetheless measure the parameters with close to maximal accuracy, if preceded by an appropriate form of quadratic "precompression". We also discuss practical issues regarding parameter estimation from present and future galaxy redshift surveys, and illustrate this with a generalized eigenmode analysis of the IRAS 1.2 Jy survey optimized for measuring $\beta \equiv \Omega^{0.6}/b$ using redshift-space distortions.
Key words: Cosmology: theory, large-scale structure of Universe, cosmic microwave background -- Astronomical methods: data analysis, statistical

1 INTRODUCTION

The problem of analysis of large data sets is one that, until recently, has not been a major concern of cosmologists. Indeed, in some areas no data existed to be analyzed. In the last few years, this situation has rapidly changed. A highlight in this transition has been the discovery of fluctuations in the cosmic microwave background (CMB) by the Cosmic Background Explorer (COBE) satellite (Smoot et al. 1992). In its short lifetime, COBE produced such a large data set that a number of sophisticated data-analysis methods were developed specifically to tackle it. In addition, the advent of large galaxy redshift surveys has created a field where the data sets increase by an order of magnitude in size in each generation. For instance, the surveys where the object selection was based on the Infrared Astronomy

Satellite (IRAS) contain several thousand galaxies: 2,000 (QDOT; Lawrence et al. 1996), 5,000 (Berkeley 1.2 Jy; Fisher et al. 1995) and 15,000 (PSC-z; Saunders et al. 1996). The proposed next-generation surveys will have much larger numbers of objects: around 250,000 in the Anglo-Australian Telescope 2 degree Field galaxy redshift survey (Taylor 1995) and $10^6$ for the Sloan Digital Sky Survey (Gunn & Weinberg 1995). Similarly, plate-measuring machines, such as the APM at Cambridge and SuperCOSMOS at the Royal Observatory, Edinburgh, can produce very large catalogues of objects, and numerical simulations of galaxy clustering are even now capable of producing so much data that the analysis and storage of the information is in itself a challenge. A standard technique for estimating parameters from data is the brute-force maximum-likelihood method, which


in $n$ different flux bins. Before collecting the data, we think of $\mathbf{x}$ as a random variable with some probability distribution $L(\mathbf{x};\Theta)$, which depends in some known way on a vector of model parameters

$\Theta = (\theta_1, \theta_2, \ldots, \theta_m)$. (1)

Such model parameters might for instance be the spectral index of density fluctuations, the Hubble constant $h$, the cosmic density parameter $\Omega$ or the mean redshift of gamma-ray bursts. We will let $\Theta_0$ denote the true parameter values and let $\Theta$ refer to our estimate of $\Theta_0$. Since $\Theta$ is some function of the data vector $\mathbf{x}$, it too is a random variable. For it to be a good estimate, we would of course like it to be unbiased, i.e.,

illustrates why people have been driven towards developing more sophisticated methods. For $n$ data items (e.g., pixels in a CMB map, or Fourier amplitudes from a transformed galaxy distribution), the maximum likelihood method requires inversion of an $n \times n$ matrix for each set of parameter values considered, and this is for the simplest possible case where the probability distribution is Gaussian. Since a next-generation CMB satellite might produce a high-resolution sky map with $10^7$ pixels, and the CPU time required for an inversion scales as $n^3$, a brute-force likelihood analysis of this type of data set will hardly be feasible in the near future. Fortunately, it is often possible to greatly accelerate a likelihood analysis by first performing some appropriate form of data compression, by which the data set is substantially reduced in size while nonetheless retaining virtually all the relevant cosmological information. In this spirit, a large number of data-compression methods have been applied in the analysis of both CMB maps (e.g. Seljak & Bertschinger 1993; Górski 1994) and galaxy redshift surveys (e.g. Davis & Peebles 1983; Feldman et al. 1994; Heavens & Taylor 1995). A powerful prescription for how to do this optimally is the Karhunen-Loève eigenvalue method (Karhunen 1947), which has recently been applied to both CMB maps (Bond 1994; Bunn 1995; Bunn & Sugiyama 1995) and redshift surveys (Vogeley 1995; Vogeley & Szalay 1996). The goal of this paper is to review the more general framework in which these treatments belong, and to present some important generalizations that will facilitate the analysis of the next generation of cosmological data sets. The rest of this paper is organized as follows. In Section 2, we review some useful information-theoretical results that tell us how well parameters can be estimated, and how to determine whether a given analysis method is good or bad.
In Section 3, we review the Karhunen-Loève data-compression method and present some useful generalizations. In Section 4 we illustrate the theory with various cosmology applications, including the special case of the signal-to-noise eigenmode method. In Section 5 we discuss limitations of the method and possible ways of extending it to make the analysis feasible for huge data sets such as a future $10^7$-pixel CMB map. Finally, our results are summarized and discussed in Section 6.

$\langle\Theta\rangle = \Theta_0$, (2)




and give as small error bars as possible, i.e., minimize the standard deviations
$\Delta\theta_i \equiv \left(\langle\theta_i^2\rangle - \langle\theta_i\rangle^2\right)^{1/2}$. (3)


In statistics jargon, we want the BUE of $\theta_i$, which stands for the "Best Unbiased Estimator". A key quantity in this context is the so-called Fisher information matrix, defined as

$F_{ij} \equiv \left\langle \frac{\partial^2\mathcal{L}}{\partial\theta_i\,\partial\theta_j} \right\rangle$, (4)

where $\mathcal{L} \equiv -\ln L$. Another key quantity is the maximum-likelihood estimator, or ML-estimator for brevity, defined as the parameter vector $\Theta_{ML}$ that maximizes the likelihood function $L(\mathbf{x};\Theta)$. (Above we thought of $L(\mathbf{x};\Theta)$ as a probability distribution, a function of $\mathbf{x}$. However, when limiting ourselves to a given data set $\mathbf{x}$ and thinking of $L(\mathbf{x};\Theta)$ as a function of the parameters $\Theta$, it is customarily called the likelihood function.) Using this notation, a number of powerful theorems have been proven (see e.g. Kenney & Keeping 1951 and Kendall & Stuart 1969 for details):


(i) For any unbiased estimator, $\Delta\theta_i \geq 1/\sqrt{F_{ii}}$.
(ii) If there is a BUE $\Theta$, then it is the ML-estimator or a function thereof.
(iii) The ML-estimator is asymptotically BUE.
The first of these theorems, known as the Cramér-Rao inequality, thus places a firm lower limit on the error bars that one can attain, regardless of which method one is using to estimate the parameters from the data. This is the minimum error bar attainable on $\theta_i$ if all the other parameters are known. If the other parameters are estimated from the data as well, the minimum standard deviation rises to $\Delta\theta_i \geq (F^{-1})_{ii}^{1/2}$. The second theorem shows that maximum-likelihood (ML) estimates have quite a special status: if there is a best method, then the ML-method is the one. Finally, the third result basically tells us that in the limit of a very large data set, the ML-estimate is for all practical purposes the best estimate, the one for which the Cramér-Rao inequality becomes an equality. It is these nice properties that have made ML-estimators so popular.
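As a concrete illustration (our own toy example, not from the paper), the Cramér-Rao inequality can be checked by simulation in the simplest possible setting: $n$ Gaussian data points with unknown mean and known variance, where the ML-estimator is the sample mean and saturates the bound.

```python
import numpy as np

# Toy check of the Cramer-Rao bound: n Gaussian data points with unknown
# mean mu and known variance sigma^2.  The Fisher information is
# F = n / sigma^2, so any unbiased estimator has standard deviation
# >= sigma / sqrt(n); the ML estimator (the sample mean) attains this.
rng = np.random.default_rng(0)
n, sigma, mu_true = 400, 2.0, 1.5
fisher = n / sigma**2
cramer_rao = 1.0 / np.sqrt(fisher)    # = sigma / sqrt(n) = 0.1

# Simulate many independent data sets and apply the ML estimator to each.
estimates = rng.normal(mu_true, sigma, size=(5000, n)).mean(axis=1)

print(abs(estimates.mean() - mu_true))   # unbiased: close to zero
print(estimates.std(), cramer_rao)       # scatter matches the bound
```

All numbers here (sample sizes, noise level) are arbitrary choices made only to exercise the theorem.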


How accurately can we estimate model parameters from a given data set? This question was basically answered 60 years ago (Fisher 1935), and we will now summarize the results, which are both simple and useful.

Suppose for definiteness that our data set consists of $n$ real numbers $x_1, x_2, \ldots, x_n$, which we arrange in an $n$-dimensional vector $\mathbf{x}$. These numbers could for instance denote the measured temperatures in the $n$ pixels of a CMB sky map, the counts-in-cells of a galaxy redshift survey, the first $n$ coefficients of a Fourier-Bessel expansion of an observed galaxy density field, or the number of gamma-ray bursts observed

2.1 The classical results

Although the proof of the Cramér-Rao inequality is rather lengthy, it is quite easy to acquire some intuition for why the Fisher information matrix has the form that it does. This is the purpose of the present section.

Let us Taylor expand $\mathcal{L}$ around the ML-estimate $\Theta_{ML}$. By definition, all the first derivatives $\partial\mathcal{L}/\partial\theta_i$ will vanish at the ML-point, since the likelihood function has its maximum there, so the local behavior will be dominated by the quadratic terms. Since $L = \exp[-\mathcal{L}]$, we thus see that the likelihood function will be approximately Gaussian near the ML-point. If the error bars are quite small, $L$ usually drops sharply before third-order terms have become important, so that this Gaussian is a good approximation to $L$ everywhere. Interpreting $L$ as a Bayesian probability distribution in parameter space, the covariance matrix $T$ is thus given simply by the second derivatives at the ML-point, as the inverse of the Hessian matrix:

$(T^{-1})_{ij} \approx \frac{\partial^2\mathcal{L}}{\partial\theta_i\,\partial\theta_j}$. (5)

Note that the Fisher information matrix $F$ is simply the expectation value of this quantity at the point $\Theta = \Theta_0$ (which coincides with the ML-point on average if the ML-estimate is unbiased). It is basically a measure of how fast (on average) the likelihood function falls off around the ML-point, i.e., a measure of the width and shape of the peak. From this discussion, it is also clear that we can use its inverse, $F^{-1}$, as an estimate of the covariance matrix

$T \equiv \langle\Theta\Theta^t\rangle - \langle\Theta\rangle\langle\Theta\rangle^t$ (6)

of our parameter estimates when we use the ML-method.

Let us now explicitly compute the Fisher information matrix for the case when the probability distribution is Gaussian, i.e., where (dropping an irrelevant additive constant $n\ln 2\pi$)

$2\mathcal{L} = \ln\det C + (\mathbf{x}-\mu)^t C^{-1} (\mathbf{x}-\mu)$, (7)

where in general both the mean vector $\mu$ and the covariance matrix

$C \equiv \langle(\mathbf{x}-\mu)(\mathbf{x}-\mu)^t\rangle$ (8)

depend on the model parameters $\Theta$. Although vastly simpler than the most general situation, the Gaussian case is nonetheless general enough to be applicable to a wide variety of problems in cosmology.
Defining the data matrix

$D \equiv (\mathbf{x}-\mu)(\mathbf{x}-\mu)^t$ (9)

and using the well-known matrix identity $\ln\det C = \mathrm{Tr}\,\ln C$, we can re-write equation (7) as

$2\mathcal{L} = \mathrm{Tr}\,[\ln C + C^{-1}D]$. (10)

We will use the standard comma notation for derivatives, where for instance

$C_{,i} \equiv \frac{\partial C}{\partial\theta_i}$. (11)


2.2 The Fisher information matrix

will also be symmetric matrices. Using the matrix identities $(C^{-1})_{,i} = -C^{-1}C_{,i}C^{-1}$ and $(\ln C)_{,i} = C^{-1}C_{,i}$, we thus obtain

$2\mathcal{L}_{,i} = \mathrm{Tr}\,[C^{-1}C_{,i} - C^{-1}C_{,i}C^{-1}D + C^{-1}D_{,i}]$. (12)

When evaluating $C$ and $\mu$ at the true parameter values, we have $\langle\mathbf{x}\rangle = \mu$ and $\langle\mathbf{x}\mathbf{x}^t\rangle = C + \mu\mu^t$, which gives

$\langle D\rangle = C$, $\quad\langle D_{,i}\rangle = 0$, $\quad\langle D_{,ij}\rangle = \mu_{,i}\mu_{,j}^t + \mu_{,j}\mu_{,i}^t$. (13)

Using this and equation (12), we obtain $\langle\mathcal{L}_{,i}\rangle = 0$. In other words, the ML-estimate is correct on average in the sense that the average slope of the likelihood function is zero at the point corresponding to the true parameter values. Differentiating equation (12) once more, we obtain

$2\mathcal{L}_{,ij} = \mathrm{Tr}\,[-C^{-1}C_{,i}C^{-1}C_{,j} + C^{-1}C_{,ij} + C^{-1}(C_{,i}C^{-1}C_{,j} + C_{,j}C^{-1}C_{,i})C^{-1}D - C^{-1}(C_{,i}C^{-1}D_{,j} + C_{,j}C^{-1}D_{,i}) - C^{-1}C_{,ij}C^{-1}D + C^{-1}D_{,ij}]$. (14)

Substituting this and equation (13) into equation (4) and using the trace identity $\mathrm{Tr}\,[AB] = \mathrm{Tr}\,[BA]$, many terms drop out and the Fisher information matrix reduces to simply

$F_{ij} = \langle\mathcal{L}_{,ij}\rangle = \frac{1}{2}\mathrm{Tr}\,[A_iA_j + C^{-1}M_{ij}]$, (15)

where we have defined the matrices $A_i \equiv C^{-1}C_{,i} = (\ln C)_{,i}$ and $M_{ij} \equiv \langle D_{,ij}\rangle = \mu_{,i}\mu_{,j}^t + \mu_{,j}\mu_{,i}^t$. This old and well-known result is also derived by Bunn (1995) and Vogeley & Szalay (1996).
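Equation (15) is easy to sanity-check numerically on a toy model of our own construction (not from the paper): take $\mathbf{x} \sim N(0, \theta C_0)$ for a fixed matrix $C_0$, so $\mu = 0$, $C = \theta C_0$, $C_{,\theta} = C_0$ and $M = 0$. Then $A = C^{-1}C_{,\theta} = I/\theta$ and equation (15) gives $F = n/(2\theta^2)$ in closed form.

```python
import numpy as np

# Sanity check of equation (15) on a one-parameter toy model:
# x ~ N(0, theta * C0), so mu = 0, C = theta*C0, C_,theta = C0, M = 0.
# Equation (15): F = (1/2) Tr[(C^{-1} C_,theta)^2] = n / (2 theta^2).
rng = np.random.default_rng(1)
n, theta = 8, 3.0
G = rng.normal(size=(n, n))
C0 = G @ G.T + n * np.eye(n)    # a well-conditioned fiducial covariance
C = theta * C0

A = np.linalg.solve(C, C0)      # A = C^{-1} C_,theta (= I/theta here)
F = 0.5 * np.trace(A @ A)       # equation (15) with M = 0

print(F, n / (2 * theta**2))    # the two numbers agree
```

The agreement holds for any positive definite $C_0$, since $A$ reduces to $I/\theta$ regardless of the fiducial matrix chosen.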



2.3 The Gaussian case

2.4 An example: parameter estimation with a future CMB experiment

Let us illustrate the above results with a simple example where $\mu = 0$ and $C$ is diagonal. Suppose a next-generation CMB satellite generates a high-resolution all-sky map of the CMB fluctuations. Let us for the moment ignore the complication of the galactic plane, and expand the temperature distribution in spherical harmonics, with coefficients $a_{\ell m}$. We define our data vector $\mathbf{x}$ as the set of these coefficients from $\ell = 2$ up to some cutoff $\ell_{max}$, i.e., $n = (\ell_{max}+1)^2 - 4$ and $x_i = a_i$, where we have combined $\ell$ and $m$ into a single index $i = 1, 2, 3, \ldots$ as $i = \ell^2 + \ell + m + 1$. With the standard assumption that the CMB fluctuations are isotropic, we have $\mu = 0$ and

$C_{ij} = \delta_{ij}\left[C_\ell + \frac{4\pi}{N}\sigma^2 e^{b^2\ell(\ell+1)}\right]$. (16)

Since $C$ is a symmetric matrix for all values of the parameters, it is easy to see that all the derivatives $C_{,i}$, $C_{,ij}$, etc.,

Here $C_\ell$ denotes the angular power spectrum of the CMB, and the second term incorporates the effects of instrumental noise and beam smearing (Knox 1995; Tegmark & Efstathiou 1996). $N$ is the number of pixels in the sky map, $\sigma$ is the r.m.s. pixel noise,

$b \equiv \theta_{\rm FWHM}/\sqrt{8\ln 2} \approx 0.425\,\theta_{\rm FWHM}$, (17)

and $\theta_{\rm FWHM}$ is the full-width-half-max beam width. Assuming that the CMB and noise fluctuations are Gaussian, equation (15) gives the Fisher information matrix



$F_{ij} = \sum_{\ell=2}^{\ell_{max}} \frac{2\ell+1}{2}\left[C_\ell + \frac{4\pi\sigma^2}{N}e^{b^2\ell(\ell+1)}\right]^{-2}\frac{\partial C_\ell}{\partial\theta_i}\frac{\partial C_\ell}{\partial\theta_j}$. (18)

This handy expression, also derived by Jungman et al. (1996a), tells us that the crucial functions which determine the attainable accuracy are the derivatives of the power spectrum with respect to the various parameters. Examples of such derivatives are shown in Figure 1. For instance, the derivative with respect to the reionization optical depth, $\partial C_\ell/\partial\tau$, is shaped as $-C_\ell$ for $\ell \gtrsim 10$, since earlier reionization suppresses all these multipoles by the same factor $e^{-2\tau}$. The derivative with respect to the quadrupole normalization, $\partial C_\ell/\partial Q$, of course has the same shape as the power spectrum itself.

Figure 1. The derivatives of the CDM power spectrum with respect to various parameters.

Equation (18) has a simple geometric interpretation. We can think of the $m$ functions $\partial C_\ell/\partial\theta_i$ as vectors in a Hilbert space of dimension $(\ell_{max}-1)$, and think of the Fisher matrix component $F_{ij}$ as simply the dot product of the vectors $\partial C_\ell/\partial\theta_i$ and $\partial C_\ell/\partial\theta_j$. This dot product gives a rather small weight to low $\ell$-values, essentially because there are fewer $m$-modes there and correspondingly larger cosmic variance. In addition, the weights are seen to be exponentially suppressed for large $\ell$, where beam smearing causes the effect of pixel noise to explode. If the parameter dependence of $C_\ell$ was such that all $m$ vectors $\partial C_\ell/\partial\theta_i$ were orthogonal under this dot product, then $F$ and the parameter covariance matrix $T = F^{-1}$ would be diagonal, and the errors in the estimates of the different parameters would be uncorrelated. The more similarly shaped two curves in Figure 1 are, the harder it will be to break the degeneracy between the corresponding two parameters. In the extreme case where two curves have identical shape (are proportional to each other), the Fisher matrix becomes singular and the resulting error bars on the two parameters become infinite. It is therefore interesting to diagonalize the matrix $T$ (or equivalently, its inverse $F$). The eigenvectors will tell us which $m$ parameter combinations can be independently estimated from the CMB data, and the corresponding eigenvalues tell us how accurately this can be done.

It has been pointed out (Bond et al. 1994) that there will be a considerable parameter degeneracy if data is only available up to around the first Doppler peak. This is clearly illustrated in Figure 1: most of the curves lack strong features in this range, so some of them can be well approximated by linear combinations of the others. If a CMB experiment has high enough resolution to measure the power out to $\ell \sim 10^3$, however, this degeneracy is broken. Jungman et al. (1996b) compute the Fisher matrix for a model with the eleven unknown parameters

$\Theta = (\Omega, \Omega_b h^2, h, \Lambda, n_S, \alpha, n_T, T/S, \tau, Q, N_\nu)$, (19)

the density parameter, the baryon density, the Hubble parameter, the cosmological constant, the spectral index of scalar fluctuations, the "running" of this index (a measure of the deviation from power-law behavior), the spectral index of tensor fluctuations, the quadrupole tensor-to-scalar ratio, the optical depth due to reionization, the quadrupole normalization, and the number of light neutrino species, respectively. They find that even when estimating all these parameters simultaneously, the resulting error bars on some of the key parameters are of the order of a few percent or better for experiments with a resolution good enough to probe the first few Doppler peaks. Even the abundance of hot dark matter can be constrained in this fashion (Dodelson et al. 1996). An up-to-date discussion of which additional parameters can be constrained when using large-scale-structure data as well is given by Liddle et al. (1996).

It should be stressed that the formalism above only tells us what the attainable accuracy is if the truth is somewhere near the point in parameter space at which we compute the power derivatives. Figure 1 corresponds to standard COBE-normalized CDM, i.e., to

$\Theta_0 = (1, 0.015, 0.5, 0, 1, 0, 0, 0, 0, 20\,\mu{\rm K}, 3)$. (20)

The worst scenario imaginable would probably be extremely early reionization, since $\tau \gtrsim 1$ would eliminate almost all small-scale power and introduce a severe degeneracy by making all the power derivatives almost indistinguishable for $\ell \gtrsim 10$, the region where they differ the most in Figure 1. Needless to say, accurate parameter estimation also requires that we can compute the theoretical power spectrum accurately. It has been argued (Hu et al. 1995) that this can be done to better than 1%, by accurately modeling various weak physical effects such as Helium recombination and polarization. We will return to other real-world issues, such as foreground contamination and incomplete sky coverage, further on.

3 OPTIMAL DATA COMPRESSION: KARHUNEN-LOÈVE METHODS

Above we saw how small error bars one could obtain when using all the information in the data set. We also saw that for large data sets, these minimal error bars could be approximately attained by making a likelihood analysis using

the entire data set. However, there are of course many situations where this approach is not feasible even in the simple Gaussian case.
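Before turning to compression, it may help to see the forecast of equation (18) in action. The following sketch is our own illustration, not part of the paper's analysis: the power-law spectrum, noise level, beam width and fiducial parameters are all made-up toy numbers, chosen only to exercise the formula; a real forecast would use a Boltzmann-code $C_\ell$.

```python
import numpy as np

# Toy Fisher forecast via equation (18) for two parameters,
# theta = (ln A, n_s), with an assumed power-law spectrum.
N = 10**6                     # pixels in the map (toy value)
sigma = 20e-6                 # r.m.s. pixel noise in K (toy value)
fwhm = np.radians(0.5)        # 0.5 degree beam (toy value)
b = fwhm / np.sqrt(8 * np.log(2))            # equation (17)

ell = np.arange(2, 1001)
A, ns = 1e-10, 1.0            # toy amplitude and tilt
cl = A * (ell / 10.0) ** (ns - 1) / (ell * (ell + 1))
noise = (4 * np.pi / N) * sigma**2 * np.exp(b**2 * ell * (ell + 1))

# dC_ell/dtheta for theta = (ln A, n_s); both derivatives are analytic here.
derivs = np.vstack([cl, cl * np.log(ell / 10.0)])

w = (2 * ell + 1) / 2.0 / (cl + noise) ** 2  # the weights in equation (18)
F = np.einsum('l,il,jl->ij', w, derivs, derivs)

errors = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized error bars
print(errors)
```

Note how the weights $w$ implement both the cosmic-variance suppression at low $\ell$ and the exponential noise blow-up at high $\ell$ discussed below equation (18).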

3.1 The need for data compression

If we wish to estimate $m$ parameters from $n$ data points, this would involve inverting an $n \times n$ matrix at a dense grid of points in the $m$-dimensional parameter space. The CPU time required to invert such a non-sparse matrix scales as $n^3$, and the number of grid points grows exponentially with $m$, so there are obviously limits as to what can be done in practice. For instance, the "brute force" COBE analysis of Tegmark & Bunn (1995) had $n = 4038$ and $m = 2$, and involved inverting $4038 \times 4038$ matrices at a couple of hundred grid points. A future high-resolution all-sky CMB map might contain of order $n = 10^7$ pixels, and direct numerical inversion of matrices this large is clearly unfeasible at the present time. Also, to numerically map out the likelihood function in the 11-dimensional parameter space explored by Jungman et al. (1996b) with reasonable resolution would entail evaluating it at such a large number of grid points that it would be desirable to make the inversion at each grid point as fast as possible. Likewise, the spherical harmonic analysis of the 1.2 Jy redshift survey (Heavens & Taylor 1995; Ballinger, Heavens & Taylor 1996) and the PSC-z survey (Tadros et al., in preparation) required a likelihood analysis of some 1208 modes to find the redshift distortion parameter $\beta$ and a non-parametric stepwise estimate of the power spectrum. Because of this, it is common to carry out some form of data compression whereby the $n$ data points are reduced to some smaller set of $n'$ numbers before the likelihood analysis is done. Since the time needed for a matrix inversion scales as $n^3$, even a modest compression factor such as $n/n' = 5$ can accelerate the parameter estimation by more than a factor of 100.
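The scaling argument above can be put in numbers (a trivial sketch, using the COBE mode count quoted above merely as a reference point):

```python
# Each likelihood evaluation is dominated by inverting an n' x n' matrix,
# an O(n'^3) operation, so compressing n data points down to n' = n/f
# speeds up every grid-point evaluation by roughly a factor f^3.
n = 4038                    # modes in the brute-force COBE analysis above
for f in (2, 5, 10):
    print(f, n // f, f**3)  # compression factor, compressed size, speedup
```

A factor $f = 5$ thus already yields the factor $5^3 = 125 > 100$ quoted in the text.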

3.2 The optimization problem

In this section, we will discuss the special case of linear data compression, which has as an important special case the so-called signal-to-noise eigenmode method. We will return to non-linear data compression methods below, in Section 5. The most general linear data compression can clearly be written as

$\mathbf{y} = B\mathbf{x}$, (21)

where the $n'$-dimensional vector $\mathbf{y}$ is the new compressed data set and $B$ is some arbitrary $n' \times n$ matrix. Thus the new data set has

$\langle\mathbf{y}\rangle = B\mu$, $\quad\langle\mathbf{y}\mathbf{y}^t\rangle - \langle\mathbf{y}\rangle\langle\mathbf{y}\rangle^t = BCB^t$, (22)

and substituting this into equation (15) we find that the new Fisher information matrix $\tilde{F}$ is given by

$\tilde{F}_{ij} = \frac{1}{2}\mathrm{Tr}\,[(BCB^t)^{-1}(BC_{,i}B^t)(BCB^t)^{-1}(BC_{,j}B^t) + (BCB^t)^{-1}(BM_{ij}B^t)]$. (23)

If $n = n'$ and $B$ is invertible, then the $B$-matrices cancel in this expression, leaving us with simply

$\tilde{F}_{ij} = \frac{1}{2}\mathrm{Tr}\,[B^{-t}(A_iA_j + C^{-1}M_{ij})B^t] = F_{ij}$. (24)

In other words, $B$ just makes a similarity transform of the matrix within the trace in equation (15), leaving the Fisher information matrix $F$ unchanged. This reflects the fact that when $B$ is invertible, no information is lost or gained, so that the error bars on a measured parameter are unchanged. For instance, replacing a galaxy-cut COBE map consisting of say 3600 pixels by its spherical harmonic coefficients up to $\ell = 59$ (there are 3600 such coefficients) would be an invertible transformation, thus making no difference whatsoever as to the attainable error bars. Likewise, expansion in galaxy-cut spherical harmonics gives exactly the same result as expansion in an orthonormal basis for the galaxy-cut spherical harmonics (as in Górski 1994), since the mapping from the non-orthonormal basis to the orthonormal basis is invertible. This result is obvious if we think in terms of information: if the mapping from $\mathbf{x}$ to $\mathbf{y}$ is invertible, then $\mathbf{y}$ must clearly contain exactly the same information that $\mathbf{x}$ does, since we can reconstruct $\mathbf{x}$ by knowing $\mathbf{y}$.

Rather, the interesting question is what happens when $n' < n$ and $B$ is not invertible. Each row vector of $B$ specifies one of the numbers in the new data set (as some linear combination of the original data $\mathbf{x}$), and we wish to know which such row vectors capture the most information about the parameters $\Theta$. If $B$ has merely a single row, then we can write $B = \mathbf{b}^t$ for some vector $\mathbf{b}$, and the diagonal ($i = j$) part of equation (23) reduces to

$\tilde{F}_{ii} = \frac{1}{2}\left(\frac{\mathbf{b}^tC_{,i}\mathbf{b}}{\mathbf{b}^tC\mathbf{b}}\right)^2 + \frac{(\mathbf{b}^t\mu_{,i})^2}{\mathbf{b}^tC\mathbf{b}}$. (25)

Let us now focus on the problem of estimating a single parameter $\theta_i$ when all other parameters are known; we will return to the multi-parameter case below, in Section 3.6. Since the error bar $\Delta\theta_i = \tilde{F}_{ii}^{-1/2}$ in this case, we want to find the $\mathbf{b}$ that maximizes the right-hand side of equation (25).

Although this is a non-linear problem in general, there are two simple special cases which between them cover many of the situations that appear in cosmology applications:
The case where the mean is known: $\mu_{,i} = 0$.
The case where the covariance matrix is known: $C_{,i} = 0$.
Below we show first how these two cases can be solved separately, then how the most general case can be for all practical purposes solved by combining these two solutions.
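The invariance result of equation (24) is easy to verify numerically. The two-parameter model below is our own toy construction (with $\mu$ independent of the parameters, so $M_{ij} = 0$); any invertible $B$ should leave the Fisher matrix unchanged.

```python
import numpy as np

def fisher(C, dCs):
    # Equation (15) with mu independent of the parameters (M_ij = 0).
    Cinv = np.linalg.inv(C)
    A = [Cinv @ dC for dC in dCs]
    m = len(dCs)
    return np.array([[0.5 * np.trace(A[i] @ A[j]) for j in range(m)]
                     for i in range(m)])

# Toy two-parameter covariance model: C = th1*C1 + th2*C2.
rng = np.random.default_rng(2)
n = 6
C1 = np.eye(n)
G = rng.normal(size=(n, n))
C2 = G @ G.T / n
th1, th2 = 2.0, 0.5
C = th1 * C1 + th2 * C2

F = fisher(C, [C1, C2])

# Apply an (almost surely) invertible compression y = Bx and recompute
# the Fisher matrix from the transformed C and C_,i, as in equation (23).
B = rng.normal(size=(n, n))
F_tilde = fisher(B @ C @ B.T, [B @ C1 @ B.T, B @ C2 @ B.T])

print(np.allclose(F, F_tilde))   # the similarity transform cancels out
```

The interesting case, pursued next, is when $B$ has fewer rows than $n$ and this cancellation no longer occurs.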

3.3 When the mean is known
When $\mu$ is independent of $\theta_i$, the second term in equation (25) vanishes, and we simply wish to maximize the quantity

$(2\tilde{F}_{ii})^{1/2} = \frac{|\mathbf{b}^tC_{,i}\mathbf{b}|}{\mathbf{b}^tC\mathbf{b}}$. (26)

Since the mean does not depend on the parameters (i.e., it is known), let us for simplicity redefine our data by $\mathbf{x} \mapsto \mathbf{x} - \mu$, so that $\langle\mathbf{x}\rangle = 0$. Note that whereas the denominator in equation (26) is positive (since $C$ is a covariance matrix and thus positive definite), the expression


$\mathbf{b}^tC_{,i}\mathbf{b}$ in the numerator might be negative, since $C_{,i}$ is not necessarily positive definite. We therefore want to make $(\mathbf{b}^tC_{,i}\mathbf{b})/(\mathbf{b}^tC\mathbf{b})$ either as large as possible or as small as possible, depending on its sign. Regardless of the sign, we thus seek the $\mathbf{b}$ for which this ratio takes an extremum, i.e., we want the derivatives with respect to the components of $\mathbf{b}$ to vanish. Since this ratio is clearly unchanged if we multiply $\mathbf{b}$ by a constant, we can without loss of generality choose to normalize $\mathbf{b}$ so that the denominator equals unity. We thus seek an extremum of $\mathbf{b}^tC_{,i}\mathbf{b}$ subject to the constraint that

$\mathbf{b}^tC\mathbf{b} = 1$. (27)

This optimization problem is readily solved by introducing a Lagrange multiplier $\lambda$ and extremizing the Lagrangean

$\mathbf{b}^tC_{,i}\mathbf{b} - \lambda\,\mathbf{b}^tC\mathbf{b}$. (28)

Varying $\mathbf{b}$ and setting the result equal to zero, we obtain the generalized eigenvalue problem

$C_{,i}\mathbf{b} = \lambda C\mathbf{b}$. (29)

Since $C$ is symmetric and positive definite, we can Cholesky decompose it (e.g., Press et al. 1992) as $C = LL^t$ for some invertible matrix $L$. Multiplying equation (29) by $L^{-1}$ from the left, we can rewrite it as

$(L^{-1}C_{,i}L^{-t})(L^t\mathbf{b}) = \lambda(L^t\mathbf{b})$, (30)

which we recognize as an ordinary eigenvalue problem for the symmetric matrix $(L^{-1}C_{,i}L^{-t})$, whose solution will be $n$ orthogonal eigenvectors $(L^t\mathbf{b}_k)$ and corresponding eigenvalues $\lambda_k$, $k = 1, 2, \ldots, n$. Let us sort them so that

$|\lambda_1| \geq |\lambda_2| \geq \ldots \geq |\lambda_n|$. (31)

Let us choose the $k$th row of $B$ to be the row vector $\mathbf{b}_k^t$, so that the compressed data set is given by $y_k = \mathbf{b}_k^t\mathbf{x}$. The orthogonality property $(L^t\mathbf{b}_k)\cdot(L^t\mathbf{b}_{k'}) \propto \delta_{kk'}$ in combination with our chosen normalization of equation (27) then tells us that our compressed data satisfies

$\langle y_ky_{k'}\rangle = \langle(\mathbf{b}_k^t\mathbf{x})(\mathbf{x}^t\mathbf{b}_{k'})\rangle = \mathbf{b}_k^tC\mathbf{b}_{k'} = \mathbf{b}_k^tLL^t\mathbf{b}_{k'} = \delta_{kk'}$, (32)

i.e., $\langle\mathbf{y}\mathbf{y}^t\rangle = I$. The compressed data values $\mathbf{y}$ thus have the nice property that they are what is known as statistically orthonormal, i.e., they are all uncorrelated and all have unit variance. Since we are assuming that everything is Gaussian, this also implies that they are statistically independent; in fact, $\mathbf{y}$ is merely a vector of independent unit Gaussian random variables. In other words, knowledge of $y_{k'}$ gives us no information at all about the other $y$-values. This means that the entire information content of the initial data set $\mathbf{x}$ has been portioned out in disjoint pieces in the new compressed data set, where $y_1$ contains the most information about $\theta_i$, $y_2$ is the runner-up (once the information content of $y_1$ has been removed, $y_2$ contains the most information about $\theta_i$), and so on.

If we use all the $n$ vectors $\mathbf{b}_k$ as rows in $B$, then $B$ will be invertible, and we clearly have not achieved any compression at all. However, once this matrix $B$ has been found, the compression prescription is obvious: if we want a compressed data set of size $n' < n$, then we simply throw away all but the first $n'$ rows of $B$. It is straightforward to show (see e.g. Therrien 1992; Vogeley & Szalay 1996) that if we fix an integer $n'$ and then attempt to minimize $\Delta\theta_i$, we will arrive at exactly the same $B$ as with our prescription above (or this $B$ multiplied by some invertible matrix, which as we saw will leave the information content unchanged).

How does the error bar $\Delta\theta_i$ depend on the choice of $n'$? Equation (29) implies that

$C_{,i}B^t = CB^t\Lambda$, (33)

where $\Lambda_{ij} \equiv \delta_{ij}\lambda_i$, i.e., a diagonal matrix containing all the eigenvalues. Since equation (32) implies that

$BCB^t = I$, (34)

it follows from equation (33) that

$BC_{,i}B^t = \Lambda$. (35)

Therefore the diagonal part of equation (23) reduces to simply

$2\tilde{F}_{ii} = \mathrm{Tr}\,[\{(BCB^t)^{-1}(BC_{,i}B^t)\}^2] = \sum_{k=1}^{n'}\lambda_k^2$. (36)

It is thus convenient to plot $\lambda_k^2$ as a function of the mode number $k$, as we have done in Figure 2 for the specific case of measuring the redshift distortion parameter $\beta$ from the 1.2 Jy redshift survey, to see at what $k$ the information content starts petering out. Alternatively, one can plot $\Delta\theta_i$ as a function of $n'$ as in Figure 3 and see when the curve flattens out, i.e., when the computational cost of including more modes begins to yield diminishing returns. If one feels uncomfortable about choosing $n'$ by "eyeballing", one can obtain a more quantitative criterion by noting that using only a single mode $\mathbf{b}_k$ would give $\Delta\theta_i = 1/|\lambda_k|$. Thus the quantity $|\lambda_k|$ can be thought of as the signal-to-noise ratio for the $k$th mode, and one can choose to keep only those modes where this ratio exceeds unity (or some other constant such as 0.2).
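The whole construction of equations (29)-(36) fits in a few lines of linear algebra. The sketch below uses toy matrices of our own making; the Cholesky trick turns the generalized problem into an ordinary symmetric one, and the assertions check the statistical orthonormality of equation (32) and the diagonalization of equation (35).

```python
import numpy as np

# Karhunen-Loeve compression for one parameter, given C and C_,i.
rng = np.random.default_rng(3)
n = 10
G = rng.normal(size=(n, n))
C = G @ G.T + n * np.eye(n)      # data covariance: symmetric positive definite
H = rng.normal(size=(n, n))
C_i = H + H.T                    # dC/dtheta_i: symmetric, not nec. definite

L = np.linalg.cholesky(C)        # C = L L^t, equation (29) -> (30)
Linv = np.linalg.inv(L)
lam, V = np.linalg.eigh(Linv @ C_i @ Linv.T)
order = np.argsort(-np.abs(lam))           # sort as in equation (31)
lam = lam[order]
B = (Linv.T @ V[:, order]).T               # rows b_k^t solve C_,i b = lam C b

print(np.allclose(B @ C @ B.T, np.eye(n)))     # equation (32): <y y^t> = I
print(np.allclose(B @ C_i @ B.T, np.diag(lam)))  # equation (35)

# Keeping the first n' rows retains the fraction sum_{k<=n'} lam_k^2 of
# the full information 2F_ii = sum_k lam_k^2 (equation 36).
info = np.cumsum(lam**2) / np.sum(lam**2)
n_prime = int(np.searchsorted(info, 0.99) + 1)   # modes for 99% of F_ii
print(n_prime)
```

For real surveys $C$ and $C_{,i}$ would of course come from the survey geometry and the model power spectrum rather than from a random number generator.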

3.4 When the covariance matrix is known

When $C$ is independent of $\theta_i$, the first term in equation (25) vanishes, and we simply wish to maximize the quantity

$\tilde{F}_{ii} = \frac{\mathbf{b}^tM_{ii}\mathbf{b}}{2\,\mathbf{b}^tC\mathbf{b}}$. (37)

Introducing a Lagrange multiplier and proceeding exactly as in the previous section, this is equivalent to the eigenvalue problem

$(L^{-1}M_{ii}L^{-t})(L^t\mathbf{b}) = \lambda(L^t\mathbf{b})$. (38)

However, since the matrix $M_{ii} = 2\mu_{,i}\mu_{,i}^t$ is merely of rank one, this problem is much simpler than that in the previous section. Since the left-hand side is

$2(L^{-1}\mu_{,i})(\mu_{,i}^t\mathbf{b}) \propto (L^{-1}\mu_{,i})$, (39)

it points in the direction $(L^{-1}\mu_{,i})$ regardless of what the vector $\mathbf{b}$ is, so the only non-trivial eigenvector of $(L^{-1}M_{ii}L^{-t})$ is

$(L^t\mathbf{b}_1) = (L^{-1}\mu_{,i})$, (40)

with a corresponding eigenvalue

$\lambda_1 = 2|L^{-1}\mu_{,i}|^2 = \mathrm{Tr}\,[C^{-1}M_{ii}]$. (41)

All other eigenvalues vanish. In other words, b_1 = C^{-1}μ_,i, and the new compressed data set consists of just one single number,

y_1 = μ_,i^t C^{-1} x,  (42)

which contains just as much information about the parameter θ_i as the entire data set x did. Another way to see this is to compute the Fisher information before and after the compression, and note that

F̃_ii = F_ii = μ_,i^t C^{-1} μ_,i.  (43)

In other words, all the information about θ_i in x has been distilled into the number y_1. Thus the ML-estimate of θ_i is just some function of y_1. For the special case where μ depends linearly on the parameter, i.e., μ = θ_i μ_,i where μ_,i is a known constant vector, this function becomes trivial: the expectation value of equation (42) reduces to

⟨y_1⟩ = (μ_,i^t C^{-1} μ_,i) θ_i,  (44)

which means that on average, up to the known constant (μ_,i^t C^{-1} μ_,i), the compressed data is just the desired parameter itself.

3.5 When neither is known – the general case

In the most general case, both μ and C depend on the parameters θ. This happens for instance in the CMB case when the components of x are chosen to be estimates of the power spectrum C_ℓ, as in Section 5 below. Clearly, the more ways in which the probability distribution for x depends on the parameters, the more accurately we can estimate them. In other words, when both μ and C depend on θ, we expect to do even better than in the two cases discussed above. Finding the vectors b that extremize equation (25) is quite a difficult numerical problem. Fortunately, this is completely unnecessary. Solving the eigenvalue problem in Section 3.3 gives us n″ compression vectors capturing the bulk of the cosmological information coming from the first term in equation (25). Section 3.4 gives us one additional vector, C^{-1}μ_,i, that captures all the cosmological information coming from the second term. In other words, if we simply use all these n″ + 1 compression vectors together, we have for all practical purposes solved our problem.

Since a direct numerical attack on equation (25) could never reduce these n″ + 1 vectors to fewer than n″ without widening the resulting error bars, the time savings in the ensuing likelihood analysis would at most be a completely negligible factor [(n″ + 1)/n″]³ ≈ 1 + 3/n″ compared to the simple prescription we have given here.
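The lossless single-number compression of equations (42)–(44) is easy to verify numerically. The following sketch uses a randomly generated covariance matrix C and constant vector f (stand-ins for a real survey's quantities, with ⟨x⟩ = θf) and checks that the one compressed number carries the full Fisher information:

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta_true = 50, 2.0

# A known covariance matrix C and a known constant vector f, so that the
# mean of the data is <x> = theta * f (the linear case of equation (44)).
A = rng.normal(size=(n, n))
C = A @ A.T + n * np.eye(n)               # symmetric positive definite
f = rng.normal(size=n)

Cinv_f = np.linalg.solve(C, f)            # C^{-1} f
F_full = f @ Cinv_f                       # Fisher information of the full x

# Simulate one data vector and compress it to the single number of eq. (42).
x = theta_true * f + np.linalg.cholesky(C) @ rng.normal(size=n)
y1 = Cinv_f @ x

# y1 has mean (f^t C^{-1} f) theta and variance f^t C^{-1} C C^{-1} f
# = f^t C^{-1} f, so its Fisher information equals that of the full data set.
F_compressed = (f @ Cinv_f) ** 2 / (f @ Cinv_f)
theta_hat = y1 / F_full                   # unbiased estimate from one number

assert np.isclose(F_compressed, F_full)
```

The estimate `theta_hat` scatters around the true value with the minimal error bar 1/√F, exactly as if the full data vector had been used.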

3.6 Estimating several parameters at once

The Karhunen-Loeve method described above was designed to condense the bulk of the information about a single parameter into as few numbers y_k as possible. Although this particular problem does occasionally arise, most cosmology applications are more complicated, involving models that contain more than one unknown parameter. The previous applications of Karhunen-Loeve methods to CMB maps (Bond 1994; Bunn & Sugiyama 1995; Bunn 1995) and galaxy surveys (Vogeley 1995; Vogeley & Szalay 1996) have involved taking a rather minimalistic approach to this issue, by optimizing the eigenmodes for one particular parameter (typically the overall normalization of the fluctuations) and assuming that these eigenmodes will capture most of the relevant information about the other parameters as well.

If we want to estimate several parameters from a data set simultaneously, then how should we best compress the data? As we saw in Section 2, the covariance matrix T of our parameter estimates is approximately F^{-1}, so for the multivariate case, we have

Δθ_i = [(F^{-1})_ii]^{1/2}  (45)

instead of Δθ_i = F_ii^{-1/2}. One approach to the problem would thus be trying to minimize the diagonal elements of F^{-1} rather than, as above, trying to maximize the diagonal elements of F. This is, however, quite a cumbersome optimization problem. Fortunately, there is an alternative solution to the problem which is easy to implement in practice and in addition is easy to understand intuitively, in terms of information content. Suppose that we repeat the standard KL procedure described above m times, once for each parameter θ_i, and let n′_i denote the number of modes that we choose to use when estimating the ith parameter. We then end up with m different compressed data sets, such that the n′_i numbers in the ith set contain essentially all the information that there was about θ_i. This means that the union y of all these compressed data sets, consisting of

n″ = Σ_{i=1}^m n′_i  (46)

numbers, retains basically all the information from x that is relevant for our joint estimation of all the parameters. The problem is of course that there might be plenty of redundancy in y. Indeed, if we typically compressed by say a factor n/n′_i ≈ 10 and had 11 parameters as in the CMB model of Jungman et al. (1996b), then we would have achieved a counterproductive "anti-compression" with n″ > n. How can we remove the unnecessary "pork" from y?

The n′_i eigenvectors selected via the KL compression method span a certain subspace of the n-dimensional space in which the data vector x lives, and we can think of the data compression step as simply projecting the vector x onto this subspace. For each parameter θ_i, we found such a subspace, and the redundancy stems from the fact that many of these subspaces overlap with one another, or point in very similar directions. In order to compress efficiently and yet retain almost all the information about all the parameters, we want to find a small set of vectors that span as much as possible of the union of all the subspaces. As described in detail in e.g. Press et al. (1992), this is readily accomplished by making a Singular Value Decomposition (SVD) and throwing away all vectors corresponding to very small singular values. Let us define B_i as the KL compression matrix optimized for estimating the ith parameter, with each row vector multiplied by its corresponding eigenvalue, so that better vectors will receive more weight in what follows. Combining all the row vectors of the transposed compression matrices B_i^t into a single matrix, SVD allows us to factor this matrix as

(B_1^t ... B_m^t) = U Λ V^t,  (47)

where U^t U = I, V^t V = I and the matrix Λ is diagonal. One customarily writes Λ = diag{λ_i}, and refers to the diagonal elements λ_i as the singular values. These are basically a generalization of eigenvalues to non-square matrices. We define our final compression matrix as

B ≡ U^t.  (48)

The columns of U with corresponding non-zero singular values λ_i form an orthonormal set of basis vectors that span the same space as all the initial compression vectors together. The column vectors of U with vanishing singular values form an orthonormal basis for the null-space of U Λ V^t, representing redundant information. Similarly, the vectors corresponding to singular values near zero capture information that is almost entirely redundant. By making a plot similar to Figure 2, showing λ_k as a function of k, one decides on the final number n′ < n″ of row vectors in B to keep. Once n′ has been fixed, one can of course (if one prefers*) make the n′ numbers in the new compressed data set statistically orthogonal as before by Cholesky decomposing their covariance matrix BCB^t = LL^t and replacing z = Bx by L^{-1}z.

In summary, we have found that when we wish to estimate more than one parameter from the data, we can obtain a close to optimal data compression with the following steps:
(i) Compute the KL eigenmodes that contain the bulk of the information about the first parameter.
(ii) Repeat this procedure separately for all other parameters.
(iii) Arrange all the resulting vectors, multiplied by their eigenvalues, as the rows of B.
(iv) Make an SVD of B and throw away all rows corresponding to very small singular values.
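The SVD pruning of equations (47)–(48) takes only a few lines of numpy. In this sketch, the per-parameter compression matrices are random stand-ins built to span nearly the same 6-dimensional subspace, mimicking the redundancy between parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40                                    # size of the original data vector

# Stand-ins for the per-parameter KL outputs: for each of m = 3 parameters,
# a set of eigenvalue-weighted compression vectors (rows), built to be
# highly redundant across parameters.
shared = rng.normal(size=(6, n))
B_list = [shared + 0.01 * rng.normal(size=(6, n)) for _ in range(3)]

# Equation (47): stack the transposed compression matrices as columns, SVD.
M = np.hstack([Bi.T for Bi in B_list])    # n x 18
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Equation (48): keep only columns of U with non-negligible singular values.
keep = s > 0.1 * s[0]
B = U[:, keep].T                          # final compression matrix

print(M.shape[1], "->", B.shape[0])       # the redundant directions are gone
```

The rows of the resulting B are orthonormal, and their number equals the dimension of the subspace actually spanned by all the per-parameter mode sets together.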

3.7 The problem of not knowing the parameters a priori

The KL-approach has sometimes been criticized for not being model-independent, in the sense that one needs to make an a priori guess as to what the true parameter values θ_0 are in order to be able to compute the compression matrix B. Although there is no evidence that a bad guess as to θ_0 biases the final results (see e.g. Bunn 1995 for a detailed discussion of these issues, including numerical examples), a poor guess of θ_0 will of course lead to a data compression that is no longer optimal. In other words, one would expect to still get an unbiased answer, but perhaps with larger error bars than necessary. In practice, this loss of efficiency is likely to be marginal. For instance, as a test, Bunn (1995) performed a KL-analysis of a COBE CMB map (for which n ≈ 1) assuming a blatantly erroneous spectral index n = 2 to compute B (n = 2 is ruled out at about 3σ a posteriori), and compared this with the results obtained when assuming n = 1. The error bars were found to change only marginally. If one nonetheless wishes to do something about this efficiency problem, it is fortunately quite easy in practice.

The simplest approach is an iterative scheme, whereby the KL-analysis is carried out twice, using the ML-estimate of θ from the first run as the fiducial "true" value in the second run. A more rigorous approach is to compute compression vectors b for a number of different assumptions about θ_0 that include the extreme corners of parameter space, and then combine all these vectors into a single compression matrix B by singular value decomposition, exactly as described in the previous section.

In this section, we illustrate the theoretical discussion above with a number of cosmology examples, and show how the recently published work on CMB maps and galaxy surveys fits into the general KL-scheme as special cases. We will see that the typical compression factor is about 10 in both an IRAS example and a COBE example.

4.1 A large-scale-structure example: redshift-space distortions

Here we apply a KL-analysis to the problem of redshift-space distortions in the 1.2 Jy IRAS galaxy survey (Fisher et al. 1995). This problem has previously been analyzed in detail by Heavens & Taylor (1995, hereafter HT95), who used a spherical harmonic analysis. Here we repeat their analysis, but include a KL data-compression step to investigate how many modes are needed for an accurate determination of the redshift distortion parameter

β ≡ Ω^{0.6}/b,  (49)

where b denotes the conventional linear bias factor, the ratio

between the fluctuations in luminous matter and the total matter fluctuations. As was first shown by Kaiser (1987), the peculiar motions of galaxies induce a characteristic radial distortion in the apparent clustering pattern that depends only on this parameter β. As the initial data vector x, we use the coefficients obtained by expanding the observed galaxy distribution in combinations of spherical harmonics and spherical Bessel functions as described in HT95, with the known mean values subtracted off so that μ = ⟨x⟩ = 0. HT95 show that the covariance matrix is given by

C_μν = C^{sn}_μν + (2/π) Σ_κ (Φ_μκ + β V_μκ)(Φ_νκ + β V_νκ) P(k_κ),  (50)

* The only merit of the statistical orthogonality is that it will make ⟨yy^t⟩ closer to diagonal near the fiducial parameter values θ_0, which might conceivably speed up the matrix inversion if an iterative method is used.

where the indices μ, ν and κ run over the above-mentioned modes, and P(k) is the power spectrum. The matrices Φ and V encapsulate the effects of finite survey volume convolution and redshift distortions, respectively. The matrix C^{sn} contains the shot-noise contribution. All three matrices are independent of β. For our illustrative example, we assume a parametrized form of the power spectrum suggested by Peacock & Dodds (1994), which means that β is the only unknown parameter. Hence m = 1 and θ = θ_1 = β. Differentiating equation (50), we obtain

C_,β = ∂C/∂β = (2/π) [Φ P V^t + V P Φ^t + 2β V P V^t],  (51)
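Equations (50) and (51) can be checked against each other numerically. In this sketch, Φ, V, P(k) and the shot-noise matrix are random stand-ins (the real HT95 matrices encode the survey geometry), and the analytic derivative is compared with a finite difference:

```python
import numpy as np

rng = np.random.default_rng(2)
n_modes, n_k, beta = 30, 20, 1.0

# Stand-ins for the HT95 ingredients: mode-mixing matrices Phi and V, a
# power spectrum P(k) on a wavenumber grid, and a shot-noise matrix.
Phi = rng.normal(size=(n_modes, n_k))
V = rng.normal(size=(n_modes, n_k))
P = np.diag(rng.uniform(0.5, 2.0, size=n_k))     # P = diag{P(k_kappa)}
C_sn = np.eye(n_modes)

def covariance(b):
    """Equation (50): C = C_sn + (2/pi) (Phi + b V) P (Phi + b V)^t."""
    M = Phi + b * V
    return C_sn + (2 / np.pi) * M @ P @ M.T

# Equation (51): the analytic derivative dC/dbeta.
C_deriv = (2 / np.pi) * (Phi @ P @ V.T + V @ P @ Phi.T
                         + 2 * beta * V @ P @ V.T)

# C is quadratic in beta, so a central difference should agree exactly
# up to roundoff.
eps = 1e-6
num = (covariance(beta + eps) - covariance(beta - eps)) / (2 * eps)
assert np.allclose(C_deriv, num, atol=1e-5)
```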

4.2 A special case: the signal-to-noise eigenmode method


Figure 2. The KL-eigenvalues λ_k, computed for the fiducial value β = 1.


Figure 3. The error bar Δβ as a function of n′, the number of eigenmodes used.

where P_κκ′ ≡ δ_κκ′ P(k_κ), i.e., P = diag{P(k_κ)}. We assume an a priori value β = 1 to evaluate this matrix. Using n = 1208 modes in the initial data set x as in HT95, we obtain the eigenvalues λ_k shown in Figure 2. The resulting error bar

Δβ = [ (1/2) Σ_{k=1}^{n′} λ_k² ]^{-1/2}  (52)

is plotted as a function of n′ (the number of modes used) in Figure 3, and is seen to level out at around n′ = 100. This means that although HT95 used 10³ modes to estimate β, almost all the information is actually contained in the best 10² linear combinations of these modes. In other words, we can obtain basically identical results to those found in HT95 by using a compressed data set only 10% of the original size. Since the matrix inversion time scales as n′³, this compression factor of 10 thus allows the inversion to be carried out 1000 times faster.
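The shape of Figure 3 follows directly from equation (52): the error bar is a cumulative function of the sorted eigenvalues. A minimal sketch, with a decaying eigenvalue spectrum drawn at random purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in KL-eigenvalues for the n = 1208 IRAS modes, sorted into
# decreasing order as in Figure 2 (an exponential spectrum chosen purely
# for illustration).
lam = np.sort(rng.exponential(scale=1.0, size=1208))[::-1]

# Equation (52): the error bar on beta when only the best n' modes are kept.
delta_beta = (0.5 * np.cumsum(lam ** 2)) ** -0.5

# Adding modes can only shrink the error bar, and the decaying eigenvalue
# spectrum means that the best modes carry most of the information.
assert np.all(np.diff(delta_beta) <= 0)
```

Plotting `delta_beta` against mode number reproduces the qualitative levelling-off seen in Figure 3: once the eigenvalues become small, extra modes no longer tighten the constraint.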




The above-mentioned example was rather generic in the sense that C depended on θ in a nonlinear way (in this case, C depended quadratically on β). However, the special case of the KL-method where the parameter dependence is linear (or rather affine) is so common and important that it has acquired a special name: the signal-to-noise eigenmode method. This is the special case where μ = 0, we have only one unknown parameter θ, and the covariance matrix can be written in the form

C = θS + N,  (53)

where the known matrices S and N are independent of θ. For reasons that will become clear from the examples below, S and N are normally referred to as the signal and noise matrices, respectively, and they are normally both positive definite. Since dC/dθ = S, equation (29) gives the generalized eigenvalue problem

Sb = λ(θS + N)b.  (54)

By Cholesky decomposing the noise matrix as N = LL^t, this is readily rearranged into the ordinary eigenvalue problem

(L^{-1}SL^{-t})(L^t b) = λ′(L^t b),  (55)

where λ′ ≡ λ/(1 − θλ). Since the matrix to be diagonalized, (L^{-1}SL^{-t}), is loosely speaking (N^{-1/2}SN^{-1/2}), a type of signal-to-noise ratio, its eigenvectors b are usually referred to as the signal-to-noise eigenmodes. Historically, this terminology probably arose because one was analyzing one-dimensional time series with white noise, where N ∝ I, so that (L^{-1}SL^{-t}) ∝ SN^{-1} was the signal-to-noise ratio even in the strict sense. Vogeley & Szalay (1996) refer to the change of variables x ↦ L^{-1}x as prewhitening, since it transforms S + N into (L^{-1}SL^{-t}) + I, i.e., transforms the noise matrix into the white noise matrix, the identity matrix.

Equation (32) showed that in the general KL-case, the compressed data set was statistically orthonormal, ⟨yy^t⟩ = I. In the S/N-case, the compressed data has an additional useful property: the signal and noise contributions to y are separately uncorrelated. It is easy to show that

b_k^t S b_k′ ∝ δ_kk′,   b_k^t N b_k′ ∝ δ_kk′,  (56)

which means that the covariance matrix ⟨yy^t⟩ will remain diagonal even if we have misestimated the true signal-to-noise ratio. In other words, if the sole purpose of our likelihood analysis is to estimate the overall normalization θ from equation (53), we only need to invert diagonal matrices. The signal-to-noise method arises in a wide variety of contexts. For instance, it is a special case not only of the KL-method, but also of the power spectrum estimation method of Tegmark (1996), corresponding to the case where the width of the window functions is ignored.
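Equations (53)–(56) translate directly into a few lines of linear algebra. The following sketch uses random stand-ins for S and N and verifies the prewhitening construction of equation (55) and the double diagonality of equation (56):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20

# Toy signal and noise matrices (both symmetric positive definite).
A = rng.normal(size=(n, n))
S = A @ A.T
N = np.diag(rng.uniform(0.5, 2.0, size=n))

# Prewhitening: with N = L L^t, the generalized problem of equation (54)
# becomes the ordinary symmetric eigenvalue problem of equation (55) for
# the vectors L^t b.
L = np.linalg.cholesky(N)
Linv = np.linalg.inv(L)
lam_p, E = np.linalg.eigh(Linv @ S @ Linv.T)   # columns of E are L^t b
B = E.T @ Linv                                 # rows of B are the modes b_k^t

# Equation (56): signal and noise are separately uncorrelated in this basis,
# whatever their true relative normalization.
assert np.allclose(B @ N @ B.T, np.eye(n), atol=1e-6)
assert np.allclose(B @ S @ B.T, np.diag(lam_p), atol=1e-6)
```

Because both B N B^t and B S B^t come out diagonal, the covariance of the compressed data stays diagonal for any assumed normalization θ, which is exactly the property exploited in the text.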

4.2.1 Signal-to-noise analysis of CMB maps

The signal-to-noise eigenmode method was introduced into CMB analysis by Bond (1994), who applied it to sky maps from both the COBE and FIRS experiments. It has also


4. This is an example of what we will call pre-compression, and we will return to this topic below, in Section 5.
4.2.2 Signal-to-noise analysis of galaxy surveys

The application of the signal-to-noise eigenmode method to galaxy surveys was pioneered by Vogeley (1995) and has since been further elaborated by Vogeley & Szalay (1996). Although these are both method papers, deferring actual parameter estimation to future work, the former explicitly evaluates the eigenmodes for the CfA survey and shows that there are about 10³ modes with a signal-to-noise ratio exceeding unity. In galaxy survey applications, the noise matrix N contains the contribution from Poisson shot noise due to the finite number of galaxies per unit volume, rather than from detector noise as in the CMB case. With galaxies, the question of what to use as the initial data vector x is not as simple as in the CMB case, since there is no obviously superior way to "pixelize" the observed density field. Vogeley & Szalay (1996) divide space into a large number of disjoint volumes ("cells"), and derive the resulting signal matrix S in the approximation of ignoring redshift distortions. One alternative is using "fuzzy" cells as discussed by Tegmark (1995), for instance averages of the galaxy distribution where the weight functions are Gaussians centered at some grid of points. Another alternative, which has the advantage of making it easier to include the effect of redshift distortions, is to use the coefficients from an expansion in spherical harmonics and spherical Bessel functions, as was done by HT95, Ballinger et al. (1995; hereafter BHT95) and in Section 4.1 above. In BHT95 the real-space power spectrum of density perturbations was left as a free function, to be evaluated in a stepwise fashion, along with the distortion parameter. In the case of only estimating the power, by fixing β at some constant value, the problem reduces to a signal-to-noise eigenmode problem, but with m free parameters, the power at m specified wavenumbers.

Due to the mixing of wavenumbers by the Φ and V matrices, these are generally not independent, and so we must apply the SVD method to construct orthogonal eigenmodes. This application will be discussed elsewhere. We conclude this section with a few additional comments on the above-mentioned "pixelization" issue. Compared to a CMB analysis, a galaxy survey analysis involves one more step: reducing the point data to the "initial" data vector x. What is the relation between the first step (weighting galaxies) and the second step (weighting modes)? In Feldman, Kaiser & Peacock (1994) and Heavens & Taylor (1995), schemes for optimal weighting of galaxy data were presented. This weighting of the data (step 1) is prior to and complementary to the mode-weighting discussed in this paper (step 2). The data-weighting is designed to maximize the signal-to-noise of individual modes, and one might thus intuitively expect this to be a good mechanism for obtaining large eigenvalues λ_k. The techniques of the present paper can then subsequently be used to trim the data in an optimal way. The data-weighting scheme alone does not guarantee that we get the best answer possible. This is because it maximizes the diagonal elements in the covariance matrix, ignoring correlations between modes. It does therefore not guarantee that the complete Fisher infor-

been applied to the COBE data by Bunn†, who used it to constrain a wide variety of cosmological models (Bunn 1995; Bunn & Sugiyama 1995; Bunn, Scott & White 1995; Hu, Bunn & Sugiyama 1995; White & Bunn 1995; Yamamoto & Bunn 1996). Here the uncompressed data set consists of the CMB temperatures in n pixels from a sky map (for instance, n = 4038 or 4016 for a COBE map with a 20° galactic cut, depending on the pixelization scheme used). If foreground contamination is negligible (see e.g. Tegmark & Efstathiou 1996 for a recent review) and the pixel noise is uncorrelated (as it is for COBE to an excellent approximation; see Lineweaver et al. 1994), one has

μ = 0,
S_ij = Σ_{ℓ=2}^∞ [(2ℓ+1)/4π] P_ℓ(n̂_i · n̂_j) W_ℓ² C_ℓ,  (57)
N_ij = δ_ij σ_i²,

where n̂_i is a unit vector pointing in the direction of the ith pixel, σ_i is the r.m.s. noise in this pixel, P_ℓ are the Legendre polynomials and W_ℓ is the experimental beam function. This expression for S should only be used if one marginalizes over the monopole and dipole in the ensuing likelihood analysis; otherwise S should be corrected as described in Tegmark & Bunn (1995). In all the above-mentioned applications, the compression was optimized with respect to the overall normalization of the power spectrum (say θ = C_2), so the matrix S was independent of θ. It has been found (Bunn & Sugiyama 1995) that a compression factor of about 10, down to n′ ≈ 400 modes, can be attained without significant loss of information. This is about the same compression factor that we obtained in our redshift distortion example above. Although these above-mentioned papers constrained a wide variety of cosmological parameters, the compression was in all cases optimized only for one parameter, the overall power spectrum normalization. Although this procedure can be improved as was described in Section 3.6, it does not appear to be causing substantial loss of efficiency in the COBE case.

Support for this conclusion is provided by Tegmark & Bunn (1995), where it is found that KL-compression optimized for measuring the normalization gives error bars on the spectral index n that are less than 10% away from the minimal value that is obtained without any compression at all. It should also be noted that if one optimized the KL-compression for a different parameter θ, say θ = n, then C would no longer be of the simple form of equation (53). Thus the signal-to-noise eigenmode treatment no longer applies, and the more general equation (29) must be used instead. As has been pointed out by Gorski (1994), the rapid fall-off of the COBE window function W_ℓ implies that virtually all the cosmological information in the COBE maps is contained in the multipoles ℓ ≤ 30. When implementing the KL-compression in practice, it is useful to take advantage of this fact by replacing the data set by the 957 spherical harmonic coefficients a_ℓm for 2 ≤ ℓ ≤ 30, since this reduces the size of the matrix to be diagonalized by about a factor

† In fact, both Bond and Bunn independently reinvented the entire KL-method.
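The signal matrix of equation (57) can be built numerically with a truncated multipole sum. In the sketch below, the pixel directions, power spectrum, Gaussian beam width and noise levels are all hypothetical stand-ins chosen for illustration:

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(5)
npix, lmax = 12, 30

# Hypothetical pixel directions (unit vectors), C_l, beam and pixel noise.
v = rng.normal(size=(npix, 3))
nhat = v / np.linalg.norm(v, axis=1, keepdims=True)
ell = np.arange(lmax + 1)
C_ell = np.where(ell >= 2, 1e-3 / np.maximum(ell * (ell + 1), 1), 0.0)
W_ell = np.exp(-0.5 * ell * (ell + 1) * np.radians(3.0) ** 2)  # toy beam
sigma = rng.uniform(0.1, 0.2, size=npix)

# Equation (57): S_ij = sum_l (2l+1)/(4 pi) P_l(nhat_i . nhat_j) W_l^2 C_l,
# evaluated as a Legendre series in the pixel-pair angle cosines.
coeffs = (2 * ell + 1) / (4 * np.pi) * W_ell ** 2 * C_ell   # l = 0, 1 vanish
cosang = np.clip(nhat @ nhat.T, -1.0, 1.0)
S = legval(cosang, coeffs)
N = np.diag(sigma ** 2)

# S is symmetric and (up to roundoff) positive semi-definite.
assert np.allclose(S, S.T)
assert np.linalg.eigvalsh(S).min() > -1e-12
```

With S and N in hand, the signal-to-noise eigenmodes of Section 4.2 follow by the prewhitened eigenvalue problem of equation (55).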



Figure 4. The three heavy lines show the smallest error bars attainable for the three parameters Q, n and τ as a function of the number of modes used, i.e., the error bars obtained when using the separately optimized KL-modes for each parameter. The three thin lines show the error bars obtained when using the 500 SVD modes, illustrating that these contain essentially all the relevant information about all three parameters.

Figure 5. The error bars on the power spectrum normalization are shown for hypothetical COBE experiments with different noise levels. From top to bottom, the curves correspond to a noise enhancement factor of 10, the real 4-year data, and noise reduction factors of 10, 100 and 1000.

mation matrix will be optimal. In general, it does not seem tractable to devise a data-weighting scheme which optimizes the Fisher matrix directly.

4.3 A multi-parameter CMB example

Here we apply a KL-analysis to the problem of estimating the quadrupole normalization Q, the spectral index n and the reionization optical depth τ from the 4-year COBE data (Bennett et al. 1996). As the initial data vector x, we use the temperatures in the N = 4016 pixels that lie more than 20° from the galactic plane. Just as in Section 4.2.1, μ = 0 and C = S + N is given by equation (57). Computation of C_,i thus reduces to computing the derivatives of the angular power spectrum C_ℓ that occur in the sum, and we do this in practice using the software described in Seljak & Zaldarriaga (1996). The fiducial model has the standard CDM power spectrum shown in Figure 1. The three heavy lines in Figure 4 show the results of performing a separate KL-analysis for each parameter. In good agreement with the findings of Bunn & Sugiyama (1995), we see that virtually all information about Q is contained in the first 400 modes. Similarly, we see that essentially all the information about n and τ is contained in the first 400 modes optimized for these parameters. Since the KL-modes for a parameter by construction give the smallest error bars, the curves corresponding to any other choice of modes would lie above these solid curves. For instance, if we used the KL-modes optimized for τ to measure Q, the resulting curve would lie above the bottom one if we were to plot it in Figure 4. Figure 1 provides a simple interpretation of this: since dC_ℓ/dτ ≈ 0 for ℓ ≲ 10, the τ-modes do not contain information about the lowest multipoles, since this information is useless for measuring τ (even though it is important for measuring Q).

We implement the SVD technique described in Section 3.6 by taking the 500 best modes for each parameter and SVD-compressing these down to 500 modes. The thin lines in Figure 4 show the resulting error bars. We see that although they lie above the minimal curves for k < 500, they all "catch up" at k = 500. In other words, these 500 modes retain virtually all the relevant information about Q, n and τ, since using all of them gives virtually identical error bars to those obtained when using the full N = 4016 uncompressed data set.

4.4 A low noise CMB example

How effective will KL-compression be for future CMB data sets? Might it be the case that when noise levels are much lower, all N KL-modes will have S/N ≳ 1, so that no compression is possible? Figure 5 shows that the answer to the second question is no. For this example, we have repeated the above-mentioned COBE analysis for a range of different noise levels. With ten times less noise, the compression factor is seen to drop to about 4016/700 ≈ 7, and another order of magnitude of noise reduction lowers the compression factor to about 4. The upcoming MAP experiment forecasts a noise level w^{-1} ≡ 4πσ²/N ≈ 2.5 × 10^{-15}, as compared to w^{-1} ≈ 10^{-12} for COBE and w^{-1} ≈ 5 × 10^{-18} for the planned COBRAS/SAMBA experiment. These are quadratic quantities, so taking square roots, we see that MAP and COBRAS/SAMBA reduce the COBE noise by factors of around 20 and 450, respectively. Figure 5 shows that even with 1000 times less noise, a compression factor of 2 is readily attainable. The explanation of this is oversampling. To avoid aliasing problems, the mean pixel separation must be smaller than the beam width by a substantial factor (the Shannon oversampling factor of about 2.5). This redundancy remains even when the data set itself has excellent signal-to-noise, so the KL-compression can take advantage of the fact that the pixel basis of the map is in a sense overcomplete. Although future CMB experiments will of course have a much higher angular resolution than in our example, the oversampling factor will remain similar, so our conclusion about the usefulness of KL-compression will remain the same.

Above we have shown that the KL-compression technique is in many cases very useful. In this section, we will discuss some of its limitations, as well as outline nonlinear extensions that may make the compression technique feasible even for the next generation of CMB missions and galaxy surveys. The number of pixels in a sky map from a next-generation CMB mission is likely to be around 10⁷. The Sloan Digital Sky Survey (SDSS) will measure the redshifts of 10⁶ galaxies. Is it really feasible to apply KL-methods and likelihood analysis to data sets of this gargantuan size? We will argue that although an orthodox KL-analysis is not feasible, the answer to the question may nonetheless be an encouraging yes if an extra nonlinear compression step is added to make the analysis doable in practice. Let us first note that despite the statistical orthonormality property of a KL-compressed data set, the matrices that need to be inverted to find the ML-estimate are in general not diagonal. BCB^t is only diagonal at the one point in parameter space where θ = θ_0, i.e., the point corresponding to the parameter values that we assumed a priori. The ML-point generally lies elsewhere, and we need to find it by a numerical search in parameter space, so BCB^t will generally not even be sparse. This forces us to resort to Cholesky decomposition. For the n ≈ 4000 case of Tegmark & Bunn (1995), this took about 10 minutes per matrix on a workstation. Since the time scales as n³ and the storage as n², this would require about 30 years of CPU time for n = 10⁶, and about one terabyte of RAM. Even allowing for the exponential rate at which computer performance is improving, such a brute force likelihood analysis does not appear feasible for a megapixel CMB map within the next ten years. The crucial question thus becomes how large a compression factor we can expect to achieve.

5.1 The limits of linear compression

We will argue that a factor of 10 is just about all one can do with linear compression, but that quadratic "pre-compression" may be able to gain another factor of 1000 for a next generation CMB experiment. Consider the following simple one-parameter example: we are given n numbers x_i drawn from a Gaussian distribution with zero mean and an unknown variance θ that we wish to estimate. Thus

μ = 0,   C = θI,  (58)

so when we solve equation (29), we find that all the eigenvalues are identical: λ_k = 1/θ. Thus if we compress by keeping only the first n′ modes, the resulting error bar will be

Δθ = θ (2/n′)^{1/2}.  (59)

The fact that all eigenvalues are identical means that we would be quite stupid to throw away any modes at all, since they all contain the same amount of information. Simple as it is, this example nonetheless illustrates a difficulty with analyzing future CMB maps. Even in the best of all worlds, where we had complete sky coverage, we would encounter a problem of this type. To estimate a multipole C_ℓ, we would be faced with (2ℓ+1) coefficients a_ℓm with zero mean and unknown variance, just as in the example above, which means that the KL-method would be of no use at all in compressing this data. In Bunn (1995) and in one of the above examples, it was found that the COBE data set can be compressed down to about 400 numbers without losing much information. Where does this magic number 400 come from? In the absence of a galaxy cut, the number would probably be around 600, since beam smearing has eliminated virtually all information about multipoles ℓ > 25, and there are about 600 a_ℓm coefficients with ℓ < 25. The fact that the galaxy cut removes about one third of the sky then reduces the cosmological information by about a third. There is also a more direct way to see where the compression factor 10 came from. As mentioned in Section 4.4, pixelized maps are generally oversampled by a factor 2.5 or more to avoid aliasing problems, which means that the mean pixel separation is considerably smaller than the beam width. Since the sky map is two-dimensional, the number of pixels per beam area is roughly the square of this number, of the order of 10. Future CMB missions will probably use similar oversampling rates, which means that a compression factor of 10 is probably the most we can hope for with linear compression only.

5.2 Quadratic compression

Fortunately, we are not limited to linear data compression. Let us compress the data set of our previous example into a single number defined by

y ≡ Σ_{i=1}^n x_i².  (60)
Let us assume that n is very large, say n ≳ 100. Then y will be very close to Gaussian by the central limit theorem. This means that the mean is ⟨y⟩ = nθ and the variance is ⟨y²⟩ − ⟨y⟩² = 2nθ². Substituting this into equation (15) now gives

Δθ = [2/(n+4)]^{1/2} θ ≈ (2/n)^{1/2} θ,  (61)

i.e., the theoretically minimal error bar of equation (59) with n′ = n. (The extra 4 in equation (61), which might seem to indicate that one can attain smaller error bars than the theoretical minimum, should of course be ignored; it merely reflects the fact that the Gaussian approximation breaks down for small n.) In summary, we have found that whereas linear compression was powerless against our simple toy model, quadratic compression made mincemeat of the problem, condensing all the information into a single number. This result is hardly surprising, considering that the compressed data set y that we have defined in equation (60) is, up to a factor 1/n, the ML-estimate of the variance. It nonetheless has far-reaching



implications for the issue of how to compress future CMB maps. If we have complete sky coverage, and define the compressed data set as

y_ℓ ≡ [1/(2ℓ+1)] Σ_{m=−ℓ}^{ℓ} |a_ℓm|²,  (62)

then it is easy to show that the new Fisher information matrix will be identical to that of equation (18), which involved using the entire data set. For an experiment with a FWHM beamwidth of 4 arcminutes, there is virtually no information on multipoles above ℓ_max ≈ 3000, so this means that in the absence of a galaxy cut, we could compress the entire data set into 3000 numbers without losing any cosmological information at all. Roughly speaking, this works because the compression throws away all the phase information and keeps all spherically averaged amplitude information, and it is the latter that contains all the information about the power spectrum. (For non-Gaussian models such as ones involving topological defects, the power spectrum alone does of course not contain all the relevant information.)
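A sketch of this quadratic compression: equation (62) applied to simulated full-sky harmonic coefficients, where the spectrum 1/ℓ(ℓ+1) is just a hypothetical stand-in. Each y_ℓ is an unbiased estimate of C_ℓ, so a map of millions of pixels collapses to ℓ_max numbers with no loss of power-spectrum information for a Gaussian sky:

```python
import numpy as np

rng = np.random.default_rng(6)
lmax = 40
ell = np.arange(2, lmax + 1)
C_ell = 1.0 / (ell * (ell + 1))          # hypothetical true power spectrum

# Simulate harmonic coefficients a_lm for a full-sky Gaussian map and apply
# equation (62): y_l = (1/(2l+1)) sum_m |a_lm|^2.
y = np.empty_like(C_ell)
for i, l in enumerate(ell):
    # 2l+1 independent real degrees of freedom, each with variance C_l
    a = rng.normal(scale=np.sqrt(C_ell[i]), size=2 * l + 1)
    y[i] = np.mean(a ** 2)

# y_l is unbiased, with cosmic variance 2 C_l^2 / (2l+1).
frac = y / C_ell
assert abs(np.mean(frac) - 1) < 0.2      # unbiased to within sample scatter
```

The scatter of `frac` around unity at each ℓ is the cosmic variance term 2/(2ℓ+1), the irreducible floor that no compression scheme can beat.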

5.3 Real-world CMB maps: an outline of a compression recipe

The discussion above already included the effects of pixel noise and beam smearing. Barring systematic errors, there are two additional complications that will inevitably arise when analyzing future high-quality CMB maps: foregrounds, and incomplete sky coverage. Any attempt at foreground subtraction should of course be made before throwing away phase information, so that available spatial templates of galactic dust, radio point sources etc. can be utilized. For a recent discussion of such subtraction schemes, see e.g. Tegmark & Efstathiou (1996). The final result of the foreground treatment will almost certainly be a map where some regions, notably near the galactic plane, have been discarded altogether. The resulting incomplete sky coverage degrades information in two different ways: the sample variance increases, and the spectral resolution decreases. The former effect causes the variance in individual multipole estimates to grow as the inverse of the remaining sky area (Scott et al. 1994), and this increase in variance is automatically reflected in the final Fisher information matrix. The latter effect means that the y_ℓ defined by equation (62) is no longer a good estimate of the multipole C_ℓ. Rather, it is easy to show that ⟨y_ℓ⟩ will be a weighted average of all the multipoles C_ℓ′. These weights, known as the window function, generally form quite a broad distribution around ℓ, which means that the compressed data y_ℓ are effectively probing a smeared-out version of the power spectrum. For a 20° galactic cut, this smearing is found to be around Δℓ/ℓ ≈ 25%, which clearly destroys information about features such as the exact location of the Doppler peaks.

5.3.1 How to deal with incomplete sky coverage

Fortunately, the problem of incomplete sky coverage can for all practical purposes be eliminated. As described in detail by Tegmark (1996, hereafter T96), it is possible to obtain much narrower window functions by simply replacing the spherical harmonic coefficients in equation (62) by expansion coefficients with respect to a different set of functions, obtained by solving a certain eigenvalue problem. It is found that for a 20° galactic cut, the window function widths can be brought down to Δℓ ≈ 1 for all ℓ, corresponding to a relative smearing of less than a percent at ℓ ≈ 200, the scale of the first CDM Doppler peak. In other words, as long as none of the models between which we are trying to discriminate have sharp spectral features causing the power spectrum to jump discontinuously from one multipole to the next, virtually no information at all is lost in our quadratic compression. In general, smoothing only destroys information if there are features on the smoothing scale or below it. If the true power spectrum is similar to a CDM spectrum, it will typically vary on scales Δℓ ~ C_ℓ/(dC_ℓ/dℓ) ~ 100, at least for angular scales ℓ > 50. In other words, even smoothing it with window functions with Δℓ as large as 10 would hardly destroy any information at all about this part of the power spectrum. The quadratic compression of T96 produces a spectral resolution of Δℓ ~ 1/θ, where θ is the smallest dimension of the patch of sky analyzed, in radians. In other words, we can allow ourselves even more leeway than the galaxy cut dictates without losing information. This can be used to save CPU time in practice.
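The quoted numbers are easy to check: a patch a few degrees across gives Δℓ ~ 1/θ of order ten. A one-line sketch in Python:

```python
import math

def spectral_resolution(patch_deg):
    """Spectral resolution delta_l ~ 1/theta, with theta the smallest
    patch dimension converted from degrees to radians."""
    return 1.0 / math.radians(patch_deg)

# A 6-degree patch gives delta_l of roughly 10, well below the
# ~100 scale on which a CDM-like spectrum varies.
```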
Since we can achieve the spectral resolution Δℓ ≈ 10 with 6° × 6° patches of sky, we can form compressed data sets y separately for a mosaic of such patches covering all the available sky, thus radically reducing the size of the matrices that we need to diagonalize for the T96 method, and then simply average these different estimates of the power spectrum.

5.3.2 The final squeeze: the KL-method

Although the above-mentioned prescription will reduce the size of the data set dramatically, from perhaps 10^7 numbers to about 3000, there will still be considerable redundancy, since power spectrum estimates y_ℓ for neighboring ℓ-values will be correlated. Because of this, it is worthwhile to subject the new data set y to a regular KL-compression. We thus term the above-mentioned quadratic step pre-compression: it by no means needs to be optimal, and is simply done to reduce the data set to a small enough size that it can be fed into the KL-machinery without practical difficulties. The values y_ℓ are Gaussian to an excellent approximation for ℓ > 50, by the central limit theorem, since they are the sum of 2ℓ + 1 > 100 independent numbers. The remainder, however, are not. Since the Gaussian property makes such a dramatic simplification both in the compression step and in the likelihood analysis itself, we therefore recommend discarding the y_ℓ-values with ℓ < 50 and replacing them by the 2500 spherical harmonic coefficients a_ℓm with ℓ < 50 before proceeding to the KL-compression step. This way, the entire data set y will be Gaussian. In addition, no information whatsoever has been lost about the largest angular scales, where some models in fact do predict rather sharp spectral features which could render the quadratic compression step inappropriate.

In summary, we argue that the following prescription for analyzing a future 10^7-pixel CMB map will be fairly close to optimal:
(i) Foregrounds are subtracted making maximal use of multifrequency data and spatial templates obtained from other experiments.
(ii) The most contaminated regions, for instance the galactic plane, are discarded altogether.
(iii) The remaining sky is expanded in spherical harmonics up to ℓ = 50 and the coefficients saved.
(iv) This remaining sky is covered by a mosaic of overlapping regions, whose diameters are of order 5° to 10°.
(v) The angular power spectrum is estimated separately from each of these patches with the method of T96, up to ℓ = 3000.
(vi) All these power spectrum estimates are averaged.
(vii) The first 50 multipole estimates are discarded, and replaced by the 2500 numbers from step (iii).
(viii) The resulting data set (about 5500 numbers) is compressed with the KL-method.
(ix) All cosmological parameters of interest are estimated jointly from this compressed data set with a likelihood analysis.

5.3.3 Open problems

The above prescription for quadratic compression was necessarily rather sketchy, since a detailed treatment of these issues would go well beyond the scope of this paper. Indeed, the discussion above left several important questions unanswered, which are clearly worthwhile topics for further research. Here are two such examples:
How can information loss during pre-compression be minimized? Above we merely showed pre-compression to be lossless in the absence of noise (or with identical noise levels in all pixels). In the presence of noise, one would expect lossless compression to involve some form of inverse-variance pixel weighting, i.e., giving less weight to noisier pixels. On the other hand, pushing such noise weighting too far could broaden the window functions to the point where low spectral resolution led to irreversible information loss.
How is quadratic pre-compression best implemented numerically? For instance, are there particular choices of shapes and sizes of the above-mentioned patches that substantially facilitate the calculation of the final mean vector and correlation matrix? Is a direct analytic calculation of these quantities feasible (C involves terms quadratic in the power spectrum), or is it faster to resort to Monte Carlo techniques for this step?

5.4 Real-world galaxy surveys

Also for large future galaxy surveys, some form of pre-compression appears to be necessary before a KL-compression can be done. If we wish to retain all the information on clustering down to, say, 3 h^-1 Mpc scales in a 3D volume of typical dimension 300 h^-1 Mpc, we clearly need to "pixelize" it into at least (300/3)^3 = 10^6 numbers. Since even the power spectrum in the deeply non-linear regime contains valuable cosmological information, it would not seem justified to simply ignore these small scales. Unfortunately, the galaxy survey problem is considerably more difficult than the corresponding CMB problem, in that information about, for instance, redshift-space distortions lies hidden not merely in the overall power spectrum, but also in the phases, in the differences between radial and transverse clustering patterns. As we have noted, transforming to weighted harmonic amplitudes is one way of reducing the number of data "points" without losing cosmological information. For instance, in the analysis of the 1.2 Jy survey by HT95 and BHT95, only the first 1208 modes were used from the 2000 galaxies, with the limit on modes used being set by the survey size and the smallest scale still in the linear regime. However, if the full range of available modes is desired, we might need higher-order compression options. One way to implement quadratic compression, while retaining the important phase information, is to transform to some coordinate basis which is orthogonal in redshift space. We can then apply the quadratic compression to estimate a "power spectrum" in the transformed space, having lost none of the underlying phase information. Some progress along the lines of finding an orthonormal basis function for redshift space has been made by Hamilton & Culhane (1995). However, these methods still fail to adequately deal with shot-noise and the phase mixing introduced by a finite angular mask function. Thus the issue of whether one can do even better with galaxy surveys, while staying within the realms of numerical feasibility, remains a challenging open question.

We have given a comprehensive discussion of how best to estimate a set of cosmological parameters from a large data set, and concluded the following. Well-known information-theoretical results roughly speaking state that a brute-force likelihood analysis using the entire data set gives the most accurate parameter determination possible. For computational reasons, this will be unfeasible for the next generation of CMB maps and galaxy surveys. The solution is to use a good data compression scheme. The optimal linear compression method is the Karhunen-Loeve method, of which the so-called signal-to-noise eigenmode method is a special case. Although the standard KL-method applies only when estimating a single parameter, it can be generalized to the multi-parameter case by simply adding a step consisting of a singular value decomposition (SVD). This SVD step also provides a simple way out of the Catch-22 situation that one needs to specify the parameter values before one has measured them. The KL-method produces a compression factor of about 10 for typically sampled CMB maps, and also for the redshift-space distortion analysis of Heavens & Taylor (1995). However, this is not enough to handle a high-resolution next-generation CMB map.
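The KL-compression referred to above reduces to a generalized eigenvalue problem: find modes b satisfying S b = λ N b for signal and noise covariance matrices S and N, and keep the modes with the highest signal-to-noise eigenvalues. A minimal numerical sketch, with made-up diagonal matrices standing in for real covariances:

```python
import numpy as np
from scipy.linalg import eigh

def kl_compress(S, N, keep):
    """Signal-to-noise eigenmodes: solve S b = lam N b and keep the
    `keep` modes with the largest eigenvalues (highest S/N)."""
    lam, B = eigh(S, N)               # generalized symmetric eigenproblem,
    order = np.argsort(lam)[::-1]     # eigenvalues returned in ascending order
    return lam[order[:keep]], B[:, order[:keep]]
```

Compressing a data vector x then amounts to forming B.T @ x for the retained modes; the SVD generalization discussed in the text handles the case where several parameter derivative matrices must be treated jointly.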


Fortunately, it appears that this can be remedied by adding a quadratic pre-compression step without substantial information loss. Cosmology, which used to be a data-starved science, is now experiencing a formidable explosion of data in the form of both CMB maps and galaxy redshift surveys. Around the turn of the millennium, we will probably be equipped with data sets so rich in information that most of the classical cosmological parameters can, in principle, be determined with accuracies of a few percent or better. Whether this accuracy will also be attainable in practice depends crucially on what data-analysis methods are available. We have argued that the prospects of achieving this accuracy goal are quite promising, especially on the CMB side (which is slightly simpler), by using a multi-parameter generalization of the Karhunen-Loeve method together with a quadratic pre-compression scheme. However, much work remains to be done on pre-compression issues to ensure that we can take full advantage of the wealth of data that awaits us.

The authors wish to thank Ted Bunn, Andrew Hamilton and our referee, Uros Seljak, for useful comments and suggestions. ANT is supported by a PPARC research assistantship. MT acknowledges partial support from European Union contract CHRX-CT93-0120 and Deutsche Forschungsgemeinschaft grant SFB-375.




References

Bennett, C. L. et al. 1996, ApJ, 464, L1
Bond, J. R. et al. 1994, Phys. Rev. Lett., 72, 13
Bond, J. R. 1995, Phys. Rev. Lett., 74, 4369
Bunn, E. F. 1995, Ph.D. Thesis, U.C. Berkeley, ftp pac2.berkeley.edu/pub/bunn
Bunn, E. F. & Sugiyama, N. 1995, ApJ, 446, 49
Bunn, E. F., Scott, D. & White, M. 1995, ApJ, 441, L9
Davis, M. & Peebles, P. J. E. 1983, ApJ, 267, 465
Dodelson, S., Gates, E. & Stebbins, A. 1996, ApJ, 467, 10
Feldman, H. A., Kaiser, N. & Peacock, J. A. 1994, ApJ, 426, 23
Fisher, R. A. 1935, J. Roy. Stat. Soc., 98, 39
Fisher, K. B., Huchra, J. P., Strauss, M. A., Davis, M., Yahil, A. & Schlegel, D. 1995, ApJS, 100, 69
Gorski, K. M. 1994, ApJ, 430, L85
Gunn, J. & Weinberg, D. 1995, in Wide-field Spectroscopy and the Distant Universe, proc. 35th Herstmonceux conference, eds S. J. Maddox & A. Aragon-Salamanca, World Scientific, p. 3
Hamilton, A. J. S. & Culhane, M., MNRAS, 278, 73
Heavens, A. F. & Taylor, A. N. 1995, MNRAS, 483, 497
Hu, W., Bunn, E. F. & Sugiyama, N. 1995, ApJ, 447, L59
Hu, W., Scott, D., Sugiyama, N. & White, M. 1995, Phys. Rev. D, 52, 5498
Jungman, G., Kamionkowski, M., Kosowsky, A. & Spergel, D. N. 1996a, Phys. Rev. Lett., 76, 1007
Jungman, G., Kamionkowski, M., Kosowsky, A. & Spergel, D. N. 1996b, Phys. Rev. D, 54, 1332
Kaiser, N. 1987, MNRAS, 227, 1
Karhunen, K. 1947, Über lineare Methoden in der Wahrscheinlichkeitsrechnung (Kirjapaino Oy. Sana, Helsinki)
Kendall, M. G. & Stuart, A. 1969, The Advanced Theory of Statistics, Volume II (Griffin, London)
Kenney, J. F. & Keeping, E. S. 1951, Mathematics of Statistics, Part II, 2nd ed. (Van Nostrand, New York)
Knox, L. 1995, Phys. Rev. D, 52, 4307
Lawrence, A., Rowan-Robinson, M., Saunders, W., Parry, I., Xia Xiaoyang, Ellis, R. S., Frenk, C. S., Efstathiou, G. P., Kaiser, N. & Crawford, J. 1996, to be submitted to MNRAS
Liddle, A. R. et al. 1996, MNRAS, 281, 531
Peacock, J. A. & Dodds, S. J. 1994, MNRAS, 267, 1020
Press, W. H., Teukolsky, S. A., Vetterling, W. T. & Flannery, B. P. 1992, Numerical Recipes (Cambridge University Press, Cambridge)
Saunders, W. et al. 1996, in preparation
Scott, D., Srednicki, M. & White, M. 1994, ApJ, 421, L5
Seljak, U. & Bertschinger, E. 1993, ApJ, 417, L9
Seljak, U. & Zaldarriaga, M. 1996, preprint astro-ph/9603033
Smoot, G. F. et al. 1992, ApJ, 396, L1
Taylor, K. 1995, in Wide-field Spectroscopy and the Distant Universe, proc. 35th Herstmonceux conference, eds S. J. Maddox & A. Aragon-Salamanca, World Scientific, p. 15
Tegmark, M. & Bunn, E. F. 1995, ApJ, 455, 1
Tegmark, M. 1995, ApJ, 455, 429
Tegmark, M. 1996, MNRAS, 280, 299
Tegmark, M. & Efstathiou, G. 1996, MNRAS, 281, 1297
Therrien, C. W. 1992, Discrete Random Signals and Statistical Signal Processing (Prentice Hall, Englewood Cliffs)
Vogeley, M. S. 1995, in Wide-Field Spectroscopy and the Distant Universe, eds Maddox & Aragon-Salamanca (World Scientific, Singapore)
Vogeley, M. S. & Szalay, A. S. 1996, ApJ, 465, 34
White, M. & Bunn, E. F. 1995, ApJ, 450, 477
Yamamoto, K. & Bunn, E. F. 1996, ApJ, 464, 8
