
On the Performance of Spectral Graph Partitioning Methods

Stephen Guattery and Gary L. Miller, December 1994

CMU-CS-94-228

School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213

To appear in the Sixth Annual ACM/SIAM Symposium on Discrete Algorithms (SODA ’95)

This work was supported in part by NSF Grant CCR-9016641, and in part by the Avionics Laboratory, Wright Research and Development Center, Aeronautical Systems Division (AFSC), U.S. Air Force, Wright-Patterson AFB, OH 45433-6534 under Contract F33615-90-C-1465, Arpa Order No. 7597. Support also was sponsored by the Air Force Materiel Command (AFMC) and the Advanced Research Projects Agency (ARPA) under contract number F19628-93-C0193. In addition, support was provided by IBM, Motorola, TRW, the U.S. Postal Service, and an NSF Presidential Young Investigator Award under Grant No. CCR-8858087.

Keywords: Algorithms, Graph Theory, Graph Algorithms, Numerical Linear Algebra, Eigenvalues

Abstract

Computing graph separators is an important step in many graph algorithms. A popular technique for finding separators involves spectral methods. However, there is not much theoretical analysis of the quality of the separators produced by this technique; instead it is usually claimed that spectral methods "work well in practice." We present an initial attempt at such analysis. In particular, we consider two popular spectral separator algorithms, and provide counterexamples that show these algorithms perform poorly on certain graphs. We also consider a generalized definition of spectral methods that allows the use of some specified number of the eigenvectors corresponding to the smallest eigenvalues of the Laplacian matrix of a graph, and show that if such algorithms use a constant number of eigenvectors, then there are graphs for which they do no better than using only the second smallest eigenvector. Further, when applied to these graphs the algorithm based on the second smallest eigenvector performs poorly with respect to theoretical bounds. Even if an algorithm meeting the generalized definition uses up to n^ε eigenvectors for 0 < ε < 1/4, there exist graphs for which the algorithm fails to find a separator with a cut quotient within n^(1/4−ε) − 1 of the isoperimetric number. We also introduce some facts about the structure of eigenvectors of certain types of Laplacian and symmetric matrices; these facts provide the basis for the analysis of the counterexamples.

1 Introduction

Spectral methods (i.e., methods using the eigenvalues and eigenvectors of a matrix representation of a graph) are widely used to compute graph separators. Typically, the Laplacian matrix representation B of a graph G is used; the Laplacian B of a graph G on n vertices is the n × n matrix with the degrees of the vertices of G on the diagonal, and entry b_ij = −1 if G has the edge (v_i, v_j) and 0 otherwise. The eigenvector u2 corresponding to λ2 (the second-smallest eigenvalue of B) is computed, and the vertices of the graph are partitioned according to the values of their corresponding entries in u2 [PSL90, HK92]. The goal is to compute a small separator; that is, as few edges or vertices as possible should be deleted from the graph to achieve the partition.

Although spectral methods are popular, there is little theoretical analysis of how well they do in producing small separators. Instead, it is usually claimed that such methods "work well in practice," and tables of results for specific examples are often included in papers (see e.g. [PSL90]). Thus there is little guidance for someone wishing to compute separators as to whether or not this technique is appropriate. Ideally, practitioners should have a characterization of classes of graphs for which spectral separator techniques work well; this characterization might be in terms of how far the computed separators can be from optimal.

As a first step in this direction, we consider two spectral separation algorithms that partition the vertices on the basis of the values of their corresponding entries in u2, and provide counterexamples for which each of the algorithms produces poor separators. We further consider a generalized definition of spectral methods that allows the use of more than one of the eigenvectors corresponding to the smallest non-zero eigenvalues, and show that there are graphs for which any such algorithm does poorly.
The first algorithm is a popular technique that consists of bisecting a graph by partitioning the vertices into two equal-sized sets based on each vertex's entry in the eigenvector corresponding to the second-smallest eigenvalue. A graph in our first counterexample class looks like a ladder with the top 2/3 of its rungs kicked out (see Figure 1); a straightforward spectral bisection algorithm cuts the remaining rungs, whereas the optimal bisection is made by cutting across the ladder above the remaining rungs. (We refer to this graph as the "roach graph" because its outline looks roughly like the body of a cockroach and two long antennae.)

It is possible to modify the spectral bisection algorithm in ways that allow it to generate a better separator for the roach graph. Some examples of possible modifications are presented in [HK92]; these examples still use a partition based on u2. We consider a simple spectral separator algorithm, the "best threshold cut" algorithm, based on the most general of these suggested modifications, and present a second class of graphs that defeats this algorithm. This class of graphs consists of crossproducts of path graphs and graphs consisting of a pair of complete binary trees connected by an edge between their roots.

Finally, we consider a more general definition of purely spectral separator algorithms that subsumes the two preceding algorithms. This general definition allows the use of some specified number of eigenvectors corresponding to the smallest eigenvalues of the Laplacian of a graph. For any such algorithm that uses a fixed number of eigenvectors we show there are graphs for which it does no better than using the "best threshold cut" algorithm. Further, the separator produced when the "best threshold cut" algorithm is applied to these graphs is as bad as possible (to within a constant) with respect to theoretical bounds on the size of the separators produced.


Figure 1: The Roach Graph (the two paths are v_1, ..., v_3k and v_{3k+1}, ..., v_{6k}; the k rungs join v_{2k+i} to v_{5k+i})

We also show that if a purely spectral algorithm uses up to n^ε eigenvectors for 0 < ε < 1/4, there exist graphs for which the algorithm fails to find a separator with a cut quotient (i.e., the ratio between the number of edges cut and the size of the smaller set in the vertex partition) within n^(1/4−ε) − 1 of the optimum (the optimum cut quotient is called the isoperimetric number). We also note that the counterexample graphs can be extended to graphs that could conceivably be used as three-dimensional finite-element meshes; that is, graphs that could be encountered in practice.

This paper makes one additional contribution: our counterexamples have simple structures and intuitively would be expected to cause problems for spectral separator algorithms. The challenge is to provide good bounds on λ2 for these graphs. For this purpose we have developed theorems about the spectra of graphs with particular symmetries that exist in our counterexamples. Specifics are given in the text that follows.

Section 2 gives some history of spectral methods and details of the algorithms we discuss in this paper. Section 3 gives the graph and matrix terminology that we will use, and presents some useful facts about Laplacian matrices. Section 4 gives the counterexample for the spectral bisection algorithm; Section 5 gives the counterexample for the "best threshold cut" algorithm. Finally, Section 6 discusses our generalized notion of spectral separator algorithms, and shows that there are graphs for which any such algorithm performs poorly.
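The roach graph is easy to build and experiment with. The following numpy sketch (the helper names and 0-based indexing are ours, not from the paper) constructs the graph of Figure 1 for k = 8, runs median-split spectral bisection, and compares its cut against the two-edge bisection made by cutting across the ladder just above the rungs; by the analysis in Section 4, the spectral cut is expected to be much larger:

```python
import numpy as np

def roach_graph(k):
    """Edges of the roach graph: two paths of 3k vertices (0..3k-1 and
    3k..6k-1) joined by k rungs between the last k vertex pairs."""
    edges = []
    for i in range(3 * k - 1):
        edges.append((i, i + 1))                  # one side of the ladder
        edges.append((3 * k + i, 3 * k + i + 1))  # the other side
    for i in range(k):
        edges.append((2 * k + i, 5 * k + i))      # the k remaining rungs
    return edges

def cut_size(edges, side):
    """Number of edges crossing a boolean vertex partition."""
    return sum(side[u] != side[v] for u, v in edges)

k = 8
n = 6 * k
edges = roach_graph(k)
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1
B = np.diag(A.sum(axis=1)) - A                   # Laplacian B = D - A
u2 = np.linalg.eigh(B)[1][:, 1]                  # Fiedler vector
spectral = cut_size(edges, u2 > np.median(u2))   # median-split bisection
# the good bisection cuts straight across both paths just above the rungs
across = (np.arange(n) % (3 * k)) >= 3 * k // 2
optimal = cut_size(edges, across)
print(spectral, optimal)
```

The across-the-ladder bisection always cuts exactly 2 edges, independent of k, while the spectral cut grows with the number of rungs.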

2 Spectral Methods for Computing Separators

The roots of spectral partitioning go back to Donath and Hoffman [DH73], who proved a lower bound on the size of the minimum bisection of a graph, and Fiedler [Fie73] [Fie75], who explored the properties of λ2 and its associated eigenvector for the Laplacian of a graph. There has been much subsequent work, including Barnes's partitioning algorithm [Bar82], Boppana's work that included a stronger lower bound on the minimum bisection size [Bop87], and the particular bisection and graph partitioning problems that we are considering in this paper [HK92] [PSL90] [Sim91]. (We note that spectral methods have not been limited to graph partitioning; work has been done using the spectrum of the adjacency matrix in graph coloring [AG84] and using the Laplacian spectrum to prove theorems about expander graph and superconcentrator properties [AM85] [Alo86] [AGM87].

The work on expanders has explored the relationship of λ2 to the isoperimetric number; Mohar has given an upper bound on the isoperimetric number using a strong discrete version of the Cheeger inequality [Moh89]. Reference [CDS79] is a book-length treatment of graph spectra, and it predates many of the results cited above.)

A basic way of computing a graph bisection using spectral information is presented in [PSL90]. We will refer to this algorithm as spectral bisection. Spectral bisection works as follows:

Represent G by its Laplacian B, and compute u2, the eigenvector corresponding to λ2 of B.

Assign each vertex the value of its corresponding entry in u2. This is the characteristic valuation of G.

Compute the median of the elements of u2.

Partition the vertices of G as follows: the vertices whose values are less than or equal to the median element form one part of the partition; the rest of the vertices form the other part. The set of all edges between the two parts forms an edge separator. If a vertex separator is desired, it can be computed from the edge separator as described in the next section.

Since the graph bisection problem is NP-complete [GJ79], spectral bisection may not give an optimum result. That is, spectral bisection is a heuristic method. A number of modifications have been proposed that may improve on its performance. These modified heuristics can give splits other than bisections. In such cases, we will use the cut quotient or separator ratio (the separator ratio is the ratio between the number of edges cut and the product of the sizes of the two sets in the vertex partition) in judging how close the split is to optimal. Computing the separator with the minimum ratio is NP-hard (see, e.g., [LR88]). The following modifications, all of which use the characteristic valuation, are presented in [HK92]:

Partition the vertices based on the signs of their values;

Look for a large gap in the sorted list of eigenvector components, and partition the vertices according to whether their values are above or below the gap; and

Sort the vertices according to value. For each index 1 ≤ i ≤ n − 1, consider the ratio for the separator produced by splitting the vertices into those with sorted index ≤ i and those with sorted index > i. Choose the split that provides the best separator ratio.

Note that the last idea subsumes the first two. We will consider a variant of this algorithm below.
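Spectral bisection as described above can be sketched in a few lines. This is a minimal numpy sketch, not the authors' implementation; the function name and the 6-cycle test case are ours:

```python
import numpy as np

def spectral_bisection(A):
    """Spectral bisection: build the Laplacian, take the eigenvector u2 for
    the second-smallest eigenvalue (the characteristic valuation), and split
    the vertices at its median entry."""
    B = np.diag(A.sum(axis=1)) - A          # Laplacian B = D - A
    _, U = np.linalg.eigh(B)                # eigenvalues in ascending order
    u2 = U[:, 1]                            # Fiedler vector
    median = np.median(u2)
    part1 = np.flatnonzero(u2 <= median)    # values <= median
    part2 = np.flatnonzero(u2 > median)
    # the edges between the two parts form the edge separator
    separator = [(i, j) for i in part1 for j in part2 if A[i, j] != 0]
    return part1, part2, separator

# a 6-cycle: any median split of a sampled sinusoidal u2 is a contiguous
# arc, so the separator should contain exactly 2 edges
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
p1, p2, sep = spectral_bisection(A)
print(len(sep))
```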
Since this algorithm does not consider the case where multiple vertices may have the same value, we will specify that only splits between vertices with different values are considered (we refer to such cuts as threshold cuts). We call this algorithm the "best threshold cut" algorithm; the fact that we have slightly changed the definition above does not alter its performance with respect to the counterexamples below (other than slightly simplifying the analysis). Also note that the idea of cutting at an arbitrary point along the sorted order can be extended to choosing two split points, where the corresponding partitions are the vertices with values between

the split points, and those with values above the upper or below the lower split point. Again, the pair yielding the best ratio is chosen.

The algorithms mentioned so far have used only the eigenvector u2. Another possibility is to look at the partitions generated by the set of eigenvectors for some number of smallest eigenvalues: for each vertex, a value would be assigned by computing a function of that vertex's eigenvector components. Partitions could then be generated in the same way as they are for u2 in the various algorithms given above.

Given the variety of heuristics cited above, it would be nice to know which ones work well for which classes of graphs. It would be particularly useful if it were possible to state reasonable bounds on the performance of these heuristics for classes of graphs commonly used in practice (e.g., planar graphs, planar graphs of bounded degree, three-dimensional finite element meshes, etc.). Unfortunately, this is not the case. We start by proving that spectral bisection may produce a bad separator for a bounded-degree planar graph in Section 4; first, however, we need to introduce some terminology and background results.
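The "best threshold cut" algorithm defined above can be sketched as a sweep over the sorted characteristic valuation. This is a hedged numpy sketch (the function name and the tie handling via np.isclose are ours); here the cut quotient |S| / min(|G1|, |G2|) is minimized:

```python
import numpy as np

def best_threshold_cut(A):
    """Sort vertices by their u2 entries, sweep over all splits between
    distinct values (threshold cuts), and keep the cut with the smallest
    cut quotient |S| / min(|G1|, |G2|)."""
    n = A.shape[0]
    B = np.diag(A.sum(axis=1)) - A
    _, U = np.linalg.eigh(B)
    u2 = U[:, 1]
    order = np.argsort(u2)
    best = (np.inf, None)
    for i in range(n - 1):
        # only split between vertices with different values
        if np.isclose(u2[order[i]], u2[order[i + 1]]):
            continue
        mask = np.zeros(n, dtype=bool)
        mask[order[:i + 1]] = True
        cut = np.count_nonzero(A[mask][:, ~mask])
        quotient = cut / min(i + 1, n - i - 1)
        if quotient < best[0]:
            best = (quotient, mask.copy())
    return best

# path of 8 vertices: u2 is monotone along the path, every threshold cut
# severs one edge, and the best quotient is 1 / 4 at the middle split
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1
q, mask = best_threshold_cut(A)
print(q)
```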

3 Terminology, Notation, and Background Results

We assume that the reader is familiar with the basic definitions of graph theory (in particular, for undirected graphs), and with the basic definitions and results of matrix theory. A graph consists of a set of vertices V and a set of edges E; we denote the vertices (respectively edges) of a particular graph G as V(G) (respectively E(G)) if there is any ambiguity about which graph is referred to. The notation |G| will sometimes be used as a shorthand for |V(G)|. When it is clear which graph we are referring to, we will use n to denote |V|.

We use capital letters to represent matrices and bold lower-case letters for vectors. For a matrix A, a_ij or A[ij] represents the element in row i and column j; for the vector x, x_i or x[i] represents the ith entry in the vector. The notation with square brackets is useful in cases where adding subscripts to lower-case letters would be awkward (e.g., where the matrix or vector name is already subscripted). The notation x = 0 indicates that all entries of the vector x are zero; ~1 indicates the vector that has 1 for every entry.

For ease of reference, we will index the eigenvalues of an n × n matrix in non-decreasing order: λ1 represents the smallest eigenvalue, and λn the largest. For 1 < i < n, we have λ_{i−1} ≤ λ_i ≤ λ_{i+1}. We use the notation λ_i(A) (respectively λ_i(G)) to indicate the ith eigenvalue of matrix A (respectively of the Laplacian of graph G) if there is any ambiguity about which matrix (respectively graph) the eigenvalue belongs to. u_i represents the eigenvector corresponding to λ_i.

We use the term path graph for a tree that has exactly two vertices of degree one. That is, a path graph is a graph consisting of exactly its maximal path. The crossproduct of two graphs G and H (denoted G × H) is a graph on the vertex set {(u, v) | u ∈ V(G), v ∈ V(H)}, with ((u, v), (u′, v′)) in the edge set if and only if either u = u′ and (v, v′) ∈ E(H), or v = v′ and (u, u′) ∈ E(G). It is easy to see that G × H and H × G are isomorphic.
One way to think of a graph crossproduct is as follows: replace every vertex in G with a copy of H. Each edge e in G is then replaced by |H| edges, one between each pair of corresponding vertices in the copies of H that have replaced the endpoints of e. That is, for each


edge (v_i, v_j) in G, there is a copy i and a copy j of H. For each vertex u in H, add an edge between vertex u in copy i and the vertex u in copy j. An example is shown in the figure below.


Figure 2: A Graph Crossproduct Example

For a connected graph G, an edge separator is a set S of edges that, if removed, would break the graph into two (not necessarily connected) components G1 and G2 such that there are no edges between G1 and G2. (We will assume that an edge separator is a minimal set with respect to the particular G1 and G2.) A vertex separator is a set S of vertices such that if these vertices and all incident edges are removed, the graph is broken into two components G1 and G2 such that there are no edges between G1 and G2 (again, we will assume that such a separator is minimal). The goal in finding separators is to find a small separator that breaks the graph into two fairly large pieces; often this notion of "large pieces" is expressed as a restriction that the number of vertices in either component be at least some specified fraction of the number of vertices in G. For edge separators, this can be stated more generally in terms of the separator ratio ρ, defined as |S| / (|G1| · |G2|). The optimum ratio separator ρ_opt is the one that minimizes the separator ratio over all separators

for a particular graph [LR88]. A related concept (again, for edge separators) is the isoperimetric number i(G), defined as:

    i(G) = min over separators S of |S| / min(|G1|, |G2|).

We will refer to the quantity |S| / min(|G1|, |G2|) as the cut quotient for the edge separator S. It is easy to see that n ρ_opt ≥ i(G) ≥ n ρ_opt / 2. As noted in the Introduction above, finding a cut with the minimum separator ratio or with a cut quotient equal to the isoperimetric number is NP-hard.

It is well known that an edge separator S can easily be converted into a vertex separator S′ by considering the bipartite graph induced by S (where the parts of the bipartition are determined by the components G1 and G2), and setting S′ to be a minimum vertex cover for that graph.

Graphs can be represented by matrices. For example, the adjacency matrix A of a graph G is defined as a_ij = 1 if and only if (v_i, v_j) ∈ E(G); a_ij = 0 otherwise. (For such representations we will assume that the vertices have been numbered to correspond to the indices used in the matrix.) A common matrix representation of graphs is the Laplacian. Let D be the matrix with d_ii = degree(v_i) for v_i ∈ V(G), and all off-diagonal entries equal to zero. Let A be the adjacency matrix for G as defined above. Then the Laplacian representation of G is the matrix B = D − A.
The following are some useful facts about the Laplacian matrix:

The Laplacian is symmetric positive semidefinite, so all its eigenvalues are greater than or equal to 0 (see e.g. [Moh88]).

A graph G is connected if and only if 0 is a simple eigenvalue of the Laplacian of G (see e.g. [Moh88]).

The following characterization of λ2 holds (see e.g. [Fie73]):

    λ2 = min over x ⊥ ~1 of (x^T B x) / (x^T x).

If G is a crossproduct of two graphs G1 and G2, then the eigenvalues of the Laplacian of G are all pairwise sums of the eigenvalues of G1 and G2 (see e.g. [Moh88]).

For any vector x and Laplacian matrix B of the graph G, we have (see e.g. [HK92]):

    x^T B x = sum over (v_i, v_j) ∈ E(G) of (x_i − x_j)^2.    (1)

For a graph G and its Laplacian B, λ2 of B figures in upper and lower bounds on the isoperimetric number for G [Moh89]. In particular, if G is not one of K1, K2, or K3, we have:

    λ2 / 2 ≤ i(G) ≤ sqrt(λ2 (2Δ − λ2)),    (2)

where Δ is the maximum degree of any vertex in G. The lower bound implies that the size of any bisection is at least n λ2 / 4.
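The pairwise-sum fact for crossproducts can be checked numerically using the standard Kronecker-sum identity L(G × H) = L(G) ⊗ I + I ⊗ L(H); this sketch (helper name ours) compares the spectrum of the product Laplacian with all pairwise sums of factor eigenvalues:

```python
import numpy as np

def path_laplacian(n):
    """Laplacian of the path graph on n vertices."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return np.diag(A.sum(axis=1)) - A

# L(G x H) = L(G) kron I + I kron L(H) for the crossproduct defined above
LG, LH = path_laplacian(3), path_laplacian(2)
Lprod = np.kron(LG, np.eye(2)) + np.kron(np.eye(3), LH)

evals = np.sort(np.linalg.eigvalsh(Lprod))
pairwise = np.sort([a + b for a in np.linalg.eigvalsh(LG)
                          for b in np.linalg.eigvalsh(LH)])
print(np.allclose(evals, pairwise))
```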

The proof of the upper bound in (2) has interesting implications about the threshold cuts based on the second eigenvector. For any connected graph G, consider the characteristic valuation. The vertices of G will receive k ≤ n distinct values; let t1 > t2 > ... > tk be these values. For each threshold t_i, i < k, divide the vertices into those with values greater than t_i, and those with values less than or equal to t_i. Compute the cut quotient q_i for each such cut, and let q_min be the minimum over all q_i's. The following theorem can be derived from the proof of Theorem 4.2 in [Moh89]:

Theorem 3.1 Let G be a connected graph with maximal vertex degree Δ and second smallest eigenvalue λ2. Further, let G not be any of K1, K2, or K3. Let q_min be as defined above. Then

    q_min ≤ sqrt(λ2 (2Δ − λ2)).
This can be extended to the separator ratio for the best u2 cut in the obvious way.

A weighted graph is a graph for which a real value w_i is associated with each vertex v_i, and a real, nonzero weight w_ij is associated with each edge (v_i, v_j) (we consider a zero edge weight to indicate the lack of an edge). For our analysis, we need a matrix representation for weighted graphs. Fiedler extended the notion of the Laplacian to graphs with positive edge weights [Fie75]; he referred to this representation as the generalized Laplacian. However, we need an even more general representation that can be used for any weighted graph. Hence we define the standard matrix representation B of a weighted graph G as follows: B has b_ii = w_i; for i ≠ j and (v_i, v_j) ∈ E(G), b_ij = −w_ij; and b_ij = 0 otherwise. Note that the Laplacian matrix of a graph is also the standard matrix representation of the graph with vertex weights set to be the degrees of the vertices, and all edge weights set to 1. Note that the standard matrix representation of any weighted graph is a real symmetric matrix, and that any such matrix can be represented as a specific weighted graph. Any theorems about symmetric, real matrices apply to standard matrix representations of graphs. The following two theorems about the interlacing of eigenvalues are particularly useful when applied to standard matrix representations.

The First Interlacing Property (the following statement is from page 411 of reference [GL89], which indexes eigenvalues in non-increasing order; the proof is in [Wil65]): If A_r denotes the leading r × r principal submatrix of an n × n symmetric matrix A, then for r = 1 : n − 1 the following interlacing property holds:

    λ_{r+1}(A_{r+1}) ≤ λ_r(A_r) ≤ λ_r(A_{r+1}) ≤ ... ≤ λ_2(A_{r+1}) ≤ λ_1(A_r) ≤ λ_1(A_{r+1}).

The Second Interlacing Property (the following statement is a restricted version of a theorem from page 412 of reference [GL89]; the proof is in [Wil65]): Suppose B = A + τ c c^T, where A is a real symmetric n × n matrix, c is a real unit-length vector, and τ is real and greater than 0. Then for all i, 1 ≤ i ≤ n − 1, we have:

    λ_i(A) ≤ λ_i(B) ≤ λ_{i+1}(A).

The Second Interlacing Property implies that if an edge e is added to a graph G to produce the graph G′, then λ2(G) ≤ λ2(G′).
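The edge-addition consequence can be checked numerically. Adding edge (u, v) to a Laplacian is the rank-one update B + τ c c^T with τ = 2 and c = (e_u − e_v)/sqrt(2), so the Second Interlacing Property applies; the sketch below (helper names ours) closes a path into a cycle:

```python
import numpy as np

def laplacian(n, edges):
    A = np.zeros((n, n))
    for u, v in edges:
        A[u, v] = A[v, u] = 1
    return np.diag(A.sum(axis=1)) - A

def lambda2(B):
    """Second-smallest eigenvalue of a symmetric matrix."""
    return np.sort(np.linalg.eigvalsh(B))[1]

# adding the edge (0, n-1) is the rank-one update tau * c c^T with
# tau = 2 and c = (e_0 - e_{n-1}) / sqrt(2), so lambda2 cannot decrease
n = 6
path = [(i, i + 1) for i in range(n - 1)]
before = lambda2(laplacian(n, path))
after = lambda2(laplacian(n, path + [(0, n - 1)]))  # the 6-cycle
print(before <= after)
```

For the 6-cycle, λ2 = 2 − 2 cos(2π/6) = 1, strictly larger than λ2 of the 6-vertex path.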

3.1 The Structure of Laplacian and Standard Matrix Representation Eigenvectors

The theorems and lemmas presented in this section are useful in proving results about the eigenvectors of the families of graphs presented in later sections. The proofs of some of these results are long and technical; a reader who is interested only in understanding the counterexamples and their implications presented later in this report is advised to look at the theorem statements and examples, and to skip the proofs (although the rules for constructing odd and even components at the end of Theorem 3.4 may be of interest to some readers).

The first set of results concerns eigenvalues of Laplacian matrices of graphs with automorphisms of order 2. A graph automorphism is a permutation π on the vertices of the graph G such that (v_i, v_j) ∈ E(G) if and only if (v_π(i), v_π(j)) ∈ E(G). The order of a graph automorphism is the order of the permutation on the vertices. For weighted graphs, we add two conditions to the definition of automorphism: the weights of vertices v_i and v_π(i) must be equal for all i, and the weights of edges (v_i, v_j) and (v_π(i), v_π(j)) must be equal.

The next two theorems concern the structure of eigenvectors with respect to automorphisms of order 2. They hold both for Laplacians of graphs under the standard definition of automorphism, and for standard matrix representations of weighted graphs under the definition of automorphisms for weighted graphs. Let G be a graph with an automorphism π of order 2. Let B be the Laplacian of G. A vector x that has x_i = x_π(i) for all i in the range 1 ≤ i ≤ n is an even vector with respect to the automorphism π; an odd vector y has y_i = −y_π(i) for all i. It is easy to show that for any even vector x and odd vector y (both with respect to π), x and y are orthogonal.

Theorem 3.2 [Even-Odd Eigenvector Theorem] Let B be the Laplacian of a graph G that has an automorphism π of order 2. Then there exists a complete set U of orthogonal eigenvectors of B such that any eigenvector u ∈ U is either even or odd with respect to π. This also holds if G is a weighted graph, B the standard matrix representation of G, and π a weighted graph automorphism of order 2.

Proof: Let P be the permutation matrix that corresponds to the automorphism π. Then P^T B P = B. If u is an eigenvector of B with eigenvalue λ, then so is Pu. We have the following by the definition of automorphism:

    P^T B P u = B u = λ u.

Since the automorphism is of order 2, PP = I. Multiplying each side of the equation above by P thus gives

    B (P u) = P (λ u) = λ (P u).

For an even vector x, Px = x; for an odd vector y, Py = −y. P allows us to uniquely decompose any vector x into an odd component x_odd and an even component x_even as follows:

    x_odd = (x − Px) / 2, and x_even = (x + Px) / 2.

For any non-zero x, at least one of the even or odd parts must also be non-zero. Let U′ be any complete set of eigenvectors of B. For an eigenvector u ∈ U′, it is easy to see that a non-zero even or odd component will be an eigenvector for the same eigenvalue. Since u_odd + u_even = u, the set of odd and even eigenvectors resulting from decomposing all the eigenvectors spans the same space as U′. This implies the claimed result. The proof generalizes to weighted graphs.

Corollary 3.3 Let B be the standard matrix representation of a weighted graph G that has one or more automorphisms of order 2. Then the eigenvector for any simple eigenvalue is either even or odd with respect to every such automorphism.

Proof: Let u be the eigenvector for some simple eigenvalue λ. Consider the decomposition of u into odd and even parts with respect to some automorphism π of order 2. If both parts were non-zero, they would be orthogonal eigenvectors for λ, contradicting the assumption that λ is simple. Therefore either the odd part or the even part must be zero.
Since Laplacians can be considered as standard matrix representations given the right weight assignments, the preceding result also holds for Laplacians.

Theorem 3.4 [Even-Odd Graph Decomposition Theorem] For every weighted graph G with standard matrix representation B and an automorphism π of order 2, there exist two smaller weighted graphs G_odd and G_even (the odd and even components respectively) such that the eigenvalues of B_odd (the standard matrix representation of G_odd) are odd eigenvalues of B, the eigenvalues of B_even (the standard matrix representation of G_even) are even eigenvalues of B, and |V(G_odd)| + |V(G_even)| = |V(G)|. Further, a complete set of eigenvectors of B can be constructed from the eigenvectors of B_odd and B_even in a straightforward way.

Proof: We will demonstrate a similarity transform that when applied to B gives a reducible matrix with two blocks corresponding to B_odd and B_even. First we need to introduce some notation. The vertices of G can be divided into two disjoint sets on the basis of how π operates on them. Let V_f be the set of vertices v_i such that π(i) = i (i.e., the vertices fixed by π); and let V_m be the set of vertices v_j such that π(j) ≠ j (i.e., the vertices moved by π). V_m consists of vertices in orbits of length 2. We call a subset of V_m that consists of exactly one vertex from each such orbit a representative set and denote it V_r. In the rest of the proof we will assume that we have arbitrarily specified a particular V_r. We use n_f, n_m, and n_r respectively to denote the number of vertices in each of these sets.

Without loss of generality, we can assume that the vertices have the following numbering: the vertices in V_f are numbered 1 through n_f; the vertices in V_r are numbered from n_f + 1 to n_f + n_r. If v_i ∈ V_r, then we set π(i) = i + n_r; that is, the vertices in V_m \ V_r are numbered n_f + n_r + 1 to n in the same order as the vertices in V_r with which they share orbits. Using this ordering and the definition of the automorphism, we can write B in the following block form:

        [ F         E_fr        E_fr      ]
    B = [ E_fr^T    R           E_rπ(r)   ]
        [ E_fr^T    E_rπ(r)     R         ],

where

F is an n_f × n_f submatrix containing the diagonal entries for the vertices in V_f and the entries for edges between pairs of vertices in V_f;

R is an n_r × n_r submatrix containing the diagonal entries for the vertices in V_r and the entries for edges between pairs of vertices in V_r (the numbering we have used together with the definition of an automorphism imply that the same submatrix applies for the vertices in V_m \ V_r);

E_fr contains the entries of B for edges between vertices in V_f and V_r (our numbering of the vertices plus the automorphism condition that (v_i, v_j) is an edge of G if and only if (v_π(i), v_π(j)) is also an edge imply that the submatrix with entries for edges between V_f and V_m \ V_r will be the same); and

E_rπ(r) contains the entries of B for edges between vertices in V_r and V_m \ V_r (again, our vertex numbering, the automorphism condition that (v_i, v_j) is an edge of G if and only if (v_π(i), v_π(j)) also is an edge, and the symmetry of B imply that E_rπ(r) = E_rπ(r)^T = E_π(r)r).

We now define the orthogonal matrix T that we will use to transform B. T has the following form:

        [ I_nf    0                0              ]
    T = [ 0       (1/sqrt(2)) I_nr   (1/sqrt(2)) I_nr  ]
        [ 0       (1/sqrt(2)) I_nr   −(1/sqrt(2)) I_nr ],

where the I's are identity matrices with the dimension specified in the subscript. We can transform B as follows:

    B′ = T^T B T
        [ F               sqrt(2) E_fr    0            ]
      = [ sqrt(2) E_fr^T  R + E_rπ(r)     0            ]
        [ 0               0               R − E_rπ(r)  ].

Note that the resulting matrix is reducible. That is, when viewed as a weighted graph, that graph has two components. We will show that the blocks of this matrix correspond to B_even and B_odd as follows:

    B_even = [ F               sqrt(2) E_fr  ]
             [ sqrt(2) E_fr^T  R + E_rπ(r)   ]

and

    B_odd = R − E_rπ(r).

Since B′ is similar to B, it has a number of useful properties. Because B′ is reducible, every eigenvalue of B_odd is an eigenvalue of B′; likewise every eigenvalue of B_even is an eigenvalue of B′. By similarity, they are also eigenvalues of B.

Now consider an eigenvector u of B_even. Define v as follows: for 1 ≤ i ≤ n_f + n_r let v_i = u_i; let v_i = 0 otherwise. v is obviously an eigenvector of B′. We can use the matrix T to transform v into an eigenvector w of B:

    w = T v = [ v_f ; (1/sqrt(2)) v_r ; (1/sqrt(2)) v_r ],

where v_f denotes the restriction of v to its first n_f coordinates and v_r the restriction to the next n_r coordinates.

By our numbering, it is easy to see this is an even eigenvector of B. Since u, v, and w all have the same eigenvalue λ, the claim about eigenvalues of B_even corresponding to even eigenvalues of B holds. It is easy to show that if two eigenvectors u1 and u2 of B_even are orthogonal, then the corresponding eigenvectors w1 and w2 are also orthogonal. Thus if an eigenvalue of B_even has multiplicity 2, there are two orthogonal even eigenvectors of B with that eigenvalue.

Now consider an eigenvector u of B_odd. As before, we can construct an eigenvector v of B′: for n_f + n_r + 1 ≤ i ≤ n let v_i = u_{i − n_f − n_r}; let v_i = 0 otherwise. We again use the matrix T to transform v into an eigenvector w of B:

    w = T v = [ 0 ; (1/sqrt(2)) v_π(r) ; −(1/sqrt(2)) v_π(r) ],

where v_π(r) denotes the restriction of v to its last n_r coordinates.

This is an odd eigenvector of B. Since u, v, and w all have the same eigenvalue λ, the claim about eigenvalues of B_odd corresponding to odd eigenvalues of B holds. It is easy to show that if two eigenvectors u1 and u2 of B_odd are orthogonal, then the corresponding eigenvectors w1 and w2 are also orthogonal. Thus if an eigenvalue of B_odd has multiplicity 2, there are two orthogonal odd eigenvectors of B with that eigenvalue. Note that if we transform all eigenvectors of B_even and B_odd this way we get n orthogonal eigenvectors of B (i.e., a full set).

It is easy to construct the components G_odd and G_even from G. We now give the rules for these constructions; they are easy to verify from the matrix entries of B′.

G_odd is a weighted graph on V_r. Start with the subgraph of G induced by V_r, with the weight of vertex v_i equal to the weight of v_i in G and the weight of edge (v_i, v_j) equal to its weight in G. Adjust the weights according to the following two rules:

Degree weight adjustment rule: For each vertex v_i ∈ V_r, if there is an edge in G from v_i to v_π(i) (i.e., there is an edge from v_i to its image under the automorphism), then increase the weight of v_i in G_odd by w_{i,π(i)}.

Edge weight adjustment rule: For each pair of distinct vertices v_i, v_j ∈ V_r with i < j, if there is an edge (v_i, v_π(j)) in G (i.e., if there is an edge from v_i in the representative set to the image of v_j outside of the representative set), add an edge (v_i, v_j) with weight −w_{i,π(j)} to G_odd (if there is already an edge (v_i, v_j) in G_odd, subtract w_{i,π(j)} from its weight). To see that this rule is well-formed with respect to permutations of the vertex numberings, note that since π is an automorphism of order 2, the existence of an edge (v_i, v_π(j)) in G implies the existence of an edge (v_π(i), v_j) in E(G) with the same weight. That is, for any i < j we have

    w_ij^{G_odd} = w_ij^G − w_{i,π(j)}^G = w_ji^G − w_{j,π(i)}^G,

where the superscripts on the weights indicate the graph for which they're defined. Thus, if we renumbered the vertices in a way that assigns vj an index less than the new index of vi, the weight of the corresponding edge in Godd would be unaffected. Delete any zero-weight edges from Godd.

Geven is a weighted graph on Vr ∪ Vf. Start with the subgraph of G induced by Vr ∪ Vf, with the weight of each vertex vi ∈ Vr ∪ Vf equal to its degree in G and the weight of each edge (vi, vj) equal to its weight in G. Adjust the weights according to the following three rules:

Degree weight adjustment rule: For each vertex vi ∈ Vr, if there is an edge in G from vi to vσ(i) (i.e., there is an edge from vi to its image under the automorphism), then decrease the weight of vi in Geven by w_{i,σ(i)}.

Vr-to-Vf edge weight adjustment rule: For each edge (vi, vj) in Geven such that vi ∈ Vr and vj ∈ Vf (i.e., for each edge between the fixed vertices and the representative vertices), multiply its weight by √2.

Vr-to-Vr edge weight adjustment rule: For each pair of distinct vertices vi, vj ∈ Vr with i < j, if there is an edge (vi, vσ(j)) in G (i.e., if there is an edge from vi in the representative set to the image of vj outside of the representative set), add an edge (vi, vj) with weight w_{i,σ(j)} to Geven (if there is already an edge (vi, vj) in Geven, add w_{i,σ(j)} to its weight). We leave it to the reader to check that this rule is well-formed with respect to permutations of the vertex numberings; the argument is the same as for the edge weight adjustment rule in the odd case.

Delete any zero-weight edges from Geven. Since

    |V(Godd)| + |V(Geven)| = |Vr| + (|Vr| + |Vf|) = |Vf| + 2|Vr| = |Vf| + |Vr| + |Vm| = |V(G)|,

the claim about the combined size of the two graphs holds. □
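Before proceeding, the two rule sets can be sanity-checked on a tiny example of our own (not from the paper): the path P4 with automorphism σ: 1 ↔ 4, 2 ↔ 3, so that Vr = {v1, v2}, Vm = {v3, v4}, and Vf is empty. A minimal Python sketch, assuming vertex weights start as degrees and all edge weights are 1:

```python
import math

# Toy check of the even-odd decomposition rules on the path P4
# (vertices 1-2-3-4, automorphism sigma: 1<->4, 2<->3; Vr = {1, 2},
# Vm = {3, 4}, no fixed vertices).

def eig2(a, b, c, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [c, d]], ascending."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return [(tr - disc) / 2, (tr + disc) / 2]

# Odd component: induced path on Vr; the degree rule raises v2's weight
# from 2 to 3 because the edge v2-v3 = v2-sigma(2) exists in G.
odd = eig2(1, -1, -1, 3)
# Even component: the degree rule lowers v2's weight from 2 to 1.
even = eig2(1, -1, -1, 1)

# Laplacian eigenvalues of P4 in closed form: 4 sin^2(j*pi/8), j = 0..3.
p4 = sorted(4 * math.sin(j * math.pi / 8) ** 2 for j in range(4))

assert all(abs(x - y) < 1e-12 for x, y in zip(sorted(odd + even), p4))
```

The odd component's eigenvalues 2 ± √2 and the even component's eigenvalues {0, 2} together are exactly the Laplacian spectrum of P4, as the decomposition theorem promises.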

It is useful to consider an example of even-odd graph decomposition before proceeding. We will demonstrate one way in which a complete binary tree of three levels can be decomposed. The initial graph is shown in Figure 3 below. The first automorphism we use maps leaves with the same parent to each other. Applying the rules for constructing the odd and even components given at the end of the proof above, we get the result shown in Figure 4. The odd component is fully decomposed; we can apply one more step of decomposition to the even component using the automorphism that maps the corresponding vertices at the lowest two levels to each other. The result is shown in Figure 5.

Figure 3: The Initial Graph

This example will also be useful in arguments about bounding the smallest nonzero eigenvalue of complete binary trees below.

The following technical lemmas about the eigenvalues and eigenvectors of weighted path graphs will be useful in subsequent results.

Lemma 3.5 [Zero Entries of Path Graph Eigenvectors Lemma] Let B be the standard matrix representation of a weighted path graph G on n vertices. For any vector x such that Bx = λx for some real λ, xn = 0 implies x = 0. Likewise, x1 = 0 implies x = 0. If there are two consecutive elements xi and x_{i+1} that are both zero, then x = 0.

Proof: We will prove the first result by induction. The base case is for a 2 × 2 matrix with diagonal entries b11 and b22, and off-diagonal entries b12 = b21 = -c. Let x and λ be as specified by the lemma statement, and assume that x2 = 0. The second element of the vector resulting from multiplying B by x is -c·x1 = λ·x2 = 0. Since c ≠ 0 by definition (G is a weighted path graph), we must have x1 = 0, which implies that x = 0.

For the induction step, assume that the result holds for all i ≤ k, and consider the standard matrix representation of a weighted path graph on k + 1 vertices. Let the weight of the edge from vk to v_{k+1} be c. Let x and λ be as stated, and assume that x_{k+1} = 0. For the (k+1)st entry of Bx we have -c·xk = λ·x_{k+1} = 0. Thus xk = 0. Let x′ be the subvector of x consisting of the first k entries. Note that with x_{k+1} = 0 we have that x′ and λ meet the lemma conditions for the leading principal minor Bk of B, and that x′k = 0. But the leading principal minor Bk is the standard matrix representation for the weighted path graph derived from G by deleting the last edge and vertex. Thus, by the induction hypothesis, x′ must be 0; because x_{k+1} = 0 this implies that x = 0. A symmetric argument implies the result for x1 = 0. Details are left to the reader.

Again let B be the standard matrix representation of a weighted path graph G. Let x be a vector meeting the lemma conditions for λ, and assume that x has two consecutive zero elements xi and x_{i+1}. If either i = 1 or i + 1 = n, then x = 0 by the previous argument. Otherwise, x_{i+1} = 0 implies that the first i elements of x and λ meet the lemma conditions for the leading principal minor Bi of B. Note that Bi is the standard matrix representation for some weighted path graph. Thus by our previous result the first i entries of x are zero. By a symmetric argument for the trailing principal minor, the last n - i entries must also be zero, which gives x = 0. □

Figure 4: After the First Decomposition Step

Since eigenvectors are by definition not equal to the zero vector, the lemma above implies that for eigenvectors of the standard matrix representation B of any weighted path graph, neither the first nor the last entry is zero. Likewise, such an eigenvector cannot have two consecutive zero entries. We can use this to give a simple proof of the following lemma (for a different proof, see e.g. pp. 910-911 of [YG73]).

Lemma 3.6 All eigenvalues of the standard matrix representation B of a weighted path graph G on n vertices are simple (i.e., have multiplicity one).

Proof: Let u and u′ be any two eigenvectors of B for the eigenvalue λ. By the Zero Entries of Path Graph Eigenvectors Lemma (Lemma 3.5), we have un ≠ 0 and u′n ≠ 0. Let β be u′n/un; β is non-zero and real. Then B(βu - u′) = λ(βu - u′). But the nth element of βu - u′ is 0, so by Lemma 3.5 we must have βu - u′ = 0. Thus u′ is a scalar multiple of u; it is not a distinct eigenvector. □
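Both lemmas can be spot-checked numerically in the unweighted special case, where the path Laplacian has the standard closed-form eigenpairs λj = 4 sin²(jπ/(2n)) with uj(i) = cos((i + 1/2)jπ/n) (a textbook fact, used here only as a check of ours):

```python
import math

# Spot check of Lemmas 3.5 and 3.6 for unweighted paths, using the
# closed-form eigenpairs of the path Laplacian.
for n in range(2, 30):
    lams = [4 * math.sin(j * math.pi / (2 * n)) ** 2 for j in range(n)]
    # Lemma 3.6: all eigenvalues are simple (here: pairwise distinct).
    assert all(lams[j + 1] - lams[j] > 1e-9 for j in range(n - 1))
    for j in range(n):
        u = [math.cos((i + 0.5) * j * math.pi / n) for i in range(n)]
        # Lemma 3.5: first/last entries non-zero, no two consecutive zeros.
        assert abs(u[0]) > 1e-9 and abs(u[-1]) > 1e-9
        assert all(abs(u[i]) > 1e-9 or abs(u[i + 1]) > 1e-9
                   for i in range(n - 1))
```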

A path graph on n vertices has exactly one automorphism of order two: σ(i) = n - i + 1. Thus we can talk about odd and even eigenvectors of a path graph without ambiguity; they are always with respect to this automorphism.

Lemma 3.7 The eigenvector u2 corresponding to the second smallest eigenvalue λ2 of the Laplacian B of a path graph G on n vertices is odd.

Proof: By Lemma 3.6, we know that λ2 is simple, so by Corollary 3.3, u2 must be either even or odd. Assume that it is even. We will show that this leads to a contradiction.

Figure 5: Result of Decomposing Odd Component

Recall the characterization

    λ2 = min_{x ⊥ 1} (xᵀAx)/(xᵀx),

where 1 is the all-ones vector.

There are two cases we must keep track of: n odd and n even. If n is odd, there is a single center vertex v⌈n/2⌉ (we will index the vertices along the path from 1 to n). If n is even, there are two center vertices with indices n/2 and n/2 + 1; since we have assumed u2 is even, their entries in u2 are equal. Thus, by Lemma 3.5, if n is even the eigenvector entries corresponding to the center vertices are non-zero. If n is odd, u2 is even, and the eigenvector entry for the center vertex is 0, it is easy to check that changing the signs of all eigenvector entries with index less than the center index gives an odd eigenvector with eigenvalue λ2, which contradicts the simplicity of λ2. Thus, our assumption that u2 is even implies that the eigenvector entries corresponding to the center vertex or vertices must be non-zero. Let this value be c.

Now consider the vector x = (-c)·1 + u2. Recall that u2 is orthogonal to 1. It is easy to see that x is even, and since c ≠ 0, we have (using A1 = 0) that

    (xᵀAx)/(xᵀx) = (c²·1ᵀA1 + u2ᵀAu2)/(c²n + u2ᵀu2) = (u2ᵀAu2)/(c²n + u2ᵀu2) < (u2ᵀAu2)/(u2ᵀu2).

However, the entries of x corresponding to the center vertex or vertices are 0, so, as we did above, we can create an odd vector y such that

    (yᵀAy)/(yᵀy) = (xᵀAx)/(xᵀx)

as follows: set yi = xi for i < n/2 and yi = -xi for i > n/2 (any center entries are 0). y is orthogonal to 1, so it meets the criteria for the characterization of λ2, so our assumption that u2 is even gives λ2 < λ2, a contradiction. □
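Lemma 3.7 can also be confirmed against the closed-form second eigenvector of an unweighted path (again our own check, not the paper's): u2(i) = cos((i + 1/2)π/n) is an eigenvector for λ2 = 4 sin²(π/(2n)) and is odd under the automorphism that reverses the path.

```python
import math

# Verify the eigenpair and the oddness of u2 for the unweighted path on n
# vertices (0-based indices; the reversal automorphism is i -> n - 1 - i).
n = 9
u2 = [math.cos((i + 0.5) * math.pi / n) for i in range(n)]
lam2 = 4 * math.sin(math.pi / (2 * n)) ** 2

def lap_matvec(x):
    """y = L x for the path Laplacian (degree 1 at the ends, 2 inside)."""
    y = []
    for i in range(len(x)):
        deg = (i > 0) + (i < len(x) - 1)
        y.append(deg * x[i] - (x[i - 1] if i > 0 else 0)
                 - (x[i + 1] if i < len(x) - 1 else 0))
    return y

Lu = lap_matvec(u2)
assert all(abs(Lu[i] - lam2 * u2[i]) < 1e-12 for i in range(n))   # eigenpair
assert all(abs(u2[i] + u2[n - 1 - i]) < 1e-12 for i in range(n))  # odd
```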

3.2 Bounds on λ2 for Trees and Double Trees

We now use the results from the preceding section to prove some upper and lower bounds on λ2 for the Laplacian of the complete binary tree and for the Laplacian of a graph we refer to as the double tree. We define a double tree as two complete binary trees of k levels for some k > 0 connected by an edge between their respective roots. The bounds developed below will be useful in Section 5.

To bound the size of λ2 of a complete binary tree, we first apply the Even-Odd Decomposition Theorem (Theorem 3.4) a number of times to show that the eigenvalues of a balanced binary tree can be computed from a few simple types of weighted graphs. We then bound the eigenvalues for these types of graphs using the interlacing theorems and matrix perturbation techniques. We show that for k ≥ 3, a complete binary tree on k levels with n = 2^k - 1 vertices has λ2 on the order of 1/n. More specifically, 1/n < λ2 < 2/n.

For the following argument, we will assume that we are working with a complete binary tree of k > 2 levels. As above, n = 2^k - 1. We start by applying the Even-Odd Graph Decomposition Theorem with respect to the automorphisms that each map one pair of neighboring leaves to each other. There are 2^{k-1} leaves and 2^{k-2} such automorphisms. For the odd eigenvalues, we get 1 for each of the 2^{k-2} single-vertex graphs that comprise the odd components. The result for k = 3 was shown above in Figure 4.

To help in understanding the structure of the even component that results from the preceding sequence of decompositions, we introduce the following terminology: for i ≥ 3, the ith odd collapsed tree graph is a weighted graph on i - 1 vertices, with every edge having weight √2, vertex v1 having weight 1, and all other vertices having weight 3 (the odd collapsed tree graph for i = 5 is shown in Figure 6 below). With this terminology, we can describe the resulting Geven in the following way: the even component has the structure of a complete binary tree of k - 1 levels, except that the leaves of the tree are replaced by the odd collapsed tree graph for i = 3; the vertices at level k - 2 are connected to the pseudo-leaves by edges of weight 1 to the vertices v2 of the odd collapsed tree graphs. This is illustrated for k = 3 in Figure 4 above.

We now repeat the process for the 2^{k-3} automorphisms that map neighboring pairs of the odd collapsed tree graphs to each other. The odd eigenvalues found during these steps are the eigenvalues of the odd collapsed tree graph for i = 3, each occurring 2^{k-3} times; the remaining eigenvalues are those of the new even component, a weighted graph that looks like a complete binary tree with k - 2 levels, except that the leaves are replaced by odd collapsed tree graphs for i = 4. Once again, each pseudo-leaf is connected to its parent by an edge of weight 1 from vertex v3 of the odd collapsed tree graph.

Figure 6: Odd Collapsed Tree Graph, i = 5

We continue this process; at the jth series of reductions we get eigenvalues for the odd collapsed tree graph for i = j + 1 and an even component that is a weighted graph that looks like a complete binary tree with k - j levels, except that the leaves are replaced by odd collapsed tree graphs for i = j + 2. At the last decomposition step, we start with a weighted graph that looks like a complete binary tree with 2 levels, except that the leaves have been replaced by odd collapsed tree graphs for i = k. Applying the Even-Odd Decomposition Theorem one last time, we find that the eigenvalues for this graph are those of the odd collapsed tree graph for i = k plus the eigenvalues for a graph on k vertices that consists of the odd collapsed tree graph for i = k plus a vertex vk with weight 2 and an edge (v_{k-1}, vk) with weight √2 (we will refer to this latter graph as the even collapsed tree graph for i = k; the even collapsed tree graph for i = 5 is shown in Figure 7 below).

Figure 7: Even Collapsed Tree Graph, i = 5

Recall that the eigenvalues not represented by either of these final components are either 1 (from the first set of decompositions) or eigenvalues for the odd collapsed tree graphs for i, 3 ≤ i < k.

Let λ(k) be the smallest eigenvalue of the odd collapsed tree graph for i = k. We claim that λ2 for the complete binary tree on k levels is equal to λ(k); λ(k) is less than λ(i) for any i < k by

the First Interlacing Theorem. We will show in the argument below that λ(k) < 1 for k ≥ 3. It is easy to show that the standard matrix representation for the even collapsed tree graph for i = k is singular, so its smallest eigenvalue is 0. Thus, an application of the First Interlacing Theorem gives us that the smallest eigenvalue of the odd collapsed tree graph for i = k is less than or equal to the first non-zero eigenvalue of the even collapsed tree graph for i = k (the standard matrix representation of the former is a leading principal submatrix of the latter).

Bounds on λ(k) can be computed using perturbation techniques. Note that the standard matrix representation for the odd collapsed tree is nonsingular, so the smallest eigenvalue is the one of interest. For the rest of this argument, we will use two matrices. The first, B, is the standard matrix representation for the odd collapsed tree graph for i = k. The determinant of B is 1 (this is an easy induction proof on k). The vertex ordering is as given in the description of the graph above. The second matrix, B′, has

    b′11 = b11 - 1/(2^{k-1} - 1) = 1 - 1/(2^{k-1} - 1) = (n - 3)/(n - 1),

and all other entries are the same as for B. The determinant of B′ is 0 (again, easily shown via an easy induction proof). Let v be the eigenvector of B′ corresponding to 0 (we'll assume that it's scaled to unit length). Then we have

    vᵀB′v = vᵀBv - v1²/(2^{k-1} - 1) = 0.

By the Courant-Fischer Minimax Theorem (see, e.g., [GL89]), v1²/(2^{k-1} - 1) is therefore an upper bound on λ(k) for B. We will now show that for k ≥ 3, this upper bound is strictly less than 2/n. Note that this will immediately imply that λ(k) is less than 1.

Because 2^{k-1} - 1 = (n - 1)/2, we will have that v1²/(2^{k-1} - 1) < 2/n if v1² < (n - 1)/n. Assume that this is not so; we will show that this assumption contradicts the fact that v is unit length. Since B′v = 0, the definition of B′ gives us that

    b′11·v1 - √2·v2 = ((n - 3)/(n - 1))·v1 - √2·v2 = 0,  so  v2 = (1/√2)·((n - 3)/(n - 1))·v1,

and by our assumption

    v2² = (1/2)·((n - 3)/(n - 1))²·v1² ≥ (n - 3)²/(2n(n - 1)).

Since v is unit length, our assumption yields

    1 = vᵀv = Σ_{i=1}^{k-1} vi² ≥ v1² + v2² ≥ (n - 1)/n + (n - 3)²/(2n(n - 1)) > 1,

a contradiction (the last inequality holds because for k ≥ 3 we have n ≥ 7, and thus (n - 3)²/(n - 1) > 2).
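The determinant claims for B and B′ are easy to confirm with the standard three-term recurrence for tridiagonal determinants; the sketch below (ours, not the paper's) models the odd collapsed tree graph for i = k as the (k-1) × (k-1) tridiagonal matrix with diagonal (1, 3, ..., 3) and off-diagonal entries -√2:

```python
import math

# Check det(B) = 1 for the odd collapsed tree graph, and det(B') = 0 after
# replacing b11 by 1 - 1/(2^(k-1) - 1).
def tridiag_det(diag, off):
    """Determinant via the three-term recurrence d_i = a_i d_{i-1} - b^2 d_{i-2}."""
    d_prev, d = 0.0, 1.0
    for a in diag:
        d_prev, d = d, a * d - off * off * d_prev
    return d

for k in range(3, 12):
    diag = [1.0] + [3.0] * (k - 2)      # odd collapsed tree on k - 1 vertices
    assert abs(tridiag_det(diag, math.sqrt(2)) - 1.0) < 1e-6
    diag[0] = 1.0 - 1.0 / (2 ** (k - 1) - 1)
    assert abs(tridiag_det(diag, math.sqrt(2))) < 1e-6
```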

To get a lower bound on λ(k), let u be the unit-length vector such that uᵀBu is minimized (i.e., uᵀBu = λ(k)). We have:

    uᵀB′u = uᵀBu - u1²/(2^{k-1} - 1) = λ(k) - u1²/(2^{k-1} - 1) ≥ 0.

The last inequality holds by a version of the Second Interlacing Property with a negative perturbation: B′ has a zero eigenvalue, and all its other eigenvalues will be greater than or equal to the (positive) eigenvalues of B; B′ is thus positive semidefinite.

The above equation implies that if we can show that u1 > 1/√2, we will have λ(k) > 1/(2^k - 2) > 1/n. Since B is tridiagonal, this is easily done by using B and λ(k) to generate a recurrence for the entries of u. Consider the first entry of Bu. We have u1 - √2·u2 = λ(k)·u1. Since 1 > λ(k) > 0, we have u2 < u1/√2. This provides the base case for a proof by induction that u_{i+1} < ui/√2. The details are left to the reader; the previous inequality serves as the induction hypothesis; the matrix B gives us

    3·ui - √2·(u_{i-1} + u_{i+1}) = λ(k)·ui,

which can be used to prove the desired result for u_{i+1}. Since u is unit length, we have

    1 = uᵀu = Σ_{i=1}^{k-1} ui² < Σ_{i=1}^{k-1} (1/√2)^{2(i-1)}·u1² = u1²·Σ_{i=1}^{k-1} (1/2)^{i-1} < 2u1²,

which gives the desired result that u1 > 1/√2. Thus we have proved the following lemma:

1 which gives the desired result that u1 > p2 . Thus we have proved the following lemma: 3 levels and n 2k ? 1 vertices, we Lemma 3.8 For a complete balanced binary tree on k 2 1 have n < 2 < n . For double trees where each of the component trees has k levels and n 2k+1 ? 2 vertices, we have the following lemma: 4 1 Lemma 3.9 For a double tree on n 14 vertices, we have n < 2 < n . Proof: The proof will make use of a number of facts from the proof of the 2 bounds for complete binary trees given above. We start by noting that we can apply the Even-Odd Decomposition Theorem starting from the leaves of the trees as we did above. The odd eigenvalues determined at each step are the same as for the complete binary tree. At the last decomposition step (the one that divides between the two roots) results in two components: the even component Ge , which is the same as even component for a tree of k levels, and the odd component Go , which is like the k 1st odd collapsed tree graph, except that the weight of vertex k is 4 rather than 3. Again, the standard matrix representation is singular for Ge and non-singular for Go ; an application of the Second Interlacing Theorem thus implies that 2 for the double tree is the smallest eigenvalue for Go . Next, note that the standard matrix representation of the odd collapsed tree for i k is a principle submatrix of the standard matrix representation for Go . By the First Interlacing Theorem

= =

+

=

19

and the analysis of the odd collapsed tree above, we immediately have that λ2 < 2/(2^k - 1); since the double tree has n = 2^{k+1} - 2 vertices, this implies that λ2 < 4/n.

Finally, note that Go is the odd collapsed tree for i = k + 1 with 1 added to the weight of vertex vk. We can thus apply the Second Interlacing Theorem and the results for the odd collapsed tree above (recall that in the proof we showed that λ(k) > 1/(2^k - 2)) to show that λ2 ≥ λ(k+1) > 1/(2^{k+1} - 2); since the double tree has n = 2^{k+1} - 2 vertices, this implies that λ2 > 1/n. This completes the proof of the lemma. □
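The eigenvalue bounds proved above can be checked numerically. The sketch below (our own) locates λ(k), the smallest eigenvalue of the odd collapsed tree's tridiagonal matrix, by bisection on a Sturm-style pivot count, and verifies 1/(2^k - 2) < λ(k) < 2/n:

```python
import math

def negative_pivots(diag, off, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix below x,
    counted via the signs of the LDL^T pivots of (A - x I)."""
    count, d = 0, 1.0
    for i in range(len(diag)):
        d = diag[i] - x - (off * off / d if i > 0 else 0.0)
        if d == 0.0:
            d = -1e-30          # guard against an exact zero pivot
        if d < 0:
            count += 1
    return count

def smallest_eig(diag, off, lo=0.0, hi=10.0):
    """Bisection for the smallest eigenvalue using the pivot count."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if negative_pivots(diag, off, mid) >= 1:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

for k in range(3, 11):
    n = 2 ** k - 1
    diag = [1.0] + [3.0] * (k - 2)   # odd collapsed tree for i = k
    lam = smallest_eig(diag, -math.sqrt(2))
    assert 1.0 / (2 ** k - 2) < lam < 2.0 / n
```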

4 A Bad Family of Bounded-Degree Planar Graphs for Spectral Bisection

In this section we present a family of bounded-degree planar graphs that have constant-size separators. However, the separators produced by spectral bisection have size Θ(n) for both edge and vertex separators. Since there are algorithms that produce O(√n) vertex separators for planar graphs, and hence O(√n) edge separators for bounded-degree planar graphs, spectral bisection performs poorly on these graphs relative to other algorithms.

The family of graphs is parameterized on the positive integers. Gk consists of two path graphs, each on 3k vertices, with a set of edges between the two paths as follows: label the vertices of one path from 1 to 3k in order (the upper path), and label the other path from 3k + 1 to 6k in order (the lower path). For 1 ≤ i ≤ k there is an edge between vertices 2k + i and 5k + i. This was shown in Figure 1 in the Introduction. It is obvious that Gk is planar for any k, and that the maximum degree of any vertex is 3. Note that the graph has the approximate shape of a cockroach, with the section containing edges between the upper and lower paths being the body, and the other sections of the paths being antennae. This terminology will allow us to refer easily to parts of the graph.

Gk has one automorphism of order 2 that maps the vertices of the upper path to the vertices of the lower path and vice versa. For the rest of this section, the terms “odd vector” and “even vector” will be used with respect to this automorphism. Thus, an even vector x has xi = x_{3k+i} for all i in the range 1 ≤ i ≤ 3k; an odd vector y has yi = -y_{3k+i} for all i, 1 ≤ i ≤ 3k. We can now discuss the structure of the eigenvectors of the Laplacian of Gk.

Lemma 4.1 Any eigenvector ui with eigenvalue λi of the Laplacian Bk of Gk can be expressed in terms of linear combinations of even and odd eigenvectors with eigenvalue λi. In particular, ui can be expressed as a linear combination of:

an even eigenvector of Bk in which the values associated with the upper path are the same as for the eigenvector with eigenvalue λi (if it exists) of a path graph on 3k vertices, and

an odd eigenvector of Bk in which the values associated with the upper path are the same as for the eigenvector with eigenvalue λi (if it exists) of a weighted graph that consists of a path graph on 3k vertices for which the vertex weights of v_{2k+1} through v_{3k} have been increased by 2.

Proof: The first claim follows by the Even-Odd Eigenvector Theorem applied with respect to the automorphism that maps the vertices of the upper path to the vertices of the lower path and vice versa. The claim about the specific structure of the ui's follows by a direct application of the construction used in the proof of the Even-Odd Graph Decomposition Theorem with respect to the same automorphism. □

We now give the proof that spectral bisection gives bad separators for the family of graphs Gk.

Theorem 4.2 Spectral bisection produces Θ(n) edge and vertex separators for Gk for any k.

Proof: The first step is to show that u2 is odd. Intuitively, this implies that the spectral method will split the graph into the upper path and the lower path.

Recall that λ2(Bk) = min_{x ⊥ 1} (xᵀBk x)/(xᵀx). We will construct an odd vector x such that the quotient (xᵀBk x)/(xᵀx) is less than (yᵀBk y)/(yᵀy) for any even eigenvector y orthogonal to 1 (1 is the smallest even eigenvector). To do this, we need to show that (xᵀBk x)/(xᵀx) is less than the second smallest even eigenvalue. From Lemma 4.1 above, we know that the second smallest even eigenvalue of Bk is the same as the second smallest eigenvalue of the Laplacian of a path graph on 3k vertices; it is well-known that this value is 4 sin²(π/(6k)) (see for example [Moh88]).

Let z be the eigenvector corresponding to the second smallest eigenvalue λ2 = 4 sin²(π/(8k)) of the Laplacian B of the path graph G on 4k vertices. Construct x as follows:

    xi = zi           for 1 ≤ i ≤ 2k;
    xi = z_{7k-i+1}   for 3k + 1 ≤ i ≤ 5k;
    xi = 0            otherwise.

That is, we assign the first 2k values from the path G to the upper antenna of the roach, working in the direction towards the body, and we assign the last 2k entries from G to the lower antenna, working from the body outward. Since z and x have the same set of non-zero entries, xᵀx = zᵀz. Likewise, since z is orthogonal to the “all-ones” vector, so is x.

In order to see that xᵀBk x < zᵀBz, recall (1) from Section 3: for a Laplacian A and vector y we have

    yᵀAy = Σ_{(vi,vj)∈E} (yi - yj)².

For every edge in G except one, there is an edge in Gk that contributes the same value to this sum. The one exception is the edge (v2k, v_{2k+1}). Since z is an odd vector by Lemma 3.7, and since z has an even number of entries, z2k = -z_{2k+1}. By the Zero Entries of Path Graph Eigenvectors Lemma (Lemma 3.5), it is not possible for both z2k and z_{2k+1} to be zero, so z2k is equal to some non-zero value c, and this edge contributes 4c² to the value of zᵀBz. On the other hand, there are two edges in Gk that contribute non-zero values and that do not have corresponding edges in G: (v2k, v_{2k+1}) and (v5k, v_{5k+1}). Each of these edges contributes c² to xᵀBk x. Thus we have

    xᵀBk x = zᵀBz - 4c² + 2c² = zᵀBz - 2c² < zᵀBz.

Since xᵀx = zᵀz, this gives us

    λ2(Gk) ≤ (xᵀBk x)/(xᵀx) < (zᵀBz)/(zᵀz) = 4 sin²(π/(8k)) < 4 sin²(π/(6k)).

That is, the second smallest eigenvalue of Bk is less than any non-zero even eigenvalue, and is thus odd by the Even-Odd Eigenvector Theorem (Theorem 3.2); that is, u2 is an odd vector.

There are a few details to finish off. In particular, we need to show that there aren't too many zero entries in u2 (spectral bisection as defined in this paper won't separate vertices with the same value). Since u2 is an odd vector and since the odd component of Gk is a weighted path graph, Lemmas 3.5 (the Zero Entries of Path Graph Eigenvectors Lemma) and 4.1 imply that u2 cannot have consecutive zeros, and the values corresponding to vertices 3k and 6k are non-zero. Thus the edge separator generated by spectral bisection must cut at least half the edges between the upper and lower paths; since none of these edges share an endpoint, the cover used in generating the vertex separator must include at least this number of vertices. The theorem follows. □
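The key inequality in the proof, that the odd test vector x beats every even eigenvector's Rayleigh quotient, can be reproduced numerically. The sketch below (our own code, following the construction in the proof, with 0-based indices) builds Gk, fills x from the closed-form second eigenvector of the path on 4k vertices, and checks that its quotient is below the smallest nonzero even eigenvalue 4 sin²(π/(6k)):

```python
import math

k = 8
n = 6 * k
# Roach graph: upper path 0..3k-1, lower path 3k..6k-1, k rungs on the body.
edges = [(i, i + 1) for i in range(3 * k - 1)]
edges += [(3 * k + i, 3 * k + i + 1) for i in range(3 * k - 1)]
edges += [(2 * k + i, 5 * k + i) for i in range(k)]

# Second eigenvector of the path on 4k vertices, in closed form.
z = [math.cos((i + 0.5) * math.pi / (4 * k)) for i in range(4 * k)]
x = [0.0] * n
for i in range(2 * k):
    x[i] = z[i]                       # upper antenna, toward the body
    x[3 * k + i] = z[4 * k - 1 - i]   # lower antenna, body outward

num = sum((x[a] - x[b]) ** 2 for a, b in edges)   # x^T B_k x
den = sum(v * v for v in x)                       # x^T x (and x is orthogonal to 1)
assert num / den < 4 * math.sin(math.pi / (6 * k)) ** 2
```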

5 A Bad Family of Graphs for the “Best Threshold Cut” Algorithm

While the roach graph defeats spectral bisection, the second smallest eigenvector can still be used to find a small separator using the “best threshold cut” algorithm. In particular, Theorem 3.1 implies that considering all threshold cuts induced by u2 will let us find a constant-size cut: if qmin is the minimum cut quotient for these cuts, then

    qmin ≤ √(λ2(2Δ - λ2)),

which implies qmin is O(1/n) (since λ2(Gk) = O(1/n²) and the maximum degree Δ is 3). Since the denominator of qmin is less than or equal to n/2, the number of edges in this cut must be bounded by a constant.

In this section we show that there is a family of graphs for which the “best threshold cut” algorithm does poorly. Recall that in Section 3.2 we defined a double tree as a graph that consists of two complete binary trees of k levels for some k > 0, connected by an edge between their respective roots. The tree-cross-path graph consists of the crossproduct of a double tree on p1 vertices and a path graph on p2 vertices. We will show below that there are tree-cross-path graphs that will defeat the “best threshold cut” algorithm. To do so, we will use the bounds from Section 3.2 on λ2 for trees and double trees. We can formally state the result for this section as follows:

Theorem 5.1 There exists a graph G for which the “best threshold cut” algorithm finds a separator S such that the cut quotient for S is bigger than i(G) by a factor as large (to within a constant) as allowed by the theoretical bounds provided by Theorem 3.1.

Proof: Let G be the tree-cross-path graph that is the crossproduct of a double tree of size p and a path of length cp^{1/2} for some c in the range 3.5 ≤ c < 4. To insure that we have integer sizes for the tree and the path, we restrict p to integers of the form 2^k - 2 for k > 2. Then we choose c in the range specified such that cp^{1/2} is an integer (by our choice of p there will be an integer in this range).

Recall that the eigenvalues of a graph crossproduct are all pairwise sums of the eigenvalues from the graphs used in the crossproduct operation. Let λ2 be the second smallest eigenvalue of the double tree on p vertices, and let μ2 be the second smallest eigenvalue for the path on cp^{1/2} vertices. If μ2 < λ2, then the second smallest eigenvalue of the crossproduct will be μ2 (i.e., μ2 added to the zero eigenvalue of the double tree). But we have that μ2 = 4 sin²(π/(2cp^{1/2})), and that λ2 is greater than or equal to 1/p. Therefore we need to show that

    4 sin²(π/(2cp^{1/2})) < 1/p.

Reorganizing, simplifying, and noting that sin(θ) < θ for 0 < θ ≤ π/2, we want

    π/(2cp^{1/2}) < 1/(2p^{1/2}),    or    π < c.

Clearly by our choice of c this inequality holds. Since path graph eigenvalues are simple (Lemma 3.6), the second smallest eigenvalue of G will also be simple.

We note that the tree-cross-path graph can be thought of as cp^{1/2} copies of the double tree, each corresponding to one vertex of the path graph. Each vertex in the ith copy of the double tree is connected by an edge to the corresponding vertex in copies i - 1 and i + 1. This description allows us to construct the eigenvector for the second smallest tree-cross-path eigenvalue as follows: assign each vertex in double tree copy i the value for vertex i in the path graph eigenvector for μ2. Note that this is the only possible eigenvector since the eigenvalue is simple.

Now consider any copy of the double tree: every vertex in that copy gets the same value in the characteristic valuation. Thus the cut S made by the “best threshold cut” algorithm must separate at least two copies of the double tree, and thus must cut at least p edges. There is a bisection S̄ of size cp^{1/2} (cut the edge between the roots in each double tree); because this cut is a bisection, the ratio between the cut quotient q for S and i(G) is at least as large as the ratio between the sizes of these cuts:

    q/i(G) ≥ |S|/|S̄| ≥ p/(cp^{1/2}) = p^{1/2}/c.

From Theorem 3.1, we have that

    λ2/2 ≤ i(G) ≤ q ≤ √(λ2(2Δ - λ2)).

This plus the fact that the tree-cross-path graph has bounded degree (Δ = 5) implies we must have

    q/i(G) ≤ (2/λ2)·√(λ2(2Δ - λ2)) = O(1/λ2^{1/2}) = O(p^{1/2}).

These two bounds imply that, to within a constant factor, the ratio is as large as possible, and the theorem holds. □
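The crossproduct fact invoked at the start of the proof (Laplacian eigenvalues of a Cartesian product are the pairwise sums of the factors' eigenvalues, with separable eigenvectors) can be illustrated on a small grid, a toy check of ours using the closed-form path eigenpairs:

```python
import math

def path_eigs(n):
    """Closed-form Laplacian eigenpairs of the path on n vertices."""
    lams = [4 * math.sin(j * math.pi / (2 * n)) ** 2 for j in range(n)]
    vecs = [[math.cos((i + 0.5) * j * math.pi / n) for i in range(n)]
            for j in range(n)]
    return lams, vecs

n1, n2 = 4, 5
l1, v1 = path_eigs(n1)
l2, v2 = path_eigs(n2)

def product_matvec(x):
    """y = L x for the grid P_{n1} x P_{n2}; vertex (a, b) -> index a*n2 + b."""
    y = [0.0] * (n1 * n2)
    for a in range(n1):
        for b in range(n2):
            i = a * n2 + b
            for aa, bb in ((a - 1, b), (a + 1, b), (a, b - 1), (a, b + 1)):
                if 0 <= aa < n1 and 0 <= bb < n2:
                    y[i] += x[i] - x[aa * n2 + bb]
    return y

# Each separable vector v1[j1] (x) v2[j2] is an eigenvector of the grid
# Laplacian with eigenvalue l1[j1] + l2[j2].
for j1 in range(n1):
    for j2 in range(n2):
        x = [v1[j1][a] * v2[j2][b] for a in range(n1) for b in range(n2)]
        y = product_matvec(x)
        lam = l1[j1] + l2[j2]
        assert all(abs(y[i] - lam * x[i]) < 1e-9 for i in range(n1 * n2))
```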

6 A Bad Family of Graphs for Generalized Spectral Algorithms

The results of the previous section can be extended to more general algorithms that use some number k (where k might depend on n) of the eigenvectors corresponding to the k smallest non-zero eigenvalues. In particular, consider algorithms that meet the following restrictions:

The algorithm computes a value for each vertex using only the eigenvector components for that vertex from the k eigenvectors corresponding to the smallest non-zero eigenvalues (for convenience, we refer to these as the k smallest eigenvectors). The function computed can be arbitrary as long as its output depends only on these inputs.

The algorithm partitions the graph by choosing some threshold t and then putting all vertices with values greater than t on one side of the partition, and the rest of the vertices on the other side. The algorithm is free to compute the break point t in any way; e.g., checking the separator ratio for all possible breaks and choosing the best one is allowed.

We will call such an algorithm purely spectral. The following theorem gives a bound on how well such algorithms do when the number of eigenvectors used is a constant:

Theorem 6.1 Consider the purely spectral algorithms that use the k smallest eigenvectors for k a fixed constant. Then there exists a family of graphs G such that each G ∈ G has a bisection S̄ with |S̄| = Θ(k^{2/3}·n^{1/3}), and such that any purely spectral algorithm using the k smallest eigenvectors will produce a separator S for G such that |S|/|S̄| ≥ n^{1/3}/((k + 1)π)^{4/3}.

Proof: We will show that G is the set of tree-cross-path graphs that are the crossproducts of double trees of size p (where p is an integer of the form 2^j - 2 for j ≥ 3) and paths of length cp^{1/2}, where c is a constant chosen such that kπ < c ≤ (k + 1)π and cp^{1/2} is an integer.

Recall that the eigenvalues of a graph crossproduct are all pairwise sums of the eigenvalues from the graphs used in the crossproduct operation. Assume for the moment that the k smallest nonzero eigenvalues of any G ∈ G are the same as the k smallest nonzero eigenvalues of the path graph used in defining G. This will clearly hold if we can show that these k eigenvalues are smaller than λ2 of the double tree; in that case the k smallest non-zero eigenvalues of the crossproduct will be these eigenvalues from the path graph added to the zero eigenvalue of the double tree. Since the path graph eigenvalues are simple (Lemma 3.6), the corresponding tree-cross-path eigenvalues will also be simple.

We note that the tree-cross-path graph can be thought of as cp^{1/2} copies of the double tree, each corresponding to one vertex of the path graph. Each vertex in the ith copy of the double tree is connected by an edge to the corresponding vertex in copies i - 1 and i + 1. This description allows us to construct an eigenvector for each of these k tree-cross-path eigenvalues as follows: assign each vertex in double tree copy i the value for vertex i in the path graph eigenvector for this eigenvalue. Note that these are the only possible eigenvectors since these eigenvalues are simple.

The purely spectral algorithm will produce a cut S with cut quotient q. Recall our assumption about the k smallest eigenvectors and consider any copy of the double tree: since every vertex in

that copy gets the same value for each eigenvector, the same value will be assigned to each vertex in this copy by the algorithm. This implies that S must separate at least two copies of the double tree, and thus must cut at least p edges. 1 There is bisection S of size cp 2 (cut the edge between the roots in each double tree); because 3 n cp 2 and c > k we have jS j > k 23 n 13 . It is obvious that

=

jSj

k 1, the claim in the theorem statement holds if our assumption holds. since we have c To prove that the assumption about the form of the k smallest eigenvectors holds for all G 2 G , 1 we still need to prove that a path graph on cp 2 vertices has k nonzero eigenvalues smaller than 2 for a double tree on p vertices. Recall that the eigenvalues of a path graph on l vertices are i 1 4 sin2 2l for 0 i < l, and that 2 for a double tree on p vertices is greater than or equal to p . Therefore we need to show that ! k < 1: 4 sin2 1 p 2cp 2

Reorganizing, simplifying, and noting that sin(θ) < θ for 0 < θ ≤ π/2, we have

    kπ / (2cp^{1/2}) < 1 / (2p^{1/2}),    or    kπ < c.

Clearly this inequality holds, since c was chosen so that kπ < c. ∎
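The pairwise-sum fact about crossproduct eigenvalues that the proof relies on can be checked numerically. The following sketch (in Python with NumPy; the sizes are illustrative and do not respect the cp^{1/2} relationship from the proof) builds the Laplacian of a double-tree/path crossproduct as a Kronecker sum and compares its spectrum to the pairwise sums of the factor spectra:

```python
import itertools
import numpy as np

def laplacian(n, edges):
    """Dense graph Laplacian from an edge list."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

def double_tree(depth):
    """Two complete binary trees (2**depth - 1 vertices each) with their
    roots joined by an edge: p = 2**(depth + 1) - 2 vertices in all."""
    m = 2**depth - 1
    edges = [(off + i, off + c)
             for off in (0, m)
             for i in range(m)
             for c in (2 * i + 1, 2 * i + 2) if c < m]
    edges.append((0, m))                       # the root-to-root edge
    return 2 * m, edges

p, dt_edges = double_tree(3)                   # p = 14
m = 5                                          # illustrative path length
L_dt = laplacian(p, dt_edges)
L_path = laplacian(m, [(i, i + 1) for i in range(m - 1)])

# Laplacian of the crossproduct (Cartesian product) is the Kronecker sum.
L_prod = np.kron(L_dt, np.eye(m)) + np.kron(np.eye(p), L_path)

# Its spectrum is exactly the multiset of pairwise sums of the factor spectra.
pair_sums = sorted(a + b for a, b in itertools.product(
    np.linalg.eigvalsh(L_dt), np.linalg.eigvalsh(L_path)))
assert np.allclose(np.linalg.eigvalsh(L_prod), pair_sums)
```

The Kronecker-sum identity L(G₁ × G₂) = L₁ ⊗ I + I ⊗ L₂ is what makes the eigenvalues add; the eigenvectors are the corresponding tensor products, which is why each double-tree copy is constant in the eigenvectors used in the proof.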

Note that for the case in which k is constant, we have the following results: the cut quotient q_{S'} will be no better than the best cut quotient q_min produced by looking at all threshold cuts for u₂, and the gap between i(G) and q_min is as large as possible (within a constant factor) with respect to Theorem 3.1. These results can be shown using techniques from the previous section. Thus, 𝒢 is a family of graphs for which using k eigenvectors does not improve the performance of the "best threshold cut" algorithm.

These results also hold for certain variants of the definition of "purely spectral". For example, Chan, Gilbert, and Teng have proposed using the entries of eigenvectors 2 through d + 1 of the Laplacian as spatial coordinates for the corresponding vertices of a graph [CGT94]. The graph is then partitioned using a geometric separator algorithm [MTV91],[GMT95]. If this technique were applied (using a fixed d) to the counterexample graph used in the proof above, all vertices in a particular copy of the double tree would end up with the same coordinates; the geometric algorithm would then have to cut between copies of the double tree, yielding the same bad cuts as in the proof.
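As a concrete illustration of the definition above, here is a minimal sketch of a purely spectral partitioner; the function `score` is a hypothetical placeholder for the arbitrary per-vertex function, and the default choice (the Fiedler-vector entry) reduces to the familiar best-threshold-cut sweep on u₂:

```python
import numpy as np

def best_threshold_cut(L, k, score=lambda X: X[:, 0]):
    """Sketch of a 'purely spectral' partitioner: each vertex gets a value
    computed only from its entries in the k smallest nontrivial Laplacian
    eigenvectors, and the vertices are split at the threshold whose cut
    quotient (edges cut) / min(|S|, |V - S|) is smallest."""
    n = L.shape[0]
    _, V = np.linalg.eigh(L)
    vals = score(V[:, 1:k + 1])          # per-vertex values from k eigenvectors
    order = np.argsort(vals)
    best_q, best_S = np.inf, None
    for t in range(1, n):                # every threshold cut along the sweep
        S, rest = order[:t], order[t:]
        cut = -L[np.ix_(S, rest)].sum()  # off-diagonal L entries are -1 per edge
        q = cut / min(t, n - t)
        if q < best_q:
            best_q, best_S = q, set(S.tolist())
    return best_q, best_S

# A path on 6 vertices: the sweep should cut one edge in the middle.
L = np.diag([1., 2., 2., 2., 2., 1.])
for i in range(5):
    L[i, i + 1] = L[i + 1, i] = -1
q, S = best_threshold_cut(L, k=1)        # q = 1/3, S = one half of the path
```

On a tree-cross-path graph, any such `score` assigns identical values to all vertices of a double-tree copy, so every threshold cut must fall between copies of the double tree; this is the heart of the counterexample.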

6.1 Purely Spectral Algorithms that Use More than a Constant Number of Eigenvectors

There are still a number of open questions about the performance of purely spectral algorithms that use more than a constant number of eigenvectors (in particular, how well can such algorithms do if they are allowed to use all the eigenvectors?). However, just using more than a constant number of eigenvectors is not sufficient to guarantee good separators. In particular, the counterexamples and arguments in the previous sections can be extended to prove the following theorem:

Theorem 6.2 For large enough n and 0 < ε < 1/4, there exists a bounded-degree graph G on n vertices such that any purely spectral algorithm using the n^ε smallest eigenvectors will produce a separator S' for G that has a cut quotient greater than i(G) by at least a factor of n^{1/4 − ε} − 1.

Proof: Once again, we will take G to be the tree-cross-path graph. As in the previous two proofs, we choose p₁ (the double-tree size) and p₂ (the path size) such that the smallest n^ε eigenvalues of the crossproduct are the same as the smallest n^ε eigenvalues of the path graph. Once again, a purely spectral algorithm will separate two adjacent double trees, while a better cut would cut the edges between the roots of the double trees. It remains to choose p₁ and p₂ such that the claim about the smallest eigenvalues of the crossproduct holds, and to show that the resulting cut is bad.

We set p₁ to some arbitrary p, subject to the conditions presented below that p be sufficiently large. Then p₂ = ⌈p^{1/2 + 2ε}⌉. Note that we can choose p large enough such that

    p > p^{1/2 + 2ε} + 1 ≥ p₂.

This implies that p > n^{1/2}, where n = p₁p₂. Note that this allows us to show easily that n^ε < p₂ < p (i.e., that there are sufficiently many eigenvalues from the path graph). We also note that even for fairly small p, p₂ < 2p^{1/2 + 2ε}, which implies that n^ε < 2p^{3ε/2 + 2ε²}.

Now consider the ratio of the cut produced by a purely spectral method versus the cut produced by cutting the double-tree edges, under the assumption that the n^ε smallest eigenvalues are the same as for the path graph. As in previous proofs, this ratio is at least as large as the ratio between the number of edges cut:

    p / (p^{1/2 + 2ε} + 1) > p^{1/2 − 2ε} − 1.

The inequality will hold for p chosen sufficiently large. Using the fact that p is also big enough that p > n^{1/2}, we have

    p^{1/2 − 2ε} − 1 > (n^{1/2})^{1/2 − 2ε} − 1 = n^{1/4 − ε} − 1.

All that is left to prove is the assumption about the smallest eigenvalues. If we let δ = 1/2 − 2ε, then δ > 0 and one of the inequalities above can be written as n^ε < 2p^{(2 − δ)ε}. Recall that the eigenvalues of a path graph on l vertices are 4 sin²(iπ / 2l) for 0 ≤ i < l, and that λ₂ for a double tree on p vertices is greater than or equal to 1/p. We need to show that for p large enough,

    4 sin²( n^ε π / (2p^{1/2 + 2ε}) ) < 1/p.

Reorganizing, simplifying, noting that sin(θ) < θ for 0 < θ ≤ π/2, and applying the inequality above, we want to show that there is a p large enough such that

    n^ε π / (2p^{1/2 + 2ε}) < 2π p^{(2 − δ)ε} / (2p^{1/2 + 2ε}) = 2π p^{−εδ} / (2p^{1/2}) ≤ 1 / (2p^{1/2}),

or

    2π < p^{εδ}.

Clearly this inequality holds for large enough p. ∎
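The two spectral facts both proofs lean on — the closed form for the path-graph eigenvalues and the 1/p lower bound on λ₂ of the double tree — can be sanity-checked numerically for small sizes (a sketch only; the λ₂ bound is the one recalled above, checked here for small instances rather than proved):

```python
import numpy as np

def laplacian(n, edges):
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return L

# Path on l vertices: Laplacian eigenvalues are 4*sin(i*pi/(2l))**2, 0 <= i < l.
l = 9
L_path = laplacian(l, [(i, i + 1) for i in range(l - 1)])
closed_form = np.sort(4 * np.sin(np.arange(l) * np.pi / (2 * l)) ** 2)
assert np.allclose(np.linalg.eigvalsh(L_path), closed_form)

# Double tree on p = 2**(d+1) - 2 vertices: check lambda_2 >= 1/p for small d.
for d in (2, 3):
    m = 2**d - 1
    edges = [(off + i, off + c) for off in (0, m) for i in range(m)
             for c in (2 * i + 1, 2 * i + 2) if c < m]
    edges.append((0, m))                       # join the two roots
    lam2 = np.linalg.eigvalsh(laplacian(2 * m, edges))[1]
    assert lam2 >= 1 / (2 * m)
```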

6.2 A Final Note About Tree-Cross-Path Graphs

While the tree-cross-path graph appears to be very specialized, we can make the following argument that it has some relation to practice: we noted in Section 3 that the Second Interlacing Theorem implies that adding an edge to a graph G gives a new graph G′ with λ′₂ greater than or equal to λ₂ for G. Therefore the preceding results hold if we replace each tree in the double trees with a graph that has a complete binary tree as a spanning tree (the edge between the two graphs will be between the vertices at the roots of the spanning trees, and connections between copies of the "double graphs" in the crossproduct will be between corresponding vertices in the spanning trees). We could therefore construct a three-dimensional finite element mesh from our example that would represent the channel tunnel (the Chunnel) between England and France; the Chunnel is a pair of long tubes with a small connection between them near the center.
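The monotonicity of λ₂ under edge addition is easy to check directly: adding the edge (u, v) adds the positive semidefinite rank-one matrix (e_u − e_v)(e_u − e_v)^T to the Laplacian, so no eigenvalue can decrease. A small sketch with an arbitrary random graph (the graph and the choice of added edge are illustrative):

```python
import numpy as np

def lap(A):
    """Laplacian from a dense adjacency matrix."""
    return np.diag(A.sum(axis=1)) - A

rng = np.random.default_rng(0)
n = 12
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A = A + A.T                                   # a random simple graph

lam2_before = np.linalg.eigvalsh(lap(A))[1]
# Pick any missing edge and add it; the Laplacian gains the PSD rank-one
# term (e_u - e_v)(e_u - e_v)^T, so every eigenvalue is nondecreasing.
u, v = next((i, j) for i in range(n) for j in range(i + 1, n) if A[i, j] == 0)
A[u, v] = A[v, u] = 1.0
lam2_after = np.linalg.eigvalsh(lap(A))[1]
assert lam2_after >= lam2_before - 1e-9
```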

References

[AG84] B. Aspvall and J. R. Gilbert. Graph coloring using eigenvalue decomposition. SIAM Journal on Algebraic and Discrete Methods, 5(4):526–538, December 1984.

[AGM87] N. Alon, Z. Galil, and V. D. Milman. Better expanders and superconcentrators. Journal of Algorithms, 8:337–347, 1987.

[Alo86] N. Alon. Eigenvalues and expanders. Combinatorica, 6(2):83–96, 1986.

[AM85] N. Alon and V. D. Milman. λ₁, isoperimetric inequalities for graphs, and superconcentrators. Journal of Combinatorial Theory, Series B, 38:73–88, 1985.

[Bar82] Earl R. Barnes. An algorithm for partitioning the nodes of a graph. SIAM J. Alg. Disc. Meth., 3(4):541–550, December 1982.

[Bop87] R. Boppana. Eigenvalues and graph bisection: An average-case analysis. In 28th Annual Symposium on Foundations of Computer Science, pages 280–285, Los Angeles, October 1987. IEEE.

[CDS79] D. M. Cvetković, M. Doob, and H. Sachs. Spectra of Graphs. Academic Press, New York, 1979.

[CGT94] Tony F. Chan, John R. Gilbert, and Shang-Hua Teng. Geometric spectral partitioning. Technical Report CSL-94-15, Xerox PARC, July 1994. Revised January 1995.

[DH73] W. E. Donath and A. J. Hoffman. Lower bounds for the partitioning of graphs. IBM J. Res. Develop., 17:420–425, 1973.

[Fie73] M. Fiedler. Algebraic connectivity of graphs. Czechoslovak Mathematical Journal, 23(98):298–305, 1973.

[Fie75] M. Fiedler. A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czechoslovak Mathematical Journal, 25(100):619–633, 1975.

[GJ79] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-completeness. Freeman, San Francisco, 1979.

[GL89] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, Baltimore, 1989.

[GMT95] John Gilbert, G. L. Miller, and Shang-Hua Teng. Geometric mesh partitioning: Implementation and experiments. In 9th International Parallel Processing Symposium, Santa Barbara, April 1995. IEEE. To appear.

[HK92] Lars Hagen and Andrew B. Kahng. New spectral methods for ratio cut partitioning and clustering. IEEE Transactions on Computer-Aided Design, 11(9):1074–1085, September 1992.

[LR88] T. Leighton and S. Rao. An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms. In 29th Annual Symposium on the Foundations of Computer Science, pages 422–431. IEEE Computer Society, October 1988.

[Moh88] B. Mohar. The Laplacian spectrum of graphs. In Sixth International Conference on the Theory and Applications of Graphs, pages 871–898, 1988.

[Moh89] Bojan Mohar. Isoperimetric numbers of graphs. Journal of Combinatorial Theory, Series B, 47:274–291, 1989.

[MTV91] Gary L. Miller, Shang-Hua Teng, and Stephen A. Vavasis. A unified geometric approach to graph separators. In 32nd Annual Symposium on Foundations of Computer Science, pages 538–547, Puerto Rico, October 1991. IEEE.

[PSL90] Alex Pothen, Horst D. Simon, and Kang-Pu Liou. Partitioning sparse matrices with eigenvectors of graphs. SIAM J. Matrix Anal. Appl., 11(3):430–452, July 1990.

[Sim91] H. D. Simon. Partitioning of unstructured problems for parallel processing. Computing Systems in Engineering, 2(2/3):135–148, 1991.

[Wil65] J. H. Wilkinson. The Algebraic Eigenvalue Problem. Oxford University Press, Oxford, 1965.

[YG73] D. M. Young and R. T. Gregory. A Survey of Numerical Mathematics, volume II of Addison-Wesley Series in Mathematics. Addison-Wesley, Reading, MA, 1973.
