
Inductive Counting Below LOGSPACE*

Carsten Damm1 and Markus Holzer2

1 FB IV - Informatik, Universität Trier, D-54286 Trier, Germany
2 Institut für Informatik, Technische Universität München, Arcisstr. 21, D-80290 München, Germany

Abstract. We apply the inductive counting technique to nondeterministic branching programs and prove that complementation on this model can be done without increasing the width of the branching programs too much. This shows that for an arbitrary space bound s(n), the class of languages accepted by nonuniform nondeterministic O(s(n)) space bounded Turing machines is closed under complementation. As a consequence we obtain for arbitrary space bounds s(n) that the alternation hierarchy of nonuniform O(s(n)) space bounded Turing machines collapses to its first level. This improves the previously known result of Immerman [6] and Szelepcsényi [12] to space bounds of order o(log n) in the nonuniform setting. This reveals a strong difference to the relations between the corresponding uniform complexity classes, since very recently it has been proved that in the uniform case the alternating space hierarchy does not collapse for sublogarithmic space bounds [3, 5, 9].

1 Introduction

Independently, Immerman [6] and Szelepcsényi [12] proved that for space bounds s(n) ≥ log n the class NSpace(s(n)) is closed under complement. In the proof method, which is known as "inductive counting", the bound O(log n) is crucial, because it allows one to implement a counter for the number of accessible configurations. It is not known, however, whether the result remains true for space bounds below log n. A consequence of the Immerman-Szelepcsényi result is the collapse of the alternating space hierarchy to its first level for space bounds s(n) ≥ log n. Very recently it has been proved that the alternating space hierarchy does not collapse for sublogarithmic space bounds, i.e., bounds between Ω(log log n) and o(log n) [3, 5, 9]. The key argument here is uniformity: the Turing machines perform the same algorithm on any input. This allows one to separate the classes by involved crossing sequence arguments. Interestingly, the technique did not allow one to prove NSpace(s(n)) ≠ co-NSpace(s(n)) for sublogarithmic space bounds. Can one throw away both obstacles (the implementation of a configuration counter and uniformity) to show closure under complement for a nonuniform

* Partially supported by the Deutsche Forschungsgemeinschaft grant DFG La 618/1-1.

model of computation? We show that this is possible. We perform inductive counting on nondeterministic branching programs without increasing the width of the programs too much. This is another application of the inductive counting technique to a circuit-like model (see Borodin et al. [2]). We prove further that width-restricted branching programs are equivalent in computational power to a variant of nonuniform Turing machines. This proves NSpace(s(n)) = co-NSpace(s(n)) for sublogarithmic space bounds in a nonuniform setting. We conclude also that the corresponding alternating hierarchy collapses to its first level. The nonuniform Turing machines we study are generalizations of the nonuniform finite automata introduced by Barrington [1] and, in case s(n) ≥ log n, coincide with the usual Karp-Lipton model of nonuniformity [7]. In the conclusion we relate our work to previous research.

2 Preliminaries

It is not obvious how to define nonuniformity for classes below LOGSPACE. The general scheme is to provide Turing machines with extra information on the algorithm to use based on the length of the input. This is usually done using advices: when started on inputs of length n the machine "magically" receives an advice describing the algorithm. The crucial point is how complex this algorithm may be. If we allow the input head movements to depend on the bits being read, the machine has, via the head's position, (limited) access to log n bits of memory. This makes the machine undesirably strong: it can even be shown that such a type of machine could solve LOGSPACE-complete problems using only O(1) space on the work tape [10]. The solution is to follow the lines of Barrington [1] and restrict the machine to work obliviously: the input head position after t steps depends only on the length of the input, not on the bits read. The drawback is that this model seems to be incomparable to the corresponding uniform model [11]. On the other hand it can be shown that obliviousness is no restriction when O(log n) space on the work tape is available; here we have complete correspondence to the usual Karp-Lipton model of nonuniformity [7].

A nonuniform s(n)-space bounded Turing machine is a machine with finite control, a two-way read-only input tape, a two-way s(n)-bounded storage tape, and a one-way read-only program tape. On the program tape are instructions of two types: to move the input head, or to change the machine state and to overwrite the whole content of the storage tape based on the current state, the content, and the bit being read from the input tape. The sequence of instructions written on the program tape, i.e., the program, depends on the length of the input only. Let Q be a finite set of states. Further let {0,1}^{≤s} denote the set of 0-1-strings of length at most s (including the empty word).
Instructions of the second type can be specified by a pair (I_0, I_1) where each I_b consists of two functions
- state: Q × {0,1}^{≤s(n)} → Q,
- content: Q × {0,1}^{≤s(n)} → {0,1}^{≤s(n)}.

The machine starts on an input of length n with program P_n written on its program tape. The work tape is empty and all heads are in their leftmost positions. In each step the machine first looks at the type of instruction written on the program tape. If it is a move instruction, it moves its input head accordingly. Otherwise let the instruction be (I_0, I_1). The machine now reads the input at the position of the input head. Then, according to whether the symbol read is 0 or 1, the next command to be executed is I_0 or I_1. The machine processes the entire program tape, and an input string is accepted if the Turing machine eventually stops in an accepting state. Thus, the run time of the machine on inputs of length n is the number of instructions on P_n. Observe that the storage space of this Turing machine is only used for doing string matching with strings on the program tape. In the nondeterministic version of this model the functions content and state assign sets of values instead of single values. Observe that Lange [8] has introduced a computational model similar to our nonuniform space-bounded Turing machines. This model, the so-called protocol machine, was used to describe the logarithmic alternation hierarchy.

The class of languages accepted by deterministic nonuniform Turing machines within space O(s(n)) is denoted by DSpace(s(n))[nonuniform]. We denote the class of languages recognized by nondeterministic (co-nondeterministic, respectively) nonuniform Turing machines within space O(s(n)) by NSpace(s(n))[nonuniform] (co-NSpace(s(n))[nonuniform], respectively).

A nondeterministic branching program is a labeled finite directed acyclic graph with a distinguished start node. The non-terminal nodes are either labeled with Boolean variables (test nodes) or with the symbol ∃ (split nodes). If all non-terminal nodes are labeled only with Boolean variables, the branching program is called deterministic.
The terminal nodes and the edges are labeled 0 or 1 in such a way that each inner node q has exactly one outgoing 0-edge and exactly one outgoing 1-edge. The successors of q along these edges are denoted q_0 and q_1. Further we assume that the branching program is leveled, i.e., all paths connecting two nodes are of the same length. Any setting of the input variables determines a set of computation paths from the start node to the terminal nodes of the graph: at test nodes the computation proceeds along the proper edge, and at ∃-nodes the computation splits. The input is accepted by the branching program if at least one of the terminal nodes reached along these paths is labeled 1. The depth of the branching program is the length of the longest path from the start node to one of the terminal nodes. For any number r not larger than the depth of the branching program, the set of nodes L_r at distance r from the start node is called the rth level of the branching program. The maximal cardinality of the sets L_r is called the width of the branching program. A single branching program computes a single Boolean function. Sequences of branching programs can be regarded as language recognizers: let B = (B_n) be a sequence of deterministic (nondeterministic) branching programs where B_n is a branching program on n inputs that computes the function f_n. The language accepted by B is the set ⋃_{n=1}^{∞} f_n^{-1}(1).
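The acceptance condition just described can be made concrete with a minimal sketch. The node encoding below (dicts with 'var'/'split' and 'succ0'/'succ1' fields) is our own assumption; the sketch evaluates a leveled nondeterministic branching program by tracking the set of nodes reachable on each level.

```python
# Evaluate a leveled nondeterministic branching program on input x.
# levels: list of levels; each inner node is a dict with either
# 'var' (an index into x, for test nodes) or 'split': True (for
# exists-nodes), plus 'succ0'/'succ1' (indices into the next level);
# terminal nodes carry 'label' (0 or 1) and sit on the final level.

def evaluate(levels, x):
    frontier = {0}                          # the start node on level 0
    for lvl in levels[:-1]:
        nxt = set()
        for i in frontier:
            node = lvl[i]
            if node.get('split'):           # exists-node: follow both edges
                nxt |= {node['succ0'], node['succ1']}
            else:                           # test node: follow the b-edge
                nxt.add(node['succ1'] if x[node['var']] else node['succ0'])
        frontier = nxt
    # accept iff some reachable terminal node is labeled 1
    return any(levels[-1][i]['label'] == 1 for i in frontier)
```

Note that the frontier set can hold up to W nodes, where W is the width; this is exactly the quantity the width bounds in the simulations below control.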

3 Simulation between Turing machines and branching programs

Theorem 1.
1. Let M be a deterministic (nondeterministic, respectively) nonuniform Turing machine with set of states Q and time and space bounds t(n) and s(n). Then there is a sequence B = (B_n) of deterministic (nondeterministic, respectively) branching programs of depth 2n·t(n) and width |Q|·(2^{s(n)+1} − 1) that accepts the same language.
2. Let B = (B_n) be a sequence of deterministic (nondeterministic, respectively) branching programs of depth t(n) and width 2^{s(n)+1} − 1. Then there is a deterministic (nondeterministic, respectively) nonuniform Turing machine M with time and space bounds 2n·t(n) and s(n) and 3 states that accepts the same language.

Proof. We describe here only the simulation for the deterministic case; obviously the idea works also for the nondeterministic case.

1. On inputs of length n let P_n be the sequence of instructions on the program tape of M. Without loss of generality we assume that P_n forces the input head to always sweep from left to right and back again across the input tape. This leads to an increase in time by a factor of 2n. The nodes of B_n are tuples of the form v = (i, w, q) ∈ {1, ..., t(n)} × {0,1}^{≤s(n)} × Q. If q is accepting or rejecting, v is labeled 1 or 0. Otherwise it is labeled x_m if M in step i reads the mth input position. In this case, for b ∈ {0,1}, let I_i^b = (content_b, state_b). Then in the branching program B_n there is an edge from v to the node (i+1, content_b(q, w), state_b(q, w)) that is labeled with b. It is not hard to see that B_n accepts an input if and only if M does.

2. Let Q = {q_read, q_+, q_-}. We assume that the order of the variables in B_n is x_1, x_2, ..., x_{n-1}, x_n, x_{n-1}, ..., x_2, x_1, x_2, ... and so on. This introduces an increase in depth by a factor of at most 2n. Let the nodes of B_n be arranged in a 2n·t(n) × (2^{s(n)+1} − 1) array whose columns are indexed by 0-1-strings of length at most s(n). The nodes of the ith level determine the instructions I_i^0 and I_i^1 of P_n in an obvious way: let b ∈ {0,1}. If a b-edge in B_n leads from (i, w) to (i+1, w'), then I_i^b forces M to overwrite the string w on the work tape by w' in step i. The head is moved accordingly, and M's state changes from q_read to q_read if (i+1, w') is an inner node of B_n, and to q_+ or q_- if it is an accepting or rejecting node. □

4 Complementation based on counting

Suppose we have a nondeterministic branching program B that has only two sinks: an accepting one and a rejecting one. Let both sinks be on the tth level of B. Let L_1, L_2, ..., L_t be the levels of B. For any level i of B let count_B(i, x) be the number of nodes on level i that can be reached using input x. Clearly 1 ≤ count_B(t, x) ≤ 2.

If we knew count_B(t, x) in advance, we could construct a branching program that computes the complement of the function computed by B as follows: if count_B(t, x) = 2 then reject (because in this case both sinks are reached, hence there is a computation in B that reaches the accepting sink), and if count_B(t, x) = 1 simulate B but accept if B rejects and reject if B accepts (because in this case either all computations in B reject or all computations in B accept).

To formulate this idea in branching programs, suppose we had already constructed a branching program Count_B with the following properties: Count_B has 3 sinks s_0, s_1, and s_2. Sink s_0 is labeled rejecting, and on any input x, if count_B(t, x) = 1 then there is a computation in Count_B on x that reaches s_1 and no computation in Count_B reaches s_2, and if count_B(t, x) = 2 then there is a computation in Count_B on x that reaches s_2 and no computation in Count_B reaches s_1. Based on Count_B, a branching program that computes the negation of the function computed by B can be constructed as follows: label the nodes s_0 and s_2 in Count_B rejecting (this results in branching program Count'_B), and identify s_1 with the source of B while relabeling the accepting node in B as rejecting and the rejecting node as accepting (branching program B').
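For concreteness, the complementation rule can be sketched as follows. This is our own illustration using a toy node encoding (dicts with 'var'/'split', 'succ0'/'succ1', and 'label' fields); it computes count_B(t, x) by explicitly tracking reachable sets, which a width-bounded branching program cannot afford to do. Realizing the count within bounded width is exactly what the Count_B construction achieves.

```python
# Sketch: complement a two-sink leveled nondeterministic branching
# program, given count_B(t, x).  Here the count is obtained by explicit
# frontier reachability (an assumption of this sketch, not the paper's
# width-efficient construction).

def reachable_final(levels, x):
    """Set of final-level nodes reachable on input x."""
    frontier = {0}
    for lvl in levels[:-1]:
        nxt = set()
        for i in frontier:
            n = lvl[i]
            if n.get('split'):
                nxt |= {n['succ0'], n['succ1']}
            else:
                nxt.add(n['succ1'] if x[n['var']] else n['succ0'])
        frontier = nxt
    return frontier

def complement(levels, x):
    fin = reachable_final(levels, x)        # count_B(t, x) = len(fin)
    if len(fin) == 2:
        return False                        # accepting sink is reached: reject
    (i,) = fin                              # exactly one sink is reached
    return levels[-1][i]['label'] == 0      # all paths agree: flip B's answer
```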

Fig. 1. The branching programs B, Count_B, and the branching program that computes the complement of B.

Obviously, the constructed branching program computes the negation of the function computed by B.

We construct Count_B inductively in stages. Let L_r be the set of nodes on level r of the branching program B. For r > 1, the rth stage Count_{B,r} will contain nodes that "compute" the numbers count_B(r, x). These are the sinks s_0^r, s_1^r, ..., s_{|L_r|}^r. The node s_0^r is labeled rejecting, and in Count_{B,r} sink s_i^r, i ≠ 0, is reachable via x if and only if i = count_B(r, x). The branching program Count_{B,1} consists of the single node s_1^1. Suppose Count_{B,r-1} has already been constructed. Count_{B,r} will consist of Count_{B,r-1}

as a subprogram at the source and a linking part S_r. The linking part, which consists of the "wiring" from the nodes s_i^{r-1} to the nodes s_j^r, will be described in the next section. The branching program Count_B finally is the program Count_{B,t}, where the sinks s_i^t are identified with s_i for i ∈ {0, 1, 2}.

5 Inductive counting for branching programs

Based on the idea of Immerman [6] and Szelepcsényi [12] we will describe an algorithm that, given the correct number of nodes on level r − 1 of B reachable via x, computes the number of nodes on level r reachable via x. To aid intuition, instead of levels r − 1 and r we will speak of the last and the current level. We first describe this algorithm in a Turing machine-like way and later describe a branching program implementation.

The machine simulates the branching program on input x several times. While doing this it holds a counter c_current for the number of nodes on the current level that have already been discovered to be reachable. Having stored the already computed number c_last of nodes reachable on the last level, the machine, for any node v on the current level, tries to guess computations of B that reach each reachable node u on the last level, and checks whether v is reachable from u in one step. The first part of this task is solved nondeterministically and the second part deterministically. In case v is reached, c_current is incremented. The key point is that the machine can always check whether its guesses were right. This is done using a counter c_aux of nodes u on the last level that have been reached during the simulations. If after examining all nodes u on the last level c_aux = c_last, then during this examination all c_last nodes reachable on the last level have been reached by computations. In that case c_current holds the correct number of nodes reachable on the current level, and the machine proceeds with the next level after replacing c_last by c_current and initializing c_current and c_aux with 0. If c_aux < c_last, the machine rejects, because not all reachable nodes on the last level have been reached by computations. So the only possibility to end up without rejecting is, for every reachable node u on the last level, to guess a computation of B that reaches it.
Hence the machine either rejects or it stops with c_current holding the correct number of nodes on the current level that can be reached via x.

The linking part S_r is the branching program implementation of the above idea. Before we describe this part in detail we need some notation. For any nodes u ∈ L_{r-1} and v ∈ L_r, let B_{u,v} be a nondeterministic branching program with three sinks s_{¬u}, s_{u,¬v}, and s_{u,v} that is obtained from B in the following way: cut off B at level r, identify all nodes on level r − 1 that are different from u with s_{¬u} and delete all edges emerging from these nodes, identify all nodes on level r that are different from v with s_{u,¬v}, and identify v with s_{u,v}. Observe that if the width of B is bounded by W, the counters c_current, c_last, and c_aux in each step of the computation hold values between 0 and W. To model the algorithm we have to store all information about the status of the computation in the nodes of the program. Therefore the nodes of S_r are tuples containing

integers i, j, k ∈ {0, 1, ..., W} that stand for the current values of c_last, c_current, and c_aux, respectively. For any (i, j, k) ∈ {0, 1, ..., W}^3 and any (u, v) ∈ L_{r-1} × L_r, the part S_r will contain a node [i, j, k, u, v] that is the source of a copy of B_{u,v}. Let L_{r-1} = {u_1, u_2, ..., u_W} and L_r = {v_1, v_2, ..., v_W}. Consider a particular node of the form [i, j, k, u_l, v_m]. We distinguish four cases:

1. m ≠ W, l ≠ W
- s_{¬u} is identified with [i, j, k, u_{l+1}, v_m] (the next node on level L_{r-1} will be tested on the current node v_m on level L_r)
- s_{u,¬v} is identified with [i, j, k+1, u_{l+1}, v_m] (counter c_aux is incremented; the next node on level L_{r-1} will be tested on the current node v_m on level L_r)
- s_{u,v} is identified with [i, j+1, 0, u_1, v_{m+1}] (counter c_current is incremented; the first node on level L_{r-1} will be tested on the next node on level L_r)

2. m ≠ W, l = W
- s_{¬u} is identified with [i, j, 0, u_1, v_{m+1}] if i = k; otherwise this node is labeled rejecting (in the first case all reachable nodes on level L_{r-1} have been reached, and the first node on level L_{r-1} will be tested on the next node on level L_r; in the latter case not all nodes reachable on level L_{r-1} have been traversed during the simulations of B)
- s_{u,¬v} is identified with [i, j, 0, u_1, v_{m+1}] if i = k + 1; otherwise this node is labeled rejecting (similar reasoning)
- s_{u,v} is identified with [i, j+1, 0, u_1, v_{m+1}] (counter c_current is incremented; the first node on level L_{r-1} will be tested on the next node on level L_r)

3. m = W, l ≠ W
- s_{¬u} is identified with [i, j, k, u_{l+1}, v_m] (the next node on level L_{r-1} will be tested on the current node v_m on level L_r)
- s_{u,¬v} is identified with [i, j, k+1, u_{l+1}, v_m] (counter c_aux is incremented; the next node on level L_{r-1} will be tested on the current node v_m on level L_r)
- s_{u,v} is identified with s_{j+1}^r (node v_m on level L_r is reachable and j + 1 is the correct number of nodes reachable on level L_r; switch to the next level)

4. m = W, l = W
- s_{¬u} is identified with s_j^r if i = k; otherwise this node is labeled rejecting (in the first case all nodes reachable on level L_{r-1} have been reached and j is the correct number of nodes reachable on level L_r, so we switch to the next level; in the latter case not all nodes reachable on level L_{r-1} have been traversed during the simulation of B)
- s_{u,¬v} is identified with s_j^r if i = k + 1; otherwise this node is labeled rejecting (similar reasoning)
- s_{u,v} is identified with s_{j+1}^r if i = k + 1; otherwise this node is labeled rejecting (similar reasoning)

The above construction is based on the assumption that any level of B has the same number W of nodes, but it is clear how to adapt it to varying sizes of the levels. To complete the construction of S_r we rename the nodes [i, 0, 0, u_1, v_1] into s_i^{r-1}. All rejecting nodes are identified with s_0^r. The construction is illustrated for cases 1 and 2 in Figure 2. Inductively one can now prove that each Count_{B,r} has the properties required in Section 4. Hence the construction of Count_B is complete. Further one observes that the width and the length, respectively, of Count_B are bounded by O(W^6) and O(t^2 W^2). The above construction together with the construction in Section 4 yields:

Theorem 2. Let f be a function computed by a nondeterministic branching program of width W and length t. Then there is a nondeterministic branching program of width O(W^6) and length O(t^2 W^2) that computes ¬f. □
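The counting loop behind this construction can be sketched procedurally. The sketch below is our own illustration with an assumed toy node encoding; where the machine would nondeterministically guess a computation of B reaching a node u on the last level, we substitute an explicit reachability check, whereas the paper wires the guessing into the linking part S_r itself.

```python
# Inductive counting on a leveled nondeterministic branching program:
# given c_last = number of reachable nodes on level r-1, compute the
# number of reachable nodes on level r, verifying via c_aux that every
# reachable node on the last level was accounted for.

def successors(node, x):
    """Next-level nodes reached from an inner node in one step on x."""
    if node.get('split'):
        return {node['succ0'], node['succ1']}
    return {node['succ1'] if x[node['var']] else node['succ0']}

def reach(levels, x, r, u):
    """Stand-in for 'guess a computation of B on x reaching u on level r'."""
    frontier = {0}
    for lvl in levels[:r]:
        nxt = set()
        for i in frontier:
            nxt |= successors(lvl[i], x)
        frontier = nxt
    return u in frontier

def count_level(levels, x, r, c_last):
    c_current = 0
    for v in range(len(levels[r])):
        c_aux, v_reached = 0, False
        for u in range(len(levels[r - 1])):
            if reach(levels, x, r - 1, u):          # "guessed" path to u
                c_aux += 1
                if v in successors(levels[r - 1][u], x):
                    v_reached = True
        if c_aux != c_last:                         # some guess failed
            raise ValueError('reject')
        if v_reached:
            c_current += 1
    return c_current

def inductive_count(levels, x):
    c = 1                                           # level 0: only the source
    for r in range(1, len(levels)):
        c = count_level(levels, x, r, c)
    return c                                        # = count_B(t, x)
```

Applied to the final level of a two-sink program, the returned count (1 or 2) is exactly the value on which the complementation of Section 4 branches.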

6 Conclusions

From Theorems 1 and 2 one obtains:

Theorem 3. For any space bound s(n) it holds that NSpace(s(n))[nonuniform] = co-NSpace(s(n))[nonuniform]. □

In Section 2 we introduced nonuniform Turing machines that work nondeterministically or co-nondeterministically. Both concepts can be generalized in the usual way to alternating computations (see Chandra et al. [4]). Consider a partition of the set of states into universal and existential ones. The configurations (head positions and tape contents) of the machine can now inductively be marked as accepting or rejecting in the following way: a terminal configuration is marked according to whether its state is accepting or rejecting. A nonterminal configuration with a universal (existential) state is marked accepting if all (one of) its successor configurations are (is) marked accepting; otherwise it is marked rejecting. The input is accepted iff the initial configuration is marked accepting during this process. The class of languages accepted by O(s(n)) space bounded alternating nonuniform Turing machines is denoted ASpace(s(n))[nonuniform]. A nonuniform Turing machine is said to be a Σ_k-machine (Π_k-machine) if its initial state is existential (universal) and for any computation of the machine there are at most k − 1 alternations between existential and universal configurations. The set of languages accepted by O(s(n)) space bounded nonuniform Σ_k-machines (Π_k-machines) is denoted

Fig. 2. The cases 1 and 2 of the linking part S_r and the sub-branching program B_{u,v}.

by Σ_kSpace(s(n))[nonuniform] (Π_kSpace(s(n))[nonuniform]). In a similar way alternating branching programs are defined. Clearly Theorem 1 carries over to this case. Using the same technique as in Theorem 2 one can inductively reduce the number of alternations in alternating branching programs without increasing the width of the programs too much. Hence together with the simulation result we obtain:


Theorem 4. For any k > 0 and any s(n) it holds that NSpace(s(n))[nonuniform] = Σ_kSpace(s(n))[nonuniform] = Π_kSpace(s(n))[nonuniform]. □

Observe that this is contrary to the behaviour of the corresponding uniform complexity classes in the case of sublogarithmic space bounds. It has been proved by von Braunmühl et al. [3], Geffert [5], and Liśkiewicz et al. [9] that for space bounds between Ω(log log n) and o(log n) the alternating space hierarchy is infinite.

Acknowledgments

We would like to thank Klaus-Jörn Lange for drawing our attention to the subject and for his encouragement. Thanks also to David Mix Barrington, Neil Immerman, Stasys Jukna, and Rüdiger Reischuk for useful remarks and fruitful discussions.

References

1. David A. Mix Barrington. Bounded width polynomial size branching programs recognize exactly those languages in NC^1. Journal of Computer and System Sciences, 38:150-164, 1989.
2. Allen Borodin, Stephen A. Cook, Patrick W. Dymond, Walter L. Ruzzo, and Martin Tompa. Two applications of inductive counting for complementation problems. SIAM Journal on Computing, 18:559-578, 1989.
3. Burchard von Braunmühl, Romain Gengler, and Robert Rettinger. The alternation hierarchy for sublogarithmic space is infinite. Computational Complexity, 3:207-230, 1993.
4. Ashok K. Chandra, Dexter C. Kozen, and Larry J. Stockmeyer. Alternation. Journal of the ACM, 28:114-133, 1981.
5. Viliam Geffert. A hierarchy that does not collapse: Alternations in low level space. RAIRO Theoretical Informatics and Applications, to appear.
6. Neil Immerman. Nondeterministic space is closed under complementation. SIAM Journal on Computing, 17:935-938, 1988.
7. Richard M. Karp and Richard J. Lipton. Turing machines that take advice. L'Enseignement Mathématique, 28:191-209, 1982.
8. K.-J. Lange. Two characterizations of the logarithmic alternation hierarchy. In Proceedings of the 12th Conference on Mathematical Foundations of Computer Science, number 233 in LNCS, pages 518-526. Springer, August 1986.
9. Maciej Liśkiewicz and Rüdiger Reischuk. The sublogarithmic space hierarchy is infinite. Technical report, Technische Hochschule Darmstadt, Institut für Theoretische Informatik, Darmstadt, January 1993.
10. David Mix Barrington and Neil Immerman. Personal communication, 1994.
11. Rüdiger Reischuk. Personal communication, 1994.
12. Róbert Szelepcsényi. The method of forced enumeration for nondeterministic automata. Acta Informatica, 26:279-284, 1988.
