ON THE COMPLEXITY OF COCKETT-SEELY POLARIZED GAMES
J. R. B. COCKETT AND C. A. PASTRO

arXiv:math/0402441v2 [math.CT] 29 Feb 2004

ABSTRACT. In this paper the complexity of provability of polarized additive, multiplicative, and exponential formulas in the (initial) Cockett-Seely polarized game logic is discussed. The complexity is ultimately based on the complexity of finding a strategy in a formula, which is, for polarized additive formulas, in the worst case linear in their size. Having a proof of a sequent is equivalent to having a strategy for the internal-hom object. In order to show that the internal-hom object can have size exponentially larger than the formulas of the original sequent we develop techniques for calculating the size of the multiplicative formulas. The structure of the internal-hom object can be exploited and, using dynamic programming techniques, one can reduce the cost of finding a strategy in such a formula to the order of the product of the sizes of the original formulas. The use of dynamic techniques motivates the consideration of games as acyclic graphs, and we show how to calculate the size of these graph games for the multiplicative and additive fragment and, thus, the cost of determining their provability using this dynamic programming approach. The final section of the paper points out that, despite the apparent complexity of the formulas, there is, for the initial polarized logic with all the connectives (additives, multiplicatives, and exponentials), a way of determining provability which is linear in the size of the formulas.

Introduction
It is a natural question in a logic to ask how hard it is to decide whether a sequent is provable. This paper examines this question for the Cockett-Seely polarized game logic [CS04], which we will henceforth refer to simply as polarized game logic. We shall show that this question can be answered in linear time. The decision procedure consists of translating the given sequent in the polarized game logic into a single polarized game using “negation” and the “mixed par” operation. As this is an internal-hom game, determining whether a strategy exists in this game corresponds to determining whether a proof exists for the sequent. To determine whether a strategy exists in a game requires, in the worst case, time proportional to the number of edges in the tree representing that game. Thus, it becomes important to estimate the size of this internal-hom game. While, in general, there is no simple expression for this size, it is often possible to estimate the “uniform size” of an internal-hom game. The uniform size is defined as the number of binary logical operations required, in the worst case, to determine whether a strategy is present. This makes it
Research partially supported by NSERC, Canada. Diagrams were produced with the XY-pic package of K. Rose and R. Moore and inferences with M. Tatsuta's proof.sty. 2000 Mathematics Subject Classification: 03F20, 03F52, 18A15. © J. R. B. Cockett and C. A. Pastro, 2004. Permission to copy for private use granted.


possible to get a fairly precise estimate for the size of certain internal-hom games. These estimates allow us to demonstrate that there can be an exponential increase in the size of the internal-hom game over the original games. Fortunately it is possible to exploit the structure of the internal-hom object and, using dynamic programming techniques, reduce the cost of finding a strategy in such an object to be bounded by the product of the sizes of the original game trees. This approach involves regarding games as if they are acyclic directed graphs, a direction already pioneered by Hyland and Schalk [HS02]. When this is done one can give precise expressions for the size of formulas built using the multiplicative connectives and, thus, for the cost of the dynamic programming approach to finding a strategy. The categorical semantics of polarized games is outside the scope of this paper. In order to keep the paper largely self-contained we do, however, provide a description and brief discussion of the basic concepts. The reader who is interested in a more complete story should consult the paper of Cockett and Seely [CS04]. Such a course of action is recommended as, in the last section of this paper, we use this semantics, albeit in a very simple way, to show that for this model one can decide provability in linear time for the full logic with polarized additives, multiplicatives, and exponentials. Before starting this story we should emphasize that the multiplicative structure of this model is not the free structure. This model has “soft” multiplicative and exponential structures (see [CS04]). Provability in free polarized multiplicative structures has the same complexity as provability in the corresponding multiplicative fragment of linear logic (as any multiplicative proof net can be polarized). For multiplicative linear logic with atoms provability is known to be NP-complete [Kan92], and even without atoms this decision problem is just as hard: the multiplicative units can be used to simulate the effect of having atoms [LW94]. In the presence of atoms, additives, and exponentials (or even non-logical axioms) these provability problems become undecidable for ordinary linear logic [LMSS92]. The results we obtain are somewhat curious as they apply to the finite fragment of precisely the same model whose depolarization was used by Abramsky and Jagadeesan [AJ94] to obtain full completeness for the (iso-mix) multiplicative fragment of linear logic. This seems to highlight the important role units play in providing these logics with their underlying complexity. Note also that, using the transformation to Laurent's version of polarized logic [Lau02], discussed in [CS04], this model also provides, through the coKleisli construction for the !( ) comonad, a non-trivial model of intuitionistic logic which has linear time provability. It is worth recalling that this is, for the free logic, a PSPACE-complete problem [Stat79]. The outline of this paper is as follows. In Section 1 polarized games are introduced. In Section 2 polarized game logic is introduced. Section 3 describes the multiplicative and exponential structure on polarized games. Section 4 shows how provability can be turned into the question of finding a strategy in the internal-hom object. Section 5 discusses the size of the multiplicative formulas including internal-hom objects. In Section 6, we discuss the relationship between the dynamic programming solution to determining provability

and graph games. In particular, we describe simple formulas for the size of multiplicatives in graph games. Finally, in Section 7, we show that provability, despite the evident complexity of the underlying additive structures, can be evaluated for this model in linear time for the complete additive, multiplicative, and exponential logic.

1. Polarized games
A polarized game is a finite 2-player input-output game of the sort studied by Abramsky and Jagadeesan [AJ94] and Hyland in [Hy97]. The two players are referred to as the “opponent” and the “player”. Games are often discussed using the terminology of processes; in these terms a game is a protocol for interaction while a map between games is a process which interacts through these protocols on the “input” (or domain) and “output” (or codomain) channels. On the output channel, the opponent messages come from the “environment” and the player messages are generated by the process or “system”. On the input channel, however, the roles are reversed: the opponent messages are generated by the “system” and the player messages come from the “environment.” There are two sorts of games: those in which the opponent has the first move and those in which the player has the first move. The games introduced in this paper are the games of the initial polarized game category [CS04].

Games are represented in a number of ways. An opponent game is denoted by

    O = (b1 : P1, . . . , bm : Pm)

drawn as a tree whose root is a white (opponent) node with branches b1, . . . , bm leading to the trees of P1, . . . , Pm, where I = {1, . . . , m} indexes the branches and each Pi is, inductively, a player game. A player game is denoted by

    P = {a1 : O1, . . . , an : On}

drawn as a tree whose root is a black (player) node with branches a1, . . . , an leading to the trees of O1, . . . , On, where J = {1, . . . , n} and each Oj is, inductively, an opponent game. We allow (opponent and player) games with empty index sets; this gives two atomic games at which the inductive construction of games begins:

    1 = ( )   (a single white node)        0 = { }   (a single black node)
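For readers who like to experiment, the inductive definition above can be transcribed directly into a small datatype. The following Haskell sketch is an illustration of ours, not part of the original development; branch labels are omitted, since only the tree shape matters for the counting arguments later in the paper.

```haskell
-- A minimal sketch of polarized game trees, with branch labels omitted.
data OGame = O [PGame]   -- opponent node: the opponent chooses one of the branches
  deriving (Eq, Show)
data PGame = P [OGame]   -- player node: the player chooses one of the branches
  deriving (Eq, Show)

-- The two atomic games at which the induction starts.
atomic1 :: OGame         -- 1 = ( ), the empty opponent game
atomic1 = O []
atomic0 :: PGame         -- 0 = { }, the empty player game
atomic0 = P []

-- The dual flips black and white nodes while keeping the tree shape.
dualO :: OGame -> PGame
dualO (O ps) = P (map dualP ps)
dualP :: PGame -> OGame
dualP (P os) = O (map dualO os)
```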

1.1. Example. A typical game (without branch labels) looks like:

    [game tree diagram not reproduced: an opponent-rooted tree alternating white (opponent) and black (player) nodes]

Notice that this is an opponent start game.

The dual of a game G, denoted Ḡ, is defined by

    Ō = {bi : P̄i | i ∈ I}   for O = (bi : Pi | i ∈ I)        P̄ = (aj : Ōj | j ∈ J)   for P = {aj : Oj | j ∈ J}

Thus, taking the dual of a game simply flips the black and white nodes while leaving the structure and branch labels (the bi's and aj's) alone.

1.2. Morphisms of games. There are three types of morphisms: opponent morphisms (from opponent games to opponent games), mixed (or cross) morphisms (from opponent games to player games), and player morphisms (from player games to player games). These are defined inductively by:

Opponent morphisms:

    (b1 ↦ h1, . . . , bm ↦ hm) : O → (b1 : P1, . . . , bm : Pm)

where each hi : O → Pi is a mixed morphism (the process listens on the codomain channel).

Mixed morphisms: These are either of the form

    ←bk · f : (b1 : P1, . . . , bm : Pm) → P

where k ∈ {1, . . . , m} and f : Pk → P is a player morphism (the process outputs bk on the domain channel), or

    →ak · g : O → {a1 : O1, . . . , an : On}

where k ∈ {1, . . . , n} and g : O → Ok is an opponent morphism (the process outputs ak on the codomain channel).

Player morphisms:

    {a1 ↦ h1, . . . , an ↦ hn} : {a1 : O1, . . . , an : On} → P

where each hi : Oi → P is a mixed morphism (the process listens on the domain channel).

There are no morphisms from player to opponent. Viewed as a process, a mixed morphism initiates the interaction by sending a message down either the domain channel or the codomain channel. Having sent a message down one of these channels the process must then wait until it receives a reply from the environment along the same channel. Having received this reply the process is, once again, in a mixed state and can send a message down either channel. This process discipline, which requires that the reply on a channel must be received before any further action is permitted, means that processes are completely sequential. In fact, an interacting system of such processes is “polarized” in the very strong sense that at any given time only one process can ever be active (i.e., between receiving and sending a message). In Hyland's “games for fun” [Hy97] a process is permitted to simply not respond when it is in a mixed state; in other words partial processes are allowed. The processes we consider here must produce a message whenever they are in a mixed state so that, for us, processes are total. These correspond, for finite games, to winning strategies in the sense of [AJ94].

1.3. Example. The following is a map between two opponent games (the two game trees are not reproduced here):

    (a ↦ →c · ( ), b ↦ →e · ( ))

Notice that this is not the only possible map between these two games; the following are also maps:

    (a ↦ →d · ( ), b ↦ →f · ( )),    (a ↦ ←a · { }, b ↦ ←b · { }),    (a ↦ →c · ( ), b ↦ ←a · { })

1.4. Composition. There are four types of composition we must consider: opponent–opponent composition, opponent–mixed composition, mixed–player composition, and player–player composition. These compositions can be defined via rewriting rules as follows:

Opponent–opponent composition:

    f ; (b1 ↦ h1, . . . , bn ↦ hn)  ⇒  (b1 ↦ f ; h1, . . . , bn ↦ f ; hn)

Opponent–mixed composition:

    (b1 ↦ h1, . . . , bn ↦ hn) ; ←bk · v  ⇒  hk ; v        and        f ; →a · v  ⇒  →a · (f ; v)

Mixed–player composition:

    →ak · v ; {a1 ↦ h1, . . . , an ↦ hn}  ⇒  v ; hk        and        ←b · v ; g  ⇒  ←b · (v ; g)

Player–player composition:

    {a1 ↦ h1, . . . , an ↦ hn} ; g  ⇒  {a1 ↦ h1 ; g, . . . , an ↦ hn ; g}

1.5. Example. Here we compose an opponent map (from Example 1.3) with a mixed map (the game trees are not reproduced):

    (a ↦ →c · ( ), b ↦ →e · ( )) ; ←b · {e ↦ →a · ( ), f ↦ →b · ( )}
      ⇒  →e · ( ) ; {e ↦ →a · ( ), f ↦ →b · ( )}
      ⇒  ( ) ; →a · ( )
      ⇒  →a · (( ) ; ( ))
      ⇒  →a · ( )

Composition, as it is defined here, is exactly the cut elimination rewriting for the polarized game logic, which we introduce in the next section. It can be shown by an inductive argument that this is a confluent and terminating rewriting which eliminates the composition. (See the paper of Cockett and Seely [CS01] where this is proved for a similar system.)

1.6. Proposition.
(i) The above rewriting on morphisms terminates.
(ii) The above rewriting is confluent.
(iii) The associative law is satisfied by all composite triples.

Given an opponent game O = (b1 : P1, . . . , bm : Pm), the identity opponent morphism 1O : O → O is defined inductively as

    1O = (b1 ↦ ←b1 · 1P1, . . . , bm ↦ ←bm · 1Pm)

Given a player game P = {a1 : O1, . . . , an : On}, the identity player morphism 1P : P → P is defined inductively as

    1P = {a1 ↦ →a1 · 1O1, . . . , an ↦ →an · 1On}

Observe that in any possible composition the identity acts as a neutral element, that is, f ; 1 = f = 1 ; f. This gives:

1.7. Proposition.
(i) Player games and player morphisms form a category.
(ii) Opponent games and opponent morphisms form a category.
(iii) Mixed morphisms between the opponent category and the player category form a module between these categories.

The polarized categories of [CS04] are precisely modules C : Co → Cp, and in that paper the details of how these provide the categorical semantics for the proof theory of polarized game logic, which we now present, are spelled out.
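The morphism terms and the composition rewriting can also be transcribed into code. The sketch below is ours and is deliberately untyped: branches are addressed by position rather than by label, and no check is made that a term actually fits the games it is meant to relate; it is only intended to make the four rewriting rules of 1.4 concrete.

```haskell
-- Terms for the three kinds of morphism, with branches addressed by position.
data OMor = OTuple [MMor]        -- (b1 -> h1, ..., bm -> hm): listen on the codomain channel
  deriving Show
data MMor = OutDom Int PMor      -- <-bk . f : output bk on the domain channel, continue with f
          | OutCod Int OMor      -- ->ak . g : output ak on the codomain channel, continue with g
  deriving Show
data PMor = PCotuple [MMor]      -- {a1 -> h1, ..., an -> hn}: listen on the domain channel
  deriving Show

-- The four compositions, implemented directly by the rewriting rules of 1.4.
compOO :: OMor -> OMor -> OMor                       -- opponent ; opponent
compOO f (OTuple hs)              = OTuple [compOM f h | h <- hs]

compOM :: OMor -> MMor -> MMor                       -- opponent ; mixed
compOM (OTuple hs) (OutDom k v)   = compMP (hs !! (k - 1)) v
compOM f           (OutCod k g)   = OutCod k (compOO f g)

compMP :: MMor -> PMor -> MMor                       -- mixed ; player
compMP (OutCod k g) (PCotuple hs) = compOM g (hs !! (k - 1))
compMP (OutDom k v) g             = OutDom k (compPP v g)

compPP :: PMor -> PMor -> PMor                       -- player ; player
compPP (PCotuple hs) g            = PCotuple [compMP h g | h <- hs]
```

Identities are the nested tuples and cotuples described above, and small examples such as Example 1.5 can be used to check that these functions reproduce the rewriting.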

2. Polarized game logic
Proofs in polarized game logic with no atomic types or non-logical axioms correspond precisely to morphisms between polarized games as described above. Below we display this logic as a Gentzen style sequent calculus. It requires three kinds of sequent:

Opponent sequents: O ⊢o O′ where O and O′ are opponent propositions.

Mixed sequents: O ⊢ P where O is an opponent proposition and P is a player proposition. (This notation differs slightly from the notation in [CS04] in that we leave the mixed sequent turnstile unannotated.)

Player sequents: P ⊢p P′ where P and P′ are player propositions.

The valid inferences of the polarized game logic, which are a “graded” version of ΣΠ [CS01], are as follows:

    ──────── (atomic identities) ────────
     A ⊢p A                       B ⊢o B

     {Xi ⊢ Y}i∈I                            {X ⊢ Yi}i∈I
    ───────────────────────── (cotuple)    ───────────────────────── (tuple)
     {ai : Xi | i ∈ I} ⊢p Y                 X ⊢o (bi : Yi | i ∈ I)

     Xk ⊢p Y                                X ⊢o Yk
    ───────────────────────── (projection) ───────────────────────── (injection)
     (bi : Xi | i ∈ I) ⊢ Y                  X ⊢ {ai : Yi | i ∈ I}

Notice that in the cotuple and tuple rules the index set I may be empty, though not in the injection and projection rules. Also, observe that the inference system is symmetric: we may flip the direction of the sequents while swapping the two indexed formula constructors and “opponent” for “player” to obtain the same system. This symmetry arises from an underlying categorical duality.

The logic has four cut rules which correspond to those permitted by the types. The first two arise as cuts in the player and opponent sequents. The last two are the two possible cuts on the mixed sequent. These correspond categorically to the actions expected of a module.

    X ⊢p Y    Y ⊢p Z                    X ⊢o Y    Y ⊢o Z
    ───────────────── (p-cut)           ───────────────── (o-cut)
         X ⊢p Z                              X ⊢o Z

    X ⊢ Y    Y ⊢p Z                     X ⊢o Y    Y ⊢ Z
    ───────────────── (mp-cut)          ───────────────── (om-cut)
         X ⊢ Z                               X ⊢ Z

It is an easy exercise to prove:

2.1. Theorem. The polarized game logic satisfies cut elimination.

In fact, cut elimination is exactly the rewriting steps introduced in the previous section for composition; we can make this explicit by annotating the logic with terms. Following [CS04] we use “::” to denote the term-type membership relation, e.g., t :: U ⊢ V will mean that t is a term of type U ⊢ V, where U (say) may be of the form a : X.

    1A :: A ⊢p A   (atomic identities)   1B :: B ⊢o B

     {hi :: Xi ⊢ Y}i∈I                                      {hi :: X ⊢ Yi}i∈I
    ──────────────────────────────────────── (cotuple)     ──────────────────────────────────────── (tuple)
     {ai ↦ hi}i∈I :: {ai : Xi | i ∈ I} ⊢p Y                 (bi ↦ hi)i∈I :: X ⊢o (bi : Yi | i ∈ I)

     g :: Xk ⊢p Y                                           f :: X ⊢o Yk
    ──────────────────────────────────────── (projection)  ──────────────────────────────────────── (injection)
     ←bk · g :: (bi : Xi | i ∈ I) ⊢ Y                       →ak · f :: X ⊢ {ai : Yi | i ∈ I}

     f :: X |= Y    g :: Y |= Z
    ──────────────────────────── (cut)
     f ; g :: X |= Z

where |= represents the appropriate type of sequent for each of the four cut rules.

As this logic satisfies cut elimination, to determine provability of a sequent it suffices to establish that there is a cut free proof of the sequent. This is precisely the same as establishing that there is a morphism of games which is in normal form (i.e., one in which the composition (cut) has been removed).


3. Multiplicative and exponential structure
In this section we describe the multiplicative structure on polarized games. This structure will be used in the calculation of the complexity of provability. We shall also introduce the exponential types (the Curien exponential), although we shall not develop their properties in any depth. Suppose we have games:

    O = (bi : Pi | i ∈ I),    O′ = (b′i : P′i | i ∈ I′),    P = {aj : Oj | j ∈ J},    P′ = {a′j : O′j | j ∈ J′}

[O ⊗ P] This operation takes an opponent game O and a player game P and produces a player game. The operation is defined inductively by:

    O ⊗ P = {aj : O ⊗ Oj | j ∈ J}

where the tensor operation on two opponent games is defined below. Notice that the “direction” of the operation points to the game whose type is inherited by O ⊗ P.

[P ⊗ O] This operation is a simple left–right dual of the preceding one, taking a player game P and an opponent game O and producing a player game. The operation is defined inductively by:

    P ⊗ O = {aj : Oj ⊗ O | j ∈ J}

[O ⊗ O′] This operation takes in two opponent games and produces an opponent game. It is defined inductively by:

    O ⊗ O′ = (bi : Pi ⊗ O′, b′k : O ⊗ P′k | i ∈ I, k ∈ I′)

[P ⅋ O] This operation takes a player game P and an opponent game O and produces an opponent game. The operation is defined inductively by:

    P ⅋ O = (bi : P ⅋ Pi | i ∈ I)

where the par operation on two player games is defined below. Again, the direction of the operation points to the game whose type is inherited by the compound game.

[O ⅋ P] This operation is the left–right dual, taking an opponent game O and a player game P and producing an opponent game. The operation is defined inductively by:

    O ⅋ P = (bi : Pi ⅋ P | i ∈ I)

[P ⅋ P′] This operation takes in two player games and produces a player game. It is defined inductively by:

    P ⅋ P′ = {aj : Oj ⅋ P′, a′k : P ⅋ O′k | j ∈ J, k ∈ J′}

[!O] The exponential (bang) is defined inductively by:

    !O = ⊗i∈I (bi : !′Pi)   for |I| > 0,    and    !( ) = ( )

where !′P = {aj : !Oj | j ∈ J}.

[?P] The exponential (whimper) is defined inductively by:

    ?P = ⅋j∈J {aj : ?′Oj}   for |J| > 0,    and    ?{ } = { }

where ?′O = (bi : ?Pi | i ∈ I). Notice that there are some obvious dualities: the dual of O ⊗ O′ is Ō ⅋ Ō′, the dual of O ⊗ P is Ō ⅋ P̄, the dual of P ⊗ O is P̄ ⅋ Ō, and the dual of !O is ?Ō.
There are a number of simple identities that hold of these operations; we list (without proof) several for reference. 3.1. Lemma. O O′ O 1 O 0 1 P ′ (O O ) P (O 1 O 2 ) O 3 ? = ? = ? = ? = ? = ? = O′ O O 0 P O (O ′ P ) O 1 (O 2 O 3 ) P P′ 0 P 1 P O 0 (P P ) P2 ) P3 ? = ? = ? = ? = ? = ? = P′ P 1 O (O P1 P

O (P 1

P) (P 2

P ′) P3 )

These combinatoric operations are functors on the game categories. In the opponent category there is a bijective correspondence O1 O2
/

O2 O1

/O

3

O3

which makes it into a monoidal closed category, see [CS04]. The exponential !( ) gives rise to a comonad in whose Kleisli category the tensor turns into a product.

4. Provability and strategies
There is a natural question to ask when one regards games as a logic, viz., how hard is it to decide whether a sequent is provable. Recall that there are three sorts of sequents: O ⊢o O′, O ⊢ P, and P ⊢p P′.

We start by pointing out that it suffices to determine the complexity of deciding whether an opponent sequent O ⊢o O′ is provable. This is because if we are faced with a player sequent we may negate the games and view it as an opponent sequent, i.e.,

    P ⊢p P′  =  P̄′ ⊢o P̄

Similarly, any problem in the mixed setting O ⊢ P may be transformed to a problem in the opponent setting by adding an après vous move to the player game. That is, we simply prepend the player game P = {a1 : O1, . . . , an : On} with an opponent node to turn it into the one-branch opponent game (b : P). Logically this is because the sequent O ⊢o (b : P) is provable exactly when O ⊢ P is.

Therefore, it suffices to determine how difficult it is to decide if there is a proof of an opponent sequent O ⊢o O′. There is a special case where this determination is particularly simple, namely when the game on the left-hand side of the turnstile is 1 = ( ). In this case we are simply asking whether there is a morphism f : ( ) → O′, which we shall call a strategy in the opponent game O′.

The calculation below will determine whether or not a strategy exists in a game G. Notice that the same procedure can be used to count the number of strategies by turning true to 1, false to 0, meets to products, and joins to sums. The calculation is as follows:

• The empty opponent game and the empty player game may be evaluated as having a strategy and not having a strategy respectively, i.e.,

    ( ) = true        { } = false

• At an opponent node (bi : Pi | i ∈ I) we must find a strategy in each of the Pi, and so there is a strategy for (bi : Pi | i ∈ I) exactly when there is a strategy in the meet of the Pi's. This is indicated by:

    (bi : Pi | i ∈ I) = ⋀i∈I Pi

• At a player node {ai : Oi | i ∈ I} we must find a strategy in any one of the Oi's, and so there is a strategy for {ai : Oi | i ∈ I} exactly when there is a strategy in the join of the Oi's. This is indicated by:

    {ai : Oi | i ∈ I} = ⋁i∈I Oi

Therefore we obtain a decision procedure which is, in the worst case, linear in the size of the game.
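As a sanity check, the decision procedure just described is a direct fold over the game tree. The following sketch (ours, on the unlabelled representation of Section 1) computes whether a strategy exists; the second pair of functions is the counting variant suggested above. Both traversals visit each edge at most once, matching the linear worst-case bound.

```haskell
data OGame = O [PGame] deriving (Eq, Show)
data PGame = P [OGame] deriving (Eq, Show)

-- A strategy exists in an opponent game when every branch can be answered,
-- and in a player game when some branch can be chosen.
hasStrategyO :: OGame -> Bool
hasStrategyO (O ps) = all hasStrategyP ps     -- ( ) = true: 'all' of nothing
hasStrategyP :: PGame -> Bool
hasStrategyP (P os) = any hasStrategyO os     -- { } = false: 'any' of nothing

-- Replacing true/false by 1/0, meets by products and joins by sums
-- counts the strategies instead of merely detecting one.
countStrategiesO :: OGame -> Integer
countStrategiesO (O ps) = product (map countStrategiesP ps)
countStrategiesP :: PGame -> Integer
countStrategiesP (P os) = sum (map countStrategiesO os)
```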

4.1. Remark.

1. A counter-strategy corresponds to a proof of a sequent P ⊢p { } or O ⊢ { }. Notice that the determination of whether there is a counter-strategy is exactly the (de Morgan) dual calculation in which true and false are swapped, as are ∨ and ∧. This has the consequence that a strategy exists for a game G if and only if G has no counter-strategy.

2. The caveat that this is a worst case scenario is important as in most cases one does not have to visit the whole tree. Indeed, the more “bushy” the tree the more likely this is! We may do the following very rough calculation to determine the expected time it takes to determine whether a (bushy) tree of maximum depth d has a strategy. We suppose the probability that an (opponent) game tree has a strategy is p. This means the probability that the dual (player) tree has a counter-strategy is also p. At an opponent node we are looking for the first player subtree which does not have a strategy (i.e., has a counter-strategy), as then we can cut off the evaluation of the conjunct and return false. Dually, at a player node we are looking for the first opponent subtree which has a strategy, as then we can cut off the evaluation of the disjunct and return true. If we are sitting at an opponent node with a very large branching factor, the expected number of edges we will have to inspect is the sum of the probabilities that each edge will have to be inspected. This is therefore bounded by:

    Σi=0..∞ p^i = 1/(1 − p)

Now this happens at each level of the tree, so that the expected cost of evaluating a tree is (1/(1 − p))^d, where d is the depth of the tree. Thus, rather surprisingly, the fact that a tree is bushy does not affect the expected evaluation time as adversely as might have been thought. A reasonable guess at the probability p is, of course, 1/2, so that to evaluate a tree of depth d might be expected to take time 2^d no matter how bushy it is!

We now need to be more precise about our calculations and, in particular, what we mean by the “size” of a game tree G. We will use a variety of measures. The first and simplest is the number of edges in the game tree, which we write esize[G]. Note that the edges may be counted first by those which leave opponent nodes, esizeo[G], and then those which leave player nodes, esizep[G]. Very closely related to this measure (for trees) is the number of nodes |G|, which we shall mean when we speak of the size of G. Once again the nodes can be divided into the number of opponent nodes, |G|o, and the number of player nodes, |G|p. It is not hard to see that esize[G] = |G| − 1 for any game tree. Another way of measuring the size of a game tree is the uniform size, which is defined as:

• usize[( )] = usize[{ }] = 0

• usize[(a1 : G1, . . . , an : Gn)] = usize[{a1 : G1, . . . , an : Gn}] = n − 1 + Σi=1..n usize[Gi]
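For concreteness, the measures introduced so far can be computed by simple folds over the unlabelled tree representation; the following sketch is ours and is for illustration only.

```haskell
data OGame = O [PGame] deriving (Eq, Show)
data PGame = P [OGame] deriving (Eq, Show)

-- |G|: the number of nodes; for a tree, esize[G] = |G| - 1.
nodesO :: OGame -> Int
nodesO (O ps) = 1 + sum (map nodesP ps)
nodesP :: PGame -> Int
nodesP (P os) = 1 + sum (map nodesO os)

esizeO :: OGame -> Int
esizeO g = nodesO g - 1

-- usize[G]: the number of binary logical operations needed to evaluate G.
usizeO :: OGame -> Int
usizeO (O ps) = usizeNode (map usizeP ps)
usizeP :: PGame -> Int
usizeP (P os) = usizeNode (map usizeO os)

-- usize of a node: 0 if it has no children, otherwise (n - 1) plus the
-- uniform sizes of the n children.
usizeNode :: [Int] -> Int
usizeNode [] = 0
usizeNode cs = (length cs - 1) + sum cs
```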

The form of this definition immediately tells us that usize[G] ≤ esize[G]. The uniform size of a game tree is important as it measures the number of binary logical operations required to evaluate the tree. To see this note that to evaluate an m-ary conjunction requires m − 1 binary operations. Yet another way to measure the size of a game tree is by counting its leaves, which is written leaves[G]. Another reason why the uniform size is a simple measure is that it is exactly the number of leaves less one. We may summarize the relationship between all these size measurements as follows:

    usize[G] = leaves[G] − 1 ≤ esize[G] = |G| − 1 = esizeo[G] + esizep[G] = |G|o + |G|p − 1

The difference between usize[G] and esize[G] may be viewed as the cost associated with retrieving the arguments of the binary operations. Notice that the uniform cost of any tree with branching factor one is therefore zero, as the complete cost of evaluation is in the retrieval! It is precisely nodes with branching factor one which cannot be accounted for (up to a constant factor) by the uniform size. In fact, if one knows that the depth of the game tree is, say, d, then each path to a leaf contains at most d edges and so one can bound the edge size by:

    usize[G] ≤ esize[G] ≤ d · (1 + usize[G])

This means that the uniform size of a game tree is generally going to give a fairly tight lower bound on the worst case cost of evaluating a game tree for the presence of a strategy.

The general case of determining the provability of a sequent can also be organized into the question of whether a strategy exists in a formula, although we have to construct a new game tree to represent this problem. Given an opponent sequent O ⊢o O′, we show how to create a game G (which is in fact Ō ⅋ O′) so that there is a proof of O ⊢o O′ if and only if there is a strategy in G. The translation between a proof of O ⊢o O′ and a strategy in G is as follows. Consider a proof for:

• O ⊢o (bi : Pi | i ∈ I). It must start as (bi ↦ ui)i∈I where ui is a proof of O ⊢ Pi. If I = {1, . . . , m} this may be represented as a strategy in the following tree:
    (an opponent node with branches b1, . . . , bm whose subtrees are u′1, . . . , u′m)

where u′i is the tree whose strategies correspond to proofs of O ⊢ Pi.

• (bi : Pi | i ∈ I) ⊢ {aj : Oj | j ∈ J}. In this case we must make a move in either the left or right game and then find a strategy for the remaining problem. If I = {1, . . . , m} and J = {1, . . . , n} this may be represented as the following tree of choices:

    (a player node with branches b1, . . . , bm, a1, . . . , an whose subtrees are u′1, . . . , u′m, v′1, . . . , v′n)

where u′i is the tree whose strategies correspond to proofs of Pi ⊢ {aj : Oj | j ∈ J} and v′j is the tree whose strategies correspond to proofs of (bi : Pi | i ∈ I) ⊢ Oj.

• {aj : Oj | j ∈ J} ⊢p P. It must start as {aj ↦ vj}j∈J where vj is a proof of Oj ⊢ P. If J = {1, . . . , n} this may be represented as the following tree:

    (an opponent node with branches a1, . . . , an whose subtrees are v′1, . . . , v′n)

where v′j is the tree whose strategies correspond to proofs of Oj ⊢ P.

This means the proof of any opponent sequent O ⊢o O′ can be transformed into a strategy in Ō ⅋ O′ using this translation process. From a categorical (and logical) perspective this is just using the fact that:

    O ⊢o O′    if and only if    ( ) ⊢o Ō ⅋ O′

It would therefore seem that the complexity of provability may depend on the size of this formula. This makes it interesting to know how the size of this formula grows compared to the size of the original expressions.

5. On the size of multiplicatives
To answer the question of how formulas grow when they are combined using the multiplicative (and exponential) operators it is useful to develop some techniques for calculating with game trees.

5.1. Using multiplicities. To calculate with trees a useful technique is to use multiplicities. Intuitively, if a subgame occurs multiple times, rather than write the game multiple times, one can simply multiply it by the number of times it occurs. For example,

    O = (2 : {2 : ( )})        P = {1 : ( ), 1 : (2 : { })}

(O is an opponent root with two branches, each leading to a player node with two branches to ( ); P is a player root with one branch to ( ) and one branch to an opponent node with two branches to { }).

We may then calculate

    O ⊗ P = {1 : O ⊗ ( ), 1 : O ⊗ (2 : { })}
          = {1 : O, 1 : (2 : {2 : ( )}) ⊗ (2 : { })}
          = {1 : O, 1 : (2 : {2 : ( )} ⊗ (2 : { }), 2 : (2 : {2 : ( )}) ⊗ { })}
          = {1 : O, 1 : (2 : {2 : ( ) ⊗ (2 : { })}, 2 : { })}
          = {1 : O, 1 : (2 : {2 : (2 : { })}, 2 : { })}
          = {1 : (2 : {2 : ( )}), 1 : (2 : {2 : (2 : { })}, 2 : { })}

which, graphically, is the corresponding game tree (not reproduced here).
While this does facilitate the calculation of multiplicatives, it is still too detailed to allow an easy estimation of the size of a tensor or par of two game trees. However, before abandoning this let us make an observation on the exponentials. Consider the following game: !(n : {m : ( )}) It is can be seen that this is the formula: (n : {m : (n ? 1 : {m : (n ? 2 : · · · (1 : {m : ( )}) · · · )})}) The leaf size (which is one more than the uniform size) of this formula is leaves[!(n : {m : ( )})] = n! · mn This provides a lower bound on the possible growth in the size of a game tree as this is a lower bound on the edge size of a particular tree. It is reasonable to wonder what an upper bound on the edge size might be. An alternative way of viewing the exponential bang (!) is as the game tree of opponent initiated backtrackings. This means that on the opponents move one can switch to any opponent move which had been a possibility on the play but had not been chosen. This means a path is a sequence of choices of opponent moves interleaved with a possible player move which may be played after the opponent move. For esizep [G] > 0 there are a maximum of esizeo [G]! · esizep [G]esizeo [G]

16 leaves, and hence paths. As each such path can contain at most 2 · esizeo [G] edges the upper bound on the edge size is 2 · esizeo [G] · esizeo [G]! · esizep [G]esizeo [G] When esizep [G] = 0 there are a maximum of esizeo [G] leaves which is exactly the same as the number of edges in G. For games of depth two or greater (i.e., esizep [G] > 0) however this shows that the exponentials may produce very large game trees! 5.2. Using profiles. If G is an arbitrary game tree its pro?le is a list of the number of nodes which occur at each level. Notice the list always begins with a 1. The two trees above and their tensor have the following pro?les: prof[O ] = [1, 2, 4], prof[P ] = [1, 2, 2], and prof[O P ] = [1, 2, 6, 8, 8]

The notation prof[G]i will be used to indicate the number of nodes of G at the i-th depth (starting at zero). We have the following useful observation:

5.3. Proposition. For O and O′ opponent games and P a player game

    (i)   prof[O ⊗ O′]0 = prof[O ⊗ P]0 = 1

    (ii)  prof[O ⊗ O′]n = Σi=0..n (⌊n/2⌋ choose ⌊i/2⌋) · γ^i_{n−i} · prof[O]i · prof[O′]n−i

    (iii) prof[O ⊗ P]n+1 = Σi=0..n (⌊n/2⌋ choose ⌊i/2⌋) · γ^i_{n−i} · prof[O]i · prof[P]n−i+1

where

    γ^m_n = 1 if n or m is zero or even, and 0 otherwise.

The rationale for this formula is as follows. To get a node at depth n in O ⊗ O′ we must interleave the path to a node of O at depth i with the path to a node of O′ at depth n − i. As we can interleave only at the even nodes there are (⌊n/2⌋ choose ⌊i/2⌋) possible such interleavings, all of which provide distinct nodes. Finally, we can only produce interleaved paths of the required length if at least one of i or n − i is even, as we have to end in one of the paths having left the other at an even index. This results in the factor γ^i_{n−i} which is applied to each term.

Proof. It is easy to see that the formulas work for the base cases ( ) and { }. Let O = (ai : Pi | 0 ≤ i < n) and O′ = (bj : P′j | 0 ≤ j < m); then

    O ⊗ O′ = (ai : Pi ⊗ O′, bj : O ⊗ P′j | 0 ≤ i < n, 0 ≤ j < m)

For depths of zero and one its profile is respectively

    prof[O ⊗ O′]0 = 1    and    prof[O ⊗ O′]1 = n + m

Checking that the formula is correct for these depths is straightforward. Now consider the inductive step:
A node of O ⊗ O′ at depth r + 2 lies at depth r + 1 in one of the component games Pi ⊗ O′ or O ⊗ P′j, so that

    prof[O ⊗ O′]r+2 = Σi=1..n prof[Pi ⊗ O′]r+1 + Σj=1..m prof[O ⊗ P′j]r+1

Expanding each summand by the inductive hypothesis, collecting the inner sums using Σk prof[Pk]s = prof[O]s+1 (and likewise for O′), and reindexing the second sum so that both sums carry the same binomial coefficients, the terms combine to give exactly the expression required by case (ii) at depth r + 2. (The routine chain of equalities is omitted.)  ∎
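The profile itself is easy to compute on the tree representation; the sketch below (ours) does so by summing, level by level, the profiles of the subtrees. Combined with the tensor sketch given after Section 3 it reproduces, for the two trees of Section 5.1, the profile prof[O ⊗ P] = [1, 2, 6, 8, 8] quoted above.

```haskell
data OGame = O [PGame] deriving (Eq, Show)
data PGame = P [OGame] deriving (Eq, Show)

-- prof[G]: the number of nodes at each depth, starting at depth 0.
profO :: OGame -> [Int]
profO (O ps) = 1 : levelSum (map profP ps)
profP :: PGame -> [Int]
profP (P os) = 1 : levelSum (map profO os)

-- Add the level counts of the subtrees, padding the shorter lists.
levelSum :: [[Int]] -> [Int]
levelSum = foldr longZipPlus []
  where
    longZipPlus (x:xs) (y:ys) = x + y : longZipPlus xs ys
    longZipPlus xs     []     = xs
    longZipPlus []     ys     = ys
```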

We now wish to determine how the cost of evaluating tensors of trees can grow. A first guess might be that the tensor always causes an exponential increase in size. However,

this is not the case in general: the growth depends on the shape of the tree. Below we consider two examples where an exact computation of the uniform size (hence our interest in this measure) is possible:

1. First we consider a case where the growth is polynomial:

    A0 = { },   An+1 = {2 : Bn}    and    B0 = ( ),   Bn+1 = (1 : An)

so that, for example, A3 = {2 : (1 : {2 : ( )})}. Notice that usize[A2n] = usize[A2n−1] and so usize[A2n] = 2^n − 1. Now consider usize[A2n ⅋ A2m]. The tree A2n ⅋ A2m has a uniform maximum depth of 2(n + m) whose leaf count is concentrated in the term

    ((n + m) choose n) · γ^{2n}_{2m} · prof[A2n]2n · prof[A2m]2m = ((n + m) choose n) · 2^n · 2^m ≤ 2^{n+m} · 2^n · 2^m = (2^n)^2 · (2^m)^2

Thus, the uniform size is bounded by a (degree 4) polynomial in the uniform sizes of the two games in this case.

2. To demonstrate that an exponential growth is possible we consider a very simple example:

    L0 = { },   Ln+1 = {1 : L′n}    and    L′0 = ( ),   L′n+1 = (1 : Ln)

This gives |L2n| = |L2n|p + |L2n|o = (n + 1) + n although usize[L2n] = 0. However, the number of leaves of L2n ⅋ L2m at the maximum depth 2n + 2m is

    prof[L2n ⅋ L2m]2n+2m = ((n + m) choose n) · prof[L2n]2n · prof[L2m]2m = ((n + m) choose n)

so usize[L2n ⅋ L2m] ≥ ((n + m) choose n) − 1, which for n = m is at least 2^n − 1. This gives a lower bound on the uniform size of the combined trees which is exponential in the number of nodes of the original trees. This gives an indication that the complexity of a naive evaluation can be quite bad.


6. Dynamic programming and graph games
We have already established that whether a sequent ( ) ⊢o U, where U is an arbitrary additive formula, has a proof may be determined in time bounded by (and in the worst case equal to) esize[U]. Fortunately, for the provability of O ⊢o O′, where O and O′ are opponent games, there is something further we can do, as the structure of O ⊢o O′ can be exploited.

6.1. Dynamic programming. Notice in the calculation of Ō ⅋ O′ that subexpressions may occur multiple times. This indicates that it may be valuable to attempt a dynamic programming approach where one calculates the values of the sub-problems at most once. This is, in fact, the case and in this section we show:

6.2. Proposition. Using dynamic programming one can evaluate the provability of a sequent O ⊢o O′ using usize[O]|O′|p + usize[O′]|O|p + |O|p|O′|p binary operations.

Notice that this is a substantial improvement on a simple evaluation of the game tree resulting from the internal-hom object. The calculation below essentially counts the number of binary operations required but is generous enough to allow for single argument evaluations. An initial observation is:

6.3. Lemma. For any player game P = {ai : (bj : Pij | j ∈ {1, . . . , ni}) | i ∈ {1, . . . , n}}:
    (i)  Σi=1..n Σj=1..ni |Pij|p = |P|p − 1

    (ii) usize[P] = n − 1 + Σi=1..n (ni − 1) + Σi=1..n Σj=1..ni usize[Pij]

Proof of Proposition 6.2. To determine the evaluation of P ⅋ O, where O = (ai : Pi | i ∈ {1, . . . , n}), we distribute P inside the opponent game to obtain (ai : P ⅋ Pi | i ∈ {1, . . . , n}). This can be solved in n − 1 + Σi=1..n cost[P ⅋ Pi] logical operations, where cost[G] denotes the cost to evaluate G. In what follows we show

    cost[P ⅋ Pi] = usize[P]|Pi|p + usize[Pi]|P|p + |P|p|Pi|p

so that

    cost[P ⅋ O] = cost[P ⅋ (ai : Pi | i ∈ {1, . . . , n})]
                = n − 1 + usize[P] · Σi=1..n |Pi|p + |P|p · Σi=1..n usize[Pi] + |P|p · Σi=1..n |Pi|p
                = n − 1 + usize[P]|O|p + |P|p (usize[O] − (n − 1)) + |P|p|O|p
                ≤ usize[P]|O|p + usize[O]|P|p + |P|p|O|p

20 For player games P and P ′ we now calculate cost[P P = {ai : Oi | i ∈ {1, . . . , m}} Oi = (aij : Pij | j ∈ {1, . . . , mi }) The calculation will proceed as follows:
m n

P ′ ]. First ?x some notation:

′ P ′ = {bj : Oj | j ∈ {1, . . . , n}} ′ Oj = (bji : Pji | i ∈ {1, . . . , nj })

P

P′ =
i=1 m

Oi
mi

P′


j =1

P
n

′ Oj n′ j

=
i=1 k =1

Pik

P




j =1 l=1

P

′ Pjl

This shows how this value can be calculated using dynamic programming techniques ′ since the computation of Pik Pjl will reoccur. We will now calculate inductively the number of logical operations involved in calculating this value. Note that the base cases are when the index sets are empty; these are handled by the general argument. The following “tree of required operations” may be helpful to visualize the calculation.
m ∨ yyyy mmm yyy m m yyy mm m m y mm vv vv vv xx xx xx
m1 ··· m ···

ss ss ss
mm ··· n1 ···

yy yy yy

n ···

pp pp pp
nn

P11

P′

rr rr rr

P1m1

P′

P

′ Pn 1

yy yy yy

···

qq qq qq

P

′ Pnn n

It takes m − 1 logical operations to form the join or meet of m terms. The calculations for the left side of the tree

    (m − 1) + Σi=1..m (mi − 1) + Σi=1..m Σk=1..mi cost[Pik ⅋ P′]

plus the calculations for the right side of the tree

    (n − 1) + Σj=1..n (nj − 1) + Σj=1..n Σl=1..nj cost[P ⅋ P′jl]

plus the one join at the root, therefore, gives us the total number of logical operations. Notice that if either or both sides of this calculation reduce to a single argument calculation we still have the cost of the single binary operation into which the retrieval costs can be bundled.

In the calculations of the Pik ⅋ P′'s and the P ⅋ P′jl's it can be seen that we will end up calculating the Pik ⅋ P′jl's twice. Thus, from the overall sum we can subtract the number of calculations required to calculate this value once, which is

    Σi=1..m Σk=1..mi Σj=1..n Σl=1..nj cost[Pik ⅋ P′jl]

What we then have is:

    cost[P ⅋ P′] = 1 + (m − 1) + Σi (mi − 1) + Σik cost[Pik ⅋ P′]
                     + (n − 1) + Σj (nj − 1) + Σjl cost[P ⅋ P′jl]
                     − Σik Σjl cost[Pik ⅋ P′jl]

Applying the theorem inductively to the three embedded terms yields

    ≤ 1 + (m − 1) + Σi (mi − 1) + Σik (usize[Pik]|P′|p + usize[P′]|Pik|p + |Pik|p|P′|p)
        + (n − 1) + Σj (nj − 1) + Σjl (usize[P]|P′jl|p + usize[P′jl]|P|p + |P|p|P′jl|p)
        − Σik Σjl (usize[Pik]|P′jl|p + usize[P′jl]|Pik|p + |Pik|p|P′jl|p)

Applying Lemma 6.3 (i) and rearranging terms:

    = 1 + (m − 1) + Σi (mi − 1)
        + Σik (usize[Pik]|P′|p − usize[Pik](|P′|p − 1)) + usize[P′](|P|p − 1)
        + (n − 1) + Σj (nj − 1)
        + Σjl (usize[P′jl]|P|p − usize[P′jl](|P|p − 1)) + usize[P](|P′|p − 1)
        + |P|p|P′|p − 1

With some more rearranging:

    = (m − 1) + Σi (mi − 1) + Σik usize[Pik] + usize[P](|P′|p − 1)
        + (n − 1) + Σj (nj − 1) + Σjl usize[P′jl] + usize[P′](|P|p − 1)
        + |P|p|P′|p

And finally, by Lemma 6.3 (ii):

    ≤ usize[P]|P′|p + usize[P′]|P|p + |P|p|P′|p                                   ∎
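The dynamic programming idea can be made concrete for the case analysed in the proof, the par of two player games: memoising on pairs of player positions gives a table of at most |P|p · |P′|p entries, each filled once. The following sketch is ours (names and representation are not the paper's) and relies on Haskell's laziness to fill the table on demand.

```haskell
import qualified Data.Map as M

data OGame = O [PGame] deriving (Eq, Ord, Show)
data PGame = P [OGame] deriving (Eq, Ord, Show)

-- All player positions occurring in a player game: the game itself together
-- with the player subgames of its opponent subgames, recursively.
playerSubs :: PGame -> [PGame]
playerSubs q@(P os) = q : concat [playerSubs p | O ps <- os, p <- ps]

-- Does P par P' have a strategy?  The table is keyed by pairs of player
-- positions, so it has at most |P|_p * |P'|_p entries; each is computed once.
parHasStrategy :: PGame -> PGame -> Bool
parHasStrategy p p' = table M.! (p, p')
  where
    table = M.fromList [ ((q, q'), value q q')
                       | q <- playerSubs p, q' <- playerSubs p' ]
    -- From a player position we move in one coordinate; the opponent then
    -- replies in that same coordinate, so each branch is a meet of table
    -- entries one level further down in that coordinate.
    value q@(P os) q'@(P os') =
         or [ and [ table M.! (pik, q') | pik <- ps  ] | O ps  <- os  ]
      || or [ and [ table M.! (q,  pjl) | pjl <- ps' ] | O ps' <- os' ]
```

Provability of O ⊢o O′ can then be decided by applying this to the duals, since O ⊢o O′ holds exactly when Ō ⅋ O′ has a strategy.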



Usually to calculate the complexity of a dynamic programming algorithm one counts the number of subproblems and multiplies that number by the cost of computing each problem. As the cost of computation at each node varies according to the structure of the games we have done a more detailed analysis which removes the cost of the common computation. We now return to apply these counting techniques with confidence!

6.4. Graph games. This dynamic programming approach encourages us to view the games, not as trees, but as acyclic directed graphs. This is, of course, not a new idea and has been explored by Hyland and Schalk [HS02]. The point is that one of these “graph games” will have a strategy if and only if the corresponding “tree game” has such. To evaluate whether a graph game has a strategy requires time proportional to the number of edges in the graph. One of the attractions of graph games is that one can count the number of edges and nodes of any game which is expressed using the multiplicatives (exponentials seem harder to analyze as they use the history of how one reaches a node). This means that one can calculate the cost of evaluating (on this initial model) an arbitrary polarized multiplicative and additive formula using dynamic programming techniques surprisingly easily. We should perhaps remind the reader at this stage that the multiplicative structure of these models is emphatically not free. Provability in the free such structure with multiplicative units is NP-complete as this is the case in the unpolarized setting (due to Kanovich's results [Kan92]). Recall also that with exponentials and additives in the unpolarized setting these problems become undecidable. It is likely, although we have not done the calculation, that this is also true for the free polarized multiplicative-additive logics with exponentials. In contrast to the above results the problem in the initial polarized game logic is, for all these fragments, quite decidable as a given formula can be “unwrapped” into its purely additive structure. The problem is that unwrapping into purely additive structure can result in a huge increase in the size of the formula. Given the analysis above it is reasonable to suppose one can do much better than naively expanding into a game tree and evaluating.

We shall now expect our formula to belong to the polarized additive–multiplicative fragment, in other words to be of the following form:

    Opponent:  (bi : Pi | i ∈ I),   P ⅋ O,   O ⅋ P,   O ⊗ O′
    Player:    {aj : Oj | j ∈ J},   P ⊗ O,   O ⊗ P,   P ⅋ P′

It is easy to see that the cost of evaluation in a graph game is dominated by the number of edges in the game. Thus, for example, in the graph game produced for the dynamic calculation of P ⅋ P′ there are |P|p|P′|p player states. From each of these states one can either make a move in the first or second coordinate, resulting in

    esizep[P ⅋ P′] = esizep[P]|P′|p + esizep[P′]|P|p

We use the fact that the number of edges leaving a node (up to a factor) always dominates the cost of the calculation needed at that node. Similarly there are |P|o|P′|p + |P|p|P′|o opponent states, while the number of opponent edges is given by:

    esizeo[P ⅋ P′] = esizeo[P]|P′|p + esizeo[P′]|P|p

We therefore have:

6.5. Lemma. The cost of evaluating multiplicative expressions using dynamic programming (up to a constant factor) is bounded by:

    • cost[(bj : Pj | j ∈ {1, . . . , m})] = m + Σj=1..m cost[Pj]
    • cost[{ai : Oi | i ∈ {1, . . . , n}}] = n + Σi=1..n cost[Oi]
    • cost[O ⊗ O′] = esize[O]|O′|o + esize[O′]|O|o
    • cost[O ⊗ P] = esize[O]|P|o + esize[P]|O|o
    • cost[P ⅋ P′] = esize[P]|P′|p + esize[P′]|P|p
    • cost[P ⅋ O] = esize[P]|O|p + esize[O]|P|p
Note that in this calculation there is no need to assume that we are starting with trees. We may start with acyclic directed graphs and recursively apply the calculation. Indeed, to evaluate an arbitrary expression this is what we must do: a multiplicative expression allows us to build an acyclic graph with a certain number of player and opponent nodes and arrows. As we apply constructors we build new graphs, and all we need to keep track of is the number of opponent nodes, player nodes, and edges. Below is the explicit calculation for doing so. Let size[G] = (|G|o, |G|p, esize[G]) and size[G1] = (n1, N1, E1) and size[G2] = (n2, N2, E2). Then:

    • size[(b1 : G1, . . . , bn : Gn)] = Σi=1..n size[Gi] + (1, 0, n)
    • size[{a1 : G1, . . . , am : Gm}] = Σj=1..m size[Gj] + (0, 1, m)
    • size[G1 ⊗ G2] = (n1 · n2, n1 · N2 + N1 · n2, E1 · n2 + n1 · E2)
    • size[G1 ⅋ G2] = (N1 · n2 + n1 · N2, N1 · N2, E1 · N2 + N1 · E2)
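This bookkeeping is just arithmetic on triples and can be transcribed directly. The sketch below is ours and follows the table as reconstructed above, recording (opponent nodes, player nodes, edges) for each constructor.

```haskell
-- (opponent nodes, player nodes, edges) of the acyclic graph game
type Size = (Int, Int, Int)

sumSizes :: [Size] -> Size
sumSizes = foldr (\(a, b, c) (x, y, z) -> (a + x, b + y, c + z)) (0, 0, 0)

-- (b1 : G1, ..., bn : Gn): one new opponent root and n new edges
sizeOTuple :: [Size] -> Size
sizeOTuple gs = let (o, p, e) = sumSizes gs in (o + 1, p, e + length gs)

-- {a1 : G1, ..., am : Gm}: one new player root and m new edges
sizePCotuple :: [Size] -> Size
sizePCotuple gs = let (o, p, e) = sumSizes gs in (o, p + 1, e + length gs)

-- G1 (x) G2 and G1 par G2, following the table above
sizeTensor :: Size -> Size -> Size
sizeTensor (o1, p1, e1) (o2, p2, e2) =
  (o1 * o2, o1 * p2 + p1 * o2, e1 * o2 + o1 * e2)

sizePar :: Size -> Size -> Size
sizePar (o1, p1, e1) (o2, p2, e2) =
  (p1 * o2 + o1 * p2, p1 * p2, e1 * p2 + p1 * e2)
```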

The cost of determining whether a formula is provable is bounded by the last entry in this cost calculation. It is worth noting that, even though the dynamic approach to this evaluation has helped, we are still left with an exponential bound for the evaluation of the provability of a multiplicative expression, as the complexity for multiplicatives is essentially multiplicative!

We may also very easily calculate the profile of the graph games which result from multiplicative expressions. The only change needed over our previous calculation is to remove the binomial coefficients. For profiles the only distinction between player and opponent is the starting point of the profile. It is necessary, therefore, only to state the calculation for tensor and sums:

    • prof[{a1 : G1, . . . , an : Gn}]0 = 1
    • prof[{a1 : G1, . . . , an : Gn}]m+1 = Σi=1..n prof[Gi]m
    • prof[G1 ⊗ G2]n = Σi=0..n γ^i_{n−i} · prof[G1]i · prof[G2]n−i
    • prof[G1 ⊗ P]0 = 1
    • prof[G1 ⊗ P]n+1 = Σi=0..n γ^i_{n−i} · prof[G1]i · prof[P]n−i+1

This allows an easy calculation of the number of states used by the dynamic algorithm. However, notice that the profile does lose the edge information. One can actually bound the number of edges by:

    esize[G] ≤ Σi=0..n−1 prof[G]i · prof[G]i+1

as each edge must end in a level one greater than where it starts. Of course this bound is going to be, in general, very poor; its only advantage is that it can be derived from the profile of the acyclic graph, which is easily calculated.

7. The complexity of provability
The technique of viewing a game tree as an acyclic graph has certainly led to improvements in the determination of provability. We hope also it has led to the reader having a better feel for the combinatoric structure of these games. However, the story of the complexity of provability is not over! We have yet to really use the categorical structure of the initial model of polarized games, which is what we now exploit. We open with the following observation:

7.1. Proposition. In the initial model of polarized games:

(i) A game G has a strategy if and only if there is a (possibly mixed) map ( ) → G.
(ii) A game G has a counter-strategy if and only if there is a (possibly mixed) map G → { }.
(iii) Every game either has a strategy or a counter-strategy but not both.
(iv) O ⊗ O′ has a strategy if and only if O and O′ have strategies. Similarly, O ⊗ P has a strategy if and only if O and P have strategies.
(v) P ⅋ P′ has a counter-strategy if and only if P and P′ have counter-strategies. Similarly, P ⅋ O has a counter-strategy if and only if O and P have counter-strategies.
(vi) G has a strategy (resp. counter-strategy) if and only if Ḡ has a counter-strategy (resp. strategy).
(vii) !O has a strategy if and only if O has a strategy.
(viii) ?P has a counter-strategy if and only if P has a counter-strategy.

Proof.
(i) This follows immediately from the discussion in Section 4.
(ii) Dual to (i).
(iii) If a game G had both a strategy and a counter-strategy then there would be a mix map ( ) → { }, which there is not! In Remark 4.1 we noted that the calculation to determine the possession of a strategy is exactly the de Morgan dual calculation to that for having a counter-strategy. Thus, if one is true the other is necessarily false.
(iv) If s : ( ) → O ⊗ O′ then, as ( ) is a final object, we obtain

    ( ) → O ⊗ O′ → O ⊗ ( ) = O    (the composite of s with 1 ⊗ !)

and so O and (similarly) O′ have strategies. Conversely, if s1 : ( ) → O and s2 : ( ) → O′ then

    ( ) = ( ) ⊗ ( ) → O ⊗ O′    (using s1 ⊗ s2)

provides a strategy for O ⊗ O′. A similar argument works for the mixed tensor: if s1 : ( ) → O and s2 : ( ) → P then ( ) = ( ) ⊗ ( ) → O ⊗ P using s1 ⊗ s2. Conversely, if s : ( ) → O ⊗ P then we may compose with O ⊗ P → ( ) ⊗ P = P (using ! ⊗ 1) to obtain a strategy in P. As P has a strategy, P ≠ { } and so P = {ai : Oi | i ∈ I} where I ≠ ∅. This means that in O ⊗ P = {ai : O ⊗ Oi | i ∈ I} the strategy s must choose a branch, and so there is a strategy in O ⊗ Oi for some i ∈ I. By the argument above this happens if and only if there is a strategy in both O and Oi. Thus, there is certainly a strategy in O.
(v) Dual to (iv).
(vi) Immediate from the definition of the dual together with the de Morgan duality noted in Remark 4.1.
(vii) If s : ( ) → O then we obtain a strategy !s : ( ) = !( ) → !O. Conversely, if s : ( ) → !O is a strategy we can use dereliction to obtain a strategy ( ) → !O → O.
(viii) Dual to (vii).  ∎

This means that we may completely redo our calculation of the cost of provability! To determine whether O ⊢o O′ is provable we must determine whether there is a strategy in Ō ⅋ O′. But this is the case exactly when there is no counter-strategy for Ō ⅋ O′. There is a counter-strategy for Ō ⅋ O′ precisely when there is a counter-strategy for both Ō and O′, that is, when there is a strategy for O and no strategy for O′. In other words, O ⊢o O′ is provable if and only if either there is no strategy for O or there is a strategy for O′. Clearly we may now show that the cost of provability is linear for an arbitrary sequent in the polarized additive, multiplicative, and exponential fragment by successively using the reductions provided by the proposition. Therefore we have:

7.2. Corollary. In initial polarized game logic the complexity of provability is linear in the size of the sequents.

Proof. We shall explicitly provide a procedure based on the proposition for determining the provability of a formula. Let ⟦O⟧o record whether the opponent formula O has a strategy (equivalently, whether ( ) ⊢o O is provable) and let ⟦P⟧p record whether the player formula P has a strategy. Then:
    ⟦{ai : Oi | i ∈ I}⟧p = ⋁i∈I ⟦Oi⟧o          ⟦(bj : Pj | j ∈ J)⟧o = ⋀j∈J ⟦Pj⟧p
    ⟦O ⊗ P⟧p = ⟦O⟧o ∧ ⟦P⟧p                      ⟦O ⊗ O′⟧o = ⟦O⟧o ∧ ⟦O′⟧o
    ⟦P ⅋ P′⟧p = ⟦P⟧p ∨ ⟦P′⟧p                    ⟦P ⅋ O⟧o = ⟦O⟧o ∨ ⟦P⟧p
    ⟦Ō⟧p = ¬⟦O⟧o                                ⟦P̄⟧o = ¬⟦P⟧p
    ⟦?P⟧p = ⟦P⟧p                                ⟦!O⟧o = ⟦O⟧o

There are dual rules for determining whether there is a counter-strategy. Clearly this procedure is linear in the size of the term.  ∎

Rather surprisingly, provability has therefore been reduced to the cost of reading the formulas. As, in the worst case, this is the very least one must do, this is a hard bound.
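Putting the reductions of Proposition 7.1 together, the whole procedure is a single recursive pass over the formula. The sketch below is ours: it fixes one concrete formula syntax (not the paper's) for the additive, multiplicative and exponential fragment, with explicit duals, and evaluates strategy existence in time linear in the formula size; provability of O ⊢o O′ is then read off as in the text.

```haskell
-- Opponent-typed and player-typed formulas of the fragment (duals explicit).
data OFor = OPlays [PFor]        -- (b1 : P1, ..., bn : Pn)
          | OTensor OFor OFor    -- O (x) O'
          | ParPO PFor OFor      -- P par O   (O par P is symmetric)
          | DualP PFor           -- the dual of a player formula
          | Bang OFor            -- !O
data PFor = PPlays [OFor]        -- {a1 : O1, ..., am : Om}
          | PPar PFor PFor       -- P par P'
          | TensorOP OFor PFor   -- O (x) P   (P (x) O is symmetric)
          | DualO OFor           -- the dual of an opponent formula
          | Whim PFor            -- ?P

-- [[O]]o: does the opponent formula have a strategy?
valO :: OFor -> Bool
valO (OPlays ps)    = all valP ps
valO (OTensor o o') = valO o && valO o'
valO (ParPO p o)    = valO o || valP p
valO (DualP p)      = not (valP p)
valO (Bang o)       = valO o

-- [[P]]p: does the player formula have a strategy?
valP :: PFor -> Bool
valP (PPlays os)    = any valO os
valP (PPar p p')    = valP p || valP p'
valP (TensorOP o p) = valO o && valP p
valP (DualO o)      = not (valO o)
valP (Whim p)       = valP p

-- O |-o O' is provable exactly when O has no strategy or O' has one.
provable :: OFor -> OFor -> Bool
provable o o' = not (valO o) || valO o'
```

Since each constructor is inspected exactly once, the evaluation is linear in the size of the formula, as the corollary asserts.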

References
[AJ94] S. Abramsky and R. Jagadeesan. Games and full completeness for multiplicative linear logic. Journal of Symbolic Logic, 59(2):543–574, 1994.

[CS01] J.R.B. Cockett and R.A.G. Seely. Finite sum-product logic. Theory and Applications of Categories, 8(5):63–99, 2001.

[CS04] J.R.B. Cockett and R.A.G. Seely. Introduction to polarized category theory and game semantics. 2004.

[Hy97] M. Hyland. Game semantics. In Semantics and Logics of Computation, eds. A. Pitts and P. Dybjer, pages 131–184. Cambridge University Press, 1997.

[HS02] M. Hyland and A. Schalk. Games on graphs and sequentially realizable functionals. In Proc. IEEE Symposium on Logic in Computer Science 2002, pages 257–264. IEEE Computer Society Press, July 2002.

[Kan92] M. Kanovich. Horn programming in linear logic is NP-complete. In Proc. IEEE Symposium on Logic in Computer Science 1992, pages 200–210. IEEE Computer Society Press, June 1992.

[LMSS92] P. Lincoln, J. Mitchell, A. Scedrov, and N. Shankar. Decision problems for propositional linear logic. Annals of Pure and Applied Logic, 56:239–311, 1992.

[LW94] P. Lincoln and T. Winkler. Constant-only multiplicative linear logic is NP-complete. Theoretical Computer Science, 135:155–169, 1994.

[Lau02] O. Laurent. Polarized games (extended abstract). In Proc. IEEE Symposium on Logic in Computer Science 2002, pages 265–274. IEEE Computer Society Press, July 2002. Also at http://www.pps.jussieu.fr/~laurent/.

[Stat79] R. Statman. Intuitionistic propositional logic is polynomial-space complete. Theoretical Computer Science, 9:67–72, 1979.

Department of Computer Science, University of Calgary 2500 University Drive NW, Calgary, AB, Canada T2N 1N4 Email: robin@cpsc.ucalgary.ca pastroc@cpsc.ucalgary.ca

