
Sequent Calculus and Abstract Machines

Aaron Bohannon, University of Oregon, 1 June 2004

Abstract

The sequent calculus proof system, as an alternative to natural deduction and Hilbert-style systems, has been well-appreciated for its structural symmetry and its applicability to automated proof search. Over the past decade, it has additionally been shown to have a computational significance, perhaps most notably as a type system for the λ̄-calculus, designed by Hugo Herbelin. It is the purpose of this thesis to demonstrate that the structural properties of the λ̄-calculus (derived from the underlying type system) are ideally suited for describing the states and transitions of several abstract machines that have been developed for executing programs written in a functional style. We explore the surrounding issues and offer formal translations of several abstract machines into versions of the λ̄-calculus. In particular, we consider the Krivine machine, the SECD machine, and the ZINC abstract machine.

Acknowledgements

The basic idea of translating ZINC machine code and states into the λ̄-calculus, along with the first sketch of a translation, was given to me by Hugo Herbelin. He guided me through earlier versions of the translation and simulation. I would like to thank him for his guidance and his help in procuring the financial support of INRIA FUTURS in the summer of 2003. I would very much like to thank my advisor at the University of Oregon, Zena Ariola, for being so helpful with my work on this thesis. She has been a great source of both theoretical and practical guidance and helped me to arrange my course schedule to make it possible to finish my thesis in a timely manner. Furthermore, she also played an essential role in arranging my summer support with INRIA in 2003. I would also like to mention Peter Boothe, who talked over some ideas I had about machines and computation. Of course, any inaccuracies or errors in this thesis are entirely my own.


Contents

Introduction

1 Logic and Proof Terms
  1.1 Natural Deduction
      1.1.1 Prawitz’ Natural Deduction
      1.1.2 Natural Deduction in Sequent Form
      1.1.3 Structural Rules
  1.2 Sequent Calculus
  1.3 Proof Terms
      1.3.1 Natural Deduction
      1.3.2 Sequent Calculus
      1.3.3 The λ̄-calculus

2 Weak Reduction and the λσw-calculus
  2.1 Reduction in the λ-calculus
      2.1.1 Weak Reduction Strategies
      2.1.2 Closures
  2.2 De Bruijn Indices
  2.3 The λσw-calculus

3 Abstract Machines
  3.1 Measuring Abstractness
  3.2 An Addition Machine
  3.3 Compilation and Decompilation

4 A Calculus for Simulating Abstract Machines
  4.1 The Structure of Applicative Terms
  4.2 Merging λ̄ and λσw
      4.2.1 Simultaneous Substitutions
      4.2.2 Weak Reduction
  4.3 The λ̄σw-calculus
  4.4 Simulating the Krivine Machine
      4.4.1 Definition of the Machine
      4.4.2 Compiling and Loading
      4.4.3 Decompiling to the λ̄σw-calculus
      4.4.4 The Krivine Strategy in the λ̄σw-calculus
      4.4.5 Simulation

5 Call-by-value Machines
  5.1 The μ̃ Operator
  5.2 SECD Abstract Machine
      5.2.1 Definition
      5.2.2 Compiling and Loading
      5.2.3 Decompilation to the λ̄μ̃σw-calculus
      5.2.4 The SECD Strategy in the λ̄μ̃σw-calculus
      5.2.5 Simulation
  5.3 ZINC Abstract Machine
      5.3.1 Definition
      5.3.2 Compiling and Loading
      5.3.3 Decompilation to the λ̄μ̃σw-calculus
      5.3.4 The ZINC Strategy in the λ̄μ̃σw-calculus
      5.3.5 Simulation

Conclusion

Bibliography


Introduction

The study of computation is connected to the field of logic on many different levels. One of the most striking examples of this connection is the relationship known as the Curry-Howard isomorphism [12]. The core of this relationship is a correspondence between formal proofs in a logical inference system and terms of a programming language, and the most basic instance is the connection between intuitionistic natural deduction and the λ-calculus. Griffin extended it to classical natural deduction, showing that the corresponding language must contain operations to modify the control flow [8]. Curien and Herbelin extended the correspondence to sequent calculus in [4]. They also show that different ways of executing a program can be observed at the level of the logic. This thesis builds on their work. We show that sequent calculi naturally embed the data structures used in abstract machines and that these structures are a key element in decomposing the process of λ-term reduction into steps of minimal size. Therefore, we claim that sequent calculi are better suited than natural deduction for describing abstract machines. The issues to be addressed in this thesis revolve around the observation that the order, manner, and size of the steps used to carry out a computation vary significantly depending upon what system is used. For instance, the traditional λ-calculus treats β-reduction as an operation that occurs in a single step, which would be considered a high-level operation relative to the capabilities of a computer modeled after a Turing machine. Non-determinism is another source of complexity, and reduction rules in most computational calculi are non-deterministic. Even with a deterministic reduction strategy, a single reduction step may make changes arbitrarily far from the top of the syntax tree, which has a computational overhead. Smaller factors also play a role in the complexity of operations, such as the use of symbolic names in contrast to the use of numbers.
Abstract machines are tools for studying computation at a lower level, closer to the level at which physical machines work. They are useful both to those who study the theory and design of programming languages and to those who are interested in the practicalities of implementing languages. In particular, those who study functional programming languages based upon the λ-calculus have benefited from the development of abstract machines that can break up the rule of β-reduction into manageable components. Some of the earliest abstract machines designed for normalizing λ-terms were the Krivine machine and the SECD machine. The former is a simple but effective call-by-name machine designed by Jean-Louis Krivine.¹ The latter is a right-to-left call-by-value machine studied by G.D. Plotkin [18].

On the other hand, there has also been work directed at modifying the λ-calculus so that β-reduction can be simulated with smaller reduction steps. An important step in this direction was the investigation of explicit substitutions in [1]. Lescanne [16] offers a comparison of several versions of calculi with explicit substitutions. Curien, Hardin, and Lévy achieved an important goal in [3] by proving that the λσ⇑-calculus is confluent on both closed and open terms. Furthermore, they introduce the notion of weak and strong calculi of explicit substitutions. In [9], a weak calculus of explicit substitution (the λσw-calculus) is proposed as a useful “calculus of closures” for bridging the gap between abstract machines and the λ-calculus. They use it to prove the correctness of several abstract machines by developing translations from machine states to terms in the calculus. This previously mentioned work on calculi with explicit substitutions was, generally speaking, motivated by the goal of deconstructing β-reduction into smaller steps. However, another calculus containing explicit substitutions was designed with an entirely different motivation. This was the λ̄-calculus of Herbelin [10], which was conceived as a term assignment for sequent calculus proofs. The explicit substitutions in this calculus are present precisely for the purpose of encoding a particular proof structure. The subsequent work of Curien and Herbelin [4] and of Wadler [21] in the area of sequent calculus proof terms has elucidated some of the symmetries inherent in computation, including the duality of the call-by-name and call-by-value reduction strategies. There are now several versions of the λ̄-calculus. The version in [4] does not contain explicit substitutions. (A more recent version of the λ̄-calculus with explicit substitutions has been studied in [11].)
However, one key and distinctive feature of all versions of the λ̄-calculus is that application occurs between a term and an argument list, rather than a single argument as it would in the λ-calculus. On first consideration, it may seem as though this feature would make the calculus operate at a higher level, taking large steps that would need to be simulated by smaller steps in the λ-calculus. However, it turns out that almost the opposite is true. This will be investigated in Chapter 4. The use of argument lists in the λ̄-calculus actually grants a greater range of expressiveness in structuring terms than is available in the λ-calculus, and this more refined level of expressiveness is, in our opinion, an essential tool for capturing the low-level details of β-reduction. We attempt to demonstrate this by using a version of the λ̄-calculus to simulate the operation of three abstract machines. Not only do we simulate the operation of the abstract machines, but we show how this simulation can be done using a reduction strategy in the λ̄-calculus for which the computational complexity of the λ̄-term rewriting is constant.²

¹More than 20 years after its informal introduction, the Krivine machine has not yet been officially presented in publication. Jean-Louis Krivine has recently written an article to remedy this situation [14].
²With a small exception in the case of the ZINC machine.


In the following three chapters of this thesis, we will begin by introducing various proof systems for minimal logic, the λ- and λ̄-calculi, and abstract machines. Thereafter, we will present an enhanced version of the λ̄-calculus, which we will call the λ̄σw-calculus. Then we will use this calculus to simulate the Krivine abstract machine. In the final chapter, we will introduce the μ̃ operator into the calculus and use the resulting λ̄μ̃σw-calculus to simulate the execution steps of the SECD and ZINC abstract machines.


Chapter 1

Logic and Proof Terms

In this chapter, we will present natural deduction (ND) and the sequent calculus (both LJ [7] and LJT [5]) along with their corresponding term assignments. We will also review a few of the structural subtleties involved in these logical inference systems. Only the implicational fragment of minimal propositional logic will be considered. Although simple, this logic still has an important relationship with computation and will allow a cleaner presentation of the main points. The syntax of the implicational fragment of minimal propositional logic is given in Figure 1.1. A formula is built from a set of atomic types (X), which we will leave unspecified, and a single logical connective (→), which joins two logical formulas. We will use A, B, C, . . . as meta-variables that range over the set of formulas. For the semantics of the formulas, we will assume a functional interpretation of the propositional formulas (also known as Brouwer-Heyting-Kolmogorov semantics).

1.1 Natural Deduction

Although the sequent calculus proof system could be presented alone, we would like to present it in comparison with the more widely used system of natural deduction [20]. Natural deduction has become popular, especially in the area of computer science, for several reasons. One reason is that it has an important connection with the λ-calculus, which will be addressed shortly. This proof system is also popular due to the fact that its proofs can be read and constructed in a manner that is often considered more “natural” for humans than using

Figure 1.1: Syntax of the implicational fragment of minimal propositional logic

  A ::= X | A → A
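The grammar in Figure 1.1 is small enough to transcribe and experiment with. The sketch below is our own illustration in Python, encoding formulas as tagged tuples (the encoding and the helper names are ours, not the thesis's):

```python
# Formulas of the implicational fragment (Figure 1.1), encoded as
# tagged tuples: ("atom", name) for an atomic type X, and
# ("arrow", A, B) for A -> B.

def atom(name):
    return ("atom", name)

def arrow(a, b):
    return ("arrow", a, b)

def size(f):
    """Number of -> connectives in a formula; handy for inductive arguments."""
    if f[0] == "atom":
        return 0
    _, left, right = f
    return 1 + size(left) + size(right)

# Example: the formula (A -> A) -> A, built over a single atom A.
a = atom("A")
example = arrow(arrow(a, a), a)
```

Nothing in the construction depends on what the atoms are, matching the text's decision to leave the set of atomic types unspecified.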


Figure 1.2: Prawitz’ natural deduction

    [A]x
     .
     .
     B
  --------- →i (x)
   A → B

   A → B    A
  ------------- →e
        B

Hilbert-style systems or sequent calculus.

1.1.1 Prawitz’ Natural Deduction

The inference rules for Prawitz’ version of natural deduction are given in Figure 1.2. There is one rule for the introduction of the implication connective and one for the elimination of the connective. Proofs correspond to trees whose leaves represent the assumptions. A leaf can be open or closed. An open leaf means that the assumption is active. A closed leaf corresponds to an assumption that could have potentially been used in the proof but has been discharged by the end of it. Whereas the elimination rule is fairly clear, since it corresponds to the traditional modus ponens, the notation of the implication introduction rule is fairly complex. In the introduction rule, the dots from the formula A to the formula B indicate a proof of B, which can refer to the assumption A zero or more times; the brackets indicate that, after the introduction of the connective, the assumption A may be discharged; and the variable x associates the use of this rule with the corresponding discharged assumption. If one wants to know the active assumptions at a point in the proof, one needs to travel up the tree to the leaves. To remedy this, we present natural deduction in sequent form. This also provides a more elegant way to clarify some details about using the implication introduction rule that have been left unspecified.

1.1.2 Natural Deduction in Sequent Form

A sequent is a syntactic construct for asserting a relation between propositions—in our case between a collection of formulas and a single formula—which we will write

  Γ ⊢ A

In this example, Γ is the antecedent and A is the succedent. When using sequents, it is necessary to specify what sort of “collection” the antecedent is. There are several possibilities: sets, multisets, sets of named formulas, and sequences are the primary candidates. A sequent is most naturally understood as a statement about derivability with respect to Prawitz’ form of natural deduction. Γ ⊢ A may be interpreted as, “There exists a proof tree whose conclusion is A, and all of whose

Figure 1.3: Natural deduction in sequent style

  ---------- Ax
   Γ, A ⊢ A

   Γ, A ⊢ B
  ----------- →i
  Γ ⊢ A → B

  Γ ⊢ A → B    Γ ⊢ A
  -------------------- →e
         Γ ⊢ B

open assumptions are contained within Γ.” When interpreted as a statement about the existence of a proof, a sequent becomes a meta-proposition. This may be useful as a conceptual guide; however, the specification of a logical system using sequents is not, in general, formally dependent upon any previously-defined proof system. Using sequents to formulate the rules of natural deduction allows clearer distinctions between the possible methods of managing assumptions. No longer must we work with open (or closed) assumptions at the leaves of the proof tree; instead, the leaves will contain an instance of an axiom in the inference system, and assumptions will be internalized into the antecedents of the sequents. Therefore, if one is interested in knowing the collection of active assumptions, one may simply look at the left-hand side of the sequent. In reformulating natural deduction, the first step is to choose the nature of the collection. In order to note the differences, we will present three systems. The first is given in Figure 1.3. In this system, the antecedent of the sequents should be interpreted as a set of formulas.¹ The correspondence with the previous form of natural deduction should be fairly clear. This system corresponds to a version of Prawitz’ natural deduction in which:

1. a proof tree may have associated “leaves” that are not directly connected to its branches;

2. implication introduction may occur even if there are no open assumptions of the formula associated to the tree;

3. no discharge need actually take place;

4. all equivalent open assumptions in the tree must be simultaneously discharged (and hence marked with the same name) if any of them are discharged.

The first point arises from the fact that the axiom allows an arbitrary set of extra assumptions Γ. The second point is clear from the implication introduction rule. The third point can be observed by noting that, in the implication

¹As is commonly done, we will use the comma to indicate the union of two collections of formulas (as in Γ, Δ) or the union of a collection and a single formula (as in Γ, A). When the comma is used for sequences of assumptions, it should be interpreted as appending one to the other.


introduction rule, Γ may actually contain A, in which case Γ, A = Γ. The last point is inherent in the use of sets to maintain assumptions. Here is a simple proof in the system:

  ------------- Ax
     A ⊢ A
  -------------- →i
   A ⊢ A → A
  ------------------ →i
   ⊢ A → A → A

In the first implication introduction step, note how the assumption A is not deleted. If it were deleted, the second implication introduction step would not be possible. If we decided to manage assumptions with multisets, then we could simply reinterpret the rules in Figure 1.3. In this new interpretation, the third and fourth statements above would be replaced with:

3. exactly one open assumption in the tree must be discharged.

The proof of A → A → A in this system would be:

  ------------- Ax
    A, A ⊢ A
  -------------- →i
   A ⊢ A → A
  ------------------ →i
   ⊢ A → A → A

Although these two versions of natural deduction in sequent form provide some clarification on the use of implication introduction, they both lack some expressive power with respect to Prawitz’ version. Each of the proofs above could correspond to either of the two below:

   [A]x                  [A]x
  ------- →i (x)        ------- →i (y)
   A → A                 A → A
  ------------ →i (y)   ------------ →i (x)
   A → A → A             A → A → A

Using sets of named assumptions solves half of the problem. We could then tell which open assumption was discharged during an implication introduction, but we would still not know which assumption corresponds to the succedent in the axiom rule. One solution would be to allow the attachment of names to the succedent in axiom rules. Another solution is simply to use sequences to manage assumptions. This requires a modification to the axiom, and the resulting system is presented in Figure 1.4. We will return to this system later on, when de Bruijn indices are discussed.

1.1.3 Structural Rules

When working with multisets or sequences in a proof system, it is sometimes necessary to add structural rules. The three structural rules typically considered are weakening, contraction, and exchange (Figure 1.5). Clearly, in a system that uses sets of assumptions, only weakening is relevant, and in a system that uses multisets, only weakening and contraction are relevant.

Figure 1.4: Natural deduction in sequent style with assumptions maintained as sequences

  -------------- Ax
  Γ, A, Γ′ ⊢ A

   Γ, A ⊢ B
  ----------- →i
  Γ ⊢ A → B

  Γ ⊢ A → B    Γ ⊢ A
  -------------------- →e
         Γ ⊢ B

Figure 1.5: Structural rules

     Γ ⊢ B
  ----------- Weak
   Γ, A ⊢ B

   Γ, A, A ⊢ B
  -------------- Cont
    Γ, A ⊢ B

  Γ, A, Γ′, B, Γ″ ⊢ C
  --------------------- Exch
  Γ, B, Γ′, A, Γ″ ⊢ C

Adding these structural rules to a system may not alter the set of sequents that is provable in the system; in that case, the structural rules are admissible (but not derivable). This is the case for the three systems of natural deduction presented. It is worth noting that the precise formulation of the inference rules, especially the axiom, can influence the admissibility of various structural rules. For instance, if we had used the axiom

  -------- Ax
   A ⊢ A

in any of the inference systems, then weakening would be necessary to make the inference system complete and would no longer be an admissible rule.

1.2 Sequent Calculus

Having considered natural deduction, we now turn to the sequent calculus, which has been appreciated, since its introduction by Gentzen [7], for its symmetry and its applicability to automated proof search. Instead of having introduction and elimination rules as natural deduction does, the sequent calculus has right introduction rules and left introduction rules. Additionally, it has the significant Cut rule. We first consider a sequent calculus proof system that manages assumptions as sets. The inference rules are given in Figure 1.6. This system is complete; all of the sequents that are derivable in the systems of natural deduction that we previously considered are also derivable in this system. Furthermore, it is complete without the cut rule; that is to say, the cut rule is admissible. (Demonstrating this fact is not trivial, unlike demonstrating the admissibility of weakening in one of our previous systems.)

Figure 1.6: Sequent calculus using sets of assumptions

  ---------- Ax
   Γ, A ⊢ A

   Γ, A ⊢ B
  ----------- →r
  Γ ⊢ A → B

  Γ ⊢ A    Γ, B ⊢ C
  ------------------- →l
    Γ, A → B ⊢ C

  Γ ⊢ A    Γ, A ⊢ B
  ------------------- Cut
        Γ ⊢ B
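The cut-free fragment of Figure 1.6 can be run backwards as a naive prover, which is one concrete sense in which the sequent calculus suits automated proof search. The sketch below is our own illustration in Python (the tagged-tuple formulas and the crude depth bound are assumptions of the sketch, not anything from the thesis):

```python
# Formulas: ("atom", name) or ("arrow", A, B), as in Figure 1.1.

def provable(gamma, goal, depth=12):
    """Backward-chaining search over the cut-free rules of Figure 1.6,
    with the assumptions gamma kept as a frozenset (sets of formulas).
    The depth bound crudely cuts off looping ->l searches, so False
    means "no proof found within the bound", not a refutation."""
    if depth == 0:
        return False
    if goal in gamma:                              # Ax
        return True
    if goal[0] == "arrow":                         # ->r (invertible, try first)
        return provable(gamma | {goal[1]}, goal[2], depth - 1)
    for f in gamma:                                # ->l on each A -> B in gamma
        if f[0] == "arrow":
            a, b = f[1], f[2]
            if provable(gamma, a, depth - 1) and \
               provable(gamma | {b}, goal, depth - 1):
                return True
    return False
```

For instance, the search finds A → B → A but, as expected of minimal logic, fails on Peirce's law ((A → B) → A) → A.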

Remark 1.2.1. Even though the cut rule is admissible, its presence gives the sequent calculus a certain expressiveness that is quite valuable. For instance, it is relatively trivial to give a translation of natural deduction into sequent calculus when the cut rule is present. This expressiveness is one of the primary factors that makes the sequent calculus so useful as a basis for computational structures.

Now let us turn our attention to finding a version of the sequent calculus that manages its assumptions as multisets. In natural deduction, the presentation of the systems using sets and multisets required no typographical changes to the inference rules. Thus, we may be inclined to believe the same holds true with the sequent calculus. However, if we wish to maintain the property of cut admissibility, this is not the case. Consider the following proof:

  --------------- Ax   ------------- Ax
  A → B ⊢ A → B        A → B, A ⊢ A
  ----------------------------------- →l   -------------------------- Ax
       (A → B) → A, A → B ⊢ A              (A → B) → A, A → B, B ⊢ B
  ------------------------------------------------------------------- →l
                     (A → B) → A, A → B ⊢ B

This is a valid proof in the system of Figure 1.6, where assumptions are managed as sets. Note that in the left introduction step just before the conclusion, there is an implicit contraction step. In the application of the rule, we have

  Γ = (A → B) → A, A → B

However, the newly formed formula on the left is A → B, which is already present as an assumption. So we get Γ, A → B = Γ. There is a similar contraction in the first left introduction rule. These contractions would not be automatic if assumptions were kept as a multiset. There is no way to remedy the situation without changing the inference rules. We may either alter the left introduction rule or add a contraction rule to the system. We will take the latter choice, and the resulting system is presented in Figure 1.7.

Figure 1.7: Sequent calculus with multisets of assumptions

  ---------- Ax
   Γ, A ⊢ A

   Γ, A ⊢ B
  ----------- →r
  Γ ⊢ A → B

  Γ ⊢ A    Γ, B ⊢ C
  ------------------- →l
    Γ, A → B ⊢ C

   Γ, A, A ⊢ B
  -------------- Cont
    Γ, A ⊢ B

  Γ ⊢ A    Γ, A ⊢ B
  ------------------- Cut
        Γ ⊢ B

Figure 1.8: Syntax of λ-calculus terms

  t ::= x | (λx.t) | (t t′)

1.3 Proof Terms

1.3.1 Natural Deduction

We would now like to present the λ-calculus as a means for encoding proofs. There is a natural bijection between the terms of the basic λ-calculus and proofs in natural deduction. The syntax of the λ-calculus is presented in Figure 1.8, and the method for matching proofs to λ-terms is described by the inference rules in Figure 1.9. In particular, whereas λ-terms correspond to proofs, formulas correspond to types. This means that from now on, when we refer to a term calculus we actually mean a typed calculus à la Curry [2]. In this case, we are encoding proofs from a system that manages assumptions with sets of named formulas, with a provision in the axiom to assign a name to the formula in the succedent of the sequent. Once we can encode proofs as terms, it becomes much easier to work with transformations on proofs. The Curry-Howard isomorphism is properly an “isomorphism” because the reduction rules of the λ-calculus correspond to correct proof normalization steps. Reduction in the λ-calculus is done by means of the β-rule:

  (λx.t) u → t[u/x]

Here we use the notation t[u/x] as meta-syntax to describe the term t with all free occurrences of x replaced by the term u.
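Both the term assignment of Figure 1.9 and the β-rule can be made concrete in a few lines. The Python sketch below is our own illustration: terms are tagged tuples; the λ-binder carries its domain as an annotation (à la Church — a simplification we adopt so that type synthesis is syntax-directed, whereas the text works à la Curry); and, in the spirit of the text's conventions, substitution assumes bound names are distinct from the free names of the substituted term, so no renaming is performed:

```python
# λ-terms (Figure 1.8) as tagged tuples, with the λ-binder annotated by
# its domain: ("var", x), ("lam", x, A, t), ("app", t, u).  Formulas
# are ("atom", name) or ("arrow", A, B), as in Figure 1.1.  The
# annotation is our à-la-Church simplification; the text works à la Curry.

def typeof(gamma, t):
    """Synthesize the type of t under gamma (a dict name -> formula),
    mirroring the rules of Figure 1.9; None when no derivation exists."""
    if t[0] == "var":                                  # rule Ax
        return gamma.get(t[1])
    if t[0] == "lam":                                  # rule ->i
        _, x, a, body = t
        bty = typeof({**gamma, x: a}, body)
        return ("arrow", a, bty) if bty is not None else None
    _, f, u = t                                        # rule ->e
    fty = typeof(gamma, f)
    if fty is not None and fty[0] == "arrow" and fty[1] == typeof(gamma, u):
        return fty[2]
    return None

def subst(t, u, x):
    """t[u/x]: replace free occurrences of x in t by u, assuming bound
    names are distinct from the free names of u (no renaming here)."""
    if t[0] == "var":
        return u if t[1] == x else t
    if t[0] == "lam":
        _, y, a, body = t
        return t if y == x else ("lam", y, a, subst(body, u, x))
    return ("app", subst(t[1], u, x), subst(t[2], u, x))

def beta(t):
    """One β-step at the root, when t is a redex; otherwise None."""
    if t[0] == "app" and t[1][0] == "lam":
        _, (_, x, _a, body), u = t
        return subst(body, u, x)
    return None
```

With this reading, checking that a term has a type amounts to checking that its underlying proof tree is well-formed, and `beta` performs one proof-normalization step.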

1.3.2 Sequent Calculus

There have been many attempts to find proof terms for sequent calculus proofs that have similar properties to those for natural deduction proofs. A

Figure 1.9: Assignment of λ-terms to proofs in natural deduction

  ------------------- Ax
  Γ, x : A ⊢ x : A

    Γ, x : A ⊢ t : B
  ----------------------- →i
  Γ ⊢ (λx.t) : A → B

  Γ ⊢ t : A → B    Γ ⊢ u : A
  ---------------------------- →e
        Γ ⊢ (t u) : B

couple of problematic issues arise in this regard, though. In the case of natural deduction, we needed to include all the expressive structure of Prawitz’ version into the sequents. This meant that names needed to be added to assumptions and to the succedent in the axiom. However, mimicking this in the sequent calculus is more difficult because the assumptions are manipulated by the left introduction rule. What name is to be given to the newly created formula at that point? Another issue is that, although the Cut rule is admissible, there are often multiple cut-free proofs of the same sequent. This arises from the inherent permutability of the left and right introduction rules in the sequent calculus. We present the example from [10]:

  ----------- Ax   -------------- Ax
   A, C ⊢ A        A, C, B ⊢ B
  -------------------------------- →l
        A → B, A, C ⊢ B
  ----------------------- →r
     A → B, A ⊢ C → B

and

                 -------------- Ax
                  A, C, B ⊢ B
  -------- Ax    -------------- →r
   A ⊢ A          A, B ⊢ C → B
  -------------------------------- →l
         A → B, A ⊢ C → B

In proving this sequent, the right introduction rule that creates C → B and the left introduction rule that creates A → B are permutable. In some general sense, though, the two proofs are the same. So, should these proofs have the same proof term or different ones? The solution offered by Herbelin to both of these problems is the use of a restricted form of sequent calculus called LJT [5]. We will use two different types of sequents:

  Γ ⊢ A        Γ | A ⊢ B

Γ is still a collection of formulas, but now there is potentially a distinguished formula on the left, separated by the vertical bar. Now we may reformulate the rules of sequent calculus using both of these types of sequents. Figure 1.10 shows the rules other than cut. They are constructed so as to allow only one cut-free proof of any proposition. However, this new type of sequent offers four different ways of reformulating the cut rule (Figure 1.11). We will consider the system that uses all four. We will call the first two cut rules “head-cut” rules and the second two “mid-cut” rules.

1.3.3 The λ̄-calculus

Finally, we may present a term language for representing proofs in this permutation-free version of the sequent calculus. Herbelin calls this the λ̄-calculus, and

Figure 1.10: A version of sequent calculus with two types of sequents

  ------------ Ax
   Γ | A ⊢ A

   Γ, A ⊢ B
  ----------- →r
  Γ ⊢ A → B

  Γ ⊢ A    Γ | B ⊢ C
  --------------------- →l
    Γ | A → B ⊢ C

  Γ, A | A ⊢ B
  -------------- Cont
    Γ, A ⊢ B

Figure 1.11: Four new types of cut

  Γ ⊢ A    Γ | A ⊢ B
  -------------------- Cut1
        Γ ⊢ B

  Γ | C ⊢ A    Γ | A ⊢ B
  ------------------------ Cut2
       Γ | C ⊢ B

  Γ ⊢ A    Γ, A ⊢ B
  ------------------- Cut3
        Γ ⊢ B

  Γ ⊢ A    Γ, A | C ⊢ B
  ----------------------- Cut4
       Γ | C ⊢ B

Figure 1.12: Syntax of λ̄-terms

  t ::= x {e} | (λx.t) | t {e} | t [ s ]
  e ::= [] | (t · e) | (e @ e′) | e [ s ]
  s ::= x := t


Figure 1.13: Assignment of λ̄-terms to proofs

  ---------------------------- Ax
   Γ | . : A ⊢ . {[]} : A

  Γ, x : A | . : A ⊢ . {e} : B
  ------------------------------ Cont
   Γ, x : A ⊢ x {e} : B

    Γ, x : A ⊢ t : B
  ----------------------- →r
  Γ ⊢ (λx.t) : A → B

  Γ ⊢ t : A    Γ | . : B ⊢ . {e} : C
  ------------------------------------ →l
    Γ | . : A → B ⊢ . {t · e} : C

  Γ ⊢ t : A    Γ | . : A ⊢ . {e} : B
  ------------------------------------ Cut1
          Γ ⊢ t {e} : B

  Γ | . : C ⊢ . {e} : A    Γ | . : A ⊢ . {e′} : B
  ------------------------------------------------- Cut2
          Γ | . : C ⊢ . {e @ e′} : B

  Γ ⊢ u : A    Γ, x : A ⊢ t : B
  ------------------------------- Cut3
      Γ ⊢ t [ x := u ] : B

  Γ ⊢ u : A    Γ, x : A | . : C ⊢ . {e} : B
  -------------------------------------------- Cut4
      Γ | . : C ⊢ . {e [ x := u ]} : B

its syntax is given in Figure 1.12. (We have made some very slight modifications to the original definition of the syntax to accommodate enhancements later on.) The syntax defines terms (t), contexts (e), and substitutions (s). Contexts are lists of arguments made from the constructors nil ([]), cons (·), and append (@), possibly with an associated substitution.² The terms will be used to represent proofs whose concluding sequent is of the form Γ ⊢ A, and the contexts will represent proofs whose concluding sequent is of the form Γ | A ⊢ B. In a sense, terms are producers of values whereas contexts are consumers. Substitutions will be used for recording the use of the mid-cut rules. The assignment of terms to proofs in the sequent calculus is given in Figure 1.13. Herbelin [10] also offers reduction rules for the λ̄-calculus (Figure 1.14)³ and proves that they are strongly normalizing as a means for cut elimination. These rules offer a nice, orthogonal system for rewriting terms (i.e. proofs) [13]; however, they do not allow a simulation of β-reduction as one might hope. This will be necessary for our endeavor; so one of the next goals of this thesis is the development of a set of reduction rules for this term system

²We will assume that substitution binds more tightly than (·) or (@) and omit parentheses when no ambiguity results.
³We must assume that all variable names in an expression are unique or provide some other means to avoid variable capture during substitution propagation.


Figure 1.14: Reduction rules of the λ̄-calculus

  (βnil)   (λx.t) {[]}        →  λx.t
  (βcons)  (λx.t) {u · e}     →  t [ x := u ] {e}
  (Cvar)   x {e} {e′}         →  x {e @ e′}
  (Cnil)   [] @ e             →  e
  (Ccons)  (u · e) @ e′       →  u · (e @ e′)
  (Syes)   x {e} [ x := u ]   →  u {e [ x := u ]}
  (Sno)    y {e} [ x := u ]   →  y {e [ x := u ]}
  (Sλ)     (λy.t) [ x := u ]  →  (λy.t [ x := u ])
  (Snil)   [] [ s ]           →  []
  (Scons)  (t · e) [ s ]      →  t [ s ] · e [ s ]

that can simulate β-reduction.
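Although the rules of Figure 1.14 perform cut elimination rather than the β-simulation we are after, they are concrete enough to run. The Python sketch below is our own transcription (the tagged-tuple encoding and function names are ours); following footnote 3, it assumes all bound names are distinct, so the Sλ rule needs no renaming:

```python
# λ̄-terms and contexts (Figure 1.12) as tagged tuples (names ours):
#   terms:    ("varc", x, e) = x {e}      ("lam", x, t)     = λx.t
#             ("cut", t, e)  = t {e}      ("tsub", t, x, u) = t [x := u]
#   contexts: ("nil",)       = []         ("cons", t, e)    = t · e
#             ("cat", e, f)  = e @ e'     ("esub", e, x, u) = e [x := u]

NIL = ("nil",)

def step_t(t):
    """One pass of the term rules of Figure 1.14 (rule names in the
    comments).  Bound names are assumed distinct (footnote 3)."""
    tag = t[0]
    if tag == "varc":
        return ("varc", t[1], step_e(t[2]))
    if tag == "lam":
        return ("lam", t[1], step_t(t[2]))
    if tag == "cut":
        _, body, e = t
        if body[0] == "lam" and e == NIL:                       # βnil
            return step_t(body)
        if body[0] == "lam" and e[0] == "cons":                 # βcons
            return step_t(("cut", ("tsub", body[2], body[1], e[1]), e[2]))
        if body[0] == "varc":                                   # Cvar
            return step_t(("varc", body[1], ("cat", body[2], e)))
        return ("cut", step_t(body), step_e(e))
    _, body, x, u = t                                           # t [x := u]
    if body[0] == "varc" and body[1] == x:                      # Syes
        return step_t(("cut", u, ("esub", body[2], x, u)))
    if body[0] == "varc":                                       # Sno
        return step_t(("varc", body[1], ("esub", body[2], x, u)))
    if body[0] == "lam":                                        # Sλ
        return step_t(("lam", body[1], ("tsub", body[2], x, u)))
    return ("tsub", step_t(body), x, u)

def step_e(e):
    tag = e[0]
    if tag == "nil":
        return e
    if tag == "cons":
        return ("cons", step_t(e[1]), step_e(e[2]))
    if tag == "cat":
        _, e1, e2 = e
        if e1 == NIL:                                           # Cnil
            return step_e(e2)
        if e1[0] == "cons":                                     # Ccons
            return step_e(("cons", e1[1], ("cat", e1[2], e2)))
        return ("cat", step_e(e1), step_e(e2))
    _, inner, x, u = e                                          # e [x := u]
    if inner == NIL:                                            # Snil
        return NIL
    if inner[0] == "cons":                                      # Scons
        return step_e(("cons", ("tsub", inner[1], x, u),
                       ("esub", inner[2], x, u)))
    return ("esub", step_e(inner), x, u)

def norm(t):
    """Iterate to normal form; Herbelin proves the rules terminate."""
    t2 = step_t(t)
    return t if t2 == t else norm(t2)
```

Because the rules are strongly normalizing, iterating a single rewriting pass until the term stops changing is a legitimate normalization procedure.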


Chapter 2

Weak Reduction and the λσw-calculus

2.1 Reduction in the λ-calculus

The λ-calculus is an important theoretical model of computation. Moreover, it is also the archetypal minimal functional programming language. As a programming language, concerns arise with respect to implementation. The reduction relation induced by the β rule is not a functional relation. Since the λ-calculus is confluent, one possible implementation of reduction would be to follow all reduction paths simultaneously until one of them reaches a normal form. However, this is clearly unacceptable from a practical standpoint. Thus, we need to consider methods of reducing λ-terms that can be reasonably implemented.

2.1.1 Weak Reduction Strategies

An important factor related to efficiency is that weak reduction of λ-terms is often sufficient in programming languages. If we give an explicit specification of the reduction relation on λ-terms as in Figure 2.1, then we can define weak reduction by omitting the rule (ξ) from this inference system. In other words, reduction does not occur under abstractions. This is typically quite acceptable in most languages. Terms that cannot be reduced in this modified system are in weak normal form. However, as noted in [3, 9], weak normal forms are not unique, i.e. this reduction system is not confluent. A reduction strategy is a deterministic (functional) subrelation of a general reduction relation. Since machines (under the typical model) operate deterministically, it is exponentially more efficient to specify a single reduction strategy than it is to simulate non-determinism with a deterministic machine. Furthermore, since the weak λ-calculus is not confluent, the latter is not even a useful option in a theoretical sense. The three most basic weak reduction strategies are given in Figures 2.2, 2.3,

Figure 2.1: Definition of the reduction relation on λ-terms

(β)  (λx.t) u → t[u/x]
(ξ)  t → t′  ⟹  λx.t → λx.t′
     t → t′  ⟹  t u → t′ u
     u → u′  ⟹  t u → t u′

Figure 2.2: Definition of call-by-name reduction

(β)  (λx.t) u → t[u/x]
     t → t′  ⟹  t u → t′ u

and 2.4. In the call-by-value reduction strategies, there is a constraint that a term be a value, which means that it cannot be further reduced by the strategy.
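To make the strategies concrete, here is a small sketch (ours, not code from the thesis; `subst`, `cbn_step`, and `cbn_eval` are hypothetical names) of the call-by-name strategy of Figure 2.2, with λ-terms encoded as nested tuples. The substitution helper assumes bound variable names are distinct, so no renaming is needed.

```python
# Terms: ('var', x), ('lam', x, body), ('app', t, u).

def subst(t, x, u):
    """t[u/x]; assumes no variable capture (bound names are distinct)."""
    if t[0] == 'var':
        return u if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, u))
    return ('app', subst(t[1], x, u), subst(t[2], x, u))

def cbn_step(t):
    """One call-by-name step; None if t is in weak head normal form."""
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':                 # (beta)
            return subst(f[2], f[1], a)
        f2 = cbn_step(f)                  # reduce the operator position only
        if f2 is not None:
            return ('app', f2, a)
    return None                           # never reduces under a lambda

def cbn_eval(t):
    while (t2 := cbn_step(t)) is not None:
        t = t2
    return t
```

Note that `cbn_eval` stops at any abstraction or at a stuck application, i.e. at a weak normal form of the strategy; it never reduces argument positions, matching the omission of the (ξ) rule.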

2.1.2

Closures

The previous alterations to the reduction relation were made with the goal of limiting the number of times that the β rule would need to be applied in reducing a term. However, there is still a good deal of efficiency to be gained or lost in the implementation of the β rule itself. Implementing β-reduction primarily involves the propagation of substitutions through a term. One important technique for efficient implementation is suspending the propagation of substitutions when possible. This is done by pairing an environment with a term to form a closure. The environment maps variable names either to terms or (more likely) to other closures. From a static point of view, a closure is a term with free variables that are defined in the associated environment. From a dynamic point of view, a closure is a term in which the propagation of substitutions has been suspended until it is clearly useful to propagate them further. The process of suspending substitutions and working with closures cannot be described by the λ-calculus. Instead, we would need a calculus with more intensional properties—a sort of calculus of closures. Such a calculus could be useful for proving properties of mechanical implementations of λ-reductions. In [15], Leroy uses the λenv-calculus for such a purpose, and in [9], Hardin, Maranget, and Pagano propose the λσw-calculus as ideally suited for this task.
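As an illustration of the closure technique (our sketch; the thesis gives no such code, and `whnf` is a hypothetical name), here is a call-by-name evaluator that never substitutes eagerly: every term is paired with an environment, and arguments are suspended as closures until a variable lookup forces them.

```python
# Terms as before: ('var', x), ('lam', x, body), ('app', t, u).
# A closure is a pair (term, env); an environment maps names to closures.

def whnf(term, env):
    """Reduce to weak head normal form, returning a closure."""
    if term[0] == 'var':
        t2, env2 = env[term[1]]                   # forced only when needed
        return whnf(t2, env2)
    if term[0] == 'app':
        (_, x, body), fenv = whnf(term[1], env)   # operator must become a lambda
        arg = (term[2], env)                      # argument stays suspended
        return whnf(body, {**fenv, x: arg})
    return (term, env)                            # a lambda plus env is a closure
```

No substitution ever walks the whole term; the environment extension `{**fenv, x: arg}` is the suspended substitution.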

Figure 2.3: Definition of left-to-right call-by-value reduction

(β)  (λx.t) v → t[v/x]   (v is a value)
     t → t′  ⟹  t u → t′ u
     u → u′  ⟹  v u → v u′   (v is a value)

Figure 2.4: Definition of right-to-left call-by-value reduction

(β)  (λx.t) v → t[v/x]   (v is a value)
     t → t′  ⟹  t v → t′ v   (v is a value)
     u → u′  ⟹  t u → t u′

Figure 2.5: Syntax of λDB-terms

t ::= n | (λt) | (t t′)

It is a simplified version of the λenv-calculus that is only capable of weak reduction. Furthermore, the λσw-calculus is confluent, thereby repairing, in a sense, the problem created by removing the ξ rule from the λ-calculus.

2.2

De Bruijn Indices

Before proceeding with a description of the λσw-calculus, we will first introduce de Bruijn indices—a method of using natural numbers as the names of variables in λ-terms. Perhaps the best way to understand the motivation of de Bruijn indices is to think about them from the perspective of logic. In the previous chapter, we introduced a method of maintaining assumptions in sequences rather than sets. If we would like to assign terms that can represent proofs in this system, we need to slightly modify the λ-calculus. We will call these new terms λDB-terms, and their syntax is given in Figure 2.5. The assignment of terms to proofs (also the typing rules of λDB-terms) is given in Figure 2.6. (We use the notation |Γ| here to represent the number of assumptions in the sequence Γ.) This typing system is essentially a fragment of a typing system presented in [1]. The benefit of de Bruijn indices is that they eliminate the extra effort of managing names for the assumptions. Of particular importance is the fact that they remove conflicts in names. The uniqueness of names (both locally and globally) is no longer something that must be enforced because it is now built into the naming scheme. Since a machine (abstract or physical) would naturally

Figure 2.6: Assignment of λDB-terms to proofs in natural deduction

(Ax)  Γ, A, Δ ⊢ n + 1 : A   (|Δ| = n)
(→e)  Γ ⊢ t : A → B  and  Γ ⊢ u : A  ⟹  Γ ⊢ (t u) : B
(→i)  Γ, A ⊢ t : B  ⟹  Γ ⊢ (λt) : A → B


Figure 2.7: Syntax of λDB-terms with explicit weakening

t ::= ⋆ | (λt) | (t t′) | t ↑

Figure 2.8: Assignment of λDB-terms to proofs in natural deduction with explicit weakening

(Ax)    Γ, A ⊢ ⋆ : A
(→i)    Γ, A ⊢ t : B  ⟹  Γ ⊢ (λt) : A → B
(→e)    Γ ⊢ t : A → B  and  Γ ⊢ u : A  ⟹  Γ ⊢ (t u) : B
(Weak)  Γ ⊢ t : A  ⟹  Γ, B ⊢ t ↑ : A

use numbers to represent the infinite set of variable names, this simply provides an organized manner of doing so. As mentioned by several authors (e.g. [16] and [3]), we don't actually need to depend upon an externally-defined set of natural numbers; instead, we can integrate them into our calculus if we have another term construction. Consider the new syntax in Figure 2.7. There are no natural numbers here. Instead we may define them:

n := ⋆ ↑ · · · ↑   (n − 1 occurrences of ↑)

The new terms correspond to a logic with an explicit weakening rule. These typing rules are given in Figure 2.8. We do not deal with reduction rules on this system, since we will presently discuss a method of reducing terms with de Bruijn indices in the λσw-calculus.
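For illustration (our sketch, not from the thesis; `to_debruijn` is a hypothetical name), converting a named λ-term into a λDB-term with 1-based indices is a single structural recursion over a context of binders:

```python
# Named terms: ('var', x), ('lam', x, body), ('app', t, u).
# lambda_DB terms: ('ix', n), ('lam', t), ('app', t, u).

def to_debruijn(t, ctx=()):
    """ctx lists binders innermost-first; index 1 is the nearest binder.
    Raises ValueError on a free variable (no entry in ctx)."""
    if t[0] == 'var':
        return ('ix', ctx.index(t[1]) + 1)       # 1-based de Bruijn index
    if t[0] == 'lam':
        return ('lam', to_debruijn(t[2], (t[1],) + ctx))
    return ('app', to_debruijn(t[1], ctx), to_debruijn(t[2], ctx))
```

The uniqueness of names is automatic here: two distinct binders can never be confused, since each occurrence is resolved by its distance to its binder.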

2.3

The λσw-calculus

The λσw-calculus [3, 9] is a version of the λ-calculus with de Bruijn indices and simultaneous explicit substitutions. The syntax of λσw-terms is given in Figure 2.9, and the reduction rules are given in Figure 2.10. Substitutions are now part

Figure 2.9: Syntax of λσw-terms

t ::= n | λt | (t t′) | t [ s ]
s ::= id | t · s | ↑ | s ∘ s′


Figure 2.10: Reduction rules of the λσw-calculus

(Beta)       ((λt) [ s ]) t′ → t [ t′ · s ]
(App)        (t t′) [ s ] → (t [ s ] t′ [ s ])
(FVar)       1 [ t · s ] → t
(RVar)       n + 1 [ t · s ] → n [ s ]
(Clos)       (t [ s ]) [ s′ ] → t [ s ∘ s′ ]
(AssEnv)     (s ∘ s′) ∘ s″ → s ∘ (s′ ∘ s″)
(MapEnv)     (t · s) ∘ s′ → t [ s′ ] · (s ∘ s′)
(ShiftCons)  ↑ ∘ (t · s) → s
(IdL)        id ∘ s → s

of the syntax. The substitution [ t · u · v · id ] means that t will be substituted for the variable 1, u will be substituted for the variable 2, and so on. This almost corresponds to [ 1 := t ] [ 2 := u ] [ 3 := v ] in the λ̄-calculus, except that the substitutions in the λσw-calculus are simultaneous; so the substitution of u for 2 will never be applied to the term t in the λσw-calculus, as would eventually occur during reduction of the nonsimultaneous substitutions in the λ̄-calculus. The use of simultaneous substitutions necessitates a provision for substitution composition, which is found in the λσw-calculus. As mentioned before, one of the important features of this calculus is that it natively implements weak β-reduction. In comparison with the λ-calculus, this is possible because, among other things, substitutions can be propagated through applications. If we were to provide a typing system for this calculus, it would be similar to the basic one found in [1]. However, we will refrain from providing a typing system for a calculus with simultaneous explicit substitutions until we present the λ̄σw-calculus.
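The behaviour of the (FVar) and (RVar) rules on an index can be sketched in a few lines of Python (ours, not from the thesis; indices are encoded as ('ix', n), and a substitution is a Python list of terms with an implicit id tail):

```python
def lookup(n, sub):
    """Apply the substitution  t1 . t2 ... tk . id  to the variable n (1-based)."""
    if n <= len(sub):
        return sub[n - 1]            # n-1 uses of (RVar), then one use of (FVar)
    return ('ix', n - len(sub))      # fell through to the id tail
```

Because the substitution is simultaneous, `lookup` never applies one entry of the list to another: each entry is returned exactly as it was stored.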


Chapter 3

Abstract Machines

Broadly speaking, an abstract machine is a formal model of a computer. The key idea to be captured by an abstract machine is that, at any given point in time, a computer has a state, and, based upon the current state, the computer will progress to another state. Thus, an abstract machine is a transition system (as defined by Plotkin [19]). However, we will only be interested in deterministic abstract machines and will assume this to be an essential property. A set of final states may be described along with the transition system. If a machine cannot progress according to the transition relation, then it is either in a final state or it is stuck. If the set of final states is not explicitly given, then we assume that all states without a transition are valid final states. A deterministic finite automaton and a Turing machine would both be examples of abstract machines,1 but programming language researchers study abstract machines to figure out how computation is performed instead of simply what can be computed by various machines. This leads to abstract machines that look somewhat different from those used in automata theory. Using abstract machines to study how computation can be performed is especially important with respect to functional programming languages because they operate in a manner very different from the natural behavior of physical machines. Another important use is a practical one: implementing an abstract machine on multiple physical computer systems makes it possible to write just a single compiler for a language that outputs abstract machine instructions. An overview of the use of abstract machines with respect to programming languages can be found in [6].

1 The state of a machine, in our terminology, will correspond to the concept of a machine configuration in automata theory. A set of states, in the terminology of automata theory, would correspond to a set of atomic symbols that are used in defining the states of an abstract machine.


Figure 3.1: Syntax of arithmetic expressions

e ::= n | (e + e′)
n ::= 0 | (n + 1)

3.1

Measuring Abstractness

The definition of “abstract machine” given so far allows for a large range in the level of abstractness. Abstractness arises when certain implementation details of the computational steps are ignored. Examples of abstract machines include the strategies for reducing λ-terms that were given in the previous chapter. (In that case the λ-term is the machine state.) However, these seem to be somewhat more “abstract” than we would expect of a typical machine. We will try to make this notion of abstractness more precise. Since any useful machine will have an infinite number of potential states (i.e. “configurations”), the state transitions cannot be enumerated, but must be given by a set of rules. Certainly, using these rules to move the machine forward one step must be an effectively computable procedure; otherwise we have a machine that is not useful for studying the process of computation. But more practically, we can make a distinction between machines that can always be moved one step forward in constant time (i.e. O(1)) and those that cannot. It is important to note that this is as much a property of the data structure used to represent the machine state as it is of the rules governing the machine steps.2 If a machine’s steps can always be executed in constant time then we will say that the machine takes steps of constant size, or steps of strictly bounded size. Sometimes a machine does not take constant-sized steps in a strict sense, but we can put a bound on the step size for all future states after statically examining the current state. If we can do this for all possible states, then we will say that the machine takes steps of statically bounded size. This is a much weaker notion than steps of strictly bounded size.

3.2

An Addition Machine

As a first example, we will consider two machines that evaluate simple arithmetic expressions on natural numbers (in unary representation). The syntax of the source language is given in Figure 3.1, and we include addition as the only operation. The first machine uses arithmetic expressions as its states, and its

2 It is also dependent upon the model of computation being used. For our purposes, we will assume that the data structures being used are term trees and that manipulation of the branches of a tree within a fixed distance from the root can always be done in constant time. We will not assume the existence of random-access vectors by default, although, later on, we will mention how they may be important.


Figure 3.2: Arithmetic machine

0 + n → n
(m + 1) + n → m + (n + 1)
e₁ → e₁′  ⟹  e₁ + e₂ → e₁′ + e₂
e → e′  ⟹  n + e → n + e′

transition relation is defined by the inference system in Figure 3.2. This machine could be called a small-step operational semantics on the language [19]. In fact, any small-step operational semantics really defines an abstract machine. Furthermore, any strategy in a calculus can be called a “machine” by this definition. However, this addition machine does not have constant-sized steps. In general, if it is necessary to use premises in an inference system in order to define a machine’s transition rules, then the machine does not take constant-sized steps. (There are various ways in which the converse may fail to be true, though.) Next, we present another machine that operates at a lower level and does take constant-sized steps. This machine will work with a control stack and a value stack. A control stack (C) is a list whose elements are expressions or the Add instruction. A value stack (S) is a list of integers.

C ::= ε | Add; C | e; C
S ::= ε | n · S

A machine state is simply a pair consisting of the control stack and the value stack: M ::= (C / S)

The transition rules of the machine are: (n; C / S) → (C / n · S)

((e + e′); C / S) → (e; e′; Add; C / S)
(Add; C / 0 · n · S) → (C / n · S)
(Add; C / (m + 1) · n · S) → (Add; C / m · (n + 1) · S)

Now we need a means to configure the starting state of the machine. A function (L) to load an expression can be defined as:

L(e) = (e; ε / ε)

We may also want a function to unload a value from the machine after it has stopped. This simply takes the result out of the machine. In this case, it will suffice to say that the result is the item on top of the value stack when the control stack is empty.
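The stack machine above can be run directly in Python. This is our own sketch under our own encoding (expressions are ints or ('+', e₁, e₂) tuples; numbers are native ints rather than unary numerals, but the Add rule still transfers one unit per machine step):

```python
def run(e):
    """Evaluate an arithmetic expression with explicit control/value stacks."""
    C, S = [e], []                       # control stack, value stack
    while C:
        top = C[0]
        if top == 'Add':
            m, n, *rest = S
            if m == 0:
                C.pop(0)                 # (Add; C / 0.n.S)     -> (C / n.S)
                S = [n, *rest]
            else:                        # (Add; C / (m+1).n.S) -> (Add; C / m.(n+1).S)
                S = [m - 1, n + 1, *rest]
        elif isinstance(top, int):
            C.pop(0)                     # (n; C / S) -> (C / n.S)
            S.insert(0, top)
        else:                            # ((e1 + e2); C / S) -> (e1; e2; Add; C / S)
            C[:1] = [top[1], top[2], 'Add']
    return S[0]                          # unload: top of the value stack
```

Every branch inspects and rewrites only a bounded prefix of the two stacks, which is exactly the constant-step-size property discussed above.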

3.3

Compilation and Decompilation

In both machines given above, the source expression was put directly into the machine. Often, though, an abstract machine is designed to operate on a source expression that has been compiled. (The notion that programs are compiled into a list of instructions is sometimes useful, but except for very simple languages, the instructions in compiled code will often refer to other pieces of compiled code, in which case the actual structure of the code bears more resemblance to a tree or graph than to a list.) As an example of compilation, here is a function that will compile expressions for a modified version of the arithmetic machine.

[[n]] = n
[[e + e′]] = [[e]]; [[e′]]; Add

It is easy to see that this function just turns the source expression into postfix notation, which is a typical thing to do during compilation. Here is the modified machine that will execute compiled arithmetic expressions.

(n; C / S) → (C / n · S)

(Add; C / 0 · n · S) → (C / n · S)
(Add; C / (m + 1) · n · S) → (Add; C / m · (n + 1) · S)

Finally, we may provide a compile-and-load function (L) for running the machine on an expression:

L(e) = ([[e]]; ε / ε)

Now we may compare the machine that executes compiled code with the one that operates on source expressions. The difference is that exactly one rule has been removed. This raises the question as to whether this is really a different machine or not. In essence, the original machine integrated a compilation of terms on-the-fly during execution. When we present the SECD machine later, we consider a version that operates on compiled code instead of source terms as the original SECD machine did [18]. The changes made to the SECD machine correspond exactly to the changes made to this arithmetic machine. Decompilation indicates the translation of code, or of an entire machine state, to an output language. If defined on machine states, it will usually be a total function, and not just restricted to final states (as unloading may be). In the later chapters, we will define several decompilation functions from abstract machine states to a version of the λ̄-calculus that will be presented in the next chapter.
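The compiler and the compiled-code machine can be sketched as follows (our code, not the thesis's; note that `exec_code` collapses the unary Add loop into native addition, so it is faithful to the result but not to the step-by-step unary transitions):

```python
def compile_expr(e):
    """[[n]] = n ;  [[e1 + e2]] = [[e1]]; [[e2]]; Add  (postfix code as a list)."""
    if isinstance(e, int):
        return [e]
    return compile_expr(e[1]) + compile_expr(e[2]) + ['Add']

def exec_code(code):
    """Run compiled code on a value stack; numerals push, Add combines."""
    S = []
    for instr in code:
        if instr == 'Add':
            m, n, *rest = S
            S = [m + n, *rest]           # unary Add loop done in one go
        else:
            S.insert(0, instr)           # push a numeral
    return S[0]
```

Compared with `run` above, the rule that unfolds a sum on the control stack has disappeared: that unfolding is now performed once and for all by `compile_expr`.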


Chapter 4

A Calculus for Simulating Abstract Machines

As discussed in Chapter 2, if we are to find a calculus that operates at the same level as an abstract machine, then it should be able to perform weak reduction and to represent and manage closures. The λσw-calculus satisfies these conditions, but there is another factor that we cover presently.

4.1

The Structure of Applicative Terms

Consider the application of a term to three arguments in the λ-calculus and in the λ̄-calculus:

t u1 u2 u3        t {u1 · u2 · u3 · ∅}

While the expressions look somewhat similar when read on paper, their abstract syntax trees are quite distinct. Compare the two trees in Figure 4.1. In the case of the λ-term, the subterm t is at the bottom of the tree, and in the case of the λ̄-term, it is at the top. This is very significant because, in the definitions of all the useful reduction strategies for the λ-calculus, we find a rule of the form:

t → t′  ⟹  t u → t′ u

An implementation of reduction based upon this rule requires a search of arbitrary depth into the λ-term. Such a search would also be required simply to discover whether the term has a redex at all. However, there is no way to avoid this situation when working directly with λ-terms—it is a necessity given their structure. The applicative λ̄-term, on the other hand, always has a redex at the top of the term. If t is an abstraction (e.g. t = λx.t′), then we have:

(λx.t′) {u1 · u2 · u3 · ∅} → t′ [ x := u1 ] {u2 · u3 · ∅}

Figure 4.1: The structure of applicative terms in the λ-calculus and the λ̄-calculus. (The λ-term t u1 u2 u3 is a left-nested tree of application nodes with t at the bottom left; the λ̄-term t {u1 · u2 · u3 · ∅} places t at the top, above the list of arguments u1, u2, u3 terminated by ∅.)

And if t is another application (e.g. t = t′ {e}), then we have:

t′ {e} {u1 · u2 · u3 · ∅} → t′ {e @ (u1 · u2 · u3 · ∅)}

The case where t = x {e} is effectively the same, and in the case where t = t′ [ s ], the substitution can be propagated. Hence, in an applicative term in the λ̄-calculus, there is always a redex within a fixed distance from the top of the syntax tree. This property is inherited from the process of cut reduction in the sequent calculus LJT. This is not the case in the λ-calculus nor in the λσw-calculus, both of whose terms are related to proofs in natural deduction. This property is of central importance in the design of a calculus that can simulate abstract machines. The λ̄-calculus becomes even more useful when we realize that there is also a means to express the applicative structure of the λ-calculus with λ̄-terms. This is pointed out by Herbelin in [10]. Consider the syntax trees of the following two expressions:

t u1 u2 u3        t {u1 · ∅} {u2 · ∅} {u3 · ∅}

The translation from the first structure to the second is straightforward and actually represents a translation from a proof in natural deduction to one in the sequent calculus. As noted by Herbelin, the λ̄-term reduces to the form

t {u1 · u2 · u3 · ∅}

Moreover, it is also important to note that it can make this transformation with a series of reduction steps that takes place entirely within a fixed distance from the top of the syntax tree. The reduction is shown in Figure 4.2. The ability of the λ̄-calculus to express application in two ways is really the same expressiveness that was noted in Chapter 1 with respect to the sequent calculus.

Figure 4.2: Normalization of a natural-deduction style applicative term in the λ̄-calculus

t {u1 · ∅} {u2 · ∅} {u3 · ∅}
→ t {u1 · ∅} {(u2 · ∅) @ (u3 · ∅)}
→ t {u1 · ∅} {u2 · (∅ @ (u3 · ∅))}
→ t {u1 · ∅} {u2 · u3 · ∅}
→ t {(u1 · ∅) @ (u2 · u3 · ∅)}
→ t {u1 · (∅ @ (u2 · u3 · ∅))}
→ t {u1 · u2 · u3 · ∅}
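In implementation terms, the nested-to-flat translation amounts to collecting the spine of a left-nested application into a head plus a single argument list. A small sketch of ours (not code from the thesis), over the tuple encoding of λ-terms:

```python
def spine(t):
    """('app', ('app', f, a), b) ...  ->  (head, [a, b, ...])"""
    args = []
    while t[0] == 'app':
        args.insert(0, t[2])   # the innermost application supplies the first argument
        t = t[1]
    return t, args
```

The returned pair (head, argument list) is the shape of the λ̄ applicative term t {u1 · u2 · ... · ∅}.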

4.2

4.2.1

Merging λ̄ and λσw

Simultaneous Substitutions

After observing the structures of applicative terms in the λ- and λ̄-calculi, we must turn attention to the structures of terms with explicit substitutions. We have already described the use of simultaneous substitutions in the λσw-calculus, so let’s present a term from the λ̄-calculus and the corresponding one from the λσw-calculus side by side:

t [ x := u1 ] [ y := u2 ] [ z := u3 ]        t [ u1 · u2 · u3 · id ]

Ignoring the differences in meaning and the fact that only one uses variable names, we can note that the structures of these two terms are completely analogous with the structures observed in the previous section. In this case, though, it is the λ̄-calculus whose potential redex is buried within the syntax tree. A redex for the λσw-calculus would exist at the top of its term.

4.2.2

Weak Reduction

With respect to the operations of abstract machines that implement reduction on λ-terms, the λσw-calculus surpasses the λ̄-calculus in another way: the λ̄-calculus has no direct support for closures or weak reduction. Moreover, its treatment of substitution propagation prevents it from even simulating the β-reduction of the λ-calculus, as noted by Herbelin. Thus, we present a new calculus with the intention of bringing together the desirable traits of the λ̄-calculus and the λσw-calculus.

4.3

The λ̄σw-calculus

The λ̄σw-calculus is a calculus with λ-abstractions, argument lists, simultaneous explicit substitutions, de Bruijn indices, and a form of weak β-reduction.1 Its

1 Herbelin [10] describes a version of the λ̄-calculus with de Bruijn indices and explicit weakening, also. However, it did not have simultaneous substitutions, and its reduction rules


Figure 4.3: Syntax of the λ̄σw-calculus

t ::= ⋆ {e} | λt | t {e} | t [ s ]
e ::= ∅ | (t · e) | (e @ e′) | e [ s ]
s ::= id | (t · s) | (s ∘ s′) | ↑

syntax is given in Figure 4.3. As in the λ̄-calculus, there are terms (t), contexts (e), and substitutions (s). The primary changes in the syntax are the removal of variable names from terms, and the expanded constructs for substitutions. There is now only one variable name (⋆), which is similar to a de Bruijn index of 1. We will define our indices in this calculus in terms of this base variable and the weakening substitution via the following abbreviations:

[↑]ⁿ := [↑] · · · [↑]   (n times)
n := ⋆ {∅} [↑]ⁿ⁻¹

We also provide typing rules (i.e. a proof system) for the terms of this calculus. They are given in Figure 4.4. The typing of substitutions is based on the examples in [1] and [11]. We present the reduction rules for the λ̄σw-calculus in Figure 4.5. These are somewhat different from the reduction rules of the λ̄-calculus. They are instead modeled after the reduction rules of the λσw-calculus. The key features are that substitutions can no longer be propagated under λ-abstractions and they can be propagated through applications. We conjecture that this calculus shares the confluence properties of the λσw-calculus. We will now use this calculus to simulate the reduction steps of the Krivine machine, as the first example of its correspondence to abstract machines.

4.4

Simulating the Krivine Machine

The Krivine machine is a simple machine for reducing λ-terms according to the call-by-name strategy [14]. It has been described many times. One correctness proof can be found in [22]. Curien and Herbelin [4] describe a version that operates at a higher level, performing substitution in one step and thereby eliminating the closures managed by most versions, but their example of the Krivine machine offered a clear motivation for the use of applicative contexts in the λ̄-calculus. In this chapter, we will try to extend the correspondence to a machine that does maintain closures and our λ̄σw-calculus, which supports reduction on closures.

were equivalent to those of the original λ̄-calculus.


Figure 4.4: Assignment of types to λ̄σw-terms

Terms:
Γ, A | . : A ⊢ . {e} : B  ⟹  Γ, A ⊢ ⋆ {e} : B
Γ, A ⊢ t : B  ⟹  Γ ⊢ λt : A → B
Γ ⊢ t : A  and  Γ | . : A ⊢ . {e} : B  ⟹  Γ ⊢ t {e} : B
s : (Γ ⇒ Δ)  and  Γ ⊢ t : A  ⟹  Δ ⊢ t [ s ] : A

Contexts:
Γ | . : A ⊢ . {∅} : A
Γ ⊢ t : A  and  Γ | . : B ⊢ . {e} : C  ⟹  Γ | . : A → B ⊢ . {t · e} : C
Γ | . : C ⊢ . {e} : A  and  Γ | . : A ⊢ . {e′} : B  ⟹  Γ | . : C ⊢ . {e @ e′} : B
s : (Γ ⇒ Δ)  and  Γ | . : A ⊢ . {e} : B  ⟹  Δ | . : A ⊢ . {e [ s ]} : B

Substitutions:
id : (Γ ⇒ Γ)
↑ : (Γ ⇒ Γ, A)
s : (Γ ⇒ Δ)  and  Δ ⊢ t : A  ⟹  t · s : (Γ, A ⇒ Δ)
s : (Γ ⇒ Δ)  and  s′ : (Δ ⇒ Σ)  ⟹  s ∘ s′ : (Γ ⇒ Σ)


Figure 4.5: Reduction rules of the λ̄σw-calculus

(BetaNil)        (λt) [ s ] {∅} → (λt) [ s ]
(BetaCons)       (λt) [ s ] {t′ · e} → t [ t′ · s ] {e}
(AppApp)         t {e} {e′} → t {e @ e′}
(ConcatNil)      ∅ @ e → e
(ConcatCons)     (t · e) @ e′ → t · (e @ e′)
(ConcatAssoc)    (e @ e′) @ e″ → e @ (e′ @ e″)
(SubVarNil)      ⋆ {∅} [ t · s ] → t
(SubVarCons)     ⋆ {e} [ t · s ] → t {e [ t · s ]}
(SubApp)         t {e} [ s ] → t [ s ] {e [ s ]}
(SubSubTerm)     t [ s ] [ s′ ] → t [ s ∘ s′ ]
(SubNil)         ∅ [ s ] → ∅
(SubCons)        (t · e) [ s ] → t [ s ] · e [ s ]
(SubSubContext)  e [ s ] [ s′ ] → e [ s ∘ s′ ]
(SubConcat)      (e @ e′) [ s ] → e [ s ] @ e′ [ s ]
(CompNil)        id ∘ s → s
(CompCons)       (t · s) ∘ s′ → t [ s′ ] · (s ∘ s′)
(CompShift)      ↑ ∘ (t · s) → s
(CompAssoc)      (s ∘ s′) ∘ s″ → s ∘ (s′ ∘ s″)

4.4.1

Definition of the Machine

Our definition of the Krivine machine closely follows that of [9]. The machine code comprises three instructions:

C ::= Acc(n) | Grab; C | Push(C′); C

The basic values2 that are manipulated by the machine are closures. In this case, a closure is any code that is paired with an environment, an environment simply being a list of values.

V ::= (C/E)
E ::= ε | V.E

During execution of the code, the machine will manipulate a stack of values, which we will call the argument stack: A ::= ε | V.A

2 Calling these closures “values” with respect to the machine is traditional, although they are not necessarily values with respect to the call-by-name strategy.


Figure 4.6: Rules for the Krivine abstract machine

(Acc(1) / (C/E).E′ / A) → (C / E / A)
(Acc(n + 1) / V.E / A) → (Acc(n) / E / A)
(Grab; C / E / V.A) → (C / V.E / A)
(Push(C′); C / E / A) → (C / E / (C′/E).A)

The total state of the machine is represented by three components: the current code, the current environment, and the current argument stack: M ::= (C / E / A)

The machine takes steps according to the transition rules in Figure 4.6. By examining these transitions, we can see exactly two cases in which the machine cannot progress. One case occurs when there is an Acc instruction but the environment is empty. We will say that the machine is stuck in this case. This cannot occur when running the machine on an expression without free variables. The other case occurs when there is a Grab instruction but the argument stack is empty. In this case, we are at a final state.
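The four transition rules translate almost literally into Python. This is our own sketch (the thesis contains no code): instructions are 'Grab', ('Acc', n), or ('Push', code), with code, environments, and argument stacks as tuples.

```python
def krivine(code, env=(), args=()):
    """Run the Krivine machine; returns the final state (code, env, args),
    where code starts with Grab and args is empty. An Acc instruction with
    an empty environment raises IndexError, i.e. the machine is stuck."""
    while True:
        instr = code[0]
        if instr == 'Grab':
            if not args:
                return code, env, args              # final state
            code, env, args = code[1:], (args[0],) + env, args[1:]
        elif instr[0] == 'Acc':
            if instr[1] == 1:
                code, env = env[0]                  # enter the closure at the head
            else:
                code, env = (('Acc', instr[1] - 1),) + code[1:], env[1:]
        else:                                       # instr == ('Push', c)
            code, args = code[1:], ((instr[1], env),) + args
```

Each iteration inspects only the first instruction and the heads of the two stacks, so the machine takes steps of strictly bounded size.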

4.4.2

Compiling and Loading

This machine will use λDB-terms as its basic source language, in which case the compilation function is quite trivial. (A more complex function could compile λ-terms with named variables.) When viewed as unlabeled trees, the source term and the compiled code are isomorphic.

[[n]] = Acc(n)
[[λt]] = Grab; [[t]]
[[(t u)]] = Push([[u]]); [[t]]

The compile-and-load function L simply puts a compiled version of the λ-term into a machine with an empty environment and argument stack.

L(t) = ([[t]] / ε / ε)

After running the machine, we can unload the code (which begins with Grab) and the environment as we wish. (The argument stack must be empty.)

4.4.3

Decompiling to the λ̄σw-calculus

It is interesting to note that although the Krivine machine structure can be easily defined by an inductive definition as it is above, it is typical to describe decompilation functions that do not operate on this natural structure, but instead treat the argument stack as a sequence [9, 22]. Some difficulty arises from

the fact that the structure of the argument stack is contrary to the natural structure of λ-terms. Yet, it is exactly the structure of the argument stack that allows the Krivine machine to progress smoothly with constant-sized steps. However, if we consider a decompilation to the λ̄-calculus, this issue with the structure of the argument stack is immediately resolved. We now present a decompilation function from the states of the Krivine machine to the λ̄σw-calculus and then prove that execution can be simulated without loss of detail. The decompilation will be purely compositional, following the natural syntactic structure of the machine’s components. First, we define how to decompile machine code (the decompilation of code C is denoted ⌈C⌉):

⌈Acc(n)⌉ = n
⌈Grab; c⌉ = λ ⌈c⌉
⌈Push(c); c′⌉ = ⌈c′⌉ {⌈c⌉ · ∅}

We will denote the decompilation function for closures, environments, argument stacks, and machine states with a horizontal bar, as in M̄.3 Closures have a straightforward decompilation.

(C/E)‾ = ⌈C⌉ [ Ē ]

Environments are also easily decompiled.

ε̄ = id
(V.E)‾ = V̄ · Ē

The decompilation of argument stacks is similarly compositional.

ε̄ = ∅
(V.A)‾ = V̄ · Ā

And finally, the decompilation of machine states simply puts these pieces together.

(C / E / A)‾ = ⌈C⌉ [ Ē ] {Ā}

Remark 4.4.1. Note that this translation is an injection. Hence, a partial function for re-compilation could be defined from λ̄σw-terms to Krivine machine states.
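Rendered as strings, the decompilation is again purely structural. A sketch of ours (the `dec_*` names are hypothetical, and the output notation only approximates the ⌈·⌉ and overline functions), reusing the tuple encoding of machine code and states from the sketches above:

```python
def dec_code(c):
    ins = c[0]
    if ins == 'Grab':
        return 'λ' + dec_code(c[1:])
    if ins[0] == 'Acc':
        return str(ins[1])
    return dec_code(c[1:]) + ' {' + dec_code(ins[1]) + ' · ∅}'   # Push

def dec_clos(v):
    return dec_code(v[0]) + '[' + dec_env(v[1]) + ']'

def dec_env(e):
    return 'id' if not e else dec_clos(e[0]) + ' · ' + dec_env(e[1:])

def dec_args(a):
    return '∅' if not a else dec_clos(a[0]) + ' · ' + dec_args(a[1:])

def dec_state(m):
    c, e, a = m
    return dec_code(c) + '[' + dec_env(e) + ']' + ' {' + dec_args(a) + '}'
```

Because each machine constructor is mapped to a distinct piece of syntax, the translation is injective, in line with Remark 4.4.1.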

4.4.4

The Krivine Strategy in the λ̄σw-calculus

Now that we can translate states of the Krivine machine into the λ̄σw-calculus, we would hope that machine steps correspond to reduction steps in the calculus. Not only can we demonstrate this, but we can demonstrate a stronger proposition:

3 We would also have liked to use the more standard notation of a horizontal bar for the decompilation of code, but later on, we will need to add parameters to the decompilation function. Thus, the notation ⌈C⌉ will be more suitable.


Figure 4.7: The V strategy

Using rightmost outermost priority:

t [ ↑ ∘ (t′ · s) ] ⟼V t [ s ]
t [ s ] [ s′ ] ⟼V t [ s ∘ s′ ]
t {e} [ s ] ⟼V t [ s ] {e [ s ]}
⋆ {∅} [ t · s ] ⟼V t

Figure 4.8: The P strategy

∅ [ s ] ⟼P ∅
(t · e) [ s ] ⟼P t [ s ] · e [ s ]
t · ∅ [ s ] ⟼P t · ∅

Proposition 4.4.2. There exists a deterministic strategy K in the λ̄σw-calculus such that

a. the reduction steps performed by K are confined to a fixed distance from the top of the syntax tree

b. if M is a final state in the Krivine machine, then M̄ is a K-value

c. if M → M′ in the Krivine machine, then M̄ ⟼Kⁿ M̄′, for some constant n

First we present the strategy. We decompose the strategy K into substrategies to make the presentation easier, but it is crucial to note that there are no inference rules that create a recursive definition of a relation. Hence, all reductions specified by the strategy are confined to a fixed distance from the top of the syntax tree. The V strategy is shown in Figure 4.7 and propagates substitutions through terms. A priority must be assigned among these rules in order to make the strategy deterministic. We have visually presented the rules so that rules toward


Figure 4.9: The J strategy

e ⟼P e′  ⟹  e @ e″ ⟼J e′ @ e″
∅ @ e ⟼J e
(t · e) @ e′ ⟼J t · (e @ e′)
t · (∅ @ e) ⟼J t · e


Figure 4.10: The K strategy

Using rightmost outermost priority:

t {e} {e′} ⟼K t {e @ e′}
e ⟼J e′  ⟹  t {e} ⟼K t {e′}
t ⟼V t′  ⟹  t {e} ⟼K t′ {e}
(λt) [ s ] {t′ · e} ⟼K t [ t′ · s ] {e}

the top of the page have higher priority; however, it should be clear if we specify that the changes made by these rules must occur in a rightmost outermost manner. The P strategy is shown in Figure 4.8 and propagates substitutions through contexts. The J strategy propagates context concatenation and is shown in Figure 4.9. The rules of the J strategy do not overlap, although an examination of the P strategy is necessary to confirm this. Finally, the K strategy is shown in Figure 4.10. Its rules do overlap, and again, we choose a rightmost outermost priority.

4.4.5

Simulation

Part a. of Proposition 4.4.2 can thus be verified by examination of the strategy given. To confirm part b., let

M = (Grab; C / E / ε)

Then

M̄ = (λ ⌈C⌉) [ Ē ] {∅}

which is irreducible by the strategy K. To confirm part c., we now provide the specific steps in the λ̄σw-calculus that simulate the steps of the Krivine machine. The choice of the Krivine machine rule is determined by the current instruction; so we consider cases according to the next instruction to be executed.

• Assume the next instruction is Acc(1), and let

M = (Acc(1) / (C/E).E′ / A)    and    M′ = (C / E / A)

Then we have M → M′ and

M̄ = ⋆ {∅} [ ⌈C⌉ [ Ē ] · Ē′ ] {Ā} ⟼K ⌈C⌉ [ Ē ] {Ā} = M̄′

• Assume the next instruction is Acc(n + 1), and let

M = (Acc(n + 1) / V.E / A)    and    M′ = (Acc(n) / E / A)

Then we have M → M′ and

M̄ = n [ ↑ ] [ V̄ · Ē ] {Ā}
  ⟼K n [ ↑ ∘ (V̄ · Ē) ] {Ā}
  ⟼K n [ Ē ] {Ā}
  = M̄′

• Assume the next instruction is Grab, and let

M = (Grab; C / E / V.A)    and    M′ = (C / V.E / A)

Then we have M → M′ and

M̄ = (λ ⌈C⌉) [ Ē ] {V̄ · Ā} ⟼K ⌈C⌉ [ V̄ · Ē ] {Ā} = M̄′

• Assume the next instruction is Push, and let

M = (Push(C′); C / E / A)    and    M′ = (C / E / (C′/E).A)

Then we have M → M′ and

M̄ = ⌈C⌉ {⌈C′⌉ · ∅} [ Ē ] {Ā}
  ⟼K ⌈C⌉ [ Ē ] {(⌈C′⌉ · ∅) [ Ē ]} {Ā}
  ⟼K ⌈C⌉ [ Ē ] {(⌈C′⌉ · ∅) [ Ē ] @ Ā}
  ⟼K ⌈C⌉ [ Ē ] {(⌈C′⌉ [ Ē ] · ∅ [ Ē ]) @ Ā}
  ⟼K ⌈C⌉ [ Ē ] {(⌈C′⌉ [ Ē ] · ∅) @ Ā}
  ⟼K ⌈C⌉ [ Ē ] {⌈C′⌉ [ Ē ] · (∅ @ Ā)}
  ⟼K ⌈C⌉ [ Ē ] {⌈C′⌉ [ Ē ] · Ā}
  = M̄′

Since the K-strategy is deterministic and cannot progress when the corresponding Krivine machine state cannot progress, the simulation shown above also supports the following proposition:

Proposition 4.4.3. If M̄ ⟼K* t, then there exists a Krivine state N such that M →* N and t ⟼K* N̄.



Chapter 5

Call-by-value Machines

We have thus demonstrated an excellent correspondence between the λσw-calculus and a call-by-name abstract machine. Is it possible to accomplish a similar task with a call-by-value machine? The answer is yes, but we need an additional feature in the λσw-calculus in order to maintain the nice properties of the call-by-name case.

5.1 The μ̃ Operator

The reformulation of the λ̄-calculus by Curien and Herbelin in [4] introduced the use of two control operators: μ and μ̃. The μ operator was borrowed directly from Parigot [17] and was, for Curien and Herbelin, a key element of constructing terms representing proofs in classical sequent calculus (just as Parigot had used it for classical natural deduction). The μ̃ control operator is the dual of the μ operator and allows an encoding of λ-terms into the λ̄-calculus that permits call-by-value reduction (left-to-right or right-to-left) to take place at the top of the term. The λσw-calculus does not have an explicit μ control operator because λσw-terms correspond to proofs in the sequent calculus for minimal propositional logic instead of classical logic. The use of the μ operator for controlling the substitution of contexts for context variables is not necessary, since there can be only one context variable in scope at any given time (corresponding to ⋆ in the λσw-calculus).

Figure 5.1: Syntax of the λμ̃σw-calculus

t ::= 1{e} | (λt) | t{e} | t[s]
e ::= ⋆ | (t · e) | (e @ e′) | (μ̃t) | e[s]
s ::= id | (t · s) | (s ∘ s′) | ↑


However, the μ̃ control operator does have an important function, even in the case of a calculus corresponding to minimal logic: it allows a simulation of call-by-value abstract machines, while maintaining the property that all reduction steps can be performed in the top of the syntax tree. So we present a version of the λσw-calculus with μ̃ (calling it the λμ̃σw-calculus). The μ̃ provides one new syntactic case for contexts (Figure 5.1), and one new typing rule (Figure 5.2). There are two new reduction rules to accommodate the μ̃ operator, and these are shown in Figure 5.3. With these changes, we are ready to define translations for the SECD and ZINC abstract machines.

5.2 SECD Abstract Machine

We will examine a version of the SECD machine that has been modified somewhat from Plotkin's presentation [18]. The original machine operated on terms that required no compilation, but performed an on-the-fly compilation step of exactly the same nature as the one observed in the addition machine in Chapter 3: changing an operation from infix to postfix. In this case the operation is application instead of addition. We have removed this step from the machine and added a compilation function to perform this structural change in advance. We have also added an accumulator, i.e. a position for a single value, in order to delay the process of putting a value on the stack. A piece of code that would have been put directly onto the stack in the original machine will now go to the accumulator first. Then it may immediately be put on top of the stack with a Push instruction, or it may be applied to the arguments on top of the stack with an Apply instruction. Thus, with respect to the original machine, the accumulator simply represents what would have been the top item on the stack.


Figure 5.2: Assignment of types to λμ̃σw-terms

Γ, A | · : A ⊢ e : B  ⟹  Γ, A ⊢ 1{e} : B
Γ, A ⊢ t : B  ⟹  Γ ⊢ λt : A → B
Γ | · : A ⊢ ⋆ : A
Γ ⊢ t : A and Γ | · : B ⊢ e : C  ⟹  Γ | · : A → B ⊢ t · e : C
Γ | · : C ⊢ e : A and Γ | · : A ⊢ e′ : B  ⟹  Γ | · : C ⊢ e @ e′ : B
Γ ⊢ t : A and Γ | · : A ⊢ e : B  ⟹  Γ ⊢ t{e} : B
Γ, A ⊢ t : B  ⟹  Γ | · : A ⊢ μ̃t : B
s : (Γ ⊳ Δ) and Γ | · : A ⊢ e : B  ⟹  Δ | · : A ⊢ e[s] : B
s : (Γ ⊳ Δ) and Γ ⊢ t : A  ⟹  Δ ⊢ t[s] : A
id : (Γ ⊳ Γ)
↑ : (Γ ⊳ Γ, A)
s : (Γ ⊳ Δ) and Δ ⊢ t : A  ⟹  t · s : ((Γ, A) ⊳ Δ)
s : (Γ ⊳ Δ) and s′ : (Δ ⊳ Σ)  ⟹  s ∘ s′ : (Γ ⊳ Σ)

Figure 5.3: Reduction rules of the λμ̃σw-calculus

(BetaNil)        (λt)[s]{⋆} → (λt)[s]
(BetaCons)       (λt)[s]{t′ · e} → t[t′ · s]{e}
(MuTilde)        t{(μ̃t′)[s]} → t′[t · s]
(MuTildeConcat)  t{(μ̃t′)[s] @ e} → t′[t · s]{e}
(AppApp)         t{e}{e′} → t{e @ e′}
(ConcatNil)      ⋆ @ e → e
(ConcatCons)     (t · e) @ e′ → t · (e @ e′)
(ConcatAssoc)    (e @ e′) @ e″ → e @ (e′ @ e″)
(SubVarNil)      1{⋆}[t · s] → t
(SubVarCons)     1{e}[t · s] → t{e[t · s]}
(SubApp)         t{e}[s] → t[s]{e[s]}
(SubSubTerm)     t[s][s′] → t[s ∘ s′]
(SubNil)         ⋆[s] → ⋆
(SubCons)        (t · e)[s] → t[s] · e[s]
(SubSubContext)  e[s][s′] → e[s ∘ s′]
(SubConcat)      (e @ e′)[s] → e[s] @ e′[s]
(CompNil)        id ∘ s → s
(CompCons)       (t · s) ∘ s′ → t[s′] · (s ∘ s′)
(CompShift)      ↑ ∘ (t · s) → s
(CompAssoc)      (s ∘ s′) ∘ s″ → s ∘ (s′ ∘ s″)
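Because these are first-order rewrite rules, they can be prototyped directly on a syntax tree. The sketch below implements only a subset of them (BetaNil, BetaCons, MuTilde, AppApp, SubVarNil, SubSubTerm) together with a naive congruence that reduces under a cut; it is an illustration of the rule format, not the thesis's K or S strategy, and all constructor names are inventions of this sketch.

```rust
// Terms, contexts, and substitutions of the (reconstructed) grammar.
#[derive(Clone, PartialEq, Debug)]
enum Term {
    One(Box<Ctx>),               // 1{e}
    Lam(Box<Term>),              // λt
    Cut(Box<Term>, Box<Ctx>),    // t{e}
    TSub(Box<Term>, Box<Subst>), // t[s]
}

#[derive(Clone, PartialEq, Debug)]
enum Ctx {
    Nil,                        // ⋆
    Cons(Box<Term>, Box<Ctx>),  // t·e
    Cat(Box<Ctx>, Box<Ctx>),    // e@e'
    Mu(Box<Term>),              // μ̃t
    CSub(Box<Ctx>, Box<Subst>), // e[s]
}

#[derive(Clone, PartialEq, Debug)]
enum Subst {
    Id,                           // id
    SCons(Box<Term>, Box<Subst>), // t·s
    Comp(Box<Subst>, Box<Subst>), // s∘s'
    Shift,                        // ↑
}

// One rewrite step at (or near) the root; None when no implemented rule fires.
fn step(t: &Term) -> Option<Term> {
    use Term::*;
    match t {
        Cut(f, e) => {
            if let TSub(l, s) = f.as_ref() {
                if let Lam(body) = l.as_ref() {
                    match e.as_ref() {
                        // (BetaNil) (λt)[s]{⋆} → (λt)[s]
                        Ctx::Nil => return Some(TSub(l.clone(), s.clone())),
                        // (BetaCons) (λt)[s]{t'·e} → t[t'·s]{e}
                        Ctx::Cons(a, e2) => {
                            return Some(Cut(
                                Box::new(TSub(
                                    body.clone(),
                                    Box::new(Subst::SCons(a.clone(), s.clone())),
                                )),
                                e2.clone(),
                            ))
                        }
                        _ => {}
                    }
                }
            }
            // (MuTilde) t{(μ̃t')[s]} → t'[t·s]
            if let Ctx::CSub(c, s) = e.as_ref() {
                if let Ctx::Mu(t2) = c.as_ref() {
                    return Some(TSub(t2.clone(), Box::new(Subst::SCons(f.clone(), s.clone()))));
                }
            }
            // (AppApp) t{e}{e'} → t{e@e'}
            if let Cut(t1, e1) = f.as_ref() {
                return Some(Cut(t1.clone(), Box::new(Ctx::Cat(e1.clone(), e.clone()))));
            }
            // congruence: reduce the term under the cut
            step(f).map(|f2| Cut(Box::new(f2), e.clone()))
        }
        TSub(u, s) => match (u.as_ref(), s.as_ref()) {
            // (SubVarNil) 1{⋆}[t·s] → t
            (One(c), Subst::SCons(a, _)) if matches!(c.as_ref(), Ctx::Nil) => Some((**a).clone()),
            // (SubSubTerm) t[s][s'] → t[s∘s']
            (TSub(t1, s1), _) => Some(TSub(t1.clone(), Box::new(Subst::Comp(s1.clone(), s.clone())))),
            _ => None,
        },
        _ => None,
    }
}

// The identity λ1{⋆}, used as a convenient test term.
fn idt() -> Term {
    Term::Lam(Box::new(Term::One(Box::new(Ctx::Nil))))
}

fn normalize(mut t: Term) -> Term {
    while let Some(u) = step(&t) {
        t = u;
    }
    t
}
```

For example, (λ1{⋆})[id]{(λ1{⋆}) · ⋆} reduces by BetaCons and then SubVarNil.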

These changes have a two-fold purpose. First, they allow a simpler presentation of the decompilation to the λμ̃σw-calculus. Second, these changes highlight the underlying similarity between the SECD and the ZINC machines.

5.2.1 Definition

The machine code comprises five instructions, divided here into two syntactic categories:

Ct ::= Acc(n); Ce | Closure(Ct); Ce
Ce ::= Push; Ct | Apply; Ce | Return

Using the syntactic categories is useful when working with an accumulator. The instructions in the first category (Ct) are operations that expect no value in the accumulator. The instructions in the second category (Ce) operate on the value in the accumulator. The transition rules are designed so that the accumulator always has a value when it should. Such an approach also eliminates a class of stuck states. The compilation function will respect this syntactic structure for the code. The other machine components include the machine values (V), environments (E), the working stack (S), and the return stack (or dump) (D). The definitions of these are given in Figure 5.4 and should be reminiscent of those for the Krivine machine. However, instead of the Krivine machine's single stack, we now have a stack for values and one for continuations. These must be distinct in a call-by-value machine, but they can be one and the same in the call-by-name case. The transitions of the SECD machine are given in Figure 5.5. The set of final states are those in which the next instruction is Return, but the return stack is empty. The other two cases in which the machine cannot progress are when the instruction is Acc(n), but the environment does not have n values, or when the instruction is Apply, but the working stack is empty. Both of these can be shown to be impossible if compiling closed terms with the compilation function that will be presented. We must note that the SECD machine does not take steps of strictly bounded size. The reason is that the Acc(n) instruction is executed in a single step for any value n. This, in turn, is a result of the fact that the structure of the SECD machine does not naturally allow destructive use of the current environment, as was possible in the Krivine machine. We could add a new component to the SECD machine state for holding a temporary environment that could be walked through one item at a time if we felt it were necessary to do so. However, in practice, it is often possible to execute this step in constant time by means of pointer arithmetic; so the level of abstractness of this machine is somewhat a matter of perspective, and we will work with the machine as it is.

Figure 5.4: Components of the SECD abstract machine

(Machine Values)      V ::= (Ct/E)
(Environment Stacks)  E ::= ε | V.E
(Working Stacks)      S ::= ε | V.S
(Return Stacks)       D ::= ε | (Ce, S, E) :: D
(Machine States)      M ::= (⋄ / Ct / S / E / D) | (V / Ce / S / E / D)

Figure 5.5: Transition rules for the SECD abstract machine

(⋄ / Acc(n); Ce / S / V1..Vn.E / D) → (Vn / Ce / S / V1..Vn.E / D)
(⋄ / Closure(Ct); Ce / S / E / D) → ((Ct/E) / Ce / S / E / D)
(V / Push; Ct / S / E / D) → (⋄ / Ct / V.S / E / D)
((Ct/E′) / Apply; Ce / V.S / E / D) → (⋄ / Ct / ε / V.E′ / (Ce, S, E) :: D)
(V / Return / S′ / E′ / (Ce, S, E) :: D) → (V / Ce / S / E / D)
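The transitions of Figure 5.5 can be transcribed into an executable step function. The concrete representation below (vectors for stacks, an Option for the possibly empty accumulator ⋄) is an assumption of this sketch, and a None result covers both final and stuck states without distinguishing them.

```rust
// A sketch of the SECD machine of Figures 5.4 and 5.5.
#[derive(Clone, PartialEq, Debug)]
enum Instr {
    Acc(usize),          // load the n-th environment value (1-indexed)
    Closure(Vec<Instr>), // build a closure of the given body code
    Push,                // move the accumulator onto the working stack
    Apply,               // apply the closure held in the accumulator
    Return,              // pop a frame from the return stack (dump)
}

#[derive(Clone, PartialEq, Debug)]
struct Clo(Vec<Instr>, Vec<Clo>); // (Ct/E)

type Frame = (Vec<Instr>, Vec<Clo>, Vec<Clo>); // (Ce, S, E)

#[derive(Clone, PartialEq, Debug)]
struct State {
    acc: Option<Clo>,
    code: Vec<Instr>,
    stack: Vec<Clo>,  // working stack S, top first
    env: Vec<Clo>,    // environment E
    dump: Vec<Frame>, // return stack D, top first
}

fn step(mut m: State) -> Option<State> {
    let (i, rest) = m.code.split_first()?;
    let (i, rest) = (i.clone(), rest.to_vec());
    match (m.acc.take(), i) {
        (None, Instr::Acc(n)) => {
            m.acc = Some(m.env.get(n.checked_sub(1)?)?.clone());
            m.code = rest;
        }
        (None, Instr::Closure(ct)) => {
            m.acc = Some(Clo(ct, m.env.clone()));
            m.code = rest;
        }
        (Some(v), Instr::Push) => {
            m.stack.insert(0, v);
            m.code = rest;
        }
        (Some(Clo(ct, e1)), Instr::Apply) => {
            if m.stack.is_empty() {
                return None; // stuck: no argument on the working stack
            }
            let v = m.stack.remove(0);
            m.dump.insert(0, (rest, std::mem::take(&mut m.stack), std::mem::take(&mut m.env)));
            let mut e = vec![v];
            e.extend(e1);
            m.env = e;
            m.code = ct;
        }
        (Some(v), Instr::Return) => {
            if m.dump.is_empty() {
                return None; // final state: Return with empty dump
            }
            let (ce, s, e) = m.dump.remove(0);
            m.acc = Some(v);
            m.code = ce;
            m.stack = s;
            m.env = e;
        }
        _ => return None,
    }
    Some(m)
}

fn run(mut s: State) -> State {
    while let Some(n) = step(s.clone()) {
        s = n;
    }
    s
}
```

Running the compiled form of (λ1)(λ1) halts in a final state holding the identity closure in the accumulator.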


5.2.2 Compiling and Loading

As for the Krivine machine, we will use λDB-terms as the source language for simplicity. Compiling λDB-terms to SECD machine code is done as follows:

[[n]] = Acc(n)
[[λt]] = Closure([[t]]; Return)
[[(t u)]] = [[u]]; Push; [[t]]; Apply

Abstractions are compiled into a Closure instruction. In the SECD machine, we will be dealing with proper call-by-value closures, which pair an environment with an abstraction. The function L is the compile-and-load function:

L(t) = (⋄ / [[t]]; Return / ε / ε / ε)

The extra Return instruction is necessary, as it is the only way to end a block of code according to the grammar. Our use of Return is identical to the use of an empty code sequence in the original machine.
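The compilation scheme and the loader L can be written down directly. The term and instruction datatypes below are assumptions of this self-contained sketch.

```rust
// λDB-terms and SECD instructions, as assumed by this sketch.
#[derive(Clone, PartialEq, Debug)]
enum Tm {
    Var(usize),             // de Bruijn index, starting at 1
    Lam(Box<Tm>),           // λt
    App(Box<Tm>, Box<Tm>),  // (t u)
}

#[derive(Clone, PartialEq, Debug)]
enum Instr { Acc(usize), Closure(Vec<Instr>), Push, Apply, Return }

// [[n]] = Acc(n);  [[λt]] = Closure([[t]]; Return);  [[(t u)]] = [[u]]; Push; [[t]]; Apply
fn compile(t: &Tm) -> Vec<Instr> {
    match t {
        Tm::Var(n) => vec![Instr::Acc(*n)],
        Tm::Lam(b) => {
            let mut body = compile(b);
            body.push(Instr::Return);
            vec![Instr::Closure(body)]
        }
        Tm::App(f, u) => {
            let mut c = compile(u);
            c.push(Instr::Push);
            c.extend(compile(f));
            c.push(Instr::Apply);
            c
        }
    }
}

// L(t): compile and append the closing Return.
fn load(t: &Tm) -> Vec<Instr> {
    let mut c = compile(t);
    c.push(Instr::Return);
    c
}
```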

5.2.3 Decompilation to the λμ̃σw-calculus

We now present the decompilation from SECD machine states to λμ̃σw-terms. The idea behind this translation is that we will use the top of the current environment (and explicit substitution in the calculus) to store the values on the working stack. So, the decompilation will be parameterized by an integer p, which will represent the size of the working stack. Unfortunately, this means that the translation is not purely compositional, as was the one for the Krivine machine. Decompilation of code is as follows:

⟦Acc(n); Ce⟧p = (n + p){⟦Ce⟧p}
⟦Closure(Ct); Ce⟧p = ((λ⟦Ct⟧0)[↑p]){⟦Ce⟧p}
⟦Push; Ct⟧p = μ̃⟦Ct⟧p+1
⟦Apply; Ce⟧p = (1{⋆}) · (⟦Ce⟧p−1[↑])
⟦Return⟧p = ⋆

The rest of the components of the machine can be decompiled without a parameter. Here is the decompilation of machine values (i.e. closures):

⟦(Ct/E)⟧ = (λ⟦Ct⟧0)[⟦E⟧]

Next we define the translation for sequences of values (such as the working stack or the environment) and also a notation for describing their size:

⟦ε⟧ = id        |ε| = 0
⟦V.S⟧ = ⟦V⟧ · ⟦S⟧    |V.S| = 1 + |S|

We will use the notation S1, S2 for representing the concatenation of these two sequences of values. The return stack has the following decompilation:

⟦ε⟧ = ⋆
⟦(Ce, S, E) :: D⟧ = ⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧

And the decompilation of the machine states puts these components together:

⟦(⋄ / Ct / S / E / D)⟧ = ⟦Ct⟧|S|[⟦S, E⟧]{⟦D⟧}
⟦(V / Ce / S / E / D)⟧ = ⟦V⟧{⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧}

5.2.4 The SECD Strategy in the λμ̃σw-calculus

Our correspondence between the SECD machine and the λμ̃σw-calculus is unfortunately not quite as direct as was that for the Krivine machine. The first issue is that, after the machine reaches a final state, the corresponding λμ̃σw-term will be able to make one additional reduction step. This could actually be remedied by the addition of a new type of machine state, to which the current final machine states would progress. Then we would translate this new machine state to the corresponding final λμ̃σw-term. However, instead of doing this, we will simply restate the proposition about final states. The second issue is that our method of managing the working stack prevents us from strictly bounding the number of λμ̃σw-reduction steps that will be taken for any given machine state. However, we can put a static bound on this number because we can statically determine the maximum value of p, the parameter used to decompile code. In light of these considerations, here is the proposition about the translation:

Proposition 5.2.1. There exists a deterministic strategy S in the λμ̃σw-calculus such that

a. the reduction steps performed by S are confined to a fixed distance from the top of the syntax tree

b. if M is a final state in the SECD machine, then M̄ reduces in one step to an S-value

Figure 5.6: The U strategy

e[↑ ∘ (t · s)] →U e[s]
e[s][s′] →U e[s ∘ s′]

Figure 5.7: The V strategy

t[↑ ∘ (t′ · s)] →V t[s]
t[s][s′] →V t[s ∘ s′]
t{e}[s] →V t[s]{e[s]}
1{⋆}[t · s] →V t

c. if M → M′ in the SECD machine, then M̄ →Sⁿ M̄′, for some value n that is dependent upon the machine's original starting state

As was done for the K strategy, we decompose the strategy S into sub-strategies to make the presentation easier, but the strategy still specifies reductions only in a fixed part of the term tree. The sub-strategies must have a priority assigned to the rules that overlap, and similarly, we present the rules so that those rules that are visually higher on the page have precedence over the ones below them. The V strategy is unchanged. We now have a U strategy (Figure 5.6), which performs a similar function on contexts. The Q strategy (Figure 5.8) is a modified version of the P strategy in the last chapter. The strategy J (Figure 5.9) is identical to the previous J strategy, except that it makes use of Q instead of P. Finally, the S strategy is shown in Figure 5.10. In comparison with the K strategy, it changes the priorities of the rules and adds a rule for

Figure 5.8: The Q strategy

⋆[s] →Q ⋆
(t · e)[s] →Q t[s] · e[s]
t →V t′  ⟹  t · e →Q t′ · e
e →U e′  ⟹  t · e →Q t · e′

Figure 5.9: The J strategy

e →Q e′  ⟹  e @ e″ →J e′ @ e″
⋆ @ e →J e
(t · e) @ e′ →J t · (e @ e′)
t · (⋆ @ e) →J t · e

Figure 5.10: The S strategy

t{e}{e′} →S t{e @ e′}
t →V t′  ⟹  t{e} →S t′{e}
(λt)[s]{t′ · e} →S t[t′ · s]{e}
t{(μ̃t′)[s] @ e} →S t′[t · s]{e}
e →J e′  ⟹  t{e} →S t{e′}

the μ̃ operator.

5.2.5 Simulation

To confirm Proposition 5.2.1, we will check the translation of final states. If M = (V / Return / S / E / ε), then M̄ = ⟦V⟧{⋆ @ ⋆}, which reduces to ⟦V⟧{⋆} in one step. Now we may confirm the rest of the proposition by tracing the simulation of the SECD machine steps. Again, we consider cases according to the current machine instruction.

• Assume the next instruction is Acc, and let M = (⋄ / Acc(n); Ce / S / V1..Vn.E / D) and M′ = (Vn / Ce / S / V1..Vn.E / D). Then we have M → M′ and

M̄ = (n + |S|){⟦Ce⟧|S|}[⟦S, V1..Vn.E⟧]{⟦D⟧}
 →S (n + |S|)[⟦S, V1..Vn.E⟧]{⟦Ce⟧|S|[⟦S, V1..Vn.E⟧]}{⟦D⟧}
 →S (n + |S|)[⟦S, V1..Vn.E⟧]{⟦Ce⟧|S|[⟦S, V1..Vn.E⟧] @ ⟦D⟧}
 →S* ⟦Vn⟧{⟦Ce⟧|S|[⟦S, V1..Vn.E⟧] @ ⟦D⟧}
 = M̄′

• Assume the next instruction is Closure, and let M = (⋄ / Closure(Ct); Ce / S / E / D) and M′ = ((Ct/E) / Ce / S / E / D). Then we have M → M′ and

M̄ = ((λ⟦Ct⟧0)[↑|S|]){⟦Ce⟧|S|}[⟦S, E⟧]{⟦D⟧}
 →S (λ⟦Ct⟧0)[↑|S|][⟦S, E⟧]{⟦Ce⟧|S|[⟦S, E⟧]}{⟦D⟧}
 →S (λ⟦Ct⟧0)[↑|S|][⟦S, E⟧]{⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧}
 →S* (λ⟦Ct⟧0)[⟦E⟧]{⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧}
 = M̄′

• Assume the next instruction is Push, and let M = (V / Push; Ct / S / E / D) and M′ = (⋄ / Ct / V.S / E / D). Then we have M → M′ and

M̄ = ⟦V⟧{(μ̃⟦Ct⟧|S|+1)[⟦S, E⟧] @ ⟦D⟧}
 →S ⟦Ct⟧|S|+1[⟦V⟧ · ⟦S, E⟧]{⟦D⟧}
 = M̄′

• Assume the next instruction is Apply, and let M = ((Ct/E′) / Apply; Ce / V.S / E / D) and M′ = (⋄ / Ct / ε / V.E′ / (Ce, S, E) :: D). Then we have M → M′ and

M̄ = (λ⟦Ct⟧0)[⟦E′⟧]{((1{⋆}) · (⟦Ce⟧|S|[↑]))[⟦V.S, E⟧] @ ⟦D⟧}
 →S (λ⟦Ct⟧0)[⟦E′⟧]{((1{⋆})[⟦V.S, E⟧] · ⟦Ce⟧|S|[↑][⟦V.S, E⟧]) @ ⟦D⟧}
 →S (λ⟦Ct⟧0)[⟦E′⟧]{((1{⋆})[⟦V.S, E⟧] · ⟦Ce⟧|S|[↑ ∘ ⟦V.S, E⟧]) @ ⟦D⟧}
 →S (λ⟦Ct⟧0)[⟦E′⟧]{((1{⋆})[⟦V.S, E⟧] · ⟦Ce⟧|S|[⟦S, E⟧]) @ ⟦D⟧}
 →S (λ⟦Ct⟧0)[⟦E′⟧]{(⟦V⟧ · ⟦Ce⟧|S|[⟦S, E⟧]) @ ⟦D⟧}
 →S (λ⟦Ct⟧0)[⟦E′⟧]{⟦V⟧ · (⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧)}
 →S ⟦Ct⟧0[⟦V⟧ · ⟦E′⟧]{⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧}
 = M̄′

• Assume the next instruction is Return, and let M = (V / Return / S′ / E′ / (Ce, S, E) :: D) and M′ = (V / Ce / S / E / D). Then we have M → M′ and

M̄ = ⟦V⟧{⋆[⟦S′, E′⟧] @ (⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧)}
 →S ⟦V⟧{⋆ @ (⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧)}
 →S ⟦V⟧{⟦Ce⟧|S|[⟦S, E⟧] @ ⟦D⟧}
 = M̄′

In several cases, the number of steps taken is dependent upon the size of the working stack, S. A maximum value for this can be statically determined. As in the case of the Krivine machine, this simulation supports the complementary proposition, which has to be qualified to take into account the issue with the final states.

Proposition 5.2.2. If M̄ →S* t, then there exists an SECD machine state N such that either

• N is a final state and N̄ →S t, or
• M →* N and t →S* N̄.

5.3 ZINC Abstract Machine

Leroy [15] described the ZINC abstract machine as "Krivine's machine with marks specialized to call-by-value only, and extended to handle constants as well." We will be ignoring the features of the ZINC machine that handle constants, and instead work with a portion that manipulates closures as its only values. However, after examining the modified version of the SECD machine in the last section, it is perhaps easier to describe the ZINC machine as a version of the SECD machine that has been extended for handling multiple applications. This extension also allows an optimization on the environments. The ZINC machine's use of multiple application is one of its distinguishing features, and we will see how this can be nicely translated to the argument lists derived from the λ̄-calculus. Unfortunately, the optimization on environments does not have special significance in the calculus and will, in fact, complicate both the compilation and decompilation. In the ZINC machine, there is a near environment and a far environment. The near environment keeps the bindings of the arguments to formal parameters that result from the most recent (multiple) application. When necessary, these are moved to the far environment. There is a machine instruction for accessing the near environment (Acc) and one for accessing the far environment (EnvAcc). During the decompilation to λμ̃σw-terms, the distinction between the near environment and the far environment will be lost.

5.3.1 Definition

The machine instructions are essentially a superset of those of the SECD machine:

Ct ::= Acc(n); Ce | EnvAcc(n); Ce | Closure(Ct); Ce | Grab; Ct
Ce ::= Push; Ct | Apply(n); Ce | Return

The instruction Apply now takes a parameter. Apply(1) corresponds to the Apply instruction of the SECD machine.¹

¹The original machine presented by Leroy in [15] had an Apply instruction that took no argument. It simply applied a function to all the arguments that were pushed since the previous Pushmark instruction. The actual implementation of the ZINC machine as the bytecode interpreter of Objective Caml uses both forms of Apply. We chose to work with the Apply(n) form because it makes our decompilation much simpler. Similarly, the actual machine uses a Return instruction that takes an argument, and we could have simplified our decompilation


Figure 5.11: Components of the ZINC abstract machine

(Machine Values)      V, W ::= (Ct/E/F)
(Environment Stacks)  E, F ::= ε | V.E
(Working Stacks)      S ::= ε | V.S
(Argument Stacks)     A ::= ε | V.A
(Return Stacks)       D ::= ε | (Ce, S, E, F, A) :: D
(Machine States)      M ::= (⋄ / Ct / S / E / F / A / D) | (V / Ce / S / E / F / A / D)

The components of the ZINC machine are similar to those of the SECD machine and are shown in Figure 5.11. In general, we will use E for the current near environment and F for the current far environment. The closures will now have to keep both parts of the environment. The argument stack (A) is new with respect to the simpler SECD machine. Since a function can be applied to more arguments than it takes, the argument stack acts as a buffer to hold the arguments that have already been "applied" but not yet "grabbed." It has a strong relationship to the stack used by the Krivine machine. The transition rules for the ZINC machine are given in Figure 5.12. The final states are almost the same as those of the SECD machine: A state is final if there is a Return instruction but the return stack is empty and the argument stack is empty. In this machine there are now two types of steps whose size cannot be strictly bounded: the environment access steps (as before) and the application step, which operates upon multiple arguments.

5.3.2 Compiling and Loading

The compilation of λDB-terms is more difficult now that we must manage two environments. Thus, it is parameterized by the length q of the near environment.

[[n]]q = Acc(n) if n ≤ q
[[n]]q = EnvAcc(n − q) if n > q
[[λn t]]q = Closure(Grabn−1; [[t]]n; Return)
[[(t u1 . . . un)]]q = [[un]]q; Push; . . . ; [[u1]]q; Push; [[t]]q; Apply(n)

The compile-and-load function is similar to the previous ones.

L(t) = (⋄ / [[t]]0; Return / ε / ε / ε / ε / ε)

even further by using this. However, we decided to leave the Return alone to highlight the similarity between the ZINC and SECD machines.
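The compilation scheme above, with its two-branch variable case and the q parameter, can be transcribed directly. The source syntax below (multi-ary λⁿ-abstraction with n ≥ 1, n-ary application, de Bruijn indices starting at 1) and the datatype names are assumptions of this self-contained sketch.

```rust
// λDB-terms with multi-ary binders, and ZINC instructions, as assumed here.
#[derive(Clone, PartialEq, Debug)]
enum Tm {
    Var(usize),               // de Bruijn index, starting at 1
    Lam(usize, Box<Tm>),      // λⁿ t, with n >= 1
    App(Box<Tm>, Vec<Tm>),    // (t u1 .. un)
}

#[derive(Clone, PartialEq, Debug)]
enum Instr {
    Acc(usize),
    EnvAcc(usize),
    Closure(Vec<Instr>),
    Grab,
    Push,
    Apply(usize),
    Return,
}

// [[·]]q, parameterized by the size q of the near environment.
fn compile(t: &Tm, q: usize) -> Vec<Instr> {
    match t {
        // near environment if the index is in range, far environment otherwise
        Tm::Var(n) if *n <= q => vec![Instr::Acc(*n)],
        Tm::Var(n) => vec![Instr::EnvAcc(n - q)],
        // [[λⁿ t]]q = Closure(Grab^{n-1}; [[t]]n; Return)
        Tm::Lam(n, body) => {
            let mut c = vec![Instr::Grab; *n - 1];
            c.extend(compile(body, *n));
            c.push(Instr::Return);
            vec![Instr::Closure(c)]
        }
        // [[t u1 .. un]]q = [[un]]q; Push; ...; [[u1]]q; Push; [[t]]q; Apply(n)
        Tm::App(f, args) => {
            let mut c = Vec::new();
            for u in args.iter().rev() {
                c.extend(compile(u, q));
                c.push(Instr::Push);
            }
            c.extend(compile(f, q));
            c.push(Instr::Apply(args.len()));
            c
        }
    }
}
```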


Figure 5.12: Transition rules for the ZINC abstract machine

(⋄ / Acc(n); Ce / S / V1..Vn.E / F / A / D) → (Vn / Ce / S / V1..Vn.E / F / A / D)
(⋄ / EnvAcc(n); Ce / S / E / W1..Wn.F / A / D) → (Wn / Ce / S / E / W1..Wn.F / A / D)
(⋄ / Closure(Ct); Ce / S / E / F / A / D) → ((Ct/ε/E, F) / Ce / S / ε / E, F / A / D)
(⋄ / Grab; Ct / ε / E′ / F′ / ε / (Ce, S, E, F, A) :: D) → ((Ct/E′/F′) / Ce / S / E / F / A / D)
(⋄ / Grab; Ct / ε / E / F / V.A / D) → (⋄ / Ct / ε / V.E / F / A / D)
(V / Push; Ct / S / E / F / A / D) → (⋄ / Ct / V.S / E / F / A / D)
((Ct/E′/F′) / Apply(n); Ce / V1..Vn.S / E / F / A / D) → (⋄ / Ct / ε / V1.E′ / F′ / V2..Vn / (Ce, S, E, F, A) :: D)
(V / Return / S′ / E′ / F′ / ε / (Ce, S, E, F, A) :: D) → (V / Ce / S / E / F / A / D)
((Ct/E′/F′) / Return / S / E / F / V.A / D) → (⋄ / Ct / ε / V.E′ / F′ / A / D)
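Figure 5.12 can likewise be transcribed into an executable step function. As with the SECD sketch, the representation below (vectors, an Option accumulator) is an assumption, and a None result covers final as well as stuck states.

```rust
// A sketch of the ZINC machine of Figures 5.11 and 5.12.
#[derive(Clone, PartialEq, Debug)]
enum Instr {
    Acc(usize),
    EnvAcc(usize),
    Closure(Vec<Instr>),
    Grab,
    Push,
    Apply(usize),
    Return,
}

#[derive(Clone, PartialEq, Debug)]
struct Clo(Vec<Instr>, Vec<Clo>, Vec<Clo>); // (Ct / E / F)

type Frame = (Vec<Instr>, Vec<Clo>, Vec<Clo>, Vec<Clo>, Vec<Clo>); // (Ce, S, E, F, A)

#[derive(Clone, PartialEq, Debug)]
struct State {
    acc: Option<Clo>,
    code: Vec<Instr>,
    stack: Vec<Clo>, // working stack S, top first
    near: Vec<Clo>,  // near environment E
    far: Vec<Clo>,   // far environment F
    args: Vec<Clo>,  // argument stack A
    dump: Vec<Frame>,
}

fn step(mut m: State) -> Option<State> {
    let (i, rest) = m.code.split_first()?;
    let (i, rest) = (i.clone(), rest.to_vec());
    match (m.acc.take(), i) {
        (None, Instr::Acc(n)) => {
            m.acc = Some(m.near.get(n.checked_sub(1)?)?.clone());
            m.code = rest;
        }
        (None, Instr::EnvAcc(n)) => {
            m.acc = Some(m.far.get(n.checked_sub(1)?)?.clone());
            m.code = rest;
        }
        (None, Instr::Closure(ct)) => {
            // flush the near environment into the far one
            let mut ef = std::mem::take(&mut m.near);
            ef.extend(std::mem::take(&mut m.far));
            m.acc = Some(Clo(ct, Vec::new(), ef.clone()));
            m.far = ef;
            m.code = rest;
        }
        (None, Instr::Grab) if m.stack.is_empty() => {
            if m.args.is_empty() {
                // no buffered argument: return a closure of the remaining code
                let (ce, s, e, f, a) =
                    if m.dump.is_empty() { return None } else { m.dump.remove(0) };
                m.acc = Some(Clo(rest, std::mem::take(&mut m.near), std::mem::take(&mut m.far)));
                m.code = ce; m.stack = s; m.near = e; m.far = f; m.args = a;
            } else {
                m.near.insert(0, m.args.remove(0));
                m.code = rest;
            }
        }
        (Some(v), Instr::Push) => {
            m.stack.insert(0, v);
            m.code = rest;
        }
        (Some(Clo(ct, e1, f1)), Instr::Apply(n)) if n >= 1 && m.stack.len() >= n => {
            let mut vs: Vec<Clo> = m.stack.drain(..n).collect();
            let v1 = vs.remove(0);
            m.dump.insert(0, (rest, std::mem::take(&mut m.stack),
                              std::mem::take(&mut m.near), std::mem::take(&mut m.far),
                              std::mem::take(&mut m.args)));
            let mut e = vec![v1];
            e.extend(e1);
            m.near = e; m.far = f1; m.args = vs; m.code = ct;
        }
        (Some(v), Instr::Return) => {
            if m.args.is_empty() {
                let (ce, s, e, f, a) =
                    if m.dump.is_empty() { return None } else { m.dump.remove(0) };
                m.acc = Some(v);
                m.code = ce; m.stack = s; m.near = e; m.far = f; m.args = a;
            } else {
                // over-application: feed the next buffered argument to the closure
                let Clo(ct, e1, f1) = v;
                let w = m.args.remove(0);
                let mut e = vec![w];
                e.extend(e1);
                m.near = e; m.far = f1; m.stack = Vec::new(); m.code = ct;
            }
        }
        _ => return None,
    }
    Some(m)
}

fn run(mut s: State) -> State {
    while let Some(n) = step(s.clone()) {
        s = n;
    }
    s
}
```

As a sanity check, the compiled form of (λ1)(λ1) halts with the identity closure in the accumulator and empty argument and return stacks.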

5.3.3 Decompilation to the λμ̃σw-calculus

The decompilation follows that of the SECD machine, except that now we must add an extra parameter to keep track of the size of the near environment so that we can properly decompile the EnvAcc instructions. Notice the use of the argument lists in the decompilation of the Apply instruction.

⟦Acc(n); Ce⟧q,p = (n + p){⟦Ce⟧q,p}
⟦EnvAcc(n); Ce⟧q,p = (n + p + q){⟦Ce⟧q,p}
⟦Closure(Ct); Ce⟧q,p = ((λ⟦Ct⟧1,0)[↑p]){⟦Ce⟧0,p}
⟦Grab; Ct⟧q,p = λ⟦Ct⟧q+1,p
⟦Push; Ct⟧q,p = μ̃⟦Ct⟧q,p+1
⟦Apply(n); Ce⟧q,p = (1 · · · n · ⋆) @ (⟦Ce⟧q,p−n[↑n])
⟦Return⟧q,p = ⋆

Decompilation of closures joins the two parts of the environment together:

⟦(Ct/E/F)⟧ = (λ⟦Ct⟧|E|+1,0)[⟦E, F⟧]

There is no change in the decompilation of sequences of values:

⟦ε⟧ = id
⟦V.S⟧ = ⟦V⟧ · ⟦S⟧

Decompilation of the extra argument stack is the same as the decompilation of the Krivine machine stack:

⟦ε⟧ = ⋆
⟦V.A⟧ = ⟦V⟧ · ⟦A⟧

The return stack must now account for the argument stack component:

⟦ε⟧ = ⋆
⟦(Ce, S, E, F, A) :: D⟧ = ⟦Ce⟧|E|,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧

Finally, machine states are decompiled as follows:

⟦(⋄ / Ct / S / E / F / A / D)⟧ = ⟦Ct⟧|E|,|S|[⟦S, E, F⟧]{⟦A⟧ @ ⟦D⟧}
⟦(V / Ce / S / E / F / A / D)⟧ = ⟦V⟧{⟦Ce⟧|E|,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧}

5.3.4 The ZINC Strategy in the λμ̃σw-calculus

When trying to describe the correspondence between the machine and the calculus, we encounter the same issues that arose for the SECD machine. However, there is a more significant issue, too. Because of the need to propagate substitutions through a list of arguments in the step corresponding to the Apply instruction, we can no longer claim that our strategy will only make changes to a fixed part of the term. Thus, our strategy can no longer be a machine with steps of strictly-bounded size. There must be one recursive rule added to the strategy to propagate substitutions. Here is our proposition for the ZINC machine:

Proposition 5.3.1. There exists a deterministic strategy Z in the λμ̃σw-calculus such that

a. if M is a final state in the ZINC machine, then M̄ reduces in two steps to a Z-value

b. if M → M′ in the ZINC machine, then M̄ →Zⁿ M̄′, for some value n that is dependent upon the machine's original starting state

The components of the strategy are presented in Figures 5.13, 5.14, 5.15, 5.16, and 5.17. The recursively defined part of the transition relation is the R strategy in Figure 5.15.

Figure 5.13: The U strategy

e[↑ ∘ (t · s)] →U e[s]
e[s][s′] →U e[s ∘ s′]

Figure 5.14: The V strategy

t[↑ ∘ (t′ · s)] →V t[s]
t[s][s′] →V t[s ∘ s′]
t{e}[s] →V t[s]{e[s]}
1{⋆}[t · s] →V t

Figure 5.15: The R strategy

⋆[s] →R ⋆
(t · e)[s] →R t[s] · e[s]
t →V t′  ⟹  t · e →R t′ · e
e →R e′  ⟹  t · e →R t · e′

Figure 5.16: The Y strategy

(e @ e′)[s] @ e″ →Y (e[s] @ e′[s]) @ e″
(e @ e′) @ e″ →Y e @ (e′ @ e″)
e₂ →Y e₂′  ⟹  e₁ @ (e₂ @ e₃) →Y e₁ @ (e₂′ @ e₃)
e →R e′  ⟹  e @ e″ →Y e′ @ e″
⋆ @ e →Y e
(t · e) @ e′ →Y t · (e @ e′)
t · (⋆ @ e) →Y t · e

Figure 5.17: The Z strategy

t{e}{e′} →Z t{e @ e′}
t →V t′  ⟹  t{e} →Z t′{e}
(λt)[s]{t′ · e} →Z t[t′ · s]{e}
t{(μ̃t′)[s] @ e} →Z t′[t · s]{e}
e →Y e′  ⟹  t{e} →Z t{e′}

5.3.5 Simulation

The first part of Proposition 5.3.1 concerns the final states. We can check that if M = (V / Return / S / E / F / ε / ε), then M̄ = ⟦V⟧{⋆ @ (⋆ @ ⋆)} and M̄ →Z² ⟦V⟧{⋆}. The rest of the proposition can be confirmed by considering each possible machine step.

• Assume the next instruction is Acc, and let M = (⋄ / Acc(n); Ce / S / V1..Vn.E / F / A / D) and M′ = (Vn / Ce / S / V1..Vn.E / F / A / D). Then M → M′ and

M̄ = (n + |S|){⟦Ce⟧n+|E|,|S|}[⟦S, V1..Vn.E, F⟧]{⟦A⟧ @ ⟦D⟧}
 →Z (n + |S|)[⟦S, V1..Vn.E, F⟧]{⟦Ce⟧n+|E|,|S|[⟦S, V1..Vn.E, F⟧]}{⟦A⟧ @ ⟦D⟧}
 →Z (n + |S|)[⟦S, V1..Vn.E, F⟧]{⟦Ce⟧n+|E|,|S|[⟦S, V1..Vn.E, F⟧] @ ⟦A⟧ @ ⟦D⟧}
 →Z* ⟦Vn⟧{⟦Ce⟧n+|E|,|S|[⟦S, V1..Vn.E, F⟧] @ ⟦A⟧ @ ⟦D⟧}
 = M̄′

• Assume the next instruction is EnvAcc, and let M = (⋄ / EnvAcc(n); Ce / S / E / W1..Wn.F / A / D) and M′ = (Wn / Ce / S / E / W1..Wn.F / A / D). Then M → M′ and

M̄ = (n + |S| + |E|){⟦Ce⟧|E|,|S|}[⟦S, E, W1..Wn.F⟧]{⟦A⟧ @ ⟦D⟧}
 →Z (n + |S| + |E|)[⟦S, E, W1..Wn.F⟧]{⟦Ce⟧|E|,|S|[⟦S, E, W1..Wn.F⟧]}{⟦A⟧ @ ⟦D⟧}
 →Z (n + |S| + |E|)[⟦S, E, W1..Wn.F⟧]{⟦Ce⟧|E|,|S|[⟦S, E, W1..Wn.F⟧] @ ⟦A⟧ @ ⟦D⟧}
 →Z* ⟦Wn⟧{⟦Ce⟧|E|,|S|[⟦S, E, W1..Wn.F⟧] @ ⟦A⟧ @ ⟦D⟧}
 = M̄′

• Assume the next instruction is Closure, and let M = (⋄ / Closure(Ct); Ce / S / E / F / A / D) and M′ = ((Ct/ε/E, F) / Ce / S / ε / E, F / A / D). Then M → M′ and

M̄ = ((λ⟦Ct⟧1,0)[↑|S|]){⟦Ce⟧0,|S|}[⟦S, E, F⟧]{⟦A⟧ @ ⟦D⟧}
 →Z (λ⟦Ct⟧1,0)[↑|S|][⟦S, E, F⟧]{⟦Ce⟧0,|S|[⟦S, E, F⟧]}{⟦A⟧ @ ⟦D⟧}
 →Z (λ⟦Ct⟧1,0)[↑|S|][⟦S, E, F⟧]{⟦Ce⟧0,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧}
 →Z* (λ⟦Ct⟧1,0)[⟦E, F⟧]{⟦Ce⟧0,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧}
 = M̄′

• Assume the next instruction is Grab and no arguments are left in the argument stack. If we let M = (⋄ / Grab; Ct / ε / E′ / F′ / ε / (Ce, S, E, F, A) :: D) and M′ = ((Ct/E′/F′) / Ce / S / E / F / A / D), then M → M′ and

M̄ = (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{⟦Ce⟧|E|,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧} = M̄′

• Assume the next instruction is Grab, and the argument stack is not empty. If we let M = (⋄ / Grab; Ct / ε / E / F / V.A / D) and M′ = (⋄ / Ct / ε / V.E / F / A / D), then M → M′ and

M̄ = (λ⟦Ct⟧|E|+1,0)[⟦E, F⟧]{(⟦V⟧ · ⟦A⟧) @ ⟦D⟧}
 →Z (λ⟦Ct⟧|E|+1,0)[⟦E, F⟧]{⟦V⟧ · (⟦A⟧ @ ⟦D⟧)}
 →Z ⟦Ct⟧|E|+1,0[⟦V⟧ · ⟦E, F⟧]{⟦A⟧ @ ⟦D⟧}
 = M̄′

• Assume the next instruction is Push, and let M = (V / Push; Ct / S / E / F / A / D) and M′ = (⋄ / Ct / V.S / E / F / A / D). Then M → M′ and

M̄ = ⟦V⟧{(μ̃⟦Ct⟧|E|,|S|+1)[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧}
 →Z ⟦Ct⟧|E|,|S|+1[⟦V⟧ · ⟦S, E, F⟧]{⟦A⟧ @ ⟦D⟧}
 = M̄′

• Assume the next instruction is Apply, and let M = ((Ct/E′/F′) / Apply(n); Ce / V1..Vn.S / E / F / A / D) and M′ = (⋄ / Ct / ε / V1.E′ / F′ / V2..Vn / (Ce, S, E, F, A) :: D). Then M → M′ and

M̄ = (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{((1 · · · n · ⋆) @ (⟦Ce⟧|E|,|S|[↑n]))[⟦V1..Vn.S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧}
 →Z (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{((1 · · · n · ⋆)[⟦V1..Vn.S, E, F⟧] @ ⟦Ce⟧|E|,|S|[↑n][⟦V1..Vn.S, E, F⟧]) @ ⟦A⟧ @ ⟦D⟧}
 →Z (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{((1 · · · n · ⋆)[⟦V1..Vn.S, E, F⟧] @ ⟦Ce⟧|E|,|S|[↑n ∘ ⟦V1..Vn.S, E, F⟧]) @ ⟦A⟧ @ ⟦D⟧}
 →Z* (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{((1 · · · n · ⋆)[⟦V1..Vn.S, E, F⟧] @ ⟦Ce⟧|E|,|S|[⟦S, E, F⟧]) @ ⟦A⟧ @ ⟦D⟧}
 →Z* (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{((⟦V1⟧ · · · ⟦Vn⟧ · ⋆) @ ⟦Ce⟧|E|,|S|[⟦S, E, F⟧]) @ ⟦A⟧ @ ⟦D⟧}
 →Z* (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{⟦V1⟧ · ((⟦V2⟧ · · · ⟦Vn⟧ · ⋆) @ ⟦Ce⟧|E|,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧)}
 →Z ⟦Ct⟧|E′|+1,0[⟦V1⟧ · ⟦E′, F′⟧]{(⟦V2⟧ · · · ⟦Vn⟧ · ⋆) @ ⟦Ce⟧|E|,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧}
 = M̄′

• Assume the next instruction is Return and no arguments are left on the argument stack. If we let M = (V / Return / S′ / E′ / F′ / ε / (Ce, S, E, F, A) :: D) and M′ = (V / Ce / S / E / F / A / D), then M → M′ and

M̄ = ⟦V⟧{⋆[⟦S′, E′, F′⟧] @ (⟦Ce⟧|E|,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧)}
 →Z ⟦V⟧{⋆ @ (⟦Ce⟧|E|,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧)}
 →Z ⟦V⟧{⟦Ce⟧|E|,|S|[⟦S, E, F⟧] @ ⟦A⟧ @ ⟦D⟧}
 = M̄′


• Assume the next instruction is Return, and the argument stack is not empty. If we let M = ((Ct/E′/F′) / Return / S / E / F / V.A / D) and M′ = (⋄ / Ct / ε / V.E′ / F′ / A / D), then M → M′ and

M̄ = (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{⋆[⟦S, E, F⟧] @ ((⟦V⟧ · ⟦A⟧) @ ⟦D⟧)}
 →Z (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{⋆ @ ((⟦V⟧ · ⟦A⟧) @ ⟦D⟧)}
 →Z (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{(⟦V⟧ · ⟦A⟧) @ ⟦D⟧}
 →Z (λ⟦Ct⟧|E′|+1,0)[⟦E′, F′⟧]{⟦V⟧ · (⟦A⟧ @ ⟦D⟧)}
 →Z ⟦Ct⟧|E′|+1,0[⟦V⟧ · ⟦E′, F′⟧]{⟦A⟧ @ ⟦D⟧}
 = M̄′

Finally, we can state the complementary proposition:

Proposition 5.3.2. If M̄ →Z* t, then there exists a ZINC machine state N such that either

• N is a final state and N̄ →Z² t, or
• M →* N and t →Z* N̄.


Conclusion

This thesis has demonstrated that the level of detail found in abstract machines designed for weak β-reduction can be simulated within a term reduction system based upon sequent calculus cut elimination. The idea that a calculus can function at the level of an abstract machine is incipient in the work of Curien and Herbelin [4] but could not be fully realized without a version of the λ̄-calculus supporting simultaneous explicit substitutions and closures for weak reduction. We believe that the λσw- and λμ̃σw-calculi demonstrate the accessibility of this goal. It is typical to think of abstract machines as a means by which one can break down large tasks into small steps; however, the simulations in this thesis demonstrate how each machine step is further broken down into smaller steps in the λσw-calculus. If we see the deterministic strategy for the rewriting of λσw-terms as an abstract machine (which it literally is), then the operation of the Krivine, SECD, and ZINC abstract machines can be seen as the result of optimizing a lower-level machine by joining smaller instructions into larger macro-instructions. Our translations show that using these macro-instructions often does not result in more than a constant-time speed increase over the execution of steps according to the smaller instructions. We believe that all weak reduction strategies for the λ-calculus can be encoded into strategies with constant-sized reduction steps in the λμ̃σw-calculus. However, strong reduction is an entirely different issue, one that we have not considered. There may be a version of the λ̄-calculus analogous to the λσ⇑-calculus, but it is unclear what could be said about the step sizes of strong reduction strategies in such a calculus. Another direction for future research would be to use the type system of the λ̄-calculus to assign types to machine code or to machine states. Potentially, a type system based on LJT could be ideal for typing low-level code.
This proposal is supported by the fact that some machine instructions in the call-by-value machines were naturally translated into λ̄-terms while others were naturally translated into λ̄-contexts. Additionally, we have yet to investigate the classical sequent calculus, which would likely offer a useful correspondence with machines that implement control operators. In this work, we have thus demonstrated that an important connection exists between the LJT sequent calculus and some abstract machines. The research of Herbelin [10], Curien and Herbelin [4], and Hardin, Maranget, and Pagano

[9] has been the central foundation for this. We believe that our research has elucidated some important new points and has demonstrated that this is a fruitful direction for further study.


Bibliography

[1] Martín Abadi, Luca Cardelli, Pierre-Louis Curien, and Jean-Jacques Lévy. Explicit substitutions. In Conference Record of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, San Francisco, California, pages 31–46. ACM, 1990. [2] H. Barendregt. Lambda calculi with types. In Abramsky, Gabbay, and Maibaum, editors, Handbook of Logic in Computer Science, volume 2. 1992. [3] Pierre-Louis Curien, Thérèse Hardin, and Jean-Jacques Lévy. Confluence properties of weak and strong calculi of explicit substitutions. Journal of the ACM, 43(2):362–397, 1996. [4] Pierre-Louis Curien and Hugo Herbelin. The duality of computation. In Proceedings of the Fifth ACM SIGPLAN International Conference on Functional Programming, pages 233–243. ACM Press, 2000. [5] V. Danos, J.-B. Joinet, and H. Schellinx. A new deconstructive logic: linear logic. In J.-Y. Girard, Y. Lafont, and L. Régnier, editors, Proceedings of the Workshop on Linear Logic. Cornell, 1993. [6] Stephan Diehl, Pieter Hartel, and Peter Sestoft. Abstract machines for programming language implementation. Future Generation Computer Systems, 16(7):739–751, 2000. [7] G. Gentzen. Investigations into logical deduction. In M. E. Szabo, editor, Collected Papers of Gerhard Gentzen, pages 68–131. North-Holland, 1969. [8] Timothy G. Griffin. The formulae-as-types notion of control. In Conference Record of the 17th Annual ACM Symposium on the Principles of Programming Languages, POPL'90, San Francisco, CA, USA, 17–19 Jan 1990, pages 47–57, New York, 1990. ACM Press. [9] Thérèse Hardin, Luc Maranget, and Bruno Pagano. Functional back-ends within the lambda-sigma calculus. Technical Report RR-3034, INRIA, November 1996. [10] Hugo Herbelin. A lambda-calculus structure isomorphic to sequent calculus structure, draft. Available from the author's homepage: http://coq.inria.fr/~herbelin/LAMBDA-BAR-FULL.dvi.gz, 1994.

[11] Hugo Herbelin. Explicit substitutions and reducibility. Journal of Logic and Computation, 11(3):429–449, 2001.

[12] W. Howard. The formulae-as-types notion of construction. In J. R. Hindley and J. P. Seldin, editors, To H. B. Curry: Essays in Combinatory Logic, Lambda Calculus and Formalism, pages 479–490. Academic Press, 1980.

[13] J. W. Klop. Term rewriting systems. In S. Abramsky, D. Gabbay, and T. Maibaum, editors, Handbook of Logic in Computer Science, volume II, pages 1–116. Oxford University Press, 1992.

[14] Jean-Louis Krivine. A call-by-name lambda calculus machine. To appear in Higher Order and Symbolic Computation, 2004.

[15] Xavier Leroy. The ZINC experiment: an economical implementation of the ML language. Technical Report RT-0117, INRIA, February 1990.

[16] Pierre Lescanne. From λσ to λυ: a journey through calculi of explicit substitutions. In Conference Record of POPL '94: 21st ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, Portland, Oregon, pages 60–69, New York, NY, 1994.

[17] M. Parigot. Classical proofs as programs. Computational Logic and Proof Theory, 713:263–276, 1993.

[18] G. D. Plotkin. Call-by-name, call-by-value, and the lambda-calculus. Theoretical Computer Science, 1(2):125–159, December 1975.

[19] G. D. Plotkin. A structural approach to operational semantics. Technical Report DAIMI FN-19, University of Aarhus, 1981.

[20] D. Prawitz. Natural Deduction: A Proof-Theoretical Study. Almquist and Wiksell, Stockholm, 1965.

[21] Philip Wadler. Call-by-value is dual to call-by-name. In Proceedings of the Eighth ACM SIGPLAN International Conference on Functional Programming, 2003.

[22] Mitchell Wand. On the correctness of the Krivine machine. Submitted for publication, October 2003.

