
SRC Technical Note 1997-032

17 December 1997

A Semantic Approach to Secure Information Flow

K. Rustan M. Leino and Rajeev Joshi

Digital Systems Research Center
130 Lytton Avenue
Palo Alto, California 94301
http://www.research.digital.com/SRC/

Copyright © Digital Equipment Corporation 1997. All rights reserved.

Abstract

A classic problem in security is the problem of determining whether a given program has secure information flow. Informally, this problem may be described as follows: Given a program operating on public and private variables, check whether observations of the public variables before and after execution reveal any information about the initial values of the private variables. Although the problem has been studied for several decades, most of the previous approaches have been syntactic in nature, often using type systems and compiler data flow analysis techniques to analyze program texts. This paper presents a considerably different approach to checking secure information flow, based on a semantic characterization of security. A semantic approach has several desirable features. Firstly, it gives a more precise characterization of security than that possible by conservative methods based on type systems. Secondly, it applies to any programming constructs whose semantics are definable; for instance, nondeterminism and exceptions pose no additional problems. Thirdly, it can be applied to reasoning about indirect leaking of information through variations in program behavior (e.g., whether or not the program terminates). The method is also useful in the context of automated verification, since it can be used to develop a mechanically-assisted technique for checking whether the flow of a given program is secure.

A Semantic Approach to Secure Information Flow

K. Rustan M. Leino

DEC SRC, Palo Alto, CA rustan@pa.dec.com

Rajeev Joshi

University of Texas, Austin, TX joshi@cs.utexas.edu

17 December 1997

0 Introduction

A classic problem in security is the problem of determining whether a given computer program has secure information flow [BLP73, Den76]. In its simplest form, the problem may be described informally as follows: Given a program operating on "high-security" and "low-security" variables, check whether observations of the initial and final values of the low-security variables reveal anything about the initial values of the high-security variables. A related problem is that of detecting covert flows, where information may be leaked by variations in program behavior [Lam73]. For instance, it may be possible to conclude something about the initial values of the high-security variables by examining the resource usage of the program (e.g., by counting the number of times it accesses the disk head).

The problem of secure information flow is more important today than ever before, because more and more computers are becoming connected. The risk of information being leaked inappropriately is a concern in protecting high-security military and financial data as well as in protecting data stored on personal computers. For example, with the advent of the Web, there is a danger that a downloaded applet will leak private data (such as the contents of a local file) by communicating it to other sites.

Most previous approaches to checking secure information flow have been syntactic in nature, often using type systems and compiler data flow analysis techniques to analyze program texts. In this paper, we present a considerably different approach to secure information flow, based on a semantic notion of program equality. A definition based on program semantics has several desirable features. Firstly, it provides a more precise characterization of secure information flow than that possible by conservative methods based on types. Secondly, it is applicable to any programming construct whose semantics are defined; for instance, nondeterminism and exception handling constructs pose no additional problems. Thirdly, it can be applied to reasoning about a variety of covert channels, including termination behavior and timing channels. Finally, the method is also useful in the context of automated verification, since it can be used to develop a mechanically-assisted technique for certifying program flow.

The outline of the rest of the paper is as follows. We start in section 1 by informally defining the problem and discussing several small examples. We present the formal characterization of security in section 2. In section 3, we relate our definition to a notion used elsewhere in the literature. In sections 4 and 5, we show how to rewrite our definition in the weakest precondition calculus so that it is amenable for use with tools for mechanical verification. Finally, we discuss related work in section 6 and give a short summary in section 7.

1 Informal description of the problem

Throughout this paper, we assume that for each program that we consider, the program variables are partitioned into two disjoint tuples h (denoting "high-security" variables) and k (denoting "low-security" variables). Informally, we say that a program is secure if:

Observations of the initial and final values of k do not give any information about the initial value of h .

We illustrate this notion with a number of program examples. The program k := h is not secure, since the initial value of h can be observed as the final value of k . However, the program h := k is secure, since k , whose value is not changed, is independent of the initial value of h . Similarly, the program k := 6 is secure, because the final value of k is always 6 , regardless of the initial value of h . It is possible for an insecure program to occur as a subprogram of a secure program. For example, in each of the four programs

    k := h ; k := 6                 (0)
    h := k ; k := h                 (1)
    k := h ; k := k − h             (2)
    if false then k := h end        (3)

the insecure program k := h occurs as a subprogram; nevertheless, the four programs are all secure.
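Because this notion of security is extensional, it can be replayed by brute force on a toy model. The sketch below is our own illustration (the encoding of deterministic programs as Python functions over a small finite domain is an assumption, not anything from the paper): a program is secure iff the final value of k is determined by the initial value of k alone.

```python
from itertools import product

DOM = range(4)  # small finite domain for h and k

def is_secure(prog):
    """prog: deterministic program as a function (h, k) -> (h, k).
    Secure iff the final value of k is independent of the initial h."""
    return all(
        prog(h1, k)[1] == prog(h2, k)[1]
        for h1, h2, k in product(DOM, DOM, DOM)
    )

# k := h  -- insecure: the final k reveals the initial h
assert not is_secure(lambda h, k: (h, h))

# Programs (0)-(3): each contains k := h, yet all four are secure.
assert is_secure(lambda h, k: (h, 6))       # (0) k := h ; k := 6
assert is_secure(lambda h, k: (k, k))       # (1) h := k ; k := h
assert is_secure(lambda h, k: (h, h - h))   # (2) k := h ; k := k - h
assert is_secure(lambda h, k: (h, k))       # (3) if false then k := h end
```

The check quantifies over all pairs of initial states that agree on k , which is exactly the shape the formal definition will take in section 3.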


There are more subtle ways in which a program can be insecure. For example,

    if h then k := true else k := false end

is insecure, despite the fact that each branch of the conditional is secure. This program has the same effect as k := h , but the flow of information from h to k is called implicit [Den76]. In the insecure programs shown so far, the exact value of h is leaked into k . This need not always be the case: a program is considered insecure if it leaks any information about h . For example, neither of the two programs

    k := h ∗ h
    if 0 ≤ h then k := 1 else k := 0 end

transmits the entire value of h , but each of the programs is insecure because the final value of k does reveal some information about the initial value of h . A nondeterministic program can be insecure even if an adversary has no control over how the nondeterminism is resolved. For example, the following program is insecure, because the final value of k is always very close to the initial value of h :

    k := h − 1 [] k := h + 1

(The operator [] denotes demonic choice: execution of S [] T proceeds by choosing any one of S and T and executing it.) The program

    skip [] k := h

is also insecure, because if the initial and final values of k are observed to be different, then the initial value of h is known. Finally, we give some examples of programs that transmit information about h via their termination behavior. The nicest way to present these examples is by using Dijkstra's if fi construct [Dij76]. The operational interpretation of the program

    if B0 → S0 [] B1 → S1 fi

is as follows. From states in which neither B0 nor B1 is true, the program loops forever; from all other states it executes either S0 (if B0 is true) or S1 (if B1 is true). If both B0 and B1 are true, the choice between S0 and S1 is made arbitrarily. Now, the deterministic program

    if h = 0 → loop [] h ≠ 0 → skip fi        (4)

(where loop is the program that never terminates) is insecure, because whether or not the program terminates depends on the initial value of h . In addition, each of the nondeterministic programs

    if h = 0 → skip [] true → loop fi         (5)
    if h = 0 → loop [] true → skip fi         (6)

is also insecure. Program (5) terminates only if the initial value of h is 0 ; even though there is always a possibility that the program will loop forever, if the program is observed to terminate, the initial value of h is known. Similarly, if an adversary is able to detect infinite looping, then the nontermination of program (6) indicates that the initial value of h is 0 . We hope that these examples, based on our informal description of secure information flow, have helped give the reader an operational understanding of the problem. From now on, we will adopt a more rigorous approach. We start in the next section by formally defining security in terms of program semantics.
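As a sanity check before the formal treatment, the termination examples (4)-(6) can be replayed in a toy model in which nontermination is itself an observable outcome. This modeling choice (a LOOP token in the set of results) is our own; the paper handles termination rigorously via wp in section 4.

```python
from itertools import product

DOM = range(3)
LOOP = "loop"  # marker for possible nontermination, treated as observable

# A program maps an initial state (h, k) to its set of outcomes:
# final k values, plus LOOP if the program may fail to terminate.
def is_secure(prog):
    # Secure iff the outcome set depends on the initial k alone.
    return all(
        prog(h1, k) == prog(h2, k)
        for h1, h2, k in product(DOM, DOM, DOM)
    )

# (4): if h = 0 -> loop [] h /= 0 -> skip fi
prog4 = lambda h, k: {LOOP} if h == 0 else {k}
# (5): if h = 0 -> skip [] true -> loop fi
prog5 = lambda h, k: {k, LOOP} if h == 0 else {LOOP}
# (6): if h = 0 -> loop [] true -> skip fi
prog6 = lambda h, k: {k, LOOP} if h == 0 else {k}

assert not is_secure(prog4)
assert not is_secure(prog5)
assert not is_secure(prog6)
```

In each case the outcome set differs between h = 0 and h ≠ 0 , which is exactly the leak described above.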

2 Formal characterization

In this section, we give a formal characterization of secure information flow. As stated in the introduction, our characterization is expressed as an equality between two program terms. We use the symbol ≐ to denote program equality and we write " S is secure" to mean that program S has secure information flow. A key ingredient in our characterization is the program "assign to h an arbitrary value", which we denote by HH ("havoc on h "). Program HH may be used to express some useful properties. Firstly, observe that the difference between a program S and the program HH ; S is that the latter executes S after setting h to an arbitrary value. Thus, the programs S and HH ; S are equal provided that S does not depend on the initial value of h . Secondly, observe that the program S ; HH "discards" the final value of h resulting from the execution of S . These observations help in giving an informal justification to the following definition of security.

Definition 0 (Secure Information Flow)

    S is secure ≡ (HH ; S ; HH ≐ S ; HH)        (7)

Using the two observations above, this characterization may be understood as follows. First, note that the final occurrence of HH on each side means that only the final value of k , not of h , is of interest. Next, observe that the prefix " HH ; " in the first program means that the two programs are equal provided that the final value of k produced by S does not depend on the initial value of h . In section 3, we provide a more rigorous justification for this definition, by relating it to a notion of secure information flow that has been used elsewhere in the literature; but, for now, we hope that this informal argument gives the reader an operational understanding of our definition.

In the rest of this section, we discuss some of the features of our approach. Firstly, note that we have not stated the definition in terms of a particular style of program semantics (e.g., axiomatic, denotational, or operational). Sequential program equality can be expressed in any of these styles [Hoa96] and different choices are suitable for different purposes. For instance, in this paper, we will use a relational semantics to establish the correctness of our definition, but we will use weakest precondition semantics to obtain a definition that is more amenable for use in mechanical verification. Secondly, observe that our definition is given purely in terms of program semantics; thus it can be used to reason about any programming construct whose semantics are defined. For instance, nondeterminism and exceptions (cf. [Cri84, LvdS94]) pose no additional problems in this approach, nor do data structures such as arrays, records, or objects (cf. [Hoa72, Lei97]). Finally, note that our definition leaves open the decision of which variables are deemed to be low-security (i.e., observable by an adversary). Different choices may be used to reason about different kinds of covert channels, by introducing appropriate special variables (such as those used by Hehner [Heh84]) and including them in the low-security variables k . For example, one can reason about timing channels by introducing in k a program variable that models execution time.

3 Security in the relational calculus

In this section, we justify our characterization of security by showing that it is equivalent to a notion used elsewhere in the literature. Since that notion was given in operational terms, we find it convenient to use a relational semantics. Thus, for the purposes of this section, a program is a relation over the state space formed by variables h and k . We use the following notational conventions. The identifiers w, x, y, z denote elements of a state space. We write x.k and x.h to denote the values of k and h in a state x . For any relation S and states x and y , we write x S y to denote that S relates x to y ; this means that there is an execution of program S from the pre-state x to the post-state y . The identity relation is denoted by Id ; it satisfies x Id y ≡ x = y for all x and y . The symbol ⊆ denotes relational containment and the operator ; denotes relational composition. We will use the facts that Id is a (left and right) identity of composition and that ; is monotonic with respect to ⊆ in each of its arguments.

We use the following format for writing quantified expressions [DS90]: For Q denoting either ∀ or ∃ , we write ( Q j : r.j : t.j ) to denote the quantification over all t.j for all j satisfying r.j . We call j the dummy, r.j the range, and t.j the term of the quantification. When the range is true or is understood from context, we omit it. We use similar notation and conventions to define sets, by writing { j : r.j : t.j } to mean the set of elements of the form t.j for all j satisfying r.j .

The relational semantics of program HH are given by

    ( ∀ x, y :: x HH y ≡ x.k = y.k )

Note that the relation HH is both reflexive and transitive:

    Id ⊆ HH             (8)
    HH ; HH ⊆ HH        (9)

Using these properties, condition (7) may be rewritten in relational terms as follows.

       S is secure
    =      { Security condition (7), program equality ≐ is relational equality }
       HH ; S ; HH = S ; HH
    ⇒      { (8) and ; monotonic, hence HH ; S ; Id ⊆ HH ; S ; HH }
       HH ; S ⊆ S ; HH
    ⇒      { Applying " ; HH " to both sides, using ; monotonic }
       HH ; S ; HH ⊆ S ; HH ; HH
    ⇒      { (9) and ; monotonic }
       HH ; S ; HH ⊆ S ; HH
    ⇒      { (8) and ; monotonic, hence Id ; S ; HH ⊆ HH ; S ; HH }
       HH ; S ; HH = S ; HH

Since the second and last lines are equal, we have equivalence throughout; thus, we have shown

    S is secure ≡ (HH ; S ⊆ S ; HH)        (10)

This definition is useful because it facilitates the following derivation, which expresses security in terms of the values of program variables.

       HH ; S ⊆ S ; HH
    =      { Definition of relational containment }
       ( ∀ y, w :: y (HH ; S) w ⇒ y (S ; HH) w )
    =      { Definition of relational composition }
       ( ∀ y, w :: ( ∃ x :: y HH x ∧ x S w ) ⇒ ( ∃ z :: y S z ∧ z HH w ) )
    =      { Relational semantics of HH , shunting between range and term }
       ( ∀ y, w :: ( ∃ x :: y.k = x.k ∧ x S w ) ⇒ ( ∃ z : y S z : z.k = w.k ) )
    =      { Predicate calculus }
       ( ∀ y, w :: ( ∀ x :: y.k = x.k ∧ x S w ⇒ ( ∃ z : y S z : z.k = w.k ) ) )
    =      { Unnesting of quantifiers }
       ( ∀ x, y, w :: y.k = x.k ∧ x S w ⇒ ( ∃ z : y S z : z.k = w.k ) )
    =      { Nesting, shunting }
       ( ∀ x, y : y.k = x.k : ( ∀ w :: x S w ⇒ ( ∃ z : y S z : z.k = w.k ) ) )
    =      { Set calculus }
       ( ∀ x, y : y.k = x.k : { w : x S w : w.k } ⊆ { z : y S z : z.k } )
    =      { Expression is symmetric in x and y }
       ( ∀ x, y : y.k = x.k : { w : x S w : w.k } = { z : y S z : z.k } )

Thus, we have established that, for any S ,

    S is secure ≡ ( ∀ x, y : x.k = y.k : { w : x S w : w.k } = { z : y S z : z.k } )        (11)

This condition says that the set of possible final values of k is independent of the initial value of h . The condition has appeared in the literature [BBLM94] as the definition of secure information flow, and similar definitions for deterministic programs have been used elsewhere [VSI96, VS97b]. Thus, one may view the derivation above as a proof of the soundness and completeness of our characterization (7) with respect to the notion used by others.
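On a finite state space, condition (11) is directly executable. The following sketch is our own encoding (programs as explicit sets of (pre-state, post-state) pairs over a toy domain), not part of the paper; it confirms the verdicts from section 1, including the demonic-choice examples.

```python
from itertools import product

DOM = range(3)
STATES = [(h, k) for h in DOM for k in DOM]  # a state is a pair (h, k)

def is_secure(S):
    """Condition (11): S (a set of (pre, post) state pairs) is secure iff
    states agreeing on k yield the same set of final k values."""
    def final_ks(x):
        return {w[1] for (p, w) in S if p == x}
    return all(
        final_ks(x) == final_ks(y)
        for x, y in product(STATES, STATES) if x[1] == y[1]
    )

# h := k  -- deterministic and secure
assign_h_k = {((h, k), (k, k)) for (h, k) in STATES}
assert is_secure(assign_h_k)

# k := h - 1 [] k := h + 1  -- insecure: the final k is always close to h
choice = {((h, k), (h, h - 1)) for (h, k) in STATES} | \
         {((h, k), (h, h + 1)) for (h, k) in STATES}
assert not is_secure(choice)

# skip [] k := h  -- insecure: a changed k reveals h
skip_or_leak = {((h, k), (h, k)) for (h, k) in STATES} | \
               {((h, k), (h, h)) for (h, k) in STATES}
assert not is_secure(skip_or_leak)
```

Note that the relational model, like the wlp relation, records only terminating executions; the termination leaks of programs (4)-(6) are picked up by the wp conjunct introduced in the next section.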

4 Security in the weakest precondition calculus

In this section, we show how our definition of secure information flow may be written succinctly in the weakest precondition calculus [Dij76]. In section 5, we go one step further by showing how to write this weakest precondition formulation of security as a first-order predicate. We begin by introducing some useful notation. Recall that we are considering programs whose variables are partitioned by k and h . Thus, every predicate on the state space of such a program depends only on the variables k and h . We will use identifiers p and q to denote such predicates; for any such p we will write [p] (read "everywhere p ") as an abbreviation for ( ∀ k, h :: p ) . For any program S , the predicate transformers wlp.S ("weakest liberal precondition") and wp.S ("weakest precondition") are informally defined as follows [Dij76]: For any predicate p , wlp.S.p holds in exactly those initial states from which every terminating computation of S ends in a state satisfying p , and wp.S.p holds in exactly those initial states from which every computation of S terminates in a state satisfying p . The two predicate transformers are related by the following pairing property [Dij76]: For any program S ,

    ( ∀ p :: [wp.S.p ≡ wlp.S.p ∧ wp.S.true] )        (12)

The predicate transformer wlp.S is universally conjunctive, that is, it distributes over arbitrary conjunctions. Universal conjunctivity implies monotonicity [DS90]. The wlp and wp semantics of the program HH are:

    ( ∀ p :: [wlp.HH.p ≡ ( ∀ h :: p )] )
    [wp.HH.true ≡ true]

Program equality in the weakest precondition calculus is defined as the equality of wp and wlp on each predicate; thus

    S ≐ T ≡ ( ∀ p :: [wlp.S.p ≡ wlp.T.p] ∧ [wp.S.p ≡ wp.T.p] )

which, on account of the pairing property, can be simplified to

    S ≐ T ≡ ( ∀ p :: [wlp.S.p ≡ wlp.T.p] ) ∧ [wp.S.true ≡ wp.T.true]

Using this definition of program equality, we now rewrite security condition (7) in the weakest precondition calculus as follows.

       HH ; S ; HH ≐ S ; HH
    =      { Program equality in terms of wlp and wp }
       ( ∀ p :: [wlp.(HH ; S ; HH).p ≡ wlp.(S ; HH).p] )
       ∧ [wp.(HH ; S ; HH).true ≡ wp.(S ; HH).true]
    =      { wlp and wp of HH and ; }
       ( ∀ p :: [( ∀ h :: wlp.S.( ∀ h :: p ) ) ≡ wlp.S.( ∀ h :: p )] )        (13)
       ∧ [( ∀ h :: wp.S.true ) ≡ wp.S.true]        (14)

The last formula above contains expressions having the form of a predicate q satisfying

    [( ∀ h :: q ) ≡ q]

Predicates with this property occur often in our calculations, so it will be convenient to introduce a special notation for them and identify some of their properties. This is the topic of the following subsection.

4.0 Cylinders

A predicate q that satisfies [q ≡ ( ∀ h :: q )] has the property that its value is independent of the variable h . We shall refer to such predicates as " h -cylinders", or simply as "cylinders" when h is understood from context. For notational convenience, we define the set Cyl of all cylinders:

Definition 1 (Cylinders)  For any predicate q ,

    q ∈ Cyl ≡ [q ≡ ( ∀ h :: q )]        (15)

The following lemma provides several equivalent ways of expressing that a predicate is a cylinder.

Lemma 0  For any predicate q , the following are all equivalent.

    i.   q ∈ Cyl
    ii.  [q ≡ ( ∀ h :: q )]
    iii. [q ⇒ ( ∀ h :: q )]
    iv.  ( ∃ p :: [q ≡ ( ∀ h :: p )] )
    v.   ¬q ∈ Cyl

Proof. Follows from simple predicate calculus. □

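Over a finite state space, Definition 1 amounts to checking that a predicate's value never changes as h varies. A small sketch of this check (our own encoding, with predicates as Boolean functions of (h, k) ):

```python
from itertools import product

H_DOM = range(3)
K_DOM = range(3)

def is_cylinder(q):
    """Definition 1: q (a predicate on (h, k)) is an h-cylinder iff
    its value does not depend on h, i.e. [q == (forall h :: q)]."""
    return all(
        q(h1, k) == q(h2, k)
        for h1, h2, k in product(H_DOM, H_DOM, K_DOM)
    )

assert is_cylinder(lambda h, k: k == 0)      # depends on k only
assert is_cylinder(lambda h, k: True)        # constant predicates
assert not is_cylinder(lambda h, k: h == 0)  # depends on h

# Lemma 0.v: q is a cylinder iff its negation is
q = lambda h, k: k < 2
assert is_cylinder(q) == is_cylinder(lambda h, k: not q(h, k))
```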

4.1 Security in terms of cylinders

We now use the results in the preceding subsection to simplify the formulation of security in the weakest precondition calculus. We begin by rewriting (13) as follows.

       ( ∀ p :: [( ∀ h :: wlp.S.( ∀ h :: p ) ) ≡ wlp.S.( ∀ h :: p )] )
    =      { Definition of Cyl (15) }
       ( ∀ p :: wlp.S.( ∀ h :: p ) ∈ Cyl )
    =      { One-point rule }
       ( ∀ p, q : [q ≡ ( ∀ h :: p )] : wlp.S.q ∈ Cyl )
    =      { Nesting and trading }
       ( ∀ q :: ( ∀ p :: [q ≡ ( ∀ h :: p )] ⇒ wlp.S.q ∈ Cyl ) )
    =      { Consequent is independent of p }
       ( ∀ q :: ( ∃ p :: [q ≡ ( ∀ h :: p )] ) ⇒ wlp.S.q ∈ Cyl )
    =      { Lemma 0.iv, and trading }
       ( ∀ q : q ∈ Cyl : wlp.S.q ∈ Cyl )

Similarly, we rewrite the expression (14) as follows.

       [( ∀ h :: wp.S.true ) ≡ wp.S.true]
    =      { Definition of Cyl (15) }
       wp.S.true ∈ Cyl

Putting it all together, we get the following condition for security: For any program S ,

    S is secure ≡ ( ∀ p : p ∈ Cyl : wlp.S.p ∈ Cyl ) ∧ wp.S.true ∈ Cyl        (16)

5 A first-order characterization

Checking whether a given program S is secure using condition (16) given in the previous section involves checking that certain predicates are cylinders; this can be done with a theorem prover. (Checking whether a given predicate is a cylinder can also be done by the considerably cheaper and more conservative method of syntactically scanning the predicate for free occurrences of h .) Thus, checking the second conjunct on the right-hand side of (16) is straightforward. However, checking the first conjunct, namely

    ( ∀ p : p ∈ Cyl : wlp.S.p ∈ Cyl )        (17)

involves a quantification over predicates; thus, it is not well-suited for automated checking (nor does the syntactic scan for free occurrences of h work). In this section, we show how this higher-order quantification over p can be reduced to a first-order quantification over the domain of k . To explain how the reduction is brought about, we need two new notions, namely that of conjunctive and disjunctive spans. Informally speaking, for any set X of predicates, the conjunctive (disjunctive) span of X , denoted A.X ( E.X ), is the set of predicates obtained by taking conjunctions (disjunctions) over subsets of X . The main theorem of this section asserts that the range of p in the quantification (17) above may be replaced by any set of predicates whose conjunctive span is the set Cyl . The usefulness of the theorem is demonstrated in subsection 5.1, where we show that there exist small sets of predicates whose conjunctive span is Cyl .

We will use the following notational conventions in this section. For any set X of predicates, we write ¬X to mean the set { q : q ∈ X : ¬q } . We also write ∧.X to mean the conjunction of all the predicates in X , and ∨.X to mean the disjunction of all the predicates in X . Note that, as a result of these conventions, we have [¬(∧.X) ≡ ∨.(¬X)] .

5.0 Spans

For any set X of predicates, we define sets A.X and E.X as follows.

Definition 2 (Spans)

    A.X = { XS : XS ⊆ X : ∧.XS }
    E.X = { XS : XS ⊆ X : ∨.XS }

The two notions are related by the following lemma, whose proof follows from simple predicate calculus.

Lemma 1  For any set X of predicates, ¬E.X = A.(¬X)

We are now ready to present the main theorem of this section, which states that one can establish the quantified expression ( ∀ q : q ∈ A.X : f.q ∈ Cyl ) by establishing a quantified expression over a smaller range.

Theorem 0  Let f be a universally conjunctive predicate transformer and let X be any set of predicates. Then

    ( ∀ p : p ∈ X : f.p ∈ Cyl ) ≡ ( ∀ q : q ∈ A.X : f.q ∈ Cyl )

Proof. Note that the left-hand side follows from the right-hand side, since X ⊆ A.X ; thus, it remains to prove the implication

    ( ∀ p : p ∈ X : f.p ∈ Cyl ) ⇒ ( ∀ q : q ∈ A.X : f.q ∈ Cyl )

We prove this implication by showing that for any predicate q in A.X , the antecedent implies that f.q is a cylinder. So, let q be any predicate in A.X . By the definition of a conjunctive span, there is a subset XS of X such that [q ≡ ∧.XS] . Observe:

       ( ∀ p : p ∈ X : f.p ∈ Cyl )
    ⇒      { XS ⊆ X , antimonotonicity of ∀ in its range }
       ( ∀ p : p ∈ XS : f.p ∈ Cyl )
    =      { Lemma 0.iii }
       ( ∀ p : p ∈ XS : [f.p ⇒ ( ∀ h :: f.p )] )
    ⇒      { f is universally conjunctive, hence monotonic, also [∧.XS ⇒ p] }
       ( ∀ p : p ∈ XS : [f.(∧.XS) ⇒ ( ∀ h :: f.p )] )
    =      { Range p ∈ XS is independent of k and h }
       [( ∀ p : p ∈ XS : f.(∧.XS) ⇒ ( ∀ h :: f.p ) )]
    =      { Antecedent is independent of p }
       [f.(∧.XS) ⇒ ( ∀ p : p ∈ XS : ( ∀ h :: f.p ) )]
    =      { Interchange quantifications, range p ∈ XS is independent of h }
       [f.(∧.XS) ⇒ ( ∀ h :: ( ∀ p : p ∈ XS : f.p ) )]
    =      { f is universally conjunctive }
       [f.(∧.XS) ⇒ ( ∀ h :: f.( ∀ p : p ∈ XS : p ) )]
    =      { Notation " ∧. " described above }
       [f.(∧.XS) ⇒ ( ∀ h :: f.(∧.XS) )]
    =      { Lemma 0.iii }
       f.(∧.XS) ∈ Cyl
    =      { By assumption, [q ≡ ∧.XS] }
       f.q ∈ Cyl
□

From the standpoint of mechanical verification, the usefulness of the result above is due to the fact that there are sets of predicates much smaller than Cyl whose conjunctive span is Cyl . In the following subsection, we show such a set of predicates whose cardinality is the same as the cardinality of the domain of k .

5.1 A lower-order quantification

Consider the following two sets of predicates, where K ranges over the set of possible values of the variables k .

    PP = { K :: k = K }        (18)
    NN = { K :: k ≠ K }        (19)

It follows directly from these definitions that

    PP = ¬NN        (20)

The relationship of these sets to Cyl is given by the following lemma.

Lemma 2  With PP and NN as defined above, we have

    i.  E.PP = Cyl
    ii. A.NN = Cyl

Proof. We give an informal sketch of the proof here; details are left to the reader. By definition, Cyl consists of exactly those predicates that are independent of h , that is, they depend on the variable k only. But every predicate on k may be written as a disjunction of predicates, one for each value in the domain of k ; thus part (i) follows. Part (ii) follows directly from part (i), observation (20), Lemma 1, and Lemma 0.v. □

Using the fact that wlp.S is universally conjunctive for any statement S , and Lemma 2.ii, we apply Theorem 0 with f, X := wlp.S, NN to obtain the following result.

    S is secure ≡ ( ∀ q : q ∈ NN : wlp.S.q ∈ Cyl ) ∧ wp.S.true ∈ Cyl

Using the definition of NN (19), this may be further simplified to the following.

    S is secure ≡ ( ∀ K :: wlp.S.(k ≠ K) ∈ Cyl ) ∧ wp.S.true ∈ Cyl        (21)

Note that this is simpler than (16) since the range of the quantification is the domain of k . Thus, in particular, security according to (21) is stated as a first-order formula.
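Condition (21) can likewise be checked on a finite model. In the sketch below (our own encoding, not from the paper), a program is given by its terminating transitions R together with a may_loop predicate, so that wp.S.true holds at x exactly when may_loop.x is false.

```python
from itertools import product

DOM = range(3)
STATES = [(h, k) for h in DOM for k in DOM]

def wlp(R, p):
    """wlp.S.p as a predicate on states: every terminating execution
    from x (a pair (x, y) in relation R) ends in a state satisfying p."""
    return lambda x: all(p(y) for (z, y) in R if z == x)

def is_cylinder(q):
    return all(q((h1, k)) == q((h2, k))
               for h1, h2, k in product(DOM, DOM, DOM))

def is_secure(R, may_loop):
    """Condition (21): (forall K :: wlp.S.(k != K) in Cyl)
    and wp.S.true in Cyl, where wp.S.true.x == not may_loop(x)."""
    wlp_ok = all(is_cylinder(wlp(R, lambda y, K=K: y[1] != K))
                 for K in DOM)
    wp_ok = is_cylinder(lambda x: not may_loop(x))
    return wlp_ok and wp_ok

# Program (5): if h = 0 -> skip [] true -> loop fi
R5 = {(x, x) for x in STATES if x[0] == 0}  # skip branch when h = 0
assert not is_secure(R5, lambda x: True)    # may loop from every state

# k := 6  -- secure, always terminates
R6 = {((h, k), (h, 6)) for (h, k) in STATES}
assert is_secure(R6, lambda x: False)
```

For program (5), wlp.S.(k ≠ K) is (h = 0 ⇒ k ≠ K), which is not a cylinder, mirroring the symbolic calculation carried out in subsection 5.3.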

5.2 Deterministic programs

In the case that S is also known to be deterministic, we can simplify the security condition (21) even further. A deterministic program S satisfies the following properties [DS90]:

    ( ∀ p :: [wp.S.p ≡ ¬wlp.S.(¬p)] )        (22)
    wp.S is universally disjunctive          (23)

Consequently, the term involving wp in (21) is subsumed by the term involving wlp :

       wp.S.true ∈ Cyl
    =      { (22), with p := true }
       ¬wlp.S.false ∈ Cyl
    =      { Lemma 0.v }
       wlp.S.false ∈ Cyl
    ⇐      { false ∈ Cyl }
       ( ∀ q : q ∈ Cyl : wlp.S.q ∈ Cyl )

Thus, the security condition for deterministic S is given by

    S is secure ≡ ( ∀ K :: wlp.S.(k ≠ K) ∈ Cyl )        (24)

Next, we show that condition (24) may also be expressed in terms of wp and PP instead of wlp and NN .

       ( ∀ q : q ∈ Cyl : wlp.S.q ∈ Cyl )
    =      { Theorem 0 }
       ( ∀ q : q ∈ NN : wlp.S.q ∈ Cyl )
    =      { Negation is its own inverse, so rename dummy q to ¬p }
       ( ∀ p : ¬p ∈ NN : wlp.S.(¬p) ∈ Cyl )
    =      { Observation (20), and (22) }
       ( ∀ p : p ∈ PP : ¬wp.S.p ∈ Cyl )
    =      { Lemma 0.v }
       ( ∀ p : p ∈ PP : wp.S.p ∈ Cyl )

Thus we have another way of expressing the condition for security of deterministic programs, namely,

    S is secure ≡ ( ∀ K :: wp.S.(k = K) ∈ Cyl )        (25)
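For a terminating deterministic program, wp.S.p at a state is just p applied to the unique final state, so (25) reduces to checking that "the final value of k equals K " is a cylinder, for each K . A sketch over a toy domain (our own encoding; the restriction to total, terminating programs is an assumption made for simplicity):

```python
from itertools import product

DOM = range(4)

def is_cylinder(q):
    return all(q(h1, k) == q(h2, k)
               for h1, h2, k in product(DOM, DOM, DOM))

def secure_det(prog):
    """Condition (25) for a terminating deterministic prog
    (h, k) -> (h, k): wp.S.(k = K) at a state is 'final k equals K',
    which must be a cylinder for each K."""
    return all(
        is_cylinder(lambda h, k, K=K: prog(h, k)[1] == K)
        for K in DOM
    )

# Program (2): k := h ; k := k - h  (the final k is always 0)
assert secure_det(lambda h, k: (h, h - h))
# k := h is rejected: 'final k = K' is the predicate h = K
assert not secure_det(lambda h, k: (h, h))
```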

5.3 Examples

We give some examples to show our formulae at work. Firstly, consider the secure program (2). We calculate,

       (k := h ; k := k − h) is secure
    =      { Security condition (21) }
       ( ∀ K :: wlp.(k := h ; k := k − h).(k ≠ K) ∈ Cyl )
       ∧ wp.(k := h ; k := k − h).true ∈ Cyl
    =      { wlp and wp of := and ; }
       ( ∀ K :: (h − h ≠ K) ∈ Cyl ) ∧ true ∈ Cyl
    =      { Lemma 0.ii, and true ∈ Cyl }
       ( ∀ K :: [h − h ≠ K ≡ ( ∀ h :: h − h ≠ K )] )
    =      { Arithmetic }
       ( ∀ K :: [0 ≠ K ≡ ( ∀ h :: 0 ≠ K )] )
    =      { Predicate calculus }
       true

This shows that our method does indeed establish that program (2) is secure. Secondly, we apply the security condition to program (5), which is insecure because of its termination behavior.

       (if h = 0 → skip [] true → loop fi) is secure
    =      { Security condition (21) }
       ( ∀ K :: wlp.(if h = 0 → skip [] true → loop fi).(k ≠ K) ∈ Cyl )
       ∧ wp.(if h = 0 → skip [] true → loop fi).true ∈ Cyl
    =      { wlp and wp , using ( ∀ p :: [wlp.loop.p ≡ true] ) and [wp.loop.true ≡ false] }
       ( ∀ K :: ((h = 0 ⇒ k ≠ K) ∧ (true ⇒ true)) ∈ Cyl )
       ∧ ((h = 0 ∨ true) ∧ (h = 0 ⇒ true) ∧ (true ⇒ false)) ∈ Cyl
    =      { Predicate calculus }
       ( ∀ K :: (h = 0 ⇒ k ≠ K) ∈ Cyl ) ∧ false ∈ Cyl
    =      { Lemma 0.ii, and false ∈ Cyl }
       ( ∀ K :: [h = 0 ⇒ k ≠ K ≡ ( ∀ h :: h = 0 ⇒ k ≠ K )] )
    =      { [p] is shorthand for ( ∀ k, h :: p ) }
       ( ∀ K, k, h :: h = 0 ⇒ k ≠ K ≡ ( ∀ h :: h = 0 ⇒ k ≠ K ) )
    ⇒      { Instantiate with K, k, h := 2, 2, 2 }
       2 = 0 ⇒ 2 ≠ 2 ≡ ( ∀ h :: h = 0 ⇒ 2 ≠ 2 )
    =      { Arithmetic and predicate calculus }
       ( ∀ h :: h ≠ 0 )
    ⇒      { Instantiate h := 0 }
       false

Finally, using program (4), we illustrate how one can reason about secure termination behavior of deterministic programs using wlp .

       (if h = 0 → loop [] h ≠ 0 → skip fi) is secure
    =      { Security condition for deterministic programs (24) }
       ( ∀ K :: wlp.(if h = 0 → loop [] h ≠ 0 → skip fi).(k ≠ K) ∈ Cyl )
    =      { wlp }
       ( ∀ K :: ((h = 0 ⇒ true) ∧ (h ≠ 0 ⇒ k ≠ K)) ∈ Cyl )
    =      { Definition of Cyl , since h ≠ 0 ⇒ k ≠ K is not independent of h }
       false

6 Related work

The problem of secure information flow has been studied for several decades. A frequently used mathematical model for secure information flow is Denning's lattice model [Den76], which is based on the Bell and La Padula security model [BLP73]. Static certification mechanisms for secure information flow, an area pioneered by Denning and Denning [Den76, DD77], seem to fall into two general flavors: type systems and data flow analysis techniques. In this section, we discuss each of these and compare them to our work. A historical perspective on secure information flow can be found, for example, in a book by Gasser [Gas88].

6.0 Type systems approach

The static certification mechanism proposed by Denning and Denning [DD77] can be described as a type checker for secure information flow. Each variable x occurring in a program is declared to have a particular security class, or type, written class.x . These security classes are related by a given lattice ordering ≤ . The type checker computes the type of an expression as the lattice join of the types of its subexpressions. For example,

    class.(E + F) = class.E ↑ class.F

Each program statement also has a security class, computed as the lattice meet of the security classes of the statement's l-expressions. For example,

    class.(x := E) = class.x
    class.(if E then S else T end) = class.S ↓ class.T

The type checker certifies a program S as being secure only if:

0. for every assignment statement x := E in S , class.E ≤ class.x , and
1. for every conditional statement if E then T else U end in S , class.E ≤ class.T and class.E ≤ class.U .

Other programming constructs, such as loops, give rise to similar requirements. Denning and Denning gave an informal proof of the soundness of their certification mechanism, that is, a proof that the mechanism certifies only secure programs. Recently, Volpano et al. have given a more rigorous proof thereof [VSI96, VS97b].

The advantage of using a type system as the basis of a certification mechanism is that it caters for a simple implementation. However, because of the nature of type systems, this certification mechanism rejects any program that contains a subprogram that by itself would be insecure. But as we saw in examples (0)-(3) of section 1, an insecure program can occur as a subprogram of a secure program. In contrast, our approach does recognize programs like (0)-(3) as being secure.
Also, just like it would be too much to hope for a useful type system that would reject programs that may fail to terminate, it would be too much to hope for a useful type system that would reject programs that may leak information via their termination behavior. (Volpano and Smith [VS97a] have attempted such a type system; however, their type system rejects any program that mentions h in a loop guard, which can be construed as resigning the game before it starts.)
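The certification rules above, together with the implicit-flow treatment of guards, can be sketched as a tiny checker. The AST encoding, the two-point lattice, and the context parameter ctx (a standard way to thread the guard classes through branches) are our own illustration of the Denning-Denning rules, not their actual algorithm:

```python
# Two-point lattice: LOW <= HIGH; join is max.
LOW, HIGH = 0, 1

def expr_class(vars_used, cls):
    """Class of an expression = join of the classes of its variables."""
    return max((cls[v] for v in vars_used), default=LOW)

def certify(stmt, cls, ctx=LOW):
    """Denning-style check; ctx is the class of the guards in scope
    (this handles implicit flows).  Returns True iff certified."""
    kind = stmt[0]
    if kind == "assign":                 # ("assign", x, vars_of_E)
        _, x, rhs = stmt
        return max(expr_class(rhs, cls), ctx) <= cls[x]
    if kind == "if":                     # ("if", vars_of_E, S, T)
        _, guard, s, t = stmt
        g = max(expr_class(guard, cls), ctx)
        return certify(s, cls, g) and certify(t, cls, g)
    if kind == "seq":                    # ("seq", S, T, ...)
        return all(certify(s, cls, ctx) for s in stmt[1:])
    raise ValueError(kind)

cls = {"h": HIGH, "k": LOW}
# k := h  -- rejected: class.E <= class.x fails
assert not certify(("assign", "k", ["h"]), cls)
# if h then k := true else k := false end  -- implicit flow, rejected
assert not certify(("if", ["h"],
                    ("assign", "k", []),
                    ("assign", "k", [])), cls)
# h := k  -- accepted
assert certify(("assign", "h", ["k"]), cls)
# k := h ; k := 6  -- program (0): secure, yet rejected by the type system
assert not certify(("seq", ("assign", "k", ["h"]),
                           ("assign", "k", [])), cls)
```

The last check makes the conservatism concrete: the secure program (0) is rejected because its first statement is insecure in isolation.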

6.1 Data flow analysis approach

Just like there are data flow analysis techniques that let an optimizing compiler analyze a given program beyond type checking [ASU86], there are data flow analysis techniques for secure information flow. The idea is to conservatively find the program paths through which data may flow.

We choose to present the data flow analysis approach to secure information flow as a translation from a given program S to a program S̄ that captures and facilitates reasoning about the possible flows of S . For every variable x of S , program S̄ instead contains a variable x̄ , representing the security class of the value dynamically computed into x . To deal with implicit flows, S̄ also contains a special variable local , representing the security class of the values used to compute the guards that led execution to the current instruction.

We describe the translation by example. For every assignment statement of the form

x := y + z

in S , S̄ contains a corresponding statement

x̄ := ȳ ↑ z̄ ↑ local

where, as in the previous subsection, ↑ denotes the lattice join on the set of security classes. For every conditional statement

if x = y → S0 [] z < 0 → S1 fi

in S , S̄ contains a corresponding statement

var old := local in
  local := local ↑ x̄ ↑ ȳ ↑ z̄ ;
  if true → S̄0 [] true → S̄1 fi ;
  local := old
end

where S̄0 and S̄1 are the statements in S̄ that correspond to S0 and S1 . If a program S has the variables k and h belonging to the security classes low and high (denoted ⊥ and ⊤ , respectively, where ⊥ ≤ ⊤ ), then " S is secure " can be expressed as the following Hoare triple on S̄ :

{ k̄ ≤ ⊥ ∧ h̄ ≤ ⊤ ∧ local ≤ ⊥ }  S̄  { k̄ ≤ ⊥ }    (26)
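The translation can be sketched by executing the barred program directly, tracking classes instead of values. The conditional below (guard mentioning the high variable h) is a hypothetical example of ours; the two-point lattice LOW ≤ HIGH with join = max stands in for the general lattice.

```python
# Sketch of the translated ("barred") program for the hypothetical
# conditional:   if h = z -> k := z  []  z < 0 -> k := 0  fi
LOW, HIGH = 0, 1

def join(*xs):
    return max(xs)

def assign(cls, x, rhs_vars):
    """Translated assignment: x-bar := (join of rhs classes) join local."""
    cls[x] = join(cls['local'], *(cls[v] for v in rhs_vars))

def flow_if(cls, guard_vars, branches):
    """Translated conditional: raise `local` by the guard classes, run
    every branch (all guards become `true`), then restore `local`."""
    old = cls['local']
    cls['local'] = join(cls['local'], *(cls[v] for v in guard_vars))
    outcomes = []
    for branch in branches:
        c = dict(cls)            # explore each nondeterministic branch
        branch(c)
        c['local'] = old
        outcomes.append(c)
    cls['local'] = old
    return outcomes

# Precondition of triple (26): k-bar <= LOW, h-bar <= HIGH, local <= LOW.
cls = {'h': HIGH, 'z': LOW, 'k': LOW, 'local': LOW}
outcomes = flow_if(cls, ['h', 'z'],
                   [lambda c: assign(c, 'k', ['z']),
                    lambda c: assign(c, 'k', [])])
# The guard mentions the high variable h, so k-bar ends up HIGH in both
# branches: the postcondition k-bar <= LOW of triple (26) fails.
print([c['k'] for c in outcomes])   # [1, 1]
```

The `local` variable is what captures implicit flows: even the branch k := 0, which copies no high data, taints k̄ because reaching it depended on a high guard.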

The first data flow analysis approach of this kind was given by Andrews and Reitman [AR80], whose treatment also dealt with communicating sequential processes. Banâtre et al. [BBLM94] used a variation of the method described above that attempts to keep track of the set of initial variables used to produce a value, rather than only the security class of the value. They also developed an efficient algorithm for this approach, akin to data flow analysis algorithms used in compilers, and wrestled with a proof of soundness for the approach. (Contrary to what our description of the approach above may suggest, Andrews and Reitman used an ordinary if then else construct rather than Dijkstra's if fi construct. Banâtre et al. used the if fi construct, but, as Volpano et al. point out, their soundness theorem is actually false for programs that make use of the nondeterminism [VSI96].)

The data flow analysis approach can offer more precision than the type system approach. For example, the approach will certify the programs (0) and (1). However, the approach still rejects some secure programs that our approach will certify. This comes about for two reasons. The first reason is that the semantics of operators like + and − are lost in the rewriting of S into S̄ . Thus a program like (2), which is secure on account of the fact that h − h = 0 , is rejected by the data flow analysis approach. The second reason is that guards are replaced by true in the rewriting of S into S̄ . Thus, a program like (3), whose security depends on when control can reach a certain statement, is rejected.

To reduce rejections of secure programs that result from the second of these reasons, one can combine the data flow analysis approach with a correctness logic, as mentioned by Andrews and Reitman [AR80]: instead of rewriting program S into S̄ , one can superimpose the new variables ( k̄ , h̄ , local ) and their updates onto program S , and then reason about S using the Hoare triple (26), but on S instead of on S̄ . A consequence of this combination with a correctness logic is that one can rule out some impossible control paths, such as the one in program (3).
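The superimposition idea can be sketched as follows. The example program here is hypothetical (we do not know the exact text of program (3)): because S runs on its real data, an infeasible guard never executes, so its branch contributes nothing to the shadow state.

```python
# Sketch of superimposing the shadow variables (k-bar, h-bar, local)
# onto S itself, instead of rewriting S into S-bar. Two-point lattice:
LOW, HIGH = 0, 1

def run(h, k):
    k_bar, h_bar, local = LOW, HIGH, LOW   # superimposed shadow state
    if h - h != 0:                         # infeasible: h - h = 0 always holds
        local = max(local, h_bar)          # superimposed guard update
        k, k_bar = h, max(h_bar, local)    # k := h, with its shadow update
        local = LOW
    return k, k_bar

_, k_bar = run(h=7, k=0)
print(k_bar)   # LOW: the branch can never execute, so k is never tainted
```

A correctness logic applied to this superimposed program can establish the invariant that the branch body is unreachable, which the rewriting into S̄ (where the guard becomes true) cannot.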

7 Summary

We have presented a simple and novel mathematical characterization of what it means for a program to have secure information flow. The characterization is general enough to accommodate reasoning about a variety of covert channels. Unlike previous methods, such as type systems and compiler data flow analysis techniques, the characterization is in terms of the semantics of the program. Consequently, our method is more precise, and can therefore certify more secure programs, than previous methods.

We showed an application of our security characterization with a weakest precondition semantics, deriving a first-order predicate whose validity implies that an adversary cannot deduce any secure information from observing the public inputs, outputs, and termination behavior of the program. As far as we know, our approach is the only one that can analyze termination behavior in a useful way and is also the only one that handles nondeterminism correctly.

Future work includes employing our method in practice, and investigating the utility of our method in other areas, such as program slicing [Wei84].

Acknowledgments. We are grateful to the following colleagues for sharing their insights and comments on our work: Martín Abadi, Ernie Cohen, Mark Lillibridge, Jayadev Misra, Greg Nelson, Raymie Stata, and the participants at the September 1997 session of the IFIP WG 2.3 meeting in Alsace, France.

References

[AR80] Gregory R. Andrews and Richard P. Reitman. An axiomatic approach to information flow in programs. ACM Transactions on Programming Languages and Systems, 2(1):56–76, January 1980.

[ASU86] Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. Compilers: Principles, Techniques, and Tools. Addison-Wesley Publishing Company, 1986.

[BBLM94] Jean-Pierre Banâtre, Ciarán Bryce, and Daniel Le Métayer. Compile-time detection of information flow in sequential programs. In Proceedings of the European Symposium on Research in Computer Security, pages 55–73. Lecture Notes in Computer Science 875, Springer-Verlag, 1994.

[BLP73] D. E. Bell and L. J. La Padula. Secure computer systems: Mathematical foundations and model. Technical Report M74-244, MITRE Corporation, Bedford, Massachusetts, 1973.

[Cri84] Flaviu Cristian. Correct and robust programs. IEEE Transactions on Software Engineering, 10:163–174, 1984.

[DD77] Dorothy E. Denning and Peter J. Denning. Certification of programs for secure information flow. Communications of the ACM, 20(7):504–513, July 1977.

[Den76] Dorothy E. Denning. A lattice model of secure information flow. Communications of the ACM, 19(5):236–243, May 1976.

[Dij76] Edsger W. Dijkstra. A Discipline of Programming. Prentice-Hall, Englewood Cliffs, NJ, 1976.

[DS90] Edsger W. Dijkstra and Carel S. Scholten. Predicate Calculus and Program Semantics. Texts and Monographs in Computer Science. Springer-Verlag, 1990.

[Gas88] Morrie Gasser. Building a Secure Computer System. Van Nostrand Reinhold Company, New York, 1988.

[Heh84] Eric C. R. Hehner. Predicative programming Part I. Communications of the ACM, 27(2):134–143, February 1984.

[Hoa72] C. A. R. Hoare. Notes on data structuring. In O.-J. Dahl, E. W. Dijkstra, and C. A. R. Hoare, editors, Structured Programming, pages 83–174. Academic Press, 1972.

[Hoa96] C. A. R. Hoare. Unifying theories of programming. In Marktoberdorf Summer School on Programming Methodology, 1996.

[Lam73] Butler W. Lampson. A note on the confinement problem. Communications of the ACM, 16(10):613–615, October 1973.

[Lei97] K. Rustan M. Leino. Ecstatic: An object-oriented programming language with an axiomatic semantics. In The Fourth International Workshop on Foundations of Object-Oriented Languages, January 1997. Proceedings available from http://www.cs.indiana.edu/hyplan/pierce/fool/.

[LvdS94] K. Rustan M. Leino and Jan L. A. van de Snepscheut. Semantics of exceptions. In E.-R. Olderog, editor, Proceedings of the IFIP WG2.1/WG2.2/WG2.3 Working Conference on Programming Concepts, Methods, and Calculi, pages 447–466. Elsevier, 1994.

[VS97a] Dennis Volpano and Geoffrey Smith. Eliminating covert flows with minimum typings. In Proceedings of the 10th IEEE Computer Security Foundations Workshop, pages 156–168, June 1997.

[VS97b] Dennis Volpano and Geoffrey Smith. A type-based approach to program security. In Theory and Practice of Software Development: Proceedings / TAPSOFT '97, 7th International Joint Conference CAAP/FASE, volume 1214 of Lecture Notes in Computer Science, pages 607–621. Springer, April 1997.

[VSI96] Dennis Volpano, Geoffrey Smith, and Cynthia Irvine. A sound type system for secure flow analysis. Journal of Computer Security, 4(3):1–21, 1996.

[Wei84] Mark Weiser. Program slicing. IEEE Transactions on Software Engineering, 10(4):352–357, July 1984.
