
Software Assurance By Synergistic Static And Dynamic Analysis

Benjamin Taitelbaum    John Knight    Xiang Yin
Department of Computer Science, University of Virginia
PO Box 400740, Charlottesville, VA 22904-4740
+1 434.982.2225
{bet3q, knight, xyin}@cs.virginia.edu

Abstract

We introduce the notion of software assurance by synergistic static and dynamic analysis. For each desired property of the software, we combine a static and a dynamic element such that the combination provides high assurance that the software achieves the desired property. Either component could be null, and synergistic assurance analysis is, therefore, the general case. In principle, most properties that we desire could be checked during execution, i.e., dynamically, but by exploiting the opportunity to make both the static and dynamic parts non-null, we obtain important benefits. By proper selection of what is shown by each part, synergistic analysis is able to reduce the complexity of proofs (often thereby making them feasible) and to reduce both the overhead and the additional verification burden imposed by dynamic techniques. We present an illustration of the application of synergistic assurance analysis using a challenge problem from the security domain.

Keywords: Formal verification, runtime checking, software assurance, safe programming, dependability
Submission Category: Regular paper
Contact author: Knight

1. Introduction

Having confidence in computing systems is important in many applications, and verification is a crucial technology in achieving that confidence. As a result, many powerful software verification techniques have been developed, ranging from proofs and model-checking methods to sophisticated test methods. Notwithstanding their power, in many cases the use of verification techniques is limited either because of restricted applicability or technical difficulty. Even verification by testing is limited, since testing alone cannot be used to show that a system has a specific property with a very high level of assurance [5]. The fact that it might not be feasible to establish all the necessary properties of an implementation using available verification techniques has led to the development of fault-tolerance mechanisms. In a fault-tolerant system, mechanisms are added that attempt to detect the effects of faults during execution and to cope with them in some way. For example, the idea of a safety kernel [19] has been developed to ensure that critical safety properties are maintained during execution. If they are not, the kernel can maintain a safe state by reacting in some way, perhaps by shutting the system down. Checking properties during execution using a safety kernel is only needed if verification during development cannot show the desired properties. Despite their individual limitations and complementary purposes, static verification and dynamic checking techniques are seldom combined explicitly in a synergistic way.

There are exceptions (discussed in section 4), but rarely is an assurance plan for a system developed systematically so that it combines both verification and runtime checking to yield a unified, comprehensive result. In this paper, we present a general approach to such a combination. We introduce the notion of Synergistic Assurance, in which we obtain confidence in software through a hybrid use of static and dynamic analysis. In general with this technique, for each desired property of the software, a static and a dynamic method are combined such that the combination ensures that the property is achieved. Naturally, either component could be null, in which case the technique reduces to either a purely static verification technique or a purely dynamic fault-tolerance technique. Synergistic analysis is thus the general case of assurance, and by exploiting the opportunity to make both the static and dynamic parts non-null, we obtain important benefits. In principle, most properties that we desire could be checked dynamically, i.e., we could add code to check properties during execution. This is what is done in a safety kernel, for example. However, by careful selection of what is established by static and dynamic techniques, synergistic analysis is able to: (a) reduce the complexity of proofs (often thereby making them feasible) by allowing them to depend on runtime checks of elements that are difficult to prove; and (b) reduce both the overhead and the additional verification burden imposed by dynamic techniques.


[Figure 1: the Requirements and the Source Program are inputs to a Formal Verification System, which yields lemmas, runtime checks, and necessary checks.]
Figure 1. Synergistic Assurance

The two main goals of a system's stakeholders are to have a correct set of requirements and to have a system that meets those requirements. The first goal implies that the requirements of the desired system be elicited, documented, and validated so that they represent the stakeholders' intent correctly. The second goal implies that the developed software system implements the requirements correctly. We assume that determining the requirements and other development issues are dealt with separately. Our goal with Synergistic Assurance analysis is to address an important variant of the latter goal. Achieving very high levels of assurance for all elements of functional correctness using either static or dynamic means is either infeasible or impractical for most real systems. Instead, we enhance the requirements and address the problem of achieving a high level of assurance that the system either implements the requirements correctly or halts, a simpler (but not simple) problem. This enhancement was introduced by Anderson and Witty [2] to facilitate verification in a technique they refer to as safe programming. With this definition, software has the fail-stop property [20], i.e., it either meets its requirements or stops, although in the case of software we make no attempt to ensure that the integrity of storage is maintained. In this work, we are not concerned with any type of continued service if an execution-time error is detected, although it could be added. We introduce the assurance theorem for a system, which formalizes the goal of fail-stop software. We then introduce a comprehensive approach to proving the assurance theorem in which a variety of techniques, both static and dynamic, are combined. Static proof techniques are used to the extent possible, and runtime checking is used where necessary. Our interest in this work is in high-integrity applications in which all parties and all sources are completely trustworthy. Thus, we do not need to resort to runtime checks associated with trust, such as proof-carrying code [17]. In addition, we are interested in verification of source programs against formal specifications, admitting but not considering here the additional problems that arise in translation to machine code and from the extra detail in machine code.

In the next section, we present the concept of Synergistic Assurance. To show the utility of our approach, in section 3 we present a detailed example of comprehensive verification using part of a challenge system called Tokeneer [4]. In section 4 we review related work, and we present our conclusions in section 5.

2. Synergistic Assurance
2.1. Applying Synergy To Proof
The ideal situation in verification of any particular system is the development of a complete static proof that the implementation implies the requirements. This has always been the goal of verification since the original concepts were introduced [13]. Recently, it has become the core of a "grand challenge" posed to the computer science community [25]. That proof-based verification of large programs has become a scientific grand challenge even after decades of research attests to the difficulty that verification poses. Proofs are relatively hard to construct, and establishing a complete proof for a given program within a single formal framework is difficult. Some functional properties might be established by proof, but temporal properties are often more easily established by model checking, and other properties by testing.

The fundamental goal of Synergistic Assurance is to facilitate the static proof of a program's properties by turning some proof obligations into runtime checks. The obligations that are moved are those that make the proof either infeasible or impractical in some way. By making an obligation into a runtime check, the static proof can treat the obligation as proved, and the remainder of the proof can be completed. We refer to obligations treated this way as dynamic obligations. The synergistic approach to proof is shown in Figure 1.

In Synergistic Assurance, the theorem that needs to be proved cannot be the usual one associated with formal verification. If it were, it would be necessary to show that the runtime check of a dynamic obligation could never fail, because a dynamic obligation is treated as an axiom or lemma by the verification proof. The theorem that is proved in Synergistic Assurance, the assurance theorem, is a formal statement of safe programming [2] behavior, and so it includes the notion of a program either working correctly or halting. This is carried through to the dynamic obligations, each of which has the form:


Predicate ∨ Halt

The assurance theorem is discussed in detail in the next section. Making an obligation into a runtime check means that the obligation (including the disjunct with Halt) becomes a new requirement that the program being verified has to meet. For a dynamic obligation D, the new requirement is:

Ensure(D)

For all such requirements, correct implementations have to be developed and tied into the proof, and there has to be assurance that they will be invoked correctly during execution. We note that this set of conditions has to be met by any dynamic technique, irrespective of whether it is part of Synergistic Assurance. A safety kernel, for example, is only effective if it meets these conditions, and this topic has been problematic for safety kernels because the checking code can often become large and therefore difficult to verify. We discuss this point further in section 4. Provided the additional requirements derived from dynamic obligations can be implemented correctly, they can serve as lemmas for the static proof of the program being verified. The practical success of synergistic analysis depends, therefore, on the identification of dynamic obligations that substantially facilitate the desired static proof and that can be implemented correctly. We have no specific process or guide for making this identification, but we note the following three points:

• Difficulties with completing the static proof suggest candidate dynamic obligations.
• Difficulties with ensuring correct implementation of the checks on the dynamic obligations suggest that certain obligations are, in fact, easier to deal with in the static proof.
• Difficulties with an obligation that is both hard to prove statically and hard to check dynamically suggest that the obligation could be broken into parts, some of which will fit on each side.

These three points illustrate the application of synergy to assurance. Assurance is never easy, but now the tradeoffs between the two major mechanisms of assurance are explicit and clear. Clearly, the allocation of obligations need not be a fixed decision, since the best way to deal with a proof obligation might change as the assurance problem becomes better understood. It is also reasonable to consider changing the allocation as development of a system proceeds. For simplicity, obligations might be made dynamic in preliminary versions and static in later versions.

For purposes of illustration here and later in this paper, we use sorting a set of integers as an example. Our functional requirements for sorting in this case are:

• R1: Generate a list of integers as output such that the elements of the list are a permutation of the input set.
• R2: Place the elements of the list into ascending order.

Developing a static proof that an implementation meets R1 is not simple, but it is fairly straightforward. R2, on the other hand, is difficult because it requires proof both that the selected algorithm actually sorts integers and that the source code correctly implements the algorithm. Trying to check an implementation of these requirements dynamically is just the opposite. R2 is easily checked in linear time with simple checking code. Checking R1 is much harder, both in terms of the complexity of the checking code and its execution time. These distinctions in the case of sort were noted by Wasserman and Blum [23]. With synergy, we can partition these requirements between static and dynamic treatment as is most appropriate. Further, we can partition R1 into subparts, assuring some subelements statically and others dynamically as appropriate.
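The asymmetry between the two requirements as runtime checks can be made concrete with a small sketch. This is our own illustration (the class and method names are hypothetical, not part of the toolset described later): checking R2 is a single linear pass, while checking R1 requires a multiset comparison with noticeably more checking code.

```java
import java.util.HashMap;
import java.util.Map;

class SortChecks {
    // R2 as a dynamic check: one linear pass over the output.
    static boolean isSorted(int[] out) {
        for (int i = 0; i < out.length - 1; i++) {
            if (out[i] > out[i + 1]) return false;
        }
        return true;
    }

    // R1 as a dynamic check: multiset equality of input and output.
    // The checking code is noticeably more involved than isSorted,
    // which is why R1 and R2 invite different static/dynamic partitions.
    static boolean isPermutation(int[] in, int[] out) {
        if (in.length != out.length) return false;
        Map<Integer, Integer> counts = new HashMap<>();
        for (int v : in) counts.merge(v, 1, Integer::sum);
        for (int v : out) {
            Integer c = counts.get(v);
            if (c == null || c == 0) return false;
            counts.put(v, c - 1);
        }
        return true;
    }
}
```

Either predicate, if false, would trigger the Halt disjunct of the corresponding dynamic obligation.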

2.2. The Assurance Theorem And Its Proof
The assurance theorem for a particular system states that the implementation either implies the system's requirements or halts; specifically:

I ⇒ (R1 ∧ R2 ∧ R3 ∧ ... ∧ Rn) ∨ Halt

This theorem is proved statically, just as with a conventional proof of correctness. Note that the theorem is an implication of the requirements by the implementation because an arbitrary number of different implementations can satisfy the same requirements. For the sort example, we can easily formalize the two requirements, but we use the informal statements above for illustration. For an implementation I, then, the assurance theorem for sort states:

I ⇒ (R1 ∧ R2) ∨ Halt

where I is written in a notation with formal semantics (a program) and R1 and R2 are formalized. The general approach to proof of the assurance theorem for a given system is the following sequence of steps:


• The obligations required for the proof are examined and partitioned between static proofs and dynamic checks.
• The dynamic obligations are restated as additional requirements for the program being verified.
• Additional software is implemented to meet the additional requirements, and this checking software is verified.
• The program being verified is analyzed statically to ensure that the newly implemented checks will actually be made whenever there is any prospect of the associated requirement being violated.
• The assurance theorem is proved using traditional formal verification mechanisms.

The assurance theorem is a practical formalization of the stakeholders' second goal. By introducing the assurance theorem, we are able to state the intent of the system's stakeholders in a succinct and formal manner. This formalization is important because, if the assurance theorem for a system can be shown to hold, it is of immediate value to the stakeholders and not just the developers.
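At run time, Ensure(D) amounts to a guard past which execution continues only if D holds. The following is a minimal sketch of that idea, assuming an unchecked exception may stand in for Halt; the class and method names are ours, not part of the paper's mechanism.

```java
import java.util.function.Predicate;

class Ensure {
    // Realizes the dynamic obligation "D ∨ Halt": either the predicate d
    // holds of the value, or execution does not continue past this point.
    // An unchecked exception stands in for Halt so the guard is testable.
    static <T> T ensure(Predicate<T> d, T value, String obligation) {
        if (!d.test(value)) {
            throw new IllegalStateException("Halt: obligation violated: " + obligation);
        }
        return value;
    }
}
```

A caller would thread values through such guards at the points where the static proof assumes the corresponding dynamic obligation as a lemma.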


2.3. Support For Synergistic Assurance

By definition, the checks that are required during execution for the proof of the assurance theorem (the dynamic obligations) have to be added to the application. This means that they have to be implemented, their implementations verified, and the implementations tied into the overall proof argument. In addition, there has to be a high level of assurance that they will always be invoked correctly when needed during execution. To facilitate these activities, we have implemented a prototype runtime assurance support mechanism for implementations in Java. The support mechanism consists of three major parts: (1) annotations that document which methods and variables affect the various dynamic obligations, together with methods that check the dynamic obligations during execution; (2) an execution-time structure that monitors overall behavior during execution and halts the program if a dynamic obligation would not be met; and (3) a static analyzer that produces proof obligations for unchecked dynamic obligations. We examine each of these in turn.

2.3.1. Specifying Dynamic Analysis. In order to specify the dynamic obligations, we use declarative annotations inserted directly into the source program. This provides a binding between requirements-level formal parameters and implementation details. We also use annotations to designate certain methods as the runtime checkers of specific obligations, creating a loose yet manageable coupling between the functional and checking code. Below, we describe the annotations we have introduced, along with their semantics:

@Satisfies("obligation-name")

This method-level annotation specifies that any call to the annotated method must either satisfy the given obligation or halt.

@ReqVar("var-name?")

This parameter-level annotation specifies that the annotated method parameter corresponds to the given variable in the requirements. By convention, we use a question mark to signify that it is an input variable.

@ReqVar(value="var-name", isOutput="true")

This parameter-level annotation is similar to the previous one, except that it is used for parameters that may be modified by the method call. In this case, the requirements should specify both var-name and var-name', representing the input value and the modified value respectively. Before the method containing this annotated parameter is called, the input value of the parameter is copied to a temporary variable so that the checker can examine both the original and modified values. At present, this has been implemented for some basic types and arrays, but only to the extent needed to support the example described in section 3. We plan to add support for a much wider range of parameter types.

@ReqVar("var-name!")

While similar in meaning to the previous two annotations, this method-level annotation signifies that the method's output should be bound to the given requirements variable. The checker uses this value just as it does any other. Our current implementation requires that this same annotation also be used as a variable-level annotation on the method's local variable that will be returned.

@Checks("obligation-name")

This method-level annotation designates the annotated method as a checker for the given dynamic obligation. Each of the method's parameters must be annotated with the @ReqVar("var-name") annotation, although not all of the requirements variables need be listed. A checker will only be called if each of the listed requirements variables can be mapped onto the method's parameters. Every suitable checker for a given obligation found at runtime will be invoked, but if a non-suitable checker exists (e.g., it claims to be a checker for the obligation but has an invalid requirements variable listed or an incompatible parameter), then the system will halt. If no checker for a given obligation exists at compile time, then a static proof obligation will be output by the compiler.

Returning to the example of sorting a set of integers, the following is an example of an annotated sort method:
@Satisfies({"sort", "sortedOrder", "permutation"})
@ReqVar("lst!")
public int[] sort(@ReqVar("lst?") int[] input) {
  @ReqVar("lst!") int[] output = copy(input);
  // bubble sort
  for (int i = 0; i < output.length - 1; i++) {
    for (int j = i; j < output.length; j++) {
      if (output[i] > output[j])
        swap(output, i, j);
    }
  }
  return output;
}

Here we declare that the sort method satisfies three obligations: sort, sortedOrder, and permutation. As we noted earlier, sortedOrder can be checked easily, while permutation requires some combination of static analysis and dynamic checks. We also declare that the output of the sort method corresponds to the requirements variable lst!, while the input corresponds to lst?. We further state that lst! corresponds to the local variable output. We note that lst! is assigned by a call to copy, and modified only by zero or more calls to swap. Naturally, both copy and swap would be annotated as well. A checker for the sortedOrder obligation might look as follows:

@Checks("sortedOrder")
public void checkSortedOrder(
    @ReqVar("lst?") int[] input,
    @ReqVar("lst!") int[] output)
    throws CheckFailedException {
  for (int i = 0; i < output.length - 1; i++) {
    if (output[i] > output[i+1]) {
      throw new CheckFailedException("Not sorted.");
    }
  }
}

Here we declare that the method checkSortedOrder serves as a check for the obligation sortedOrder. It binds the parameters input and output to the requirements variables lst? and lst! respectively. Note that this checker does not need lst?, and could therefore omit it. Another point to note is that, while we have used the question mark and exclamation point by convention, the checker makes no distinction between input and output variables; it merely iterates through a list determining whether it is sorted.

2.3.2. Programming Dynamic Analysis. The processing that is required to check that dynamic obligations are being met during execution is a cross-cutting concern in the sense of aspect-oriented programming [14]. It has nothing to do with the basic logic of the program, and it might require multiple changes if a dynamic obligation were to change. With this in mind, we have implemented checking as an aspect. At run time, the aspect: (1) catches any call to a method carrying the @Satisfies annotation; (2) stores any values that may be modified into temporary variables; (3) runs the method; (4) stores the return value in a temporary variable; (5) finds the corresponding @Checks checking method; (6) binds the parameters; and (7) runs the check. If the check fails, it raises a CheckFailedException that causes the system to log an error and abort. Similarly, if an unsuitable checker is found, or if there is an error in the parameter binding, the error is logged and the system aborts. If the check succeeds, then the returned value is returned to the calling method.

2.3.3. Generated Proof Obligations. In order to ensure that all dynamic obligations have corresponding checkers, we have implemented a static analyzer in the form of an Eclipse compilation plug-in. If no checker is found for a given obligation, instead of merely producing an error, the plug-in generates a proof obligation that may be proved in lieu of a dynamic check. This static proof obligation is an implication, where the right-hand side is the unchecked dynamic obligation or halt. The left-hand side of the implication consists of all the checked obligations (each or'd with halt), together with the sequence of dynamic obligations corresponding to method invocations (also or'd with halt).

In order to allow for this implication proof, we impose the restriction that any assignment or modification to parameters corresponding to requirements variables may be made only through calls to dynamically checked methods. This guarantees that all state modifications relevant to the obligations are themselves either dynamically checked or statically proved. Since the structure of the implementation and the checkers may change at any time throughout the development process, our plug-in regenerates the list of required proof obligations incrementally with every code change. The proof obligations are added as tasks in Eclipse, so that they are presented as notifications with check boxes that may be checked to indicate that the proof has been completed manually. If any required proofs have not been completed (as indicated by the check marks), our plug-in also produces a warning.
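The dispatch performed by the aspect (steps 1 through 7 above) can be approximated, for illustration only, with plain Java reflection. The sketch below is ours, not the toolset described above: the annotation definitions are simplified stand-ins, @ReqVar-based parameter binding is elided, and an unchecked exception stands in for Halt.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

// Simplified stand-ins for the annotations of section 2.3.1.
@Retention(RetentionPolicy.RUNTIME) @interface Satisfies { String[] value(); }
@Retention(RetentionPolicy.RUNTIME) @interface Checks { String value(); }

class CheckFailedException extends RuntimeException {
    CheckFailedException(String msg) { super(msg); }
}

class ObligationDispatcher {
    // Invoke 'methodName' on 'target' with a single int[] argument, then run
    // every @Checks method on 'checkerHost' whose obligation the method
    // claims to satisfy, binding (input, output) to the checker's parameters.
    static int[] invokeChecked(Object target, String methodName,
                               Object checkerHost, int[] input) {
        try {
            Method m = target.getClass().getMethod(methodName, int[].class);
            int[] output = (int[]) m.invoke(target, (Object) input);
            Satisfies s = m.getAnnotation(Satisfies.class);
            if (s != null) {
                for (String obligation : s.value()) {
                    for (Method c : checkerHost.getClass().getMethods()) {
                        Checks ann = c.getAnnotation(Checks.class);
                        if (ann != null && ann.value().equals(obligation)) {
                            c.invoke(checkerHost, input, output); // run the check
                        }
                    }
                }
            }
            return output;
        } catch (InvocationTargetException e) {
            // A failed check stands in for Halt in this sketch.
            throw new RuntimeException("Halt: " + e.getCause().getMessage(), e.getCause());
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Halt: bad binding", e);
        }
    }
}

class Sorter {
    @Satisfies({"sortedOrder"})
    public int[] sort(int[] input) {
        int[] out = input.clone();
        java.util.Arrays.sort(out);
        return out;
    }
}

class BrokenSorter {
    @Satisfies({"sortedOrder"})
    public int[] sort(int[] input) { return input.clone(); } // forgets to sort
}

class SortedOrderChecker {
    @Checks("sortedOrder")
    public void checkSortedOrder(int[] input, int[] output) {
        for (int i = 0; i < output.length - 1; i++) {
            if (output[i] > output[i + 1]) {
                throw new CheckFailedException("Not sorted.");
            }
        }
    }
}
```

An AspectJ around advice performs the same interception without the explicit dispatcher call; the reflection version simply makes the binding steps visible.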

3. A Detailed Example
In order to obtain some preliminary information about the feasibility and utility of Synergistic Assurance, we applied it to a non-trivial example. In choosing an example, we sought an application that had been built by others in an application domain requiring very high levels of assurance. The application we used is part of a hypothetical system called Tokeneer that was defined by the National Security Agency (NSA) to act as a challenge problem for security researchers [4]. The system consists of a secure enclave containing a number of workstations having access to files with various restrictions. The challenge to researchers is to construct implementations with appropriate security properties. Since software defects are often the source of security vulnerabilities, our interest was in verification of Tokeneer software. We applied Synergistic Assurance to an implementation of part of Tokeneer and made our own judgement of its efficacy. We report the details so that they can be gauged by others.
[Figure 2. Tokeneer system: the Tokeneer enclave containing workstations, with an access door, an administrator console, and peripherals including a card reader, fingerprint reader, display, and alarm.]

3.1. System Functional Overview

The Tokeneer system has many components, including the workstations, security resources such as certificate and authentication authorities, an administration console, and an ID station. The Tokeneer ID Station (TIS) is the component of Tokeneer that is responsible for controlling human access to the enclave, and we used part of the TIS in this research. The overall organization of the Tokeneer system is shown in Figure 2. Informally, and omitting a great deal of detail, the core functionality of the TIS is as follows. An individual desiring access to the enclave presents an electronic token to a reader. If the token is valid, the individual's fingerprint is scanned. If the fingerprint is valid, an authentication is written onto the token. If the individual has the requisite access privileges, the system waits for the token to be removed and then unlocks the door. A wide variety of failures of this basic sequence are possible, including time-outs that raise an audible alarm. Much of the complexity of the TIS derives from dealing with all of these eventualities. A fairly complete implementation of major parts of the TIS has been built by Praxis High Integrity Systems as part of an NSA-sponsored research project on dependability. This implementation includes a specification written in Z [21], a detailed design, and a source program written in SPARK Ada [3]. We were fortunate to be able to use this implementation as the subject of study for our example.

3.2. Proof Process For The Example

In a related project, we are developing the Echo formal verification process [22]. Echo allows verification proofs to be established statically using a process in which an extracted specification is shown to imply an original specification. Our implementation of Echo supports formal verification of programs written in SPARK Ada from specifications written in PVS [18]. SPARK Ada and its associated tools provide a source-code annotation mechanism for documenting source programs at the level of function pre- and postconditions. The SPARK Ada tools support verification that source code correctly implements its annotations. PVS and SPARK Ada were chosen for our Echo system in large part to allow us to use the PVS theorem prover and the SPARK Ada tools in the Echo toolset. As part of our work on Synergistic Assurance, we have developed the support mechanism for dealing with dynamic obligations discussed in section 2.3. Source programs written in Java were chosen as the target for that mechanism because the mechanism relies on Java tools, including support for aspects. As well as the incompatibility between our Echo toolset and our Synergistic Assurance toolset, the Praxis TIS implementation was built using notations that are not entirely compatible with the tools that we have available for experimentation. Rather than rebuild the tools in order to continue with our example, we translated some of the Praxis TIS documents into notations with which we could work. Specifically, we translated the parts of the Z specification that we needed into PVS. The PVS specification coupled with the Praxis source program written in SPARK Ada were then suitable for our Echo static verification technique. We also translated the part of the SPARK Ada implementation that we needed into Java. This source program was then the subject of dynamic analysis. Although these translations were inconvenient, they do not detract from the major goals that we had for this example: preliminary assessment of feasibility and utility. Both specification languages used are completely formal, and both translations were simply mappings from structures in one notation to structures in the other. The steps followed in the example were as follows:

• A representative subset (two) of the TIS requirements defined by the NSA was selected for study. The statement of the assurance theorem for this system thus has the form of that for our sort example.
• The requirements were translated from Z to PVS.
• Each requirement was refined into a set of more detailed obligations that was partitioned into static and dynamic subsets.
• The necessary parts of the implementation were translated from SPARK Ada to Java.
• The proofs of the static obligations were constructed using Echo. This step necessitated the inclusion of additional SPARK Ada annotations in the source program to allow the Echo tools, the SPARK tools, and the PVS theorem prover to complete the proofs.
• The dynamic obligations were defined in the Java source program using the annotations described in section 2.3.1. The source program was checked using the static analyzer described in section 2.3.3.
• The assurance theorem for the selected requirements was proved with PVS, using the dynamic obligations as lemmas.

With limited space, we are unable to present many of the details of our formal requirements, annotated code, and proofs. Also, the formalisms contain many symbols that we do not have space to define. Some of the details of the artifacts are discussed in the remainder of this section, but we emphasize the obligations we obtained, the partitioning we chose, and its rationale.

3.3. System Requirements And Partitioning

The natural language and PVS statements of the first requirement that we chose to use in this example, R1, are shown in Figure 3. The statements for the second, R2, are shown in Figure 4. R1 states that the correct sequence of checking actions will be taken by the TIS, and R2 states that the alarm will be raised if the system is deemed to be "insecure". Each of these requirements is dealt with most conveniently by partitioning it into a set of shorter, more manageable obligations that are, in fact, defined in the original NSA requirements. The results of this partitioning are shown in the respective figures for R1 and R2. To illustrate synergistic assurance, we present, for each of these obligations, the choice we made (static or dynamic) for the associated assurance and the rationale for that choice.

TIS Requirement 1: If the latch is unlocked by the TIS and the TIS is not in possession of an Admin Token, then the TIS must be in possession of a User Token. The User Token must either have a valid Authorization Certificate, or must have valid ID, Privilege, and I&A Certificates, together with a template that allows the TIS to successfully validate the user's fingerprint.

R1: THEOREM FORALL (f: StateChange):
  (FORALL (st: SystemState):
    st`Latch = locked AND f(st)`Latch = unlocked AND
    st`adminTokenPresence = absent =>
      TokenWithValidAuth(st`currentUserToken) AND
      UserTokenWithOKAuthCert(st`currentUserToken, st`currentTime)
    OR
      ValidToken(st`currentUserToken) AND
      UserTokenOK(st`currentUserToken, st`currentTime) AND
      FingerOK(st`currentFinger))

Obligation 1: The enclave door is only unlocked in the waitingRemoveTokenSuccess state.

Obligation 2: In order to reach the state waitingRemoveTokenSuccess, the state transitions must follow either: (1) gotUserToken → waitingFinger → gotFinger → waitingUpdateToken → waitingEntry → waitingRemoveTokenSuccess, or (2) gotUserToken → waitingEntry → waitingRemoveTokenSuccess.

Obligation 3: In the first path, ValidToken and UserTokenOK are checked in the transition to waitingFinger, and FingerOK is checked in the transition to waitingUpdateToken. In the second path, TokenWithValidAuth and UserTokenWithOKAuthCert are checked in the transition to waitingEntry.

Obligation 4: The user token is not changed during the transitions from gotUserToken to waitingRemoveTokenSuccess, and the user fingerprint is not changed during the transitions from gotFinger to waitingRemoveTokenSuccess.

Figure 3. TIS requirement R1 and associated obligations

R1 is a requirement about the user authentication and entry process. The user has to insert a card into a reader for authentication and, depending on the user and his or her circumstances, also present a finger for print checking. The process is multi-staged, and is documented as a finite-state machine with transitions on eight internal states: quiescent, gotUserToken, waitingFinger, gotFinger, waitingUpdateToken, waitingEntry, waitingRemoveTokenSuccess, and waitingRemoveTokenFail. Based on the transitions of these internal states, we partitioned R1 into the obligations shown in Figure 3 and dealt with these obligations as follows:

• R1 - Obligation 1: This obligation is concerned with actually opening the enclave door and is obviously crucial. It is, however, easy to check both statically and dynamically, and so we checked it statically, the preferred approach. The check was completed by translation of the obligation into simple function postconditions.

• R1 - Obligation 2: This obligation is simple to understand, but verifying it statically involves a great deal of detail. It is necessary to prove that the postcondition of every function that can change the state satisfies the obligation, and some functions define more than one state change. Function-level granularity might not be sufficient unless we perform full verification. On the other hand, checking this obligation dynamically only requires checking the assignment of the state. Thus we discharged this obligation dynamically.

• R1 - Obligation 3: This obligation is concerned with the conduct of the various authentication mechanisms (for example, checking the fingerprint) as state transitions occur. It is obviously critical that the right checks occur in the right way, at the right place, and at the right time. Since the checks are tied to state transitions and implemented as predicate functions, the required actions are quite easy to check statically. If an attempt were made to check them dynamically, then the checking function would essentially have to duplicate the existing predicate functions, because the obligation is to actually confirm the truth of something like the fingerprint check, not merely that it was conducted. Thus the dynamic check would have to fully invoke the fingerprint-checking hardware, and the verification of the check would be as complex as the static verification of the implementation. The static verification has to prove that the hardware is used correctly and that its results are used to influence the flow of control correctly. We chose the static route.

• R1 - Obligation 4: This obligation is concerned with the possibility of an attacker trying to switch artifacts in mid-authentication. The whole authentication process relies on continuity of trust between states.
If a switch occurs part way through the process, the credentials associated with one individual might be highjacked by another. That the implementation ensures this obligation is, therefore essential. The obligation could be checked either statically or dynamically with roughly equal effort, and we chose statically. R2 is a requirement about the timing of the TIS. Time is an important part of the overall security mechanism. For example, if the enclave door were left open for an inordinate amount of time, it might be possible for an attacker to gain access without being noticed because an authorized user had entered at some point in the past. ? R2 - Obligation 1: This obligation is concerned with ensuring that the actual time-out value in the system is the one that


Requirement 2: An alarm will be raised whenever the door/latch is insecure. “Insecure” is defined to mean the latch is locked, the door is open, and too much time has passed since the last explicit request to lock the latch.
R2: THEOREM FORALL (st: SystemState): st`Latch = locked AND st`Door = open AND st`currentTime >= st`alarmTimeout => st`Alarm = alarming

Obligation 1: The value alarmTimeout complies with the specification. Obligation 2: The alarm will sound when the door/latch is insecure. Figure 4. TIS requirement R2 and associated obligations was specified. The concern is with the possibility of the time-out value being changed incorrectly by a software defect. The obligation maps to the postconditions of those functions modifying the value of alarmTimeout. Either static verification or a dynamic check could be used for this obligation, each with little effort, and we used static verification. ? R2 - Obligation 2: This obligation is a real-time requirement that is extremely difficult to deal with statically. To discharge this obligation with static methods requires determination of worst-case execution time bounds on various sections of the software, a complete proof of functionality, a proof that machine time is an adequate representation of real-world time, etc. By contrast, provided machine time is an adequate representation of real-world time, a dynamic check is simple to implement and to verify. We implemented the dynamic check by utilizing the "or halt" part of the dynamic obligation. We annotated the UnlockDoor method with @Satisfies("R2-Obligation2"), and in the check for this obligation, set a timer that causes the system to halt when it goes off. We then annotated the LockDoor method and wrote a check for it that cancels this timer only after successfully checking that the door is closed and locked.

3.4. Static Proofs Using SPARK and Echo
The various obligations for the TIS outlined in the previous section were either proved or became the subject of runtime checks as indicated. The process followed is that outlined in section 3.2. Annotated SPARK Ada code is the primary structure used in the Echo verification approach, and so it is used extensively by the SPARK Ada tools and the Echo specification-extraction tools. We cannot include all the details of the proof of the assurance theorem. As a small illustration of the static part of the proof, the following is a fragment of the SPARK Ada annotations from the UnlockDoor subprogram in the UserEntry package. It is used in the static proof of requirement R1, obligation 1:
--# pre  Status = WaitingRemoveTokenSuccess;
--# post (not UserToken.IsPresent(UserToken.State~) and
--#       not Latch.IsLocked(Latch.State) and
--#       Status = Quiescent)
--#   or (UserToken.IsPresent(UserToken.State) and
--#       Clock.GreaterThan(Clock.TheCurrentTime(Clock.CurrentTime),
--#                         TokenRemovalTimeout) and
--#       Door.State = Door.State~ and
--#       Latch.State = Latch.State~ and
--#       UserToken.State = UserToken.State~ and
--#       Status = WaitingRemoveTokenFail)
--#   or (UserToken.IsPresent(UserToken.State) and
--#       not Clock.GreaterThan(Clock.TheCurrentTime(Clock.CurrentTime),
--#                             TokenRemovalTimeout) and
--#       Door.State = Door.State~ and
--#       Latch.State = Latch.State~ and
--#       UserToken.State = UserToken.State~ and
--#       Status = Status~);
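Where an obligation is discharged dynamically instead, the corresponding artifact is a runtime guard rather than a proved annotation. For R1 Obligation 2, the guard validates each assignment to the status variable against the two permitted transition paths. The following Java sketch is illustrative only (the state and method names are assumed, and the failure transitions of the full TIS state machine are omitted):

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

// Sketch of the R1 Obligation 2 dynamic check: every assignment to the
// TIS status variable is validated against the two permitted paths to
// waitingRemoveTokenSuccess. Names are illustrative, and only the
// success-path transitions from the obligation are encoded.
public class StateTransitionCheck {
    enum Status {
        QUIESCENT, GOT_USER_TOKEN, WAITING_FINGER, GOT_FINGER,
        WAITING_UPDATE_TOKEN, WAITING_ENTRY,
        WAITING_REMOVE_TOKEN_SUCCESS, WAITING_REMOVE_TOKEN_FAIL
    }

    // Allowed successors along the two authentication paths:
    // (1) gotUserToken -> waitingFinger -> gotFinger ->
    //     waitingUpdateToken -> waitingEntry -> waitingRemoveTokenSuccess
    // (2) gotUserToken -> waitingEntry -> waitingRemoveTokenSuccess
    private static final Map<Status, Set<Status>> ALLOWED =
        new EnumMap<>(Status.class);
    static {
        ALLOWED.put(Status.GOT_USER_TOKEN,
            EnumSet.of(Status.WAITING_FINGER, Status.WAITING_ENTRY));
        ALLOWED.put(Status.WAITING_FINGER,
            EnumSet.of(Status.GOT_FINGER));
        ALLOWED.put(Status.GOT_FINGER,
            EnumSet.of(Status.WAITING_UPDATE_TOKEN));
        ALLOWED.put(Status.WAITING_UPDATE_TOKEN,
            EnumSet.of(Status.WAITING_ENTRY));
        ALLOWED.put(Status.WAITING_ENTRY,
            EnumSet.of(Status.WAITING_REMOVE_TOKEN_SUCCESS));
    }

    // Returns true iff the assignment from -> to lies on a permitted path.
    public static boolean transitionAllowed(Status from, Status to) {
        Set<Status> succ = ALLOWED.get(from);
        return succ != null && succ.contains(to);
    }
}
```

The point of the contrast is cost: this guard inspects only the state assignment, whereas the static alternative must reason about the postcondition of every function that can change the state.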

In summary, these six obligations derive from realistic requirements for a system in which failure has serious consequences. The partitioning and the associated rationale suggest that splitting the proof burden between static and dynamic techniques is quite feasible and is potentially effective as a way of producing a useful level and form of assurance.


4. Related Work
In the area of static methods, a wide variety of techniques has been developed, ranging from complete formal verification to light-weight formal methods and static analysis. Formal verification has been studied extensively, although for various reasons it is not as widely used as it might be [12, 16]. Synergistic Assurance permits some of the difficulties with formal verification to be circumvented. One of the goals of light-weight methods is to facilitate proof by limiting it to certain key aspects of the software. It is often possible, for example, to prove that a concurrent software system will not deadlock or that a sequential program will not raise any exceptions. Properties such as freedom from deadlock are important, but they show the absence of only one of the difficulties that might preclude a system from operating correctly. This is valuable, but it leaves all other necessary properties open. Synergistic Assurance takes advantage of such techniques by placing them in the general verification context and supplementing them with other static and dynamic techniques.

Synergistic Assurance builds on the notion of Safe Programming introduced by Anderson and Witty [2]. Although they did not state their goal in the same way, the assurance theorem is close to the goal that Anderson and Witty established for Safe Programming. In their formulation, the proof they sought was achieved by proving that the program checked its output correctly. With Synergistic Assurance, we build on Safe Programming by including only those dynamic checks needed to support the static proof.

Many variations of the basic idea of runtime checking have been developed. The notion of a safety kernel was introduced by Rushby [19] with the goal of checking at runtime that a system met a set of crucial safety properties. Rushby's formulation of the safety-kernel concept was derived from the idea of a security kernel, and several other authors have developed the idea.
The efficacy of a safety kernel depends in large measure on the kernel being more easily verified than the system it is designed to check. In practice, this requirement is hard to meet because, as we noted with obligation 3 of the TIS, checking often turns out to be as hard as the original computation. To address this problem, Knight and Wika introduced the notion of a weakened policy, in which an application is required to perform its own check [24]. Synergistic Assurance has the potential to avoid the problem entirely, since a challenging individual obligation can be split into several shorter obligations, and only those that are suitable for dynamic checking need be checked dynamically.
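The weakened-policy idea can be conveyed with a small sketch. This is an illustration of the concept only, not the mechanism of [24], and all names are assumed: the kernel does not repeat the application's computation, it merely refuses to release a guarded action unless the application has recorded that its own check ran and succeeded.

```java
// Sketch of a weakened policy: the kernel does not redo the application's
// (possibly expensive) check; it only verifies that the check was
// performed before permitting the guarded action. Names are illustrative.
public class WeakenedPolicyKernel {
    private boolean checkPerformed = false;

    // The application calls this after running its own check,
    // passing that check's verdict.
    public void recordCheck(boolean verdict) {
        checkPerformed = verdict;
    }

    // The kernel releases the guarded action only if the application's
    // check was performed and succeeded. The flag is consumed so that a
    // single check cannot authorize two actions.
    public boolean authorize() {
        boolean ok = checkPerformed;
        checkPerformed = false;
        return ok;
    }
}
```

The kernel itself is trivially verifiable; the assurance question shifts to showing that the application's check is performed at the right points, which is exactly the kind of property that is cheap to establish statically.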

More general checking systems have been proposed that permit specification of desired behavior in customized notations and synthesis of the associated runtime checks; two examples are Java-MaC [15] and Naccio [8]. Tolerating software faults at runtime has been studied extensively. Approaches such as N-version programming [7] and N-copy programming [1] combine various techniques to permit error detection and, in some cases, masking of the effects of faults. Various techniques that combine static and dynamic analysis have been developed. One of the earliest was motivated by the need to reduce the overhead of constraint checking in high-level language programs; in Ada, for example, elimination of runtime constraint checks by compile-time analysis has proved very successful [10, 11]. More general notions combining static and dynamic analysis have been introduced in the areas of type checking and resource management. Hybrid Type Checking, introduced by Flanagan [9], advocates synergy between static and dynamic checks so as to permit more powerful type-checking mechanisms. A mechanism that combines static and dynamic checks to permit efficient management of resources by programs was proposed by Chander et al. [6].

5. Conclusion
We have introduced the concept of Synergistic Assurance in which the burden of establishing properties of software is distributed between static proof and dynamic checking. We have also introduced the assurance theorem by which the assurance goals of a system are stated formally in the style of Safe Programming. By combining static and dynamic techniques, we are able to exploit the benefits of each in a complementary manner. This allows each to be applied as appropriate to each element of the assurance proof without having to force everything into just one style. In addition, a single complex obligation can be broken down into parts and each part allocated in a manner best suited to that part. This helps to solve the problem with many forms of runtime checking in which the check, which needs to be implemented with high assurance if it is to be effective, is as complex as that which it checks. Finally, we have demonstrated the concept on a realistic sample application from the security domain.

Acknowledgments

We thank Randolph Johnson, Rodney Chapman, Jennifer Black, and Zhendong Su for their technical help with this work. We are extremely grateful to the National Security Agency and Praxis High Integrity Systems for giving us access to the Tokeneer artifacts. This work was sponsored in part by NSF grant CCR0205447 and in part by NASA grant NAG1-02103.

References

[1] Ammann, P. and J. Knight, "Data Diversity: An Approach to Tolerating Software Faults", IEEE Transactions on Computers, Vol. 37, No. 4, 1988, pp. 418-425.
[2] Anderson, T. and R.W. Witty, "Safe Programming", BIT, Vol. 18, 1978, pp. 1-8.
[3] Barnes, J., High Integrity Software: The SPARK Approach to Safety and Security, Addison-Wesley Longman, Amsterdam, 2003.
[4] Barnes, J., R. Chapman, R. Johnson, J. Widmaier, D. Cooper, and B. Everett, "Engineering the Tokeneer Enclave Protection Software", Proceedings of the IEEE International Symposium on Secure Software Engineering, Washington, DC, 2006.
[5] Butler, R. and G. Finelli, "The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software", IEEE Transactions on Software Engineering, Vol. 19, No. 1, January 1993, pp. 3-12.
[6] Chander, A., D. Espinosa, N. Islam, P. Lee, and G. Necula, "Enforcing Resource Bounds via Static Verification of Dynamic Checks", ACM Transactions on Programming Languages and Systems, Vol. 29, No. 5, August 2007, pp. 1-18.
[7] Chen, L. and A. Avizienis, "N-Version Programming: A Fault-Tolerance Approach to Reliability of Software Operation", International Symposium on Fault-Tolerant Computing, IEEE, 1978.
[8] Evans, D. and A. Twyman, "Flexible Policy-Directed Code Safety", Proceedings of the IEEE Symposium on Security and Privacy, Oakland, CA, 1999.
[9] Flanagan, C., "Hybrid Type Checking", ACM Symposium on Principles of Programming Languages, Charleston, SC, 2006.
[10] Ganapathi, M. and G. Mendal, "Issues in Ada Compiler Technology", IEEE Computer, Vol. 22, No. 2, February 1989, pp. 52-60.
[11] Gupta, R., "Optimizing Array Bound Checks Using Flow Analysis", ACM Letters on Programming Languages and Systems, Vol. 2, No. 1-4, 1993, pp. 135-150.
[12] Hall, A., "Seven Myths of Formal Methods", IEEE Software, Vol. 7, No. 5, September 1990, pp. 11-19.
[13] Hoare, C., "An Axiomatic Basis for Computer Programming", Communications of the ACM, Vol. 12, No. 10, October 1969, pp. 576-585.
[14] Kiczales, G., J. Lamping, A. Mendhekar, C. Maeda, C. Lopes, J.-M. Loingtier, and J. Irwin, "Aspect-Oriented Programming", Proceedings of the European Conference on Object-Oriented Programming, Vol. 1241, Finland, 1997, pp. 220-242.
[15] Kim, M., S. Kannan, I. Lee, and O. Sokolsky, "Java-MaC: A Run-time Assurance Tool for Java Programs", Electronic Notes in Theoretical Computer Science, Vol. 55, No. 2, 2001.
[16] Knight, J., C. DeJong, M. Gibble, and L. Nakano, "Why Are Formal Methods Not Used More Widely?", Fourth NASA Formal Methods Workshop, Hampton, VA, September 1997.
[17] Necula, G., "Proof-Carrying Code", Proceedings of the ACM Symposium on Principles of Programming Languages, Paris, France, 1997.
[18] Owre, S., J. Rushby, N. Shankar, and D. Stringer-Calvert, "PVS Language Reference", published only at: http://pvs.csl.sri.com/doc/pvs-language-reference.pdf
[19] Rushby, J., "Kernels for Safety?", in Safe and Secure Computing Systems, T. Anderson, editor, Blackwell Scientific Publishing, 1989, pp. 210-220.
[20] Schlichting, R. and F. Schneider, "Fail-Stop Processors: An Approach to Designing Fault-Tolerant Computing Systems", ACM Transactions on Computer Systems, Vol. 1, No. 3, August 1983, pp. 222-238.
[21] Spivey, M., "The Z Notation: A Reference Manual", published only at: http://spivey.oriel.ox.ac.uk/mike/zrm/index.html
[22] Strunk, E., X. Yin, and J. Knight, "Echo: A Practical Approach to Formal Verification", FMICS-05: Tenth International Workshop on Formal Methods for Industrial Critical Systems, Lisbon, Portugal, September 2005.
[23] Wasserman, H. and M. Blum, "Software Reliability via Run-Time Result-Checking", Journal of the ACM, Vol. 44, 1997.
[24] Wika, K. and J. Knight, "On the Enforcement of Software Safety Policies", COMPASS: Conference on Computer Assurance, Gaithersburg, MD, 1995.
[25] Woodcock, J. and R. Banach, "The Verification Grand Challenge", Journal of Universal Computer Science, Vol. 13, No. 5, 2007, pp. 661-668.
