

Predicates and Predicate Transformers for Supervisory Control of Discrete Event Dynamical Systems¹

Ratnesh Kumar, Department of Electrical Engineering, University of Kentucky, Lexington, KY 40506-0046

Vijay Garg, Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712-1084

Steven I. Marcus, Department of EE and System Research Center, University of Maryland, College Park, MD 20742

February 2, 1995

¹ This research was supported in part by the Center for Robotics and Manufacturing, University of Kentucky, in part by the National Science Foundation under Grants NSFD-CDR-8803012 and NSF-CCR-9110605, in part by the Air Force Office of Scientific Research (AFOSR) under Contract F49620-92-J-0045, in part by a University Research Institute Grant, and in part by a Bureau of Engineering Research Grant.

Abstract
Most discrete event system models are based on defining the alphabet set, or the set of events, as a fundamental concept. In this paper, we take the alternative view of treating the state space as the fundamental concept. We approach the problem of controlling discrete event systems by using predicates and predicate transformers. Predicates have the advantage that they can concisely characterize an infinite state space. The notion of controllability of a predicate is defined, and the supervisory predicate control problem introduced in this paper is solved. A closed-form expression for the weakest controllable predicate is obtained. The problem of controlling discrete event systems under incomplete state observation is also considered, and observability of predicates is defined. Techniques for finding extremal solutions of boolean equations are used to derive minimally restrictive supervisors.

1 Introduction
Many discrete event system models [24, 23, 25, 9, 11, 12] are based on defining the alphabet set or the set of events as a fundamental concept. The language of a deterministic system characterizes its behavior, and two systems are considered equivalent if they have the same alphabet and language [9]. In this paper, we take the alternative view of treating the state space as the fundamental concept. Problems treated in [22, 19, 3, 1, 18, 14, 7] are also formalized from a similar point of view. We approach the problem of controlling the behavior of a discrete event system described in terms of its state trajectories by using predicates and predicate transformers. Predicates have the advantage that they can concisely characterize an infinite state space. Petri net based models have also been used for describing infinite state discrete event dynamical systems [26, 17]. The notions of two types of predicate transformers, namely the strongest postcondition (sp) and the weakest liberal precondition (wlp) [5, 6, 8], are very useful in characterizing the dynamics of discrete event dynamical systems. In this paper, we study the system dynamics in the framework of these predicate transformers. We use the notion of duality of predicate transformers and show that sp and wlp are duals of each other. Thus one of the predicate transformers, either sp or wlp, can be treated as fundamental and the other as a derived notion. In this paper we treat sp as the fundamental predicate transformer, since it describes the forward evolution of the system behavior, and develop the supervisory control theory using it. We describe a few basic properties of predicate transformers: strictness, monotonicity, disjunctivity, conjunctivity, etc. sp is a strict, monotone and disjunctive predicate transformer, while its dual wlp is a strict, monotone and conjunctive one. One or more of these properties of sp and wlp are used to obtain all the results in this paper.
It is known [6] that a predicate equation in the variable predicate Q of the type Q : f(Q) ≤ g(Q) has unique extremal solutions, provided the predicate transformers f, g satisfy certain basic properties. We use the extremal solutions of such predicate equations to demonstrate the existence and uniqueness of minimally restrictive [23, 2] supervisors. We introduce the supervisory predicate control problem as the problem of synthesizing a supervisor for a given system so that the state trajectories of the system remain confined to a set of "legal" states, and also visit all the states in the set of legal states. Thus the legal predicate corresponds to the weakest predicate that remains invariant under control. A special case of this problem, in which the latter constraint is relaxed, was considered in [22]. The notion of controllability is defined, and it is shown to serve as a necessary and sufficient condition for the existence of a supervisor that solves the supervisory predicate control problem. A different definition of controllability is presented in [19], which can be shown to be equivalent to our definition. Our definition of controllability is purely in terms of the predicate transformer sp, which results in a more compact definition, simpler proofs (as demonstrated by the proof of Theorem 4.5), and simpler synthesis techniques for the supervisors. In this paper, we also address the problem of synthesizing supervisors for the case when the required predicate is not controllable. This problem is quite important and is not addressed in [19, 22]. We show that if the given "legal" predicate is not controllable, then the minimally restrictive supervisor can be constructed so that the state trajectories of the

controlled system remain confined to, and also visit all of, the states where the weakest controllable legal predicate holds. We prove that the weakest controllable predicate stronger than the required predicate exists and present an algorithm for computing it. This algorithm is then used for constructing the minimally restrictive supervisor. Thus our work extends the earlier works on supervisory predicate control [22, 19]. In [22] a method is presented for computing the weakest "invariant" predicate stronger than the legal predicate; this can also be computed as a special case using the method for computing the weakest controllable predicate presented in this paper. Next we consider the supervisory predicate control problem under partial state observations. The behavior of the system, i.e. the state trajectories, is observed under a "mask" which maps the state space of the system to the "observation space" and is not necessarily injective. This problem is much harder, as the controller also incorporates a state estimator and takes the appropriate control actions based on its state estimates. This problem was first addressed in [19], and a solution was obtained under very restrictive assumptions on the desired reachable predicate. This is because the observability condition obtained in [19] is based only on the current observations and ignores all the past observations. We obtain the observability condition based on the entire available information: control as well as observation, present as well as past. Thus the notion of observability introduced in this paper is quite general. The notion of dynamical observers, which use the entire history for estimating states, is presented in [3, 21]. However, in [3] it is assumed that the transition events are completely known at all the transition steps, which is not the case in this paper. All that is known is that at any transition step one of the several events that are enabled by the supervisor will occur and cause a system transition. In [21] the issue of synthesizing a dynamical state estimator for a partially observed system was addressed, and no control was exercised. In this paper we address the issue of simultaneously estimating the state and controlling the system so that the set of reachable states equals the required legal set of states. The notion of observability of a predicate described in this paper also leads to a synthesis technique for the minimally restrictive supervisor. Unlike the supervisory control of system behavior described in terms of event trajectories under partial observation, where the minimally restrictive supervisor does not exist [20], we show in this paper that it is possible to construct the minimally restrictive supervisor for the supervisory predicate control problem under partial state observation, where the system behavior is described as the weakest invariant predicate. The advantage of using predicates and predicate transformers to represent a DEDS is that we can concisely characterize systems with a very large, possibly infinite, number of states. This is illustrated by the Readers-Writers example considered in this paper, in which the state space is infinite. We provide a technique for synthesizing a supervisor so as to ensure mutual exclusion of readers and writers. Also, a technique is provided for the synthesis of the minimally restrictive supervisor for a modified Readers-Writers problem, again with infinite state space. Thus we have successfully developed a technique for supervisory control of infinite state systems. The computational complexity of our approach depends on the number of variables and conditional assignment statements representing the system rather than on the actual number of states and transitions.
Computationally more efficient algorithms for supervisory control of infinite state systems need to be developed based on the theory presented in this paper. Some such techniques, which involve mathematical induction as an analysis tool, are reported in [7], and further research on this issue is currently under way.

2 Notation and Terminology
The discrete event dynamical system (DEDS) to be controlled, called the plant, is modeled as a state machine (SM) [10] following the framework of [23]. Let the quadruple G := (X, Σ, δ, x0) denote a SM representing a plant, where X denotes the state set; Σ denotes the finite event or alphabet set; δ : X × Σ → X denotes the partial state transition function; and x0 ∈ X denotes the initial state. A supervisor or controller for the given DEDS is designed so that the behavior of the closed loop system satisfies certain qualitative constraints described in terms of the state set of the plant. The event set is partitioned into Σ = Σu ∪ (Σ − Σu), the sets of uncontrollable and controllable events. A supervisor for the given plant G is characterized by a (static) control law S : X → 2^Σ. A control law is said to be static if the control action at each step depends only on the observation at that step (for a more detailed and formal definition refer to [13, 16]). A dynamic control law will be considered in Section 5, where the supervisory control problem under partial observation is addressed. If σ ∈ Σ is such that σ ∈ S(x) for some x ∈ X, then σ is said to be enabled by the supervisor in state x. Since a supervisor can prevent only controllable events from occurring, we also require for each x ∈ X that Σu(x) ⊆ S(x), where Σu(x) is the set of uncontrollable events defined at state x. The controlled system is then described by the state machine G_S := (X, Σ, δ_S, x0), where for x ∈ X and σ ∈ Σ, δ_S(x, σ) = δ(x, σ) if σ ∈ S(x), and δ_S(x, σ) is undefined otherwise.

Remark 2.1 One way to implement a static control law as described above is to let the subautomaton of G corresponding to the control law S run in synchrony with G, as described in [13, 16].
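To make the closed-loop construction concrete, δ_S can be sketched in a few lines of Python. The three-state machine, event names, and control law below are our own toy illustration, not an example from the paper:

```python
# A toy plant G = (X, Sigma, delta, x0) with one uncontrollable event 'u',
# and the controlled machine G_S induced by a static control law S.
X = {0, 1, 2}
Sigma = {'a', 'b', 'u'}
Sigma_u = {'u'}                                   # uncontrollable events
delta = {(0, 'a'): 1, (0, 'u'): 2, (1, 'b'): 0, (2, 'b'): 0}  # partial map
x0 = 0

def Sigma_u_at(x):
    """Uncontrollable events defined at state x."""
    return {s for s in Sigma_u if (x, s) in delta}

def controlled_delta(S):
    """delta_S(x, s) = delta(x, s) if s in S(x), undefined otherwise.
    A control law may only disable controllable events: Sigma_u_at(x) <= S(x)."""
    assert all(Sigma_u_at(x) <= S[x] for x in X)
    return {(x, s): y for (x, s), y in delta.items() if s in S[x]}

# A control law that disables the controllable event 'a' in state 0.
S = {0: {'u', 'b'}, 1: {'b'}, 2: {'b'}}
delta_S = controlled_delta(S)
```

Disabling 'a' removes the transition (0, 'a') from the controlled machine, while the uncontrollable transition (0, 'u') necessarily remains.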

2.1 Predicates and Predicate Transformers

Next we introduce a few definitions from the theory of predicate calculus [5, 6, 8] that we use in this paper to study and formulate the supervisory predicate control problem. Let P denote the collection of predicates defined on the state set X, i.e. if P ∈ P, then it is a boolean valued map P : X → {0, 1}. With every P ∈ P we associate the set X_P ⊆ X on which P takes the value one, i.e. x ∈ X_P if and only if P(x) = 1. We say that the predicate P holds on x ∈ X if P(x) = 1. Conversely, a given set X' ⊆ X can be associated with a predicate P_X' ∈ P such that P_X'(x) = 1 if and only if x ∈ X'. Thus the collection of predicates P can be associated with the power set 2^X using the association described above. In what follows, we use the names predicates and subsets interchangeably. The symbols P, Q, R, etc. are used for denoting predicates.

Definition 2.2 Given P ∈ P, its negation, denoted ¬P, is another predicate defined by: for every x ∈ X, ¬P(x) = 1 ⇔ P(x) = 0. Given an indexing set Λ, let P_α ∈ P for each α ∈ Λ. Then the conjunction ∧_{α∈Λ} P_α is defined by: for every x ∈ X, (∧_{α∈Λ} P_α)(x) = 1 ⇔ ∀α ∈ Λ : P_α(x) = 1; and the disjunction ∨_{α∈Λ} P_α is defined by: for every x ∈ X, (∨_{α∈Λ} P_α)(x) = 1 ⇔ ∃α ∈ Λ s.t. P_α(x) = 1.

Definition 2.3 The symbols true and false are used for denoting the predicates that hold on all and none of the states respectively, i.e. true(x) = 1 for all x ∈ X and false(x) = 0 for all x ∈ X. Thus true = ¬false. Also, the predicate true can be associated with the entire state space X, and the predicate false can be associated with the empty set ∅.

Example 2.4 Let X = R², i.e. the state space equals the real plane. Let x, y be state variables taking values in R. Then the predicate (x ≤ y) holds in those states in R² where the value of the variable x is not greater than that of the variable y. Notice that predicates can concisely describe an infinite state space.

The quadruple (P, ¬, ∧, ∨) forms a boolean algebra which is isomorphic to the algebra of subsets of X under the operations of complementation, intersection and union. For P1, P2 ∈ P, we say that P1 ≤ P2 if and only if P1 ∧ P2 = P1, or equivalently P1 ∨ P2 = P2. P1 is said to be stronger than P2, and equivalently P2 is said to be weaker than P1, if P1 ≤ P2. Note that ≤ induces a partial order on P, i.e. ≤ is a reflexive, transitive and antisymmetric relation on P. Since ≤ is antisymmetric, if P1, P2 ∈ P are such that P1 ≤ P2 and P2 ≤ P1, then P1 = P2, i.e. they are the same predicate. It can be shown that the partial order (P, ≤) is also complete [9]. Let F denote the collection of all predicate transformers, i.e. if f ∈ F, then f : P → P. We use the symbols f, g, h, etc. to denote predicate transformers.

Definition 2.5 The negation ¬f of f ∈ F is defined by (¬f)(P) = ¬(f(P)) for each P ∈ P. The conjunction and disjunction of an arbitrary set of predicate transformers are defined in an analogous way, by taking the conjunction and disjunction, respectively, over the set of image predicates: given an arbitrary indexing set Λ, (∧_{α∈Λ} f_α)(P) = ∧_{α∈Λ}(f_α(P)), and (∨_{α∈Λ} f_α)(P) = ∨_{α∈Λ}(f_α(P)) for each P ∈ P.
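On a finite state set, the predicate–subset correspondence and the order ≤ can be sketched directly. The predicates P1, P2 below are hypothetical illustrations of ours, not taken from the paper:

```python
# Predicates as boolean-valued maps on X, associated with subsets of X,
# with P1 <= P2 iff (P1 and P2) = P1, i.e. X_P1 is a subset of X_P2.
X = range(5)

def pred_to_set(P):
    """The subset X_P of X on which the predicate P holds."""
    return {x for x in X if P(x)}

def leq(P1, P2):
    """P1 <= P2 (P1 is stronger): wherever P1 holds, P2 holds too."""
    return all(not P1(x) or P2(x) for x in X)

P1 = lambda x: x <= 1          # the stronger predicate
P2 = lambda x: x <= 3          # the weaker predicate
```

Here pred_to_set(P1) = {0, 1} ⊆ {0, 1, 2, 3} = pred_to_set(P2), so leq(P1, P2) holds while leq(P2, P1) does not.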

Definition 2.6 Consider G := (X, Σ, δ, x0). For each σ ∈ Σ, sp_σ : P → P is defined by: sp_σ(P) := Q, where X_Q = {x ∈ X | ∃y ∈ X_P s.t. δ(y, σ) = x}. We use sp and sp_u to denote ∨_{σ∈Σ} sp_σ and ∨_{σ∈Σu} sp_σ respectively. Thus sp_σ(P) is the predicate which holds on the set of states that are reached by a σ-transition from a state where P holds.

Definition 2.7 For the system G, the predicate transformer wlp_σ ∈ F for each σ ∈ Σ is defined by: wlp_σ(P) := Q, where X_Q = {x ∈ X | either δ(x, σ) ∈ X_P or δ(x, σ) is undefined}. The predicate transformers wlp and wlp_u are defined to be ∧_{σ∈Σ} wlp_σ and ∧_{σ∈Σu} wlp_σ respectively.

Thus wlp_σ(P) is the predicate which holds in those states where either the σ-transition is not defined or the σ-transition from them leads to a state where P holds.

Example 2.8 Let the state space be R². We will use the pair (x, y) to denote an arbitrary element of R². Consider a program G that assigns (x + y, x − y) to (x, y), i.e.

G : (x, y) := (x + y, x − y).

Then, given any set R ⊆ R², R gets "transformed" to the set {(x', y') | (x' = x + y) ∧ (y' = x − y), (x, y) ∈ R} whenever G is executed. Let the predicate (xy ≤ 10) be required to hold upon execution of G. Then the predicate (x² − y² ≤ 10) must hold before execution of G, since (x + y)(x − y) = x² − y². In other words, wlp_{(x,y):=(x+y,x−y)}((xy ≤ 10)) = (x² − y² ≤ 10). The predicate transformers sp and wlp as defined above are called the strongest postcondition and the weakest liberal precondition [5, 6, 8]. We use the notation sp_S (wlp_S) to denote the strongest postcondition (weakest liberal precondition) operator induced by the transition function δ_S of the controlled system G_S, i.e. sp_S = ∨_{σ∈Σ} (sp_S)_σ and wlp_S = ∧_{σ∈Σ} (wlp_S)_σ. With the above introduction to predicates and predicate transformers it is clear that a DEDS can also be represented in terms of predicates and predicate transformers.
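When X is finite, Definitions 2.6 and 2.7 amount to simple set operations on the transition map. The following sketch uses a four-state toy machine of our own, not an example from the paper:

```python
# Predicates as state sets; sp_sigma and wlp_sigma per Definitions 2.6, 2.7,
# computed directly from a partial transition map on a small finite machine.
X = {0, 1, 2, 3}
delta = {(0, 'a'): 1, (1, 'a'): 2, (2, 'b'): 0}   # partial transition map

def sp_sigma(sigma, P):
    """States reached by one sigma-transition from a state where P holds."""
    return {delta[(y, sigma)] for y in P if (y, sigma) in delta}

def wlp_sigma(sigma, P):
    """States where sigma is undefined or the sigma-transition leads into P."""
    return {x for x in X if (x, sigma) not in delta or delta[(x, sigma)] in P}

P = {0, 1}
sp_a = sp_sigma('a', P)       # images of 0 and 1 under 'a'
wlp_a = wlp_sigma('a', P)     # 'a' leads into P (state 0) or is undefined (2, 3)
```

For P = {0, 1} this gives sp_a = {1, 2} and wlp_a = {0, 2, 3}; state 1 is excluded from wlp_a because its 'a'-transition leaves P.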

Definition 2.9 G := (P_X, Σ, sp, I), where P_X corresponds to the set of predicates defined on the state space X of G; Σ denotes the event set; sp ∈ F is the predicate transformer corresponding to the state transition function of G; and I ∈ P_X corresponds to the initial states (also called the initial condition) of G.

The behavior of a DEDS is essentially described by sp ∈ F, which is specified using the state transition function. It may also be specified using a finite set of conditional assignment statements of the type:

    x := F_σ(x) if C_σ(x).

The above assignment statement, labeled by the event σ, is said to be enabled if the condition specified as the predicate C_σ(x) holds; if it is executed, the value of the variable x becomes F_σ(x). Thus given any predicate P(x), sp_σ(P(x)) is the predicate reached after the execution of σ and can be readily calculated to be [P(F_σ^{-1}(x)) ∧ C_σ(F_σ^{-1}(x))], where F_σ^{-1}(x) = {x' | F_σ(x') = x}. Similarly, the predicate wlp_σ(P(x)) can be easily computed to be [P(F_σ(x)) ∧ C_σ(x)] ∨ [¬C_σ(x)]. Consider for example the conditional assignment statement:

    x, y := x + y, x − y if x > y.

Let P(x, y) = (x + y ≤ 10). Then sp_σ(P(x, y)) = [{(x + y)/2 + (x − y)/2 ≤ 10} ∧ {(x + y)/2 > (x − y)/2}] = [(x ≤ 10) ∧ (y > 0)]; and wlp_σ(P(x, y)) = [{(x + y) + (x − y) ≤ 10} ∧ {x > y}] ∨ [x ≤ y] = [{(x ≤ 5) ∧ (x > y)} ∨ {x ≤ y}].
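The two closed forms just computed can be sanity-checked by brute force over a grid of integer states. This verification sketch is our own, not part of the paper:

```python
# Brute-force check of sp(P) = (x <= 10) & (y > 0) and
# wlp(P) = ((x <= 5) & (x > y)) | (x <= y), for the guarded assignment
# (x, y) := (x + y, x - y) if x > y, with P(x, y) = (x + y <= 10).
grid = [(x, y) for x in range(-20, 21) for y in range(-20, 21)]
P = lambda x, y: x + y <= 10
guard = lambda x, y: x > y
step = lambda x, y: (x + y, x - y)

# wlp(P): either the guard fails (no enabled transition) or the step lands in P.
wlp_P = lambda x, y: (not guard(x, y)) or P(*step(x, y))
formula_wlp = lambda x, y: ((x <= 5) and (x > y)) or (x <= y)
ok_wlp = all(wlp_P(x, y) == formula_wlp(x, y) for (x, y) in grid)

# sp(P): states reached by one enabled step from a P-state.  Image states
# always have an even coordinate sum, so compare on that sublattice only,
# inside a window whose preimages stay within the grid.
sp_P = {step(x, y) for (x, y) in grid if P(x, y) and guard(x, y)}
window = [(a, b) for a in range(-10, 11) for b in range(-10, 11)
          if (a + b) % 2 == 0]
ok_sp = all(((a <= 10) and (b > 0)) == ((a, b) in sp_P) for (a, b) in window)
```

Both checks pass on the whole range tested, matching the symbolic computation above.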

Note that the DEDS representation described in Definition 2.9 captures a wide range of DEDS's, such as a system with an infinite state space, a nondeterministic [10] system, or a system that could initially be in any of a set of states where the predicate I holds. Henceforth we use the representation of G introduced in Definition 2.9 for describing a DEDS.

Example 2.10 Consider the following program, which corresponds to the Readers-Writers problem written in a programming logic adapted from the UNITY framework [4]. Informally stated, the Readers-Writers problem can be expressed as a DEDS which has a distributed database, access to which is sought by an infinite number of readers and writers.


Program Rd_Wr
declare
    nr, nw : integer
    st_rd, st_wt, end_rd, end_wt : event
initially
    nr, nw = 0, 0
assign
    nr := nr + 1                    :st_rd
    nr := nr − 1 if (nr > 0)        :end_rd
    nw := nw + 1                    :st_wt
    nw := nw − 1 if (nw > 0)        :end_wt
end {Rd_Wr}

The declare section contains the description of the program variables. The state variables nr, nw are of integer type and denote the number of readers and the number of writers, respectively, accessing the database. Both nr and nw are bounded below by zero. Thus the state space of the system is N², which is infinite. The symbols st_rd, st_wt, end_rd, end_wt correspond to the events start-read, start-write, end-read, end-write respectively. The program starts executing with the system in the initial state ((nr, nw) = (0, 0)), which is described in the initially section of the program. The system evolves according to the execution of the "enabled" assignment statements of the assign section. An assignment statement is said to be enabled whenever the condition under the "if" part of the assignment is satisfied. One of the enabled assignment statements is nondeterministically picked for execution, and upon execution the state variables change their values accordingly. The entries in the last column of the assign section are the event names for the corresponding assignment statements. The program Rd_Wr describes a DEDS of the type G := (P_X, Σ, sp, I), where P_X denotes the set of predicates corresponding to the subsets of the state set X = N²; Σ = {st_rd, st_wt, end_rd, end_wt}; sp corresponds to the assignments of the assign section as described above; and I = ((nr, nw) = (0, 0)).
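The operational reading of Rd_Wr, nondeterministically executing one enabled guarded assignment per step, can be sketched as follows. The event names follow the program; the simulation itself is our own illustration:

```python
import random

# The Rd_Wr program run operationally: state (nr, nw), four guarded
# assignments labeled by events, one enabled assignment picked
# nondeterministically per step.
events = {
    'st_rd':  (lambda nr, nw: True,   lambda nr, nw: (nr + 1, nw)),
    'end_rd': (lambda nr, nw: nr > 0, lambda nr, nw: (nr - 1, nw)),
    'st_wt':  (lambda nr, nw: True,   lambda nr, nw: (nr, nw + 1)),
    'end_wt': (lambda nr, nw: nw > 0, lambda nr, nw: (nr, nw - 1)),
}

def enabled(state):
    """Events whose guard holds in the given state."""
    return [e for e, (guard, _) in sorted(events.items()) if guard(*state)]

def run(steps, seed=0):
    """Simulate one nondeterministic execution from the initial state."""
    rng = random.Random(seed)
    state = (0, 0)                   # initial condition I: (nr, nw) = (0, 0)
    trace = [state]
    for _ in range(steps):
        e = rng.choice(enabled(state))
        state = events[e][1](*state)
        trace.append(state)
    return trace

trace = run(50)
```

Because end_rd and end_wt are guarded by nr > 0 and nw > 0, every trace stays inside the state space N², the infinite set the predicates range over. No control is exercised here; readers and writers may coexist, which is what the supervisor synthesized later must prevent.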

3 On Solving Predicate Equations
So far we have defined the notions of predicates and predicate transformers. Next we consider some of their properties and describe a few methods for solving some predicate equations that we use later to design supervisors for a given DEDS. Some of the results presented in this section can also be found in [6]; however, we present their proofs here mainly to illustrate the proof style that we follow throughout the paper.

Definition 3.1 [6] Consider f ∈ F. f is said to be strict if f(false) = false; monotone if P ≤ Q ⇒ f(P) ≤ f(Q); disjunctive if f(∨_{α∈Λ} P_α) = ∨_{α∈Λ} f(P_α); and conjunctive if f(∧_{α∈Λ} P_α) = ∧_{α∈Λ} f(P_α) (Λ denotes an arbitrary indexing set, and we adopt the convention that the disjunction as well as the conjunction over the empty set is the predicate false, i.e. ∨_{α∈∅} P_α = ∧_{α∈∅} P_α = false).

It is easily shown that sp_σ(∨_{α∈Λ} P_α) = ∨_{α∈Λ} sp_σ(P_α), as {x ∈ X | ∃α ∈ Λ, ∃y ∈ X_{P_α} s.t. δ(y, σ) = x} = ∪_{α∈Λ} {x ∈ X | ∃y ∈ X_{P_α} s.t. δ(y, σ) = x}. Thus sp is disjunctive, and similarly it is easily verified that wlp is conjunctive.

Example 3.2 To illustrate that sp is disjunctive, consider the program of Example 2.8, and let [x + y ≤ 1] and [(x, y) = (0, 1)] be two predicates on the state space R². Then under

program G, sp([x + y ≤ 1]) = [x ≤ 1], and sp([(x, y) = (0, 1)]) = [(x, y) = (1, −1)]. Since [x + y ≤ 1] ∨ [(x, y) = (0, 1)] = [x + y ≤ 1], we have sp([x + y ≤ 1] ∨ [(x, y) = (0, 1)]) = sp([x + y ≤ 1]) = [x ≤ 1]. Notice that [x ≤ 1] ∨ [(x, y) = (1, −1)] = [x ≤ 1], as expected, for sp is disjunctive. wlp is conjunctive; to illustrate this, consider the two predicates [xy ≤ 10] and [x = 2] on the state space R². Then [xy ≤ 10] ∧ [x = 2] = [(x = 2) ∧ (y ≤ 5)]. Consider the program G of Example 2.8. Then under G, wlp([xy ≤ 10]) = [x² − y² ≤ 10], wlp([x = 2]) = [x + y = 2], and wlp([(x = 2) ∧ (y ≤ 5)]) = [(x + y = 2) ∧ (x − y ≤ 5)]. Note that [x² − y² ≤ 10] ∧ [x + y = 2] = [(x + y = 2) ∧ (x − y ≤ 5)], as expected, for wlp is conjunctive.
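On a finite machine, the disjunctivity of sp and the conjunctivity of wlp can be checked exhaustively over all pairs of predicates. The three-state, single-event machine below is a toy of our own:

```python
from itertools import combinations

# Exhaustive check that sp is disjunctive and wlp is conjunctive
# (Definition 3.1) on a tiny single-event machine.
X = {0, 1, 2}
delta = {(0, 'a'): 1, (1, 'a'): 1}        # 'a' is undefined at state 2

def sp(P):
    return {delta[(x, 'a')] for x in P if (x, 'a') in delta}

def wlp(P):
    return {x for x in X if (x, 'a') not in delta or delta[(x, 'a')] in P}

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

checks = [(sp(P | Q) == sp(P) | sp(Q), wlp(P & Q) == wlp(P) & wlp(Q))
          for P in subsets(X) for Q in subsets(X)]
all_ok = all(d and c for d, c in checks)
```

All 64 pairs of predicates over the three-state set satisfy both identities, in line with the general argument given above.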

Lemma 3.3 [6] Consider f ∈ F.
1. If f is disjunctive, then f is strict and monotone.
2. If f is conjunctive, then f is strict and monotone.

Proof: We prove the first part of Lemma 3.3; the proof of the second part is obtained in a similar manner. Let P, Q ∈ P be arbitrary.
1. f(∨_{α∈Λ} P_α) = ∨_{α∈Λ} f(P_α)   ; f is disjunctive
2. f(∨_{α∈∅} P_α) = ∨_{α∈∅} f(P_α)   ; replace Λ by ∅ in 1
3. f(false) = false   ; from 2, using ∨_{α∈∅} P_α = false
4. f is strict   ; from 3
5. P ≤ Q ⇒ P ∨ Q = Q   ; by definition of ≤
6. P ≤ Q ⇒ f(P ∨ Q) = f(Q)   ; apply f to 5
7. P ≤ Q ⇒ f(P) ∨ f(Q) = f(Q)   ; f is disjunctive
8. P ≤ Q ⇒ f(P) ≤ f(Q)   ; follows from 7 and the definition of ≤
9. f is monotone   ; from 8
This completes the proof. □

We define the following operations on f ∈ F (we have already defined negation, disjunction and conjunction above):

Definition 3.4 The conjugate of f, written f~, is defined to be ¬f¬, i.e. for P ∈ P, f~(P) = ¬(f(¬P)); the disjunctive closure of f, written f*, is defined to be ∨_{n≥0} f^n; and the conjunctive closure of f, written f_*, is defined to be ∧_{n≥0} f^n, where f^0 is defined to be the identity predicate transformer.

Thus sp~(P) characterizes the predicate which holds in states that cannot be reached by a single transition from a state where ¬P holds, i.e. in states that either have no transitions leading into them or can be reached by a single transition only from those states where P holds. sp*(P) denotes the predicate which holds in those states that can be reached by any number of transitions from a state where P holds. Thus sp* is useful in characterizing the set of reachable states. wlp(P) characterizes the predicate which holds in states from which only a state where P holds can be reached by a single transition. wlp_*(P) characterizes the weakest predicate stronger than P that is closed under the executions of G, i.e. X_{wlp_*(P)} is the supremal invariant [13, 16] subset of X_P.
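On a finite machine, the closures of Definition 3.4 can be computed by fixpoint iteration. The sketch below, on a toy machine of our own, computes sp* as reachability and wlp_* as the supremal invariant subset:

```python
# Closures of Definition 3.4 on a small finite machine:
# sp*(P) = states reachable from P; wlp_*(P) = supremal invariant inside P.
X = {0, 1, 2, 3}
delta = {(0, 'a'): 1, (1, 'b'): 2, (2, 'a'): 1, (3, 'b'): 0}

def sp(P):
    return {delta[(x, s)] for x in P for s in ('a', 'b') if (x, s) in delta}

def wlp(P):
    return {x for x in X
            if all(delta[(x, s)] in P for s in ('a', 'b') if (x, s) in delta)}

def closure(f, P, join):
    """Iterate Q := join(Q, f(Q)) to a fixpoint (finite X guarantees one)."""
    Q = set(P)
    while True:
        Q2 = join(Q, f(Q))
        if Q2 == Q:
            return Q
        Q = Q2

sp_star = lambda P: closure(sp, P, set.union)            # OR over n >= 0
wlp_star = lambda P: closure(wlp, P, set.intersection)   # AND over n >= 0
```

Here sp_star({0}) = {0, 1, 2}, the states reachable from 0 (state 3 is unreachable), and wlp_star({1, 2, 3}) = {1, 2}: state 3 must be pruned because its only transition leaves the set, and {1, 2} is closed under all transitions.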

Lemma 3.5 Let f ∈ F be monotone; then
1. f(P) ≤ P ⇔ f*(P) ≤ P;
2. P ≤ f(P) ⇔ P ≤ f_*(P).

Proof: We only prove the first part of Lemma 3.5; the proof of the second part is derived in an analogous way. Note that the reverse implication is obvious; we use induction on the exponent n in the definition of f* to prove the forward implication.
1. f*(P) ≤ P ⇒ f(P) ≤ P   ; by definition of f*
2. f^0(P) ≤ P ⇔ true   ; by definition of f^0
3. f(P) ≤ P ⇒ f^0(P) ≤ P   ; from 2 (base case for induction)
4. f(P) ≤ P ⇒ f^n(P) ≤ P   ; induction hypothesis
5. f(P) ≤ P ⇒ f(f^n(P)) ≤ f(P)   ; apply f to the RHS of 4, f monotone
6. f(P) ≤ P ⇒ f^{n+1}(P) ≤ P   ; simplifying 5
7. ∀i ≥ 0 : f(P) ≤ P ⇒ f^i(P) ≤ P   ; from 3, 6 and induction
8. f(P) ≤ P ⇒ f*(P) ≤ P   ; taking the disjunct with respect to i in 7
9. f(P) ≤ P ⇔ f*(P) ≤ P   ; from 1 and 8
This completes the proof. □

Since sp is disjunctive, it is also monotone (Lemma 3.3). Thus Lemma 3.5 applies to sp as well; the implication of part 1 is that if the set of states reached by a single transition is contained in the set of starting states, then so is the entire set of reachable states. The implication of the second part of Lemma 3.5 applied to wlp (wlp conjunctive implies wlp monotone, from Lemma 3.3) is that if the set of states from which only the states in a target state set can be reached in a single transition contains the set of target states, then so does the set of states from which only the target state set is reached in any number of transitions.

Lemma 3.6 [6] Consider f ∈ F.
1. If f is disjunctive, then so is f*.
2. If f is conjunctive, then so is f_*.

Proof: As above, we omit the proof of the second part, which can be obtained analogously to the proof of part 1 that we present next. It suffices to show that f^n is disjunctive for each n ∈ N, for if f^n is disjunctive for each n ∈ N, then f*(∨_{α∈Λ} P_α) = ∨_{n≥0} f^n(∨_{α∈Λ} P_α) = ∨_{n≥0} ∨_{α∈Λ} f^n(P_α) = ∨_{α∈Λ} ∨_{n≥0} f^n(P_α) = ∨_{α∈Λ} f*(P_α). Hence we prove the disjunctivity of f^n for each n ∈ N by induction on the exponent n in the definition of f*.
1. f^1 = f is disjunctive   ; by assumption (base case for induction)
2. f^{n+1} = f(f^n)   ; definition of the exponent
3. f^n(∨_{α∈Λ} P_α) = ∨_{α∈Λ} f^n(P_α)   ; induction hypothesis
4. f^{n+1}(∨_{α∈Λ} P_α) = f(∨_{α∈Λ} f^n(P_α))   ; using 2 and 3
5. f^{n+1}(∨_{α∈Λ} P_α) = ∨_{α∈Λ} f^{n+1}(P_α)   ; from 4 and f is disjunctive
Thus the proof is completed. □

Lemma 3.7 [6] If f is disjunctive (conjunctive), then f~ is conjunctive (disjunctive).

Proof: The proof is simple and based on application of De Morgan's law. □

It thus follows from Lemma 3.6 that sp* is disjunctive, and from Lemma 3.7 that sp~ and (sp*)~ are conjunctive. Similarly, since wlp is conjunctive, so is wlp_* (Lemma 3.6), and wlp~ and (wlp_*)~ are disjunctive (Lemma 3.7).

Next we quote a result regarding the existence of the extremal solution of a predicate equation, the proof of which can be found in [6]. A notation of the type Q : f(Q) ≤ h(Q), where f, h ∈ F and Q ∈ P, is used to denote a predicate equation in the variable predicate Q such that it satisfies f(Q) ≤ h(Q).

Theorem 3.8 [6] Consider the predicate equation Q : f(Q) ≤ h(Q).
1. If f is disjunctive and h is monotone, then the weakest solution of the above equation exists (and is unique).
2. If f is monotone and h is conjunctive, then the strongest solution of the above equation exists (and is unique).

An immediate consequence of Theorem 3.8 is the following corollary:

Corollary 3.9 Let f be disjunctive and P ∈ P be arbitrary; then
1. the weakest solution of the equation Q : f(Q) ≤ P exists;
2. the weakest solution of the equation Q : f(Q) ≤ Q exists.

Proof: 1. Follows from the fact that P, treated as a constant predicate transformer, is monotone. 2. Follows from the fact that the identity function is monotone. □

Next we show that there exists a strong relationship between the set of disjunctive and the set of conjunctive predicate transformers. A result of a similar nature is obtained in [6, p. 202, Theorem 1] regarding converses of predicates.

Theorem 3.10 Consider f, g ∈ F. Let f be disjunctive and g be conjunctive; then the following are equivalent:
C-1. g(P) is the weakest solution of Q : f(Q) ≤ P for all P ∈ P.
C-2. (f(g(P)) ≤ P) ∧ (P ≤ g(f(P))) for all P ∈ P.
C-3. f(P) ≤ Q ⇔ P ≤ g(Q) for all P, Q ∈ P.
C-4. f(P) is the strongest solution of Q : P ≤ g(Q) for all P ∈ P.
Proof: Refer to Appendix A. □

The weakest solution of Q : f(Q) ≤ P depends both on f and P. Let it be denoted by f^d(P). Note that f^d ∈ F. We define it to be the dual of f. Formally:

Definition 3.11 Let f^d(·) ∈ F be the weakest solution of Q : f(Q) ≤ (·), where f ∈ F is disjunctive. Then f^d is called the dual of f.

Lemma 3.12 If f is disjunctive, then its dual f^d is conjunctive.

Proof: Consider the equation Q : f(Q) ≤ ∧_{α∈Λ} P_α. Then by definition f^d(∧_{α∈Λ} P_α) is the weakest solution of this equation. We show that ∧_{α∈Λ}(f^d(P_α)) is also the weakest solution; hence from the uniqueness of the solution it follows that f^d is conjunctive. First we show that ∧_{α∈Λ}(f^d(P_α)) is a solution.
1. ∀α ∈ Λ : f(f^d(P_α)) ≤ P_α   ; f^d(P_α) is a solution of Q : f(Q) ≤ P_α
2. ∀α ∈ Λ : ∧_{β∈Λ}(f^d(P_β)) ≤ f^d(P_α)   ; definition of conjunction
3. ∀α ∈ Λ : f(∧_{β∈Λ}(f^d(P_β))) ≤ f(f^d(P_α))   ; apply f to 2, f monotone (Lemma 3.3)
4. ∀α ∈ Λ : f(∧_{β∈Λ}(f^d(P_β))) ≤ P_α   ; from 1 and 3
5. f(∧_{β∈Λ}(f^d(P_β))) ≤ ∧_{α∈Λ} P_α   ; taking the conjunct with respect to α in 4
6. ∧_{α∈Λ}(f^d(P_α)) is a solution   ; from 5
Next we show that ∧_{α∈Λ}(f^d(P_α)) is also the weakest solution. Let R be another solution of Q : f(Q) ≤ ∧_{α∈Λ} P_α. Then we need to show that R ≤ ∧_{α∈Λ}(f^d(P_α)).
1. f(R) ≤ ∧_{α∈Λ} P_α   ; by assumption
2. ∀α ∈ Λ : f(R) ≤ P_α   ; from 1
3. ∀α ∈ Λ : R is a solution of Q : f(Q) ≤ P_α   ; from 2
4. ∀α ∈ Λ : f^d(P_α) is the weakest solution of Q : f(Q) ≤ P_α   ; by definition of f^d
5. ∀α ∈ Λ : R ≤ f^d(P_α)   ; from 3 and 4
6. R ≤ ∧_{α∈Λ}(f^d(P_α))   ; taking the conjunct with respect to α in 5
7. ∧_{α∈Λ}(f^d(P_α)) is the weakest solution   ; from 6
This proves that f^d is conjunctive. □

An immediate consequence of Theorem 3.10 and Lemma 3.12 is the following corollary:

Corollary 3.13 Let f ∈ F be disjunctive. Then the strongest solution of the equation Q : P ≤ f^d(Q) exists and is given by f(P).

Proof: From Lemma 3.12 we have that f^d is conjunctive. Since P as a constant predicate transformer is monotone, the strongest solution of Q : P ≤ f^d(Q) exists (Theorem 3.8). That f(P) is the strongest solution follows by substituting f^d for g in Theorem 3.10. □

Remark 3.14 The result of Corollary 3.13 justifies the term dual for the functions f and f^d. Note that C-1 is used to define the dual of a disjunctive predicate transformer. Since Theorem 3.10 shows the equivalence of C-1, C-2, C-3 and C-4, any one of them can be used to equivalently define the dual predicate transformer.

Next we prove a result that is interesting from the control perspective.

Theorem 3.15 wlp and sp are duals of each other.

Proof: Note that sp is disjunctive and wlp is conjunctive; hence duality is well defined in this context. In order to show duality we need to show that wlp and sp satisfy any one of the conditions C-1 through C-4 (refer to Theorem 3.10 and Remark 3.14). We show that C-2 holds. Firstly, it follows from the definitions of sp and wlp that sp(wlp(P)) ≤ P for any P ∈ P; this is true because sp(wlp(P)) holds only in those states of X_P which have at least one transition leading into them. Secondly, it again follows from the definitions of sp and wlp that P ≤ wlp(sp(P)) for any P ∈ P; this is true because wlp(sp(P)) holds in all states where either P holds or no transitions lead out of them. Thus both conjuncts of C-2 hold, which proves the duality of wlp and sp. □
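The duality of sp and wlp can also be observed computationally via condition C-3: sp(P) ≤ Q holds exactly when P ≤ wlp(Q). The sketch below checks this Galois connection exhaustively on a small two-event machine of our own:

```python
from itertools import combinations

# Exhaustive check of condition C-3 characterizing duality:
# sp(P) <= Q  iff  P <= wlp(Q), on a small two-event toy machine.
X = {0, 1, 2}
Sigma = {'a', 'b'}
delta = {(0, 'a'): 1, (1, 'a'): 2, (1, 'b'): 0}

def sp(P):
    """Union over events of the one-step image of P."""
    return {delta[(x, s)] for x in P for s in Sigma if (x, s) in delta}

def wlp(Q):
    """States whose every defined transition leads into Q."""
    return {x for x in X
            if all(delta[(x, s)] in Q for s in Sigma if (x, s) in delta)}

def subsets(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

duality_ok = all((sp(P) <= Q) == (P <= wlp(Q))
                 for P in subsets(X) for Q in subsets(X))
```

All 64 pairs (P, Q) satisfy the equivalence, as Theorem 3.15 guarantees.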

4 Predicate Transformers and Supervisory Control
In the previous section we described the conditions under which extremal solutions of various boolean equations exist, and introduced the notion of duality of predicate transformers, which is one of the key concepts relating the extremal solutions of those boolean equations. We now show how these concepts can be used in synthesizing static supervisors [16, 13] for a given DEDS. Let G := (P_X, Σ, sp, I) be a plant as described in Definition 2.9. Let R ∈ P denote the required behavioral constraint on G. In other words, the control task is to design a static controller S : X → 2^Σ such that as the closed loop system evolves, it visits only and all those states where R holds. Formally:

Supervisory Predicate Control Problem (SPCP): Construct a static controller S : X → 2^Σ for the plant G := (P_X, Σ, sp, I) such that sp*_S(I) = R.

The SPCP requires that the state trajectories (in the controlled system G_S) starting from a state where the initial predicate I holds remain confined to the set of states where the required predicate R holds, and visit all the states where R holds. A special case of this problem was first treated in [22], where the control task was to synthesize a static supervisor S such that sp*_S(I) ≤ R, i.e. the states visited under closed loop control are confined to R. Thus R in [22] represents a predicate that remains invariant under control. The required predicate R considered in this paper represents the weakest predicate that remains invariant under control (i.e. if S solves the SPCP, then no predicate weaker than R is invariant under S). It is clear that the SPCP is solvable only if the set of initial states is contained in the set of states where R holds. Hence, in order to allow a nontrivial solution of the SPCP, we assume that this condition is satisfied, which we state as assumption A-1 below:

A-1. I ≤ R.

Next we define a few notions that play a central role in supervisory control of DEDS.
Definition 4.1 R is said to be invariant if sp(R) ⪯ R. R is said to be Σ_u-invariant if sp_u(R) ⪯ R. R is said to be control-invariant if there exists a static controller S : X → 2^Σ such that sp_S(R) ⪯ R.

Thus if R is an invariant predicate and the system starts from a state where R holds, then as it evolves it visits only states where R holds. If R is Σ_u-invariant and the system starts in a state where R holds, then under the execution of any uncontrollable event it remains in a state where R holds. If R is control-invariant, then there exists a static supervisor S such that R is invariant in the controlled system G_S. Note that all the above notions are defined with respect to the plant G, for they depend on the plant transition function sp. It also follows, in view of C-3 of Theorem 3.10, that R being invariant, Σ_u-invariant, or control-invariant is equivalent to R ⪯ wlp(R), R ⪯ wlp_u(R), or R ⪯ wlp_S(R), respectively. These are the same as the definitions given in [22]. Hence it follows, in view of Proposition 7.1 of [22], that R is control-invariant if and only if R is Σ_u-invariant.
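On a finite-state fragment these notions can be checked directly. The following Python sketch (the helper names and the toy transition relation are ours, not the paper's) represents a predicate by the set of states where it holds and sp by image computation under the transition function:

```python
# Hedged sketch: predicates as sets of states, sp as the image of the
# transition relation. All names here are illustrative, not from the paper.

def sp(trans, events, P):
    """Strongest postcondition: states reachable from P in one step
    via any event in `events`. `trans` maps (state, event) -> state."""
    return {trans[(x, e)] for x in P for e in events if (x, e) in trans}

def is_invariant(trans, events, P):
    # P is invariant iff sp(P) is stronger than (contained in) P.
    return sp(trans, events, P) <= P

# Tiny two-state plant: event 'u' is uncontrollable, 'c' controllable.
trans = {(0, 'u'): 1, (1, 'c'): 0}
R = {0, 1}
assert is_invariant(trans, {'u', 'c'}, R)    # the whole space is invariant
assert not is_invariant(trans, {'u'}, {0})   # {0} is not Σu-invariant
```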

These notions of invariance introduced in [22] are useful in characterizing those sets of states which are closed under the system execution: if the system starts in a state of an invariant set, then under all executions the state of the system remains in that invariant set. However, it may well be that the system never visits some states in that invariant set. Thus the notion of invariance alone is not enough for addressing the SPCP. The notion of controllability of predicates (a notion stronger than that of invariance) was introduced in [19] for addressing this problem. We present an equivalent but slightly different definition of controllability and show that controllability is a necessary and sufficient condition for a solution of the SPCP to exist. In our opinion, our definition is more compact and uses a more convenient notation that results in simpler proofs and supervisory synthesis techniques.

Definition 4.2 Given f ∈ F and P ∈ P, the restriction of f to P, denoted f|P, is the predicate transformer defined as (f|P)(Q) = f(P ∧ Q) ∧ P for each Q ∈ P.

Next we prove a useful property of the restriction operator. We say f ∈ F is weakening if P ⪯ f(P) for every P ∈ P.

Lemma 4.3 Let f ∈ F be monotone and weakening, and let P, Q ∈ P be arbitrary. Then (f|((f|P)(Q)))(Q) = (f|P)(Q).

Proof: Let R = (f|P)(Q). Then we need to show that (f|R)(Q) = R.
1. R = f(P ∧ Q) ∧ P                                ;definition of restriction
2. (f|R)(Q) = f(f(P ∧ Q) ∧ P ∧ Q) ∧ f(P ∧ Q) ∧ P   ;definition of f|(·) and 1
3. P ∧ Q ⪯ f(P ∧ Q)                                ;f is weakening
4. P ∧ Q ⪯ f(P ∧ Q) ∧ P ∧ Q                        ;conjoin P ∧ Q in 3
5. f(P ∧ Q) ⪯ f(f(P ∧ Q) ∧ P ∧ Q)                  ;apply f in 4, f monotone
6. (f|R)(Q) = f(P ∧ Q) ∧ P                         ;from 2 and 5
7. (f|R)(Q) = R                                    ;from 6 and definition of R

This completes the proof. □

Thus it follows from Lemma 4.3 that the restriction of a monotone and weakening predicate transformer, i.e. one whose application yields an image predicate weaker than its preimage predicate, exhibits a nice "invariance" property. For example, the disjunctive closure of any predicate transformer is weakening as well as monotone (Lemmas 3.3 and 3.5), and thus exhibits such a property.

Definition 4.4 R is said to be controllable with respect to G if
1. sp_u(R) ⪯ R, and
2. R = (sp|R)*(I).

Note that it follows from the definitions of restriction and disjunctive closure that the ordering (sp|R)*(I) ⪯ R always holds. Thus the second condition in the definition of controllability is equivalent to R ⪯ (sp|R)*(I). We use these two equivalent forms of the definition interchangeably. In the next theorem we present a solution to the SPCP; the simplicity of the proof obtained by using the theory of predicate transformers and a more compact definition of controllability is easily seen.
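On a finite model, (sp|R)*(I) is simply reachability restricted to X_R, so controllability can be tested by a fixpoint computation. A hedged Python sketch follows (the toy model and all helper names are ours):

```python
# Hedged sketch of Definition 4.4 on a finite model (illustrative names).
# (sp|R)*(I): states reachable from I while never leaving R.

def restricted_reach(trans, events, R, I):
    reach = set(I) & set(R)          # the restriction keeps us inside R
    frontier = set(reach)
    while frontier:
        nxt = {trans[(x, e)] for x in frontier for e in events
               if (x, e) in trans} & set(R)
        frontier = nxt - reach
        reach |= frontier
    return reach

def is_controllable(trans, events, unctrl, R, I):
    # Condition 1: uncontrollable successors of R stay in R.
    spu_R = {trans[(x, e)] for x in R for e in unctrl if (x, e) in trans}
    # Condition 2: R equals the R-restricted reachable set from I.
    return spu_R <= set(R) and restricted_reach(trans, events, R, I) == set(R)

trans = {(0, 'c'): 1, (1, 'u'): 2, (2, 'c'): 0}
# R = {0, 1, 2} is controllable from I = {0}; R = {0, 1} is not ('u' exits it).
assert is_controllable(trans, {'c', 'u'}, {'u'}, {0, 1, 2}, {0})
assert not is_controllable(trans, {'c', 'u'}, {'u'}, {0, 1}, {0})
```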

Theorem 4.5 A solution to the SPCP exists if and only if R is controllable with respect to G.

Proof: First assume that R is controllable; we show that there exists a controller S : X → 2^Σ such that sp_S*(I) = R. Consider the controller defined as: for each x ∈ X_R, σ ∈ S(x) ⟺ δ(x, σ) ∈ X_R. Since R is Σ_u-invariant, the events disabled by S are all controllable (S never disables any uncontrollable event). Also, note that the strongest postcondition predicate transformer sp_u corresponds to the maximally restrictive control law, the one that disables all the controllable events.
1. sp_u(R) ⪯ R          ;by assumption (R is controllable)
2. sp|R = sp_S          ;from 1, definitions of S and sp|R, and A-1
3. R = (sp|R)*(I)       ;by assumption (R is controllable)
4. R = sp_S*(I)         ;from 2 and 3

Next we show that if there exists a controller S such that sp_S*(I) = R, then R is controllable.
1. sp_S*(I) = R               ;by assumption
2. sp_S*(I) = sp_S*(R)        ;apply sp_S* to 1
3. sp_S*(R) = R               ;from 1 and 2
4. sp_u*(R) ⪯ sp_S*(R)        ;sp_u: most restrictive control
5. sp_u*(R) ⪯ R               ;from 3 and 4
6. sp_u(R) ⪯ R                ;from 5 and Lemma 3.5
7. (sp_S|R)*(I) = R           ;from 1 and Lemma 4.3
8. (sp_S|R)*(I) ⪯ (sp|R)*(I)  ;S restricts behavior
9. R ⪯ (sp|R)*(I)             ;from 7 and 8
10. R is controllable         ;from 6 and 9
This completes the proof of Theorem 4.5. □

Example 4.6 Consider the problem of mutual exclusion for the Readers-Writers program of Example 2.10. The mutual exclusion constraint requires that the number of writers accessing the database never exceed one, and that a reader access the database only when no writer is accessing it. Thus the mutual exclusion constraint can be written as the required predicate R = (nw = 0) ∨ (nw = 1 ∧ nr = 0). Let Σ_u = {end_rd, end_wt}. Then it is easily verified that R is controllable, namely, sp_u(R) ⪯ R and (sp|R)*(I) = R, as described below. First consider the event end_rd:

nr := nr - 1 if nr > 0 : end_rd

Comparing the above statement with the standard form

x := F(x) if C(x) : σ

we obtain F(x) = x - 1, i.e. F^{-1}(x) = x + 1, and C(x) = [x > 0]. Thus

sp_end_rd(R(nr)) = R(F^{-1}(nr)) ∧ C(F^{-1}(nr))
= [(nw = 0) ∨ (nw = 1 ∧ nr + 1 = 0)] ∧ [nr + 1 > 0]
= [(nw = 0) ∧ (nr + 1 > 0)] ∨ [(nw = 1) ∧ (nr + 1 = 0) ∧ (nr + 1 > 0)]
= [(nw = 0) ∧ true] ∨ false
= (nw = 0)
⪯ R.

Next consider the event end_wt:

nw := nw - 1 if nw > 0 : end_wt

Thus F(x) = x - 1 and C(x) = [x > 0] as before. Hence

sp_end_wt(R(nw)) = R(F^{-1}(nw)) ∧ C(F^{-1}(nw))
= [(nw + 1 = 0) ∨ (nw + 1 = 1 ∧ nr = 0)] ∧ [nw + 1 > 0]
= [nw + 1 = 0 ∧ nw + 1 > 0] ∨ [nw + 1 = 1 ∧ nr = 0 ∧ nw + 1 > 0]
= false ∨ [nw = 0 ∧ nr = 0]
= [nw = 0 ∧ nr = 0]
⪯ R.
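These substitution computations can be mechanized. A minimal Python sketch, with predicates represented as boolean functions on states (the helper names are ours; F is assumed invertible, as in the example):

```python
# Hedged sketch: sp and wlp of a guarded assignment  x := F(x) if C(x) : e,
# with predicates as boolean functions on states x = (nr, nw).
# Helper names are illustrative, not from the paper.

def sp_event(R, F_inv, C):
    # sp_e(R)(x) = R(F^{-1}(x)) and C(F^{-1}(x)), assuming F is invertible
    return lambda x: R(F_inv(x)) and C(F_inv(x))

def wlp_event(R, F, C):
    # wlp_e(R)(x) = (R(F(x)) and C(x)) or not C(x)
    return lambda x: (R(F(x)) and C(x)) or not C(x)

R = lambda s: s[1] == 0 or (s[1] == 1 and s[0] == 0)   # mutual exclusion

# end_rd: nr := nr - 1 if nr > 0, so F(x) = x - 1, F^{-1}(x) = x + 1, C = [x > 0]
F     = lambda s: (s[0] - 1, s[1])
F_inv = lambda s: (s[0] + 1, s[1])
C     = lambda s: s[0] > 0

post = sp_event(R, F_inv, C)
# The derivation above gives sp_end_rd(R) = (nw = 0); check on a small grid:
assert all(post((nr, nw)) == (nw == 0) for nr in range(4) for nw in range(4))
```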

Combining the derivations for end_rd and end_wt, sp_u(R) = (nw = 0) ∨ (nw = 0 ∧ nr = 0) = (nw = 0) ⪯ R. Next, in order to verify (sp|R)*(I) = R, it can easily be shown by induction on n ∈ N that (sp|R)^n(I) = (nw = 0 ∧ nr ≤ n) ∨ (nw = 1 ∧ nr = 0). Hence

(sp|R)*(I) = ∨_{n≥0} [(nw = 0 ∧ nr ≤ n) ∨ (nw = 1 ∧ nr = 0)]
= (nw = 0 ∧ true) ∨ (nw = 1 ∧ nr = 0)
= (nw = 0) ∨ (nw = 1 ∧ nr = 0)
= R.

It then follows that R is controllable and hence the SPCP is solvable. The supervisor S can be computed as follows. For each controllable event σ ∈ (Σ - Σ_u) of the type

x := F(x) if C(x) : σ,

the predicate on which the event is disabled by S is computed as C(x) ∧ R(x) ∧ wlp_σ(¬R(x)). This is the weakest predicate stronger than R where σ is enabled (C(x) holds) and from which a state in X_{¬R} (a state where R does not hold) is reachable by a single execution of σ. First consider the event st_rd; S disables st_rd on the predicate:

[true ∧ ((nw = 0) ∨ (nw = 1 ∧ nr = 0))] ∧ wlp_st_rd([nw > 1] ∨ [nw = 1 ∧ nr > 0])
= [(nw = 0) ∨ (nw = 1 ∧ nr = 0)] ∧ [(nw > 1) ∨ (nw = 1 ∧ nr + 1 > 0)]
= [(nw = 0) ∨ (nw = 1 ∧ nr = 0)] ∧ (nw ≥ 1)
= (nw = 1 ∧ nr = 0).

Thus S disables st_rd on [nw = 1 ∧ nr = 0]. The predicate on which S disables the other controllable event st_wt can be computed as:

[true ∧ ((nw = 0) ∨ (nw = 1 ∧ nr = 0))] ∧ wlp_st_wt((nw > 1) ∨ (nw = 1 ∧ nr > 0))
= [(nw = 0) ∨ (nw = 1 ∧ nr = 0)] ∧ [(nw + 1 > 1) ∨ (nw + 1 = 1 ∧ nr > 0)]
= [(nw = 0) ∨ (nw = 1 ∧ nr = 0)] ∧ [(nw > 0) ∨ (nw = 0 ∧ nr > 0)]

Expanding the conjunction in the last expression, we obtain:

= [nw = 0 ∧ nw > 0] ∨ [nw = 0 ∧ (nw = 0 ∧ nr > 0)] ∨ [(nw = 1 ∧ nr = 0) ∧ nw > 0] ∨ [(nw = 1 ∧ nr = 0) ∧ (nw = 0 ∧ nr > 0)]
= false ∨ (nw = 0 ∧ nr > 0) ∨ (nw = 1 ∧ nr = 0) ∨ false
= (nw = 0 ∧ nr > 0) ∨ (nw = 1 ∧ nr = 0).

Thus S disables st_wt on (nw = 0 ∧ nr > 0) ∨ (nw = 1 ∧ nr = 0). It is thus clear that at states where (nw = 1 ∧ nr = 0) holds both controllable events are disabled by S, and at states where (nw = 0 ∧ nr > 0) holds only st_wt is disabled. Thus S : X → 2^Σ is given by (x denotes an arbitrary element of X = N^2, i.e. x = (nr, nw)):

S(x) = Σ(x) - {st_wt, st_rd}  if (nw = 1 ∧ nr = 0)
     = Σ(x) - {st_wt}         if (nw = 0 ∧ nr > 0)
     = Σ(x)                   otherwise

where Σ(x) = {σ ∈ Σ | δ(x, σ) is defined}.
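As a sanity check of the supervisor derived above, the closed loop can be simulated on a bounded fragment of the infinite state space. A hedged sketch follows; the bound N and all helper names are our additions, while the event names and the disabling rules follow the example:

```python
# Hedged sanity check: simulate the supervised Readers-Writers plant of
# Example 4.6 on a bounded fragment of the infinite state space.

N = 3
R = lambda nr, nw: nw == 0 or (nw == 1 and nr == 0)

def enabled(nr, nw):
    evs = set()
    if nr < N: evs.add('st_rd')     # bounded stand-in for the unbounded plant
    if nr > 0: evs.add('end_rd')
    if nw < N: evs.add('st_wt')
    if nw > 0: evs.add('end_wt')
    # Supervisor S computed in the example:
    if nw == 1 and nr == 0: evs -= {'st_wt', 'st_rd'}
    if nw == 0 and nr > 0:  evs -= {'st_wt'}
    return evs

step = {'st_rd': (1, 0), 'end_rd': (-1, 0), 'st_wt': (0, 1), 'end_wt': (0, -1)}

reach, frontier = {(0, 0)}, {(0, 0)}        # I: (nr, nw) = (0, 0)
while frontier:
    nxt = {(nr + d1, nw + d2) for (nr, nw) in frontier
           for (d1, d2) in (step[e] for e in enabled(nr, nw))}
    frontier = nxt - reach
    reach |= frontier

assert all(R(nr, nw) for (nr, nw) in reach)                  # only R-states
assert reach == {(nr, 0) for nr in range(N + 1)} | {(0, 1)}  # and, within the
                                                             # bound, all of them
```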

Remark 4.7 This example illustrates that techniques based on predicates and predicate transformers can be used for solving the supervisory control problem in an infinite state space setting. The computational complexity of computing the supervisor is linear in the number of variables used and the number of conditional assignment statements in the program description of the plant, and does not depend on the actual number of states and transitions in the system, which may be very large, possibly infinite. Another advantage of using the theory based on predicate transformers is that it provides an automated technique for synthesizing supervisors, as illustrated by the above example.

4.1 Minimally Restrictive Supervisors for Predicate Control

It follows from Theorem 4.5 that if the required predicate R is controllable, i.e. satisfies the Σ_u-invariance constraint sp_u(R) ⪯ R and the reachability constraint R ⪯ (sp|R)*(I), then the following control law solves the SPCP: for each x ∈ X_R, σ ∈ S(x) ⟺ δ(x, σ) ∈ X_R. In this subsection we address the problem of supervisory synthesis when the required predicate R is not controllable. This problem was not addressed in [19]. If R is not controllable, then no supervisor solves the SPCP. In the next theorem we prove that in such a situation a minimally restrictive supervisor can be constructed.

Theorem 4.8 The weakest controllable predicate stronger than a given predicate exists (and is unique).

Proof: Let R ∈ P be the required predicate. Assume that R is not controllable; then at least one of the following two conditions fails:
1. sp_u(R) ⪯ R
2. R ⪯ (sp|R)*(I)
We show that the weakest solution to the following set of equations exists:

E-1. Q : sp_u(Q) ⪯ Q
E-2. Q : Q ⪯ (sp|Q)*(I)
E-3. Q : Q ⪯ R

Equations E-1 and E-2 correspond to conditions 1 and 2, respectively; equation E-3 requires that the weakest solution be stronger than R. Consider the first equation: since sp_u is disjunctive and Q, as an identity function, is monotone, it follows from Theorem 3.8 that its weakest solution exists. Similarly consider the second equation: since Q as an identity function is disjunctive and (sp|Q)*(I), as a function of Q, is monotone, the second equation also has a weakest solution. Also, since R as a constant function is monotone and Q as an identity function is disjunctive, the third equation possesses a weakest solution as well. Since the weakest solutions of all the equations E-1 through E-3 exist, the weakest solution of the above set of equations exists, and by definition it is unique. More formally, let Λ be an indexing set such that for each λ ∈ Λ, Q_λ satisfies the above set of equations. Then, as explained above, ∨_{λ∈Λ} Q_λ satisfies each of the equations E-1 through E-3 individually. Thus ∨_{λ∈Λ} Q_λ is the weakest solution of the above set of equations. □

We will use R↑ to denote the weakest solution of the above set of equations (so R↑ ⪯ R); thus R↑ denotes the weakest controllable predicate stronger than R. If R↑ also satisfies A-1, i.e. if I ⪯ R↑, then since R↑ is controllable, it follows from Theorem 4.5 that there exists a static control law S such that sp_S*(I) = R↑. A supervisor exercising such a control law is called the minimally restrictive supervisor. However, R↑ may not satisfy A-1, i.e. it is possible that I ⋠ R↑, in which case the minimally restrictive supervisor does not exist.

4.2 Computation of R↑

In Theorem 4.8 we proved the existence of the weakest controllable predicate R↑, i.e. the weakest solution of equations E-1, E-2 and E-3 stronger than R. Now we present a method for computing it. We proceed by first proving a few lemmas. First we note that the weakest solution of E-2 stronger than any predicate P ∈ P exists. This follows easily from 1 and 2 below:
1. E-2 has a weakest solution (follows from Theorem 4.8), and
2. the equation Q : Q ⪯ P has a weakest solution, for Q as an identity predicate transformer is monotone and P as a constant predicate transformer is disjunctive.

Lemma 4.9 If P ∈ P is a solution of E-1 and E-3 and P′ is the weakest solution of E-2 stronger than P, then P′ is also a solution of E-1 and E-3.

Proof: We first show that P′ is a solution of E-3.
1. P′ ⪯ P                     ;by assumption
2. P ⪯ R                      ;P is a solution of E-3
3. P′ ⪯ R                     ;from 1 and 2
4. P′ is a solution of E-3    ;from 3
Next we show that P′ is also a solution of E-1.
1. sp_u(P) ⪯ P                                 ;P is a solution of E-1
2. P′ ⪯ P                                      ;by assumption
3. sp_u(P′) ⪯ sp_u(P)                          ;apply sp_u to 2, sp_u monotone
4. sp_u(P′) ⪯ P                                ;from 1 and 3
5. sp_u(P′) ∨ P′ ⪯ P                           ;from 2 and 4
6. P′ ⪯ (sp|P′)*(I)                            ;P′ is a solution of E-2
7. P′ ⪯ (sp|(sp_u(P′) ∨ P′))*(I)               ;by weakening the RHS of 6
8. sp_u(P′) ⪯ (sp|(sp_u(P′) ∨ P′))*(I)         ;from 6 and definition of sp|(·)
9. sp_u(P′) ∨ P′ ⪯ (sp|(sp_u(P′) ∨ P′))*(I)    ;taking the disjunct of 7 and 8
10. sp_u(P′) ∨ P′ is a solution of E-2 stronger than P   ;from 9 and 5
11. sp_u(P′) ∨ P′ ⪯ P′                         ;from 10 and definition of P′
12. sp_u(P′) ⪯ P′                              ;simplifying 11
13. P′ is a solution of E-1                    ;from 12
This completes the proof. □

Thus the weakest solution of E-2 stronger than any solution of E-1 and E-3 is also a solution of E-1 and E-3, and hence a solution of all three equations.

Lemma 4.10 Let P be the weakest solution of E-1 and E-3 and let P′ be the weakest solution of E-2 stronger than P. Then P′ is the weakest solution of E-1 through E-3.

Proof: It follows from Lemma 4.9 that P′ is a solution of E-1 through E-3. We need to show that it is also the weakest such solution, i.e. P′ = R↑.
1. P′ ⪯ P     ;by assumption
2. P′ ⪯ R↑    ;P′ a solution of E-1, E-2, E-3 and R↑ the weakest such solution
3. R↑ ⪯ P     ;R↑ a solution of E-1, E-3 and P the weakest such solution
4. R↑ ⪯ P′    ;R↑ a solution of E-2 stronger than P (from 3) and P′ the weakest such solution (from 1)
5. R↑ = P′    ;from 2 and 4
Thus the proof is completed. □

It follows from Lemma 4.10 that one way to compute R↑ is to compute the weakest solution of E-2 stronger than the weakest solution of E-1 and E-3.

Theorem 4.11 The weakest solution of E-1 and E-3 is (wlp_u)*(R).

Proof: Note that by definition, (wlp_u)*(R) = ∧_{n≥0} (wlp_u)^n(R). We first show that (wlp_u)*(R) is a solution of E-1.
1. ∧_{n≥0} (wlp_u)^n(R) ⪯ ∧_{n≥1} (wlp_u)^n(R)   ;trivially
2. (wlp_u)*(R) ⪯ wlp_u(∧_{n≥0} (wlp_u)^n(R))     ;rewriting LHS and RHS of 1
3. (wlp_u)*(R) ⪯ wlp_u((wlp_u)*(R))              ;rewriting RHS of 2
4. sp_u((wlp_u)*(R)) ⪯ (wlp_u)*(R)               ;from 3 and C-3
5. (wlp_u)*(R) is a solution of E-1              ;from 4

Next we show that (wlp_u)*(R) is a solution of E-3.
1. ∧_{n≥0} (wlp_u)^n(R) ⪯ ∧_{n=0} (wlp_u)^n(R)   ;trivially
2. (wlp_u)*(R) ⪯ R                               ;rewriting LHS and simplifying RHS of 1
3. (wlp_u)*(R) is a solution of E-3              ;from 2

Next we prove that (wlp_u)*(R) is the weakest solution of E-1 and E-3. Assume P is also a solution of E-1 and E-3.
1. sp_u(P) ⪯ P                          ;P is a solution of E-1
2. P ⪯ wlp_u(P)                         ;from 1 and C-3
3. P ⪯ (wlp_u)*(P)                      ;from 2 and Lemma 3.5
4. P ⪯ R                                ;P is a solution of E-3
5. (wlp_u)*(P) ⪯ (wlp_u)*(R)            ;apply (wlp_u)* to 4, (wlp_u)* monotone
6. P ⪯ (wlp_u)*(R)                      ;from 3 and 5
7. (wlp_u)*(R) is the weakest solution  ;from 6
This completes the proof of Theorem 4.11. □

Thus it follows from Theorem 4.11 that the weakest Σ_u-invariant predicate stronger than R (i.e. the weakest solution of E-1 and E-3) is given by (wlp_u)*(R). Finally we have the following theorem for computing R↑. This is one of the main results of this paper.

Theorem 4.12 R↑ = (sp|(wlp_u)*(R))*(I).

Proof: Let Q = (sp|(wlp_u)*(R))*(I). In view of Lemma 4.10 it suffices to show that Q is the weakest solution of E-2 stronger than the weakest solution of E-1 and E-3. Since the weakest solution of E-1 and E-3 is (wlp_u)*(R) (Theorem 4.11), we need to show that Q is the weakest solution of E-2 stronger than (wlp_u)*(R), i.e. satisfying Q ⪯ (wlp_u)*(R). First we show that Q is a solution of E-2 and Q ⪯ (wlp_u)*(R).
1. (sp|(wlp_u)*(R))*(I) ⪯ (wlp_u)*(R)   ;by definition of restriction
2. Q ⪯ (wlp_u)*(R)                      ;from 1 and definition of Q
3. Q = (sp|Q)*(I)                       ;from Lemma 4.3
4. Q ⪯ (sp|Q)*(I)                       ;from 3
5. Q is a solution of E-2               ;from 4
Next we show that Q is the weakest solution of E-2 stronger than (wlp_u)*(R). Let P be another solution of E-2 stronger than (wlp_u)*(R). Since we have already shown that Q ⪯ (wlp_u)*(R), it suffices to show that P ⪯ Q.
1. P ⪯ (wlp_u)*(R)            ;by assumption (P stronger than (wlp_u)*(R))
2. P ⪯ (sp|P)*(I)             ;by assumption (P a solution of E-2)
3. P ⪯ (sp|(wlp_u)*(R))*(I)   ;by weakening the RHS of 2 using 1
4. P ⪯ Q                      ;from 3 and definition of Q
Hence the proof is completed. □

Remark 4.13 The set of states X_{R↑} corresponding to the weakest controllable predicate stronger than R can easily be computed in two steps:
1. Compute R_u = (wlp_u)*(R)
2. Compute (sp|R_u)*(I)
The first step corresponds to the computation of the supremal Σ_u-invariant subset of X_R, which we denote by X_{R_u}. The second step consists of computing the set of states reachable from the initial state set X_I in the state space restricted to X_{R_u}. If G is represented as a finite state machine, then the set X_{R_u}, as well as the states reachable from X_I in the state space restricted to X_{R_u}, can be computed (in the worst case) in time linear in the number of transitions present in G (refer to [11, 15] for more elaborate discussions of computationally optimal algorithmic techniques for similar computations). If G has infinitely many states, then the computation of R↑ based on the state machine approach described above will not terminate, as it involves the computation of the "*" operator, i.e. the disjunctive and conjunctive closures. However, computation based on predicates and predicate transformers can be used to automatically construct the minimally restrictive supervisor in an infinite state space setting using efficient techniques such as those reported in [7]. This issue is currently under investigation.

Example 4.14 Consider the following refinement of the Readers-Writers program of Example 2.10. We use the variables ar, aw to denote the numbers of active readers and writers, respectively, and the variables wr, ww to denote the numbers of waiting readers and writers, respectively. The event set is Σ = {st_rd, st_wt, end_rd, end_wt, req_rd, req_wt, ovflo}. Informally described, readers and writers are first buffered in separate queues of finite capacity; whenever req_rd or req_wt occurs, the size of the corresponding queue increases. The number of active readers/writers is increased (decreased) according to the occurrence of st_rd/st_wt (end_rd/end_wt). If the number of waiting readers reaches a positive bound B, then the number of active readers increases by one whenever the event ovflo occurs. Formally:

Program Rd_Wr_1
declare wr, ww, ar, aw : integer

st_rd, st_wt, ovflo, end_rd, end_wt, req_rd, req_wt : event
initially wr, ww, ar, aw = 0, 0, 0, 0
assign
wr, ar := wr + 1, ar                    : req_rd
          wr - 1, ar + 1  if wr > 0     : st_rd
          wr, ar + 1      if wr ≥ B     : ovflo
          wr, ar - 1      if ar > 0     : end_rd
ww, aw := ww + 1, aw                    : req_wt
          ww - 1, aw + 1  if ww > 0     : st_wt
          ww, aw - 1      if aw > 0     : end_wt
end {Rd_Wr_1}

Let the uncontrollable event set be Σ_u = {ovflo}. As in Example 4.6, the mutual exclusion constraint for the above program is given as the required predicate R = (aw = 0) ∨ (aw = 1 ∧ ar = 0). It can readily be verified that R is not a controllable predicate, for sp_u(R) ⋠ R. To see this, consider the uncontrollable event ovflo, for which F(ar) = ar + 1 and C(wr) = (wr ≥ B). Then

sp_ovflo(R(ar, aw)) = [(aw = 0) ∨ (aw = 1 ∧ ar - 1 = 0)] ∧ (wr ≥ B)
= [(aw = 0) ∨ (aw = 1 ∧ ar = 1)] ∧ (wr ≥ B)
⋠ R.

In order to compute R↑, we first compute (wlp_ovflo)*(R) = ∧_{n≥0} (wlp_ovflo)^n(R). We need (wlp_ovflo)^n(R) for each n ∈ N. First we compute wlp_ovflo(R):

wlp_ovflo(R)
= wlp_{ar := ar + 1 if wr ≥ B}((aw = 0) ∨ (aw = 1 ∧ ar = 0))
= [((aw = 0) ∨ (aw = 1 ∧ ar + 1 = 0)) ∧ (wr ≥ B)] ∨ [wr < B]
= [(aw = 0) ∧ (wr ≥ B)] ∨ [wr < B]
= [(aw = 0) ∨ (wr < B)] ∧ [(wr ≥ B) ∨ (wr < B)]
= (aw = 0) ∨ (wr < B).

Similarly we compute (wlp_ovflo)^2(R):

(wlp_ovflo)^2(R) = wlp_ovflo((aw = 0) ∨ (wr < B))
= [((aw = 0) ∨ (wr < B)) ∧ (wr ≥ B)] ∨ [wr < B]
= (aw = 0) ∨ (wr < B)
= wlp_ovflo(R).

Hence (wlp_ovflo)^n(R) = wlp_ovflo(R) for each n ≥ 1. Thus

(wlp_ovflo)*(R) = R ∧ wlp_ovflo(R)
= [(aw = 0) ∨ (aw = 1 ∧ ar = 0)] ∧ [(aw = 0) ∨ (wr < B)]
= (aw = 0) ∨ [(aw = 0) ∧ (wr < B)] ∨ false ∨ (aw = 1 ∧ ar = 0 ∧ wr < B)
= (aw = 0) ∨ (aw = 1 ∧ ar = 0 ∧ wr < B).

Thus R_u = (wlp_u)*(R) = (aw = 0) ∨ (aw = 1 ∧ ar = 0 ∧ wr < B). In order to compute R↑, we need to compute (sp|R_u)*(I), which can easily be shown to equal R_u. Thus

R↑ = (aw = 0) ∨ (aw = 1 ∧ ar = 0 ∧ wr < B).

Using the technique described in Example 4.6, the predicate on which the minimally restrictive supervisor disables a given controllable event can be computed. Essentially, if σ ∈ Σ - Σ_u is a controllable event of the type x := F(x) if C(x) : σ, then σ is disabled by S on the predicate C(x) ∧ R↑(x) ∧ wlp_σ(¬R↑(x)) = C(x) ∧ R↑(x) ∧ ¬R↑(F(x)). The following minimally restrictive control S : X → 2^Σ achieves the mutual exclusion constraint (x denotes an arbitrary element of X = N^4):

S(x) = Σ(x) - {st_wt, st_rd, req_rd}  if (aw = 1 ∧ ar = 0 ∧ wr = B - 1)
     = Σ(x) - {st_wt, st_rd}          if (aw = 1 ∧ ar = 0 ∧ wr < B - 1)
     = Σ(x) - {st_wt}                 if (aw = 0 ∧ ar > 0)
     = Σ(x)                           otherwise
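The two steps of Remark 4.13 can be sketched for a finite-state model: iterated removal of states with an uncontrollable exit gives the supremal Σ_u-invariant subset, and restricted reachability then gives X_{R↑}. A hedged Python sketch (the toy model and all names are ours):

```python
# Hedged sketch of Remark 4.13 on a finite model (illustrative names):
# step 1 computes the supremal Σu-invariant subset of X_R by iterated
# removal; step 2 restricts reachability from X_I to that subset.

def supremal_u_invariant(trans, unctrl, R):
    K = set(R)
    while True:
        bad = {x for x in K for e in unctrl
               if (x, e) in trans and trans[(x, e)] not in K}
        if not bad:
            return K
        K -= bad

def reach_within(trans, events, K, I):
    reach = set(I) & K
    frontier = set(reach)
    while frontier:
        nxt = {trans[(x, e)] for x in frontier for e in events
               if (x, e) in trans} & K
        frontier = nxt - reach
        reach |= frontier
    return reach

# 0 -c-> 1 -u-> 2 : state 1 must leave R↑ because 'u' leads outside R.
trans = {(0, 'c'): 1, (1, 'u'): 2}
Ru = supremal_u_invariant(trans, {'u'}, {0, 1})
assert Ru == {0}
assert reach_within(trans, {'c', 'u'}, Ru, {0}) == {0}   # X_{R↑}
```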

5 Observability of Predicates
So far we have considered the supervisory predicate control problem assuming that complete information about the system state is available. Next we generalize the theory of supervisory predicate control developed above to the case where the system state is not necessarily completely observed. In order to formulate the problem of supervisory predicate control under partial state observation, consider a mask M, which is a map from the system

state space X to the observation space Y, i.e. M : X → Y. Note that the mask M is not necessarily injective, so two different states may yield an identical observation under M. The supervisory predicate control problem under partial observation was first studied in [19]. However, the conditions for observability of a predicate were obtained there under very restrictive assumptions on the mask M. It was assumed in [19] that given any pair of states x1, x2 ∈ X and any event σ ∈ Σ such that δ(x1, σ) and δ(x2, σ) are both defined, the mask M satisfies M(x1) = M(x2) ⟺ M(δ(x1, σ)) = M(δ(x2, σ)). Note that this assumption may be violated even when the mask M is the identity function, which corresponds to the case of complete observation. Thus the observability theory developed in [19] is applicable only to a very small class of systems. We extend the condition of observability of predicates without any restriction on the mask M.

Supervisory Predicate Control and Observation Problem (SPCOP): Consider the plant G = (P_X, Σ, sp, I) and the observation mask M : X → Y. Let R ∈ P_X denote the required predicate. The control task is to obtain a dynamic control law D : Y* × (2^Σ)* → 2^Σ such that (sp_D)*(I) = R.

The notations Y* and (2^Σ)* denote the sets of finite sequences of observations in Y and of control actions, respectively. The supervisor uses all the information available from the entire past to determine the current control action (the set of events to be enabled); the supervisor is thus dynamic. The supervisor considered for the SPCOP in [19] is static (current control actions are determined by the current observation only) and is obtained as a special case of the supervisor considered in this section.

We propose the following algorithm for dynamically estimating the current state of the system using the information available from the entire past. The notation y_k ∈ Y, for each k ∈ N, denotes the observation at the kth step.

Algorithm 5.1
Initiation step: P_0 = true_X
Recursion step: P_{k+1} = sp(P_k) ∧ M^{-1}(y_{k+1}), k ≥ 0

Here M^{-1}(y_{k+1}) is the predicate which holds exactly in those states whose mask value is y_{k+1}, and P_k, for each k ≥ 0, denotes the predicate corresponding to the state estimate at the kth step. Initially, when no observation has been made, the set of states corresponding to the initial state estimate is the entire state space; hence P_0 is set to true_X. The set of states corresponding to the state estimate at the (k+1)th step, where the predicate P_{k+1} holds, equals the set of states that correspond to the observation y_{k+1} and are reachable from a state where P_k holds.

Algorithm 5.1 can be used to define the following dynamic observer for the system G.

Definition 5.2 Consider the plant G = (P_X, Σ, sp, I) and the mask M : X → Y. The dynamic observer for estimating the current state of G is a DEDS O = (P_X, Y, sp_O, true_X),

where P_X is the state set of the observer O, Y is the event set of O, true_X is the initial condition of O, and sp_O, the strongest postcondition predicate transformer of O, is defined by (sp_O)_y(P) = sp(P) ∧ M^{-1}(y) for each P ∈ P_X and y ∈ Y. A similar definition of a dynamic observer is presented in [3], which uses the past sequence of observations as well as the past sequence of control inputs for estimating the current state.
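Algorithm 5.1 amounts to alternating image computation and masking. A hedged finite-state sketch, reading the recursion literally, so that y_{k+1} is the observation made after the (k+1)th transition (the toy model, mask, and all names are ours):

```python
# Hedged sketch of Algorithm 5.1 / Definition 5.2 on a finite model:
# the state estimate is a set of states, updated as
# P_{k+1} = sp(P_k) ∧ M^{-1}(y_{k+1}), starting from P_0 = true_X.

def observe(trans, events, mask, X, ys):
    P = set(X)                      # P_0 = true_X : no information yet
    estimates = []
    for y in ys:
        post = {trans[(x, e)] for x in P for e in events if (x, e) in trans}
        P = post & {x for x in X if mask[x] == y}   # sp(P_k) ∧ M^{-1}(y_{k+1})
        estimates.append(P)
    return estimates

# States 1 and 2 look alike under the mask, so observing 'b' is ambiguous
# until a later observation of 'c' resolves the estimate.
trans = {(0, 'e1'): 1, (0, 'e2'): 2, (1, 'e3'): 3}
mask = {0: 'a', 1: 'b', 2: 'b', 3: 'c'}
ests = observe(trans, {'e1', 'e2', 'e3'}, mask, {0, 1, 2, 3}, ['b', 'c'])
assert ests == [{1, 2}, {3}]
```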

5.1 Static and Dynamic Control Laws

An algorithm similar to Algorithm 5.1 can be used to simultaneously observe the evolution of the plant and control its behavior. Since the goal of the SPCOP is to obtain a dynamic control law D : Y* × (2^Σ)* → 2^Σ such that the required predicate R remains invariant under the evolution of the controlled system, we assume that the system never starts in a state where R does not hold; this was stated as assumption A-1 in the previous section. Keeping assumption A-1 in mind, a dynamic control law D is obtained as described below. The notations y_k and P_k^D denote the observation at the kth step and the predicate corresponding to the state estimate at the kth step under the control law D, respectively. The control action at the kth step, k ≥ 1, depends on the observation sequence up to the kth step and the control sequence up to the (k-1)th step. Using this available information, first the predicate P_k^D corresponding to the state estimate at the kth step is obtained, and then an identical control action is defined for each of the states in the set X_{P_k^D}. Thus a dynamic controller D : Y* × (2^Σ)* → 2^Σ can equivalently be viewed as a map D : P_X → 2^Σ.

Algorithm 5.3
Initiation step: P_1^D = M^{-1}(y_1) ∧ I
                σ ∈ D(P_1^D) ⟺ sp_σ(P_1^D) ⪯ R
Recursion step: P_{k+1}^D = sp_{D(P_k^D)}(P_k^D) ∧ M^{-1}(y_{k+1})
                σ ∈ D(P_{k+1}^D) ⟺ sp_σ(P_{k+1}^D) ⪯ R, k ≥ 1

where σ ∈ D(P), for any σ ∈ Σ and P ∈ P, means that σ is enabled by the control law D

in every state in the set X_P. Since the system is assumed to start in I ⪯ R (assumption A-1), the initial state estimate after the first observation y_1 ∈ Y is given by the predicate M^{-1}(y_1) ∧ I. For every state x ∈ X_{P_1^D}, a transition σ ∈ Σ is enabled by the control law if and only if the states reached by executing σ in X_{P_1^D} are all contained in X_R. The predicate P_{k+1}^D corresponding to the state estimate at the (k+1)th step is obtained by determining the states that correspond to the (k+1)th observation y_{k+1} and that are reachable in G under the control law D from the states where P_k^D holds. Since all the states in the set X_{P_{k+1}^D} (at step k+1) correspond to the same past observation and control sequence up to step k+1, the same control action is applied at all of them; the controller enables an event σ ∈ Σ at all states x ∈ X_{P_{k+1}^D} if and only if the states reached by executing σ in X_{P_{k+1}^D} are all contained in X_R.
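One round of Algorithm 5.3 can be sketched on a finite model: compute the control action from the current estimate, then update the estimate with the next observation. A hedged Python sketch (the toy model and all names are ours):

```python
# Hedged sketch of one round of Algorithm 5.3 on a finite model: enable
# event e on the current estimate P iff every e-successor of a state in P
# stays inside R; then update the estimate with the next observation.

def control_action(trans, events, R, P):
    return {e for e in events
            if all(trans[(x, e)] in R for x in P if (x, e) in trans)}

def estimate_update(trans, enabled, mask, X, P, y):
    post = {trans[(x, e)] for x in P for e in enabled if (x, e) in trans}
    return post & {x for x in X if mask[x] == y}

# States 0 and 1 are indistinguishable initially; 'c' is safe from 0 but not
# from 1, so under the joint estimate {0, 1} the controller must disable 'c'.
trans = {(0, 'c'): 2, (1, 'c'): 3, (0, 'd'): 2, (1, 'd'): 2}
mask = {0: 'a', 1: 'a', 2: 'b', 3: 'b'}
R = {0, 1, 2}

P1 = {0, 1}                                   # M^{-1}(y_1) ∧ I
A1 = control_action(trans, {'c', 'd'}, R, P1)
assert A1 == {'d'}                            # 'c' could reach 3, outside R

P2 = estimate_update(trans, A1, mask, {0, 1, 2, 3}, P1, 'b')
assert P2 == {2}
```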

Remark 5.4 It is clear that if the mask M is the identity map (or is injective), then the dynamic control law D defined in Algorithm 5.3 is the same as the static control law S defined in Section 4. In other words, if complete state observation is possible, then static and dynamic control laws are identical. This, as we will see, is not the case when incomplete state observations are made.

Lemma 5.5 Consider the plant G and the mask M. Let G be controlled by the dynamic control law D described in Algorithm 5.3. Then ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^D ⪯ R.

Proof: It suffices to show that for any observation sequence {y_k} ∈ Y*, ∨_{k≥1} P_k^D ⪯ R. The controller D is defined recursively in Algorithm 5.3; we use induction on k to obtain the desired result.
1. I ⪯ R               ;assumption A-1
2. P_1^D ⪯ I           ;initiation step of Algorithm 5.3
3. P_1^D ⪯ R           ;from 1 and 2 (base case)
4. P_k^D ⪯ R           ;induction hypothesis
5. P_{k+1}^D ⪯ R       ;from 4 and the recursion step of Algorithm 5.3
6. ∀k ≥ 1: P_k^D ⪯ R   ;from 3, 4, 5 and induction
7. ∨_{k≥1} P_k^D ⪯ R   ;taking the disjunct with respect to k in 6
This completes the proof. □

Thus, under the dynamic control law described in Algorithm 5.3, the state trajectories of the system remain confined to the states where R holds. In fact, it follows from Lemma 5.5 that ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^D is the weakest predicate stronger than R to which the state trajectories of the partially observed controlled system G are confined. The property that this weakest predicate stronger than R equals R is termed observability of R.

Definition 5.6 The required predicate R of the partially observed system G under the mask M is said to be observable if and only if ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^D = R, where P_k^D, for each k ≥ 1, is recursively defined in Algorithm 5.3.

Algorithm 5.3 can be specialized to define a static control law S : Y → 2^Σ in which the control action at any step depends only on the observation at that step:

Algorithm 5.7
Initiation step: P_1^S = M^{-1}(y_1) ∧ I
                σ ∈ S(M^{-1}(y_1) ∧ I) ⟺ sp_σ(M^{-1}(y_1) ∧ I) ⪯ R
Recursion step: P_{k+1}^S = sp_{S(P_k^S)}(P_k^S) ∧ M^{-1}(y_{k+1})
                σ ∈ S(M^{-1}(y_{k+1})) ⟺ sp_σ(M^{-1}(y_{k+1})) ⪯ R, k ≥ 1

Note that the control law S at each step k ≥ 1 depends only on the kth observation y_k ∈ Y, and an identical control action is applied at each of the states in the set where M^{-1}(y_k) holds (except for the case k = 1, where an identical control action is applied at each of the states in the set where M^{-1}(y_1) ∧ I holds). Since by definition P_k^S ⪯ M^{-1}(y_k), we have S(P_k^S) = S(M^{-1}(y_k)); thus the term sp_{S(P_k^S)}(P_k^S) in the recursion step is well defined.

Remark 5.8 It can be proved, similar to Lemma 5.5, that under the control of the static controller S defined in Algorithm 5.7, the state trajectories of the system remain confined to R, i.e. ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^S ≤ R. However, the static control law S is more restrictive than the dynamic control law D, for at every step k ≥ 1, S depends on M^{-1}(y_k), whereas D depends on P_k^D, and P_k^D ≤ M^{-1}(y_k); as a result, ∨_{k≥1} P_k^S ≤ ∨_{k≥1} P_k^D. Thus in case of incomplete state observations, a static control law is of course more restrictive than a dynamic control law. In fact, the dynamic control law D̄ defined in Algorithm 5.3 is the minimally restrictive control law, i.e. if D′ : Y* × (2^Σ)* → 2^Σ is any other control law, then ∨_{k≥1} P_k^{D′} ≤ ∨_{k≥1} P_k^{D̄}.

The dynamic control law defined in Algorithm 5.3 and the static control law defined in Algorithm 5.7 can both be implemented by controllers of the following type:

Definition 5.9 Consider the plant G = (P_X, Σ, sp, I) and the mask M : X → Y. The controller that implements the dynamic control law of Algorithm 5.3 is another DEDS C = (P_X, Y × 2^Σ, sp_C, I_C), where P_X is the state set of the controller C; Y × 2^Σ is the event set of C; I_C = M^{-1}(y_1) ∧ I is the initial condition of C; and sp_C, the strongest postcondition predicate transformer of C, is defined by (sp_C)(y, Σ′)(P) = sp_{Σ′}(P) ∧ M^{-1}(y) for each P ∈ P_X, y ∈ Y and Σ′ ⊆ Σ (where sp_{Σ′} = ∨_{σ∈Σ′} sp_σ).

Finally, we present a necessary and sufficient condition under which a solution to SPCOP exists; the proof is constructive, so that a dynamic control law D that solves SPCOP is automatically obtained.

Theorem 5.10 Consider the partially observed plant G under the mask M. Let R be the required predicate. Then a solution to SPCOP exists if and only if R is controllable and observable.

Proof: Assume first that R is controllable and observable. We will show that there exists a dynamic control law D such that (sp^D)*(I) = R. Let D = D̄, where D̄ is as defined in Algorithm 5.3. Since R is Σ_u-invariant (R is controllable) and P_k ≤ R for each k ≥ 1 (Lemma 5.5), D̄ (= D) in Algorithm 5.3 can be rewritten to yield the same dynamic control law:

∀σ ∈ (Σ − Σ_u), k ≥ 1 : σ ∈ D(P_k) ⇔ sp_σ(P_k) ≤ R.

In other words, D (= D̄) never disables any uncontrollable event.

1. (sp^D)*(I) = ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^D    ; by definition of D
2. ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^D = R    ; R is observable
3. (sp^D)*(I) = R    ; from 1 and 2
4. (sp^{D̄})*(I) = R    ; from 3 and D = D̄

Next we show that if there exists a dynamic control law D such that (sp^D)*(I) = R, then R is controllable and observable.

1. (sp^D)*(I) = R    ; by assumption
2. (sp^D)*(I) = ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^D    ; definition of D
3. ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^D = R    ; from 1 and 2
4. ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^D ≤ ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^{D̄}    ; D̄ is minimally restrictive (Remark 5.8)
5. R ≤ ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^{D̄}    ; from 3 and 4
6. ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^{D̄} ≤ R    ; from Lemma 5.5
7. ∨_{{y_k}∈Y*} ∨_{k≥1} P_k^{D̄} = R    ; from 5 and 6
8. R is observable    ; from 7
9. (sp^D)*(I) = (sp^D)*(R)    ; apply (sp^D)* to 1
10. (sp^D)*(R) = R    ; from 1 and 9
11. sp_u^*(R) ≤ (sp^D)*(R)    ; sp_u is the maximally restrictive control
12. sp_u^*(R) ≤ R    ; from 10 and 11
13. sp_u(R) ≤ R    ; using Lemma 3.5
14. (sp^D|R)*(I) = R    ; from 1 and Lemma 4.3
15. (sp^D|R)*(I) ≤ (sp|R)*(I)    ; D restricts behavior
16. R ≤ (sp|R)*(I)    ; from 14 and 15
17. R is controllable    ; from 13 and 16

This completes the proof. □

Example 5.11 Consider the problem of mutual exclusion discussed in Example 4.6 for the Readers-Writers program of Example 2.10. Assume that the mask M : X → Y is such that the number of writers always appears to be the same, namely zero; however, the number of readers can be observed completely, i.e. M((nr = p, nw = q)) = (nr = p, nw = 0) for all (p, q) ∈ N². As discussed in Example 4.6, the mutual exclusion constraint is written as the required predicate R = ((nw = 0) ∨ (nw = 1 ∧ nr = 0)), and it is controllable. However, R may or may not be observable, depending on the initial condition I.

Case 1: I = ((nr, nw) = (0, 0)), as in Example 2.10. Then R is observable, since in this case the number of writers is completely determined by observing the occurrences of the events st_wt and end_wt. Algorithm 5.3 yields the required dynamic supervisor.

Case 2: I = (nr = 0) ∧ ((nw = 0) ∨ (nw = 1)). In this case the number of writers cannot be fully determined from past observations, and R is not observable. Hence a dynamic supervisor cannot be constructed to solve SPCOP. Under the control of the dynamic supervisor D : Y* × (2^Σ)* → 2^Σ described in Algorithm 5.3, the closed-loop system can only achieve the predicate (nw = 0), i.e. (sp^D)*(I) = (nw = 0) ≤ R.
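The estimate-based control of Case 1 can be made concrete with a small executable sketch. This is a minimal finite sketch only, not the paper's construction: the event names, the choice of controllable events, the state bound MAX, and the exact estimate-update rule are all assumptions made for illustration, and predicates are modeled as finite sets of states (nr, nw).

```python
# Minimal finite sketch of a mask-based dynamic supervisor in the spirit of
# Algorithm 5.3, for the Readers-Writers system of Example 5.11 (Case 1).
# Event names, the controllable-event set, MAX, and the update rule are
# illustrative assumptions; predicates are finite sets of states (nr, nw).

MAX = 2  # bound on readers/writers for this sketch (the theory allows infinity)
STATES = {(nr, nw) for nr in range(MAX + 1) for nw in range(MAX + 1)}

# Required predicate R = (nw = 0) or (nw = 1 and nr = 0): mutual exclusion.
R = {(nr, nw) for (nr, nw) in STATES if nw == 0 or (nw == 1 and nr == 0)}

EVENTS = {"st_rd", "end_rd", "st_wt", "end_wt"}
CONTROLLABLE = {"st_rd", "st_wt"}  # assumption: only 'start' events are controllable

def step(state, event):
    """One transition; returns None when the event is infeasible."""
    nr, nw = state
    nxt = {"st_rd": (nr + 1, nw), "end_rd": (nr - 1, nw),
           "st_wt": (nr, nw + 1), "end_wt": (nr, nw - 1)}[event]
    return nxt if nxt in STATES else None

def sp(P, events):
    """Strongest postcondition: states reachable from P in one enabled step."""
    return {q for s in P for e in events if (q := step(s, e)) is not None}

def mask_preimage(y):
    """M^{-1}(y): the mask reports (nr, 0), so only nr is constrained."""
    nr_obs, _ = y
    return {(nr, nw) for (nr, nw) in STATES if nr == nr_obs}

def control(P):
    """Enable a controllable event iff its postcondition stays within R."""
    return (EVENTS - CONTROLLABLE) | {e for e in CONTROLLABLE if sp(P, {e}) <= R}

# One supervision step from I = ((nr, nw) = (0, 0)):
P = {(0, 0)}                 # current state estimate P_k
enabled = control(P)         # st_wt is enabled: sp_{st_wt}(P) = {(0, 1)} lies in R
# Suppose the next observation is y = (nr = 0, nw = 0); refine the estimate:
P = sp(P, enabled) & mask_preimage((0, 0))
print(sorted(P))             # → [(0, 1)]; the estimate stays confined to R
```

After the step, the supervisor knows a writer has entered even though the mask hides nw, because (0, 1) is the only state in sp(P, enabled) consistent with the observation nr = 0; this is the information a static control law based on M^{-1}(y_k) alone would not have.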

6 Conclusion
We have presented in this paper a methodology for designing controllers for a wide variety of systems described in terms of a set of predicates and a set of predicate transformers. Predicates can concisely represent an infinite state space; hence many discrete event systems, including clocks and queues with unbounded buffers, can be modeled in this framework. The above theory can also be useful in synthesizing controller programs for programs describing possibly complex DEDSs. Thus the framework is quite general.

The strongest postcondition transformer has been presented as a fundamental concept for describing the state space evolution of a DEDS. We have presented the notion of duality of predicate transformers and shown that sp and wlp are duals of each other. Many of the basic properties of a predicate transformer have been highlighted, and the relation of these properties to the existence of extremal solutions of certain predicate equations has been pointed out. We have shown how these properties and the extremal solutions of boolean equations can be applied for supervisory synthesis purposes. The notion of controllability of a required predicate describing the set of legal states has been defined, and it has been shown that controllability is a necessary and sufficient condition for the existence of a supervisor that guarantees the invariance of the required predicate under system evolution. The supervisory predicate control problem has been presented and solved using the notion of controllability. It has further been shown that the weakest controllable predicate stronger than the given predicate exists; hence the construction of minimally restrictive supervisors is possible even when the required predicate is not controllable. We have presented a method for computing the weakest controllable predicate; this is one of the main results of this paper. We have also addressed the problem of designing supervisors for a partially observed plant. We introduced the notion of observability which, together with controllability, is a necessary and sufficient condition for the existence of a supervisor that solves the supervisory predicate control and observation problem introduced in this paper.

A Proof of Theorem 3.10

Proof: We prove Theorem 3.10 by cyclic implication. First we show that C-1 ⇒ C-2. Note that since f is disjunctive, the weakest solution in C-1 exists.

1. f(g(P)) ≤ P    ; from C-1
2. conjunct 1 of C-2 holds    ; from 1
3. P is a solution of Q : f(Q) ≤ f(P)    ; f(P) ≤ f(P)
4. g(f(P)) is the weakest solution of Q : f(Q) ≤ f(P)    ; from C-1
5. P ≤ g(f(P))    ; from 3 and 4
6. conjunct 2 of C-2 holds    ; from 5

Next we show that C-2 ⇒ C-3, by showing that under C-2 the LHS of C-3 implies the RHS of C-3 and vice versa.

1. f(P) ≤ Q    ; assume LHS of C-3
2. g(f(P)) ≤ g(Q)    ; apply g to 1; g is monotone (Lemma 3.3)
3. P ≤ g(f(P))    ; conjunct 2 of C-2
4. P ≤ g(Q)    ; from 2 and 3
5. RHS of C-3 holds    ; from 4
6. P ≤ g(Q)    ; assume RHS of C-3
7. f(P) ≤ f(g(Q))    ; apply f to 6; f is monotone (Lemma 3.3)
8. f(g(Q)) ≤ Q    ; conjunct 1 of C-2 with P replaced by Q
9. f(P) ≤ Q    ; from 7 and 8
10. LHS of C-3 holds    ; from 9

Next we show that C-3 ⇒ C-4. Since g is conjunctive and P, treated as a constant predicate transformer, is monotone, it follows from Theorem 3.8 that the strongest solution in C-4 exists.

1. f(P) ≤ f(P) ⇔ P ≤ g(f(P))    ; replace Q by f(P) in C-3
2. true ⇔ P ≤ g(f(P))    ; from 1
3. f(P) is a solution in C-4    ; from 2
4. P ≤ g(Q) ⇔ f(P) ≤ Q    ; rewriting C-3
5. f(P) is the strongest solution in C-4    ; from 3 and 4

Next we show that C-4 ⇒ C-1.

1. P ≤ g(Q) ⇔ f(P) ≤ Q    ; f(P) is the strongest solution in C-4
2. g(P) ≤ g(P) ⇔ f(g(P)) ≤ P    ; replace P, Q by g(P), P respectively in 1
3. true ⇔ f(g(P)) ≤ P    ; simplifying 2
4. g(P) is a solution in C-1    ; from 3
5. f(P) ≤ Q ⇔ P ≤ g(Q)    ; rewriting 1
6. f(Q) ≤ P ⇔ Q ≤ g(P)    ; replace P, Q by Q, P respectively in 5
7. g(P) is the weakest solution in C-1    ; from 6

This completes the proof of Theorem 3.10. □
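Condition C-3 is the Galois-connection form of the equivalence, and over a finite state space it can be checked exhaustively. The sketch below is illustrative only: it instantiates f as sp (disjunctive) and g as wlp (its conjunctive dual) over an arbitrary assumed transition relation REL, and verifies f(P) ≤ Q ⇔ P ≤ g(Q) for every pair of predicates.

```python
# Finite sanity check of condition C-3: with f = sp and g = wlp over a small
# assumed transition relation, f(P) <= Q holds iff P <= g(Q) for all
# predicates P, Q. The relation REL is an arbitrary illustrative choice.
from itertools import combinations

X = {0, 1, 2}
REL = {(0, 1), (0, 2), (1, 2), (2, 0)}  # assumed one-step transition relation

def sp(P):
    """Strongest postcondition: one-step successors of states in P."""
    return {y for (x, y) in REL if x in P}

def wlp(Q):
    """Weakest liberal precondition: states whose every successor lies in Q."""
    return {x for x in X if all(y in Q for (u, y) in REL if u == x)}

def predicates(S):
    """All subsets of S, i.e. all predicates over the state space."""
    s = sorted(S)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# C-3: f(P) <= Q  <=>  P <= g(Q), exhaustively over all 2^3 x 2^3 pairs.
assert all((sp(P) <= Q) == (P <= wlp(Q))
           for P in predicates(X) for Q in predicates(X))
print("C-3 holds for every pair of predicates")
```

The check succeeds for any choice of REL, since sp(P) ≤ Q says every successor of every state in P lies in Q, which is exactly P ≤ wlp(Q); this is the adjunction the cyclic proof above establishes abstractly.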

B Acknowledgement
The authors would like to thank Shigemasa Takai, Department of Electronics Engineering, Osaka University, Japan, for pointing out an omission in the definition of observability.

References
[1] A. Arnold and M. Nivat. Controlling behaviors of systems: Some basic concepts and some applications. In MFCS 1980 (Lecture Notes in Computer Science, 88), pages 113–122. Springer-Verlag, New York, 1980.

[2] R. D. Brandt, V. K. Garg, R. Kumar, F. Lin, S. I. Marcus, and W. M. Wonham. Formulas for calculating supremal controllable and normal sublanguages. Systems and Control Letters, 15(8):111–117, 1990.

[3] P. E. Caines, R. Greiner, and S. Wang. Dynamical logic observers for finite automata. In Proceedings of the 1988 Conference on Decision and Control, pages 226–233, Austin, TX, December 1988.

[4] K. M. Chandy and J. Misra. Parallel Program Design: A Foundation. Addison-Wesley, Reading, MA, 1988.

[5] E. W. Dijkstra. A Discipline of Programming. Prentice Hall, Englewood Cliffs, NJ, 1976.

[6] E. W. Dijkstra and C. S. Scholten. Predicate Calculus and Program Semantics. Springer-Verlag, New York, 1990.

[7] V. K. Garg and R. Kumar. State-variable approach for controlling discrete event systems with infinite states. In Proceedings of 1992 American Control Conference, pages 2809–2813, Chicago, IL, July 1992.

[8] D. Gries. The Science of Programming. Springer-Verlag, New York, NY, 1985.

[9] C. A. R. Hoare. Communicating Sequential Processes. Prentice Hall, Englewood Cliffs, NJ, 1985.

[10] J. E. Hopcroft and J. D. Ullman. Introduction to Automata Theory, Languages and Computation. Addison-Wesley, Reading, MA, 1979.

[11] R. Kumar, V. K. Garg, and S. I. Marcus. On controllability and normality of discrete event dynamical systems. Systems and Control Letters, 17(3):157–168, 1991.

[12] R. Kumar, V. K. Garg, and S. I. Marcus. On ω-controllability and ω-normality of DEDS. In Proceedings of 1991 American Control Conference, pages 2905–2910, Boston, MA, June 1991.

[13] R. Kumar, V. K. Garg, and S. I. Marcus. Stability of discrete event system behavior. In Proceedings of 1991 IFAC Symposium on Distributed Intelligent Systems, pages 13–18, August 1991.

[14] R. Kumar, V. K. Garg, and S. I. Marcus. Using predicate transformers for supervisory control. In Proceedings of 1991 IEEE Conference on Decision and Control, pages 98–103, Brighton, UK, December 1991.

[15] R. Kumar, V. K. Garg, and S. I. Marcus. On supervisory control of sequential behaviors. IEEE Transactions on Automatic Control, 37(12):1978–1985, December 1992.

[16] R. Kumar, V. K. Garg, and S. I. Marcus. Language stability and stabilizability of discrete event dynamical systems. SIAM Journal of Control and Optimization, 31(5):1294–1320, September 1993.

[17] R. Kumar and L. E. Holloway. Supervisory control of Petri net languages. In Proceedings of 1992 IEEE Conference on Decision and Control, pages 1190–1195, Tucson, AZ, December 1992.

[18] S. Lam and A. U. Shankar. Refinement and projection of relational specifications. In Stepwise Refinement of Distributed Systems: Models, Formalisms, Correctness (REX Workshop). Springer-Verlag, New York, NY, May 1989.

[19] Y. Li and W. M. Wonham. Controllability and observability in the state feedback control of discrete event systems. In Proceedings of the 27th Conference on Decision and Control, pages 203–208, Austin, TX, December 1988.

[20] F. Lin and W. M. Wonham. On observability of discrete-event systems. Information Sciences, 44(3):173–198, 1988.

[21] P. J. Ramadge. Observability of discrete event systems. In Proceedings of 1986 IEEE Conference on Decision and Control, pages 1108–1112, Athens, Greece, December 1986.

[22] P. J. Ramadge and W. M. Wonham. Modular feedback logic for discrete event systems. SIAM Journal of Control and Optimization, 25(5):1202–1218, 1987.

[23] P. J. Ramadge and W. M. Wonham. Supervisory control of a class of discrete event processes. SIAM Journal of Control and Optimization, 25(1):206–230, 1987.

[24] P. J. Ramadge and W. M. Wonham. The control of discrete event systems. Proceedings of the IEEE: Special Issue on Discrete Event Systems, 77:81–98, 1989.

[25] R. Smedinga. Using trace theory to model discrete events. In P. Varaiya and A. B. Kurzhanski, editors, Discrete Event Systems: Models and Applications, pages 81–99. Springer-Verlag, 1987.

[26] R. S. Sreenivas and B. H. Krogh. On Petri net models of infinite state supervisors. IEEE Transactions on Automatic Control, 37(2):274–277, February 1992.

