
Discrete Event Based Simulation and Control of Continuous Systems

Ernesto Kofman

Thesis presented in partial fulfilment of the requirements for the degree of

Doctor en Ingeniería

Director: Sergio Junco

Facultad de Ciencias Exactas, Ingeniería y Agrimensura, Universidad Nacional de Rosario


Abstract

This Thesis introduces the fundamentals and the theory of a new way to approximate differential equations, applied to numerical integration and digital control. Replacing the classic time discretization with state quantization –an approximation approach already available in the literature– two new numerical integration methods are introduced. Due to this kind of discretization, the resulting simulation models are discrete event systems instead of discrete time systems as in all classic numerical methods. This fact yields many practical advantages, such as efficient sparsity exploitation and an important reduction of the computational cost in hybrid system simulation. From a theoretical point of view, it is shown that the methods can be analyzed as continuous systems with bounded perturbations; thus, stability, convergence and error bound properties are proven, showing also some interesting advantages with respect to classic approaches. The application of the new first order method to the discretization of continuous controllers, with the addition of an asynchronous sampling scheme, allows defining a new digital control methodology in which the time discretization is ideally avoided. As a result, this new technique considerably improves the dynamic response of digital control systems and reduces the quantization effects, the computational costs and the information traffic between plant and controller. Regarding theoretical properties, the new control scheme ensures stability, convergence and error bound properties which apply to linear, nonlinear and time varying cases. Based on these properties, different design algorithms are also provided.



Acknowledgements

Every Thesis begins with a mention of the Director. In this case, my acknowledgement is not only due to our work in the last four years. If I had not met Sergio, my work would not have been involved with teaching and research (and now I would probably be working in industry, with a good salary, instead).

Following with the academic side, an important part of what I learned during the last years (from technical and mathematical details to using LaTeX) is due to Marimar Seron. She also gave me the impulse I needed to choose the subject of this Thesis. I also want to thank Julio Braslavsky for his remarks and for the review of many topics which then became part of this work (mainly those related to control). In that sense, I also owe thanks to Juan Carlos Gómez who, among many other things, helped me with a careful revision of more than one article supporting this Thesis.

Many of the ideas which form part of the results of this Thesis were conceived after talking with François Cellier. Besides that, his invitation to coauthor his book Continuous System Simulation was one of the most important impulses I received to continue with this work. I also want to mention the very important help I received from Bernie Zeigler. He not only revised and collaborated in the correction of my articles, but he also opened the doors of a great part of the simulation community to make my work known. Although I was only referring to the help I received in the academic sense, it would be unfair to forget the friendship and all that I received from these people in the human aspect.

Nothing of this work would have been done without the support of my parents, Julia and Hugo. They gave me –and they still do– everything that can be expected regarding love, advice, help and –the most important thing– the freedom to choose and build my own way. When I mention my family I must also thank Queca, my Grandmother. She, with her knishes and cookies, with her infinite dedication and with her example, is one of the lights which guide every day of my life. I cannot forget mentioning the support of my brothers, Diego and Marco, and I must also thank the unconditional friendship of Monica (and Rami, of course). I do not want to skip a mention of my friends who are always with me, leaving aside the geographic distance: Damián, Cabeza, Lottar, Dieguito, Martín, Diego, Gastón, Betty, Momia, Hernán. Finally, my greatest thanks to Juliana, for sharing each moment of our lives.

Contents

1 Introduction
1.1 Outline of the Thesis
1.2 Original Contributions
1.3 Related Works and Relevance of the Results
1.4 Supporting Publications

2 Quantization and DEVS
2.1 An Introductory Example
2.2 Discrete Event Systems and DEVS
2.3 Coupled DEVS models
2.4 Simulation of DEVS models
2.5 Quantized Systems and DEVS
2.6 Illegitimacy of Quantized Systems
2.7 DEVS and Continuous Systems Simulation

3 Quantized State Systems
3.1 Hysteretic Quantization
3.2 QSS–method
3.3 Trajectories in QSS
3.4 DEVS model of a QSS
3.5 Input signals in the QSS–method
3.6 Startup and Output Interpolation
3.7 Quantization, Hysteresis and Errors

4 Theoretical Properties of QSS
4.1 QSS and Perturbation Theory
4.2 Convergence of QSS–method
4.3 General Stability Properties of QSS
4.4 LTI Perturbed Systems: Lyapunov Approach
4.5 LTI Perturbed Systems: Non Conservative Approach
4.6 QSS–method in LTI Systems
4.7 Quantum and Hysteresis Choice
4.8 Limitations of the QSS–method

5 Second Order QSS
5.1 QSS2–Method
5.2 Trajectories in QSS2
5.3 DEVS Representation of QSS2
5.4 Properties of the QSS2–Method
5.5 QSS vs. QSS2

6 Extensions of QSS and QSS2 Methods
6.1 QSS and QSS2 in DAE Systems
6.2 Block–Oriented DEVS Simulation of DAEs
6.3 QSS and QSS2 Simulation of Hybrid Systems
6.4 Quantized Bond Graphs
6.5 QBG and Structural Singularities

7 Quantized–State Control
7.1 Asynchronous Sampling
7.2 The QSC Scheme
7.3 QSC and Perturbations
7.4 Stability of Time Invariant QSC Systems
7.5 Design Procedure for TI QSC
7.6 Stability of General QSC
7.7 General Procedure for QSC Implementation
7.8 Convergence of QSC

8 Linear QSC Systems
8.1 QSC of LTI Systems
8.2 Stability and Error in LTI QSC
8.3 Procedure for LTI QSC Implementation
8.4 Matching the Converters
8.5 Computational Costs Reduction in QSC

9 Epilogue
9.1 Unsolved Problems
9.2 Open Problems and Future Research
9.3 General Conclusions

A List of Abbreviations
B Pseudo–Codes for DEVS Simulation
Chapter 1

Introduction

The resolution of most modern engineering problems could not be conceived without the help of simulation. Due to risk and cost reasons, direct experimentation on real systems is giving way to experimentation on simulation models (there are exceptions, of course). Nowadays, we can hardly find a design problem which can be carried out without the help of a computer. The increasing complexity of man–made systems and the need for more and more accurate and faster results stimulated the development of hundreds of new simulation techniques in the last 40 years. Simultaneously, the appearance of modern computers and their amazing evolution provided the tool which allows the easy and efficient implementation of the most complex simulation methods. Due to these facts, computer simulation constitutes a discipline in itself. Like every other discipline, it is divided into several sub–disciplines which deal with different specific problems.

Engineering problems involve dealing with physical systems. Since most physical laws are described by differential equations, simulation in engineering is in fact related to the numerical resolution of differential equations. This is also called Continuous System Simulation. However, modern engineering systems usually include digital devices –digital controllers, for instance– whose description does not fit the form of a differential equation. This fact adds more complexity to our subject and leads to the family of Hybrid Systems.

In many applications, the simulations have to be performed in real time. Typical examples are the so-called man–in–the–loop systems (flight simulators, for instance) and, in general, simulations which interact with systems of the real world. Digital control systems can be considered part of the last category since they have to interact with a real plant. Taking into account that plants are usually described by differential equations, and that consequently most controllers are designed to satisfy a continuous law, the digital implementation of continuous controllers can be seen as a problem of real–time simulation. Thus, it has to be solved following some discretization technique.

The goal of this Thesis is to develop a completely new family of numerical methods for ordinary differential equations –which can also be applied to hybrid and differential algebraic equation systems– and to use the same ideas in the implementation of digital controllers.

The main innovation in the numerical methods developed here is the avoidance of time discretization. All existing numerical methods for ordinary differential equations are based on the calculation of an approximate solution at some discrete instants of time. Here, we replace that discretization by the quantization of the state variables. As a result, on one hand, the simulation model becomes discrete event instead of discrete time. This produces, in some cases, an important reduction of the computational costs (especially in hybrid systems). On the other hand, this new approximation yields strong theoretical properties related to stability, convergence, error bounds, etc. Both kinds of advantages –practical and theoretical– are also verified in the application to digital control.


1.1 Outline of the Thesis

This Thesis is conceived as a self-contained work where each chapter builds on the previous ones, although this does not always coincide with the chronological order in which the subjects were developed. On one hand, it is assumed that all the basic concepts –i.e. the concepts which are usually learned at the undergraduate level in Control and Numerical Methods– are already known by the reader. On the other hand, new subjects and tools are only introduced as far as they are used in the Thesis. Thus, the Thesis includes neither a complete theory nor a state of the art description of DEVS1, perturbation theory, numerical integration, or the other concepts involved.

Several original theoretical results, including theorems and proofs, are included in the Thesis. In order to give the reader the possibility of skipping them, most are concentrated in a single chapter. When it comes to notation, the classic notations of control and DEVS theory are used simultaneously. Thus, many symbols are often redefined for use in different contexts, and the meaning of each symbol must be read according to the last definition made.

Taking into account all these principles, the Thesis is organized as follows. This first introductory chapter gives a description of the whole Thesis, not only enumerating the results but also trying to relate them to the state of the art. The second chapter introduces the seed of the main ideas on which the rest of the work is based. There, the original concepts about quantization and
1 The list of abbreviations used in this Thesis can be found in Appendix A.



DEVS are introduced, starting from a motivating example in Section 2.1. Then, Sections 2.2 to 2.4 develop an ad hoc theory of DEVS which is used in the rest of the Thesis. After that, Section 2.5 shows the relationship between the first example and DEVS, introducing the concept of Quantized Systems (QS), which was the first idea to approximate differential equations with DEVS models. At that point, the state–of–the–art description is almost complete, and Section 2.6 then describes the main problem –called illegitimacy– shown by Quantized Systems. Discovering this problem and finding a solution were probably the most important motivations of this work.

The third chapter starts by describing the solution to the illegitimacy problem, which consists in adding hysteresis to the quantization. Based on the use of hysteretic quantization functions –formally defined in Section 3.1– the Quantized State Systems (QSS) and the QSS–method are introduced in Section 3.2. This method is the first general discrete event numerical method for the integration of Ordinary Differential Equations (ODEs). Based on the study of the trajectory forms (Section 3.3), the DEVS model of the QSS is deduced in Section 3.4. This model also shows the practical way to implement the method. After introducing a simple simulation example –where some qualities such as the sparsity exploitation become evident– Sections 3.5 and 3.6 explain some practical aspects of the use of the QSS–method. Then, the chapter finishes concluding on the need for a deeper theoretical analysis.

Chapter 4 is dedicated to the study of the main theoretical properties of the approximation. After explaining the relationship between the QSS–method and the theory of perturbed systems, the convergence property is proven in the theorem included in Section 4.2. Further, the general stability properties are studied making use of a Lyapunov approach in Section 4.3. The main result here (Theorem 4.2) concludes that –under certain conditions– the ultimate boundedness of the approximate QSS solutions can be ensured. Despite the importance and generality of that stability result, its use is quite complicated and conservative (mainly due to the presence of Lyapunov functions). Thus, the analysis is carried to the field of Linear Time Invariant (LTI) systems, where the properties of classic numerical methods are usually studied. Here, in order to avoid conservative results, a new way to establish the bound of the perturbation effects is introduced (Section 4.5), which is then compared with the classic Lyapunov approach for LTI systems (previously introduced in Section 4.4). This comparison shows the convenience of the new approach. Based on it, not only the stability but also the global error bound properties of the QSS–method in LTI systems are shown in Section 4.6. These theoretical results are then applied to the choice of the appropriate quantization and hysteresis according to the required accuracy. In spite of the good properties proven, the theoretical study also concludes that a good accuracy cannot be achieved without an important amount of calculations, and therefore a higher order approximation becomes necessary.

The fifth chapter then introduces the Second Order Quantized State Systems (QSS2) and the QSS2–method, following a similar procedure to the one



used when the QSS–method was presented. After studying the trajectories in Section 5.2, the corresponding DEVS model is deduced (Section 5.3) and the theoretical properties studied in Chapter 4 are extended to the new method. Finally, the simulation of some examples –which shows some practical advantages– is followed by a theoretical and empirical comparison between the two methods.

Chapter 6 is concerned with the extension of the QSS and QSS2 methods to some special cases. Sections 6.1 and 6.2 study the use of these methods in Differential Algebraic Equation (DAE) systems, providing a general methodology for the index 1 case. There, a very simple block–oriented solution which permits the direct simulation of block diagrams containing algebraic loops is also presented. The simulation examples illustrate an advantage: the method only iterates with the implicit algebraic equations in some particular steps. Section 6.3 shows the use of the new methods in Hybrid Systems. Here, the knowledge of the complete QSS and QSS2 trajectories, together with the intrinsic asynchronous behavior of the methods, gives them several advantages for discontinuity handling. These advantages are illustrated with two examples which also include a comparative analysis against classic methods. The chapter finishes by introducing the application of the QSS–method to the simulation of Bond Graphs (BG). The resulting scheme –called Quantized Bond Graphs (QBG)– gives a very simple way to perform a direct simulation on a Bond Graph model of a physical system. There, Section 6.5 sketches a new way to deal with higher index DAEs resulting from BG models, based on a switching behavior and avoiding iterative algorithms.

The seventh chapter introduces the use of the QSS–method in real time control applications. The QSS approximation of a previously designed continuous controller, implemented with an asynchronous sampling methodology, defines a new digital control scheme which –in theory– avoids the time discretization. This asynchronous digital control method is called Quantized State Control (QSC), and it is formally defined in Section 7.2. Similarly to the simulation methods, QSC can be analyzed as a perturbed version of the original control system in order to deduce its theoretical properties. Making use of this, Sections 7.4 to 7.7 study the stability and the ultimate bounds of QSC nonlinear systems, going from the particular case of Time Invariant (TI) systems to the general Time Varying case. There, two practical design procedures –whose use is also illustrated with simulation examples– are developed from the corresponding stability theorems. Finally, the convergence of QSC system trajectories to the Continuous Control System (CCS) trajectories when the quantization goes to zero is shown in Section 7.8.

In Chapter 8, the particular case of QSC applied to LTI systems is studied and some practical aspects of the methodology are discussed. After introducing the LTI QSC model, Section 8.2 presents the stability and error bound results based on the non conservative tools developed in Chapter 4. These results are then translated into practical design rules which are applied in two new examples, where some advantages over classic digital discrete time control can be observed. Final remarks about practical implementation problems and practical



advantages are presented in Sections 8.4 and 8.5. The last chapter of the Thesis is finally dedicated to the discussion of unsolved problems, open questions, future research and general conclusions.


1.2 Original Contributions

From the last pages of the second chapter to the end of the Thesis, most of the results are original. The main contribution is the development of a new formal way to approximate differential equations, which includes not only methods and applications but also a wide variety of analysis tools.

The first original contributions were discovering the illegitimacy of Quantized Systems and finding the solution based on the use of hysteresis, with the definition of hysteretic quantization functions and Quantized State Systems. All the work about Quantized State Systems is also original. There, the study of the trajectory forms, the deduction of the DEVS models, and the practical issues related to the incorporation of input signals, startup and interpolation were all developed as part of this work.

Another contribution was realizing the relationship between QSS and perturbed systems. Based on this, the theoretical study of stability and convergence was carried out, converting the QSS–method into a well posed integration technique. In this theoretical study, a collateral contribution was a new and less conservative way to analyze the ultimate bound of perturbed LTI systems, which can be used not only in the context of quantization but also in more general perturbation problems. The use of this new analysis tool in the QSS–method allowed establishing a practical global error bound.

Another original result was the definition of the second order method (QSS2) and all the work made about it: deduction of the trajectory forms, construction of the DEVS model and study of its theoretical properties. The extension of the methods to DAEs, Hybrid Systems and Bond Graphs is also original, as well as all the theoretical and practical study included there.

When it comes to control, QSC is the first asynchronous digital scheme which can be seen as an approximation of continuous controllers. Except for the asynchronous sampling technique, all the study about QSC (definitions, theoretical properties and practical remarks) is completely original.
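The idea of hysteretic quantization mentioned above can be illustrated with a minimal sketch (the class and parameter names `dq`, `eps`, `q0` are purely illustrative; the formal definition of a hysteretic quantization function appears in Section 3.1 and differs in its details):

```python
class HystereticQuantizer:
    """Illustrative hysteretic quantization function.

    The output moves up a level only after the input has climbed a full
    quantum dq above the current level, and moves down only after the
    input has fallen eps below it.  Without the hysteresis band eps, a
    trajectory hovering at a quantization boundary could trigger
    infinitely many output changes in finite time (illegitimacy).
    """

    def __init__(self, dq=1.0, eps=1.0, q0=0.0):
        self.dq, self.eps, self.q = dq, eps, q0

    def __call__(self, x):
        # climb one quantum at a time while x is above the next boundary
        while x >= self.q + self.dq:
            self.q += self.dq
        # drop only once x has left the hysteresis band below the level
        while x <= self.q - self.eps:
            self.q -= self.dq
        return self.q
```

Feeding it a trajectory that dithers around a boundary, e.g. x = 1 ± 0.001, a memoryless quantizer such as `int(x)` flips between 0 and 1 on every sample, while the hysteretic one keeps a constant output; this is precisely why the event count stays finite.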


1.3 Related Works and Relevance of the Results

Among the works related to this Thesis, two different groups should be distinguished. On one hand, there are works which use a similar methodology, trying to relate DEVS and differential equations. On the other hand, there are other



works –based on different methods and tools– which attempt to give a solution to similar problems.

With respect to the first group, there is not yet a large amount of work in the literature. The first ideas and definitions are due to Bernard Zeigler. After defining DEVS [69] in the seventies, the concept of Quantized Systems took more than 20 years to be formally defined [67]. In the context of this line, some interesting applications can be found in [64]. Following a similar goal –i.e. relating DEVS and ODEs– Norbert Giambiasi gave his own approach based on the event representation of trajectories and the definition of GDEVS [17], which was also applied to Bond Graph models in [49]. Although the event representation of piecewise linear trajectories in QSS2 makes use of some concepts developed there, GDEVS is based on the knowledge of the ODE solutions and therefore it cannot be used as a general simulation method. There is also some recent work by Jean–Sebastien Balduc [2] but –although the proposal is quite interesting– that research has not yet arrived at results which go much further than Zeigler's ideas.

This Thesis can be seen, in part, as a continuation of Zeigler's work in the area. Here, the main problem (illegitimacy) was discovered and solved, and the QSS–method then appears as the first general discrete event integration algorithm for differential equations. However, as already mentioned, solving the illegitimacy problem was just the beginning: the work was then extended to a wide variety of theoretical and practical fields.

Although the problem of real–time DEVS has been considered for about ten years [65], there are no precedents of its use in continuous control. There was, however, an idea about using quantization to approximate a linear continuous controller with a finite state automaton [45]. But since Finite Automata are not as general as DEVS, the resulting models are non–deterministic unless a very sophisticated quantization is used.

When it comes to the second group –i.e. the related works which address the same problems with different tools– there is all the literature on numerical integration and digital control. However, the problems for which this work tries to offer better solutions are in fact much more bounded. It is impossible anyway to mention and to know everything that is being done to solve these problems. Thus, the works mentioned here are just the ones most closely related to the main results of this Thesis.

One of the most remarkable features of the QSS and QSS2 methods is the way in which they exploit sparsity. Many efforts are being made in the field in order to take advantage of sparsity. One of the most efficient simulation tools for numerical ODE integration is Matlab, whose algorithms are provided with special routines which try to make use of the structure in each matrix multiplication and inversion [59]. However, in the QSS and QSS2 cases the sparsity exploitation is just due to the intrinsic behavior of the methodology; it is not necessary to use any special routine. Moreover, when a part of the system does not change (here each integrator acts independently), it does not expend computation time

and it does not cause any calculation in the rest of the simulation model.


The global error bounds of different methods are usually studied to prove their convergence when the step size goes to zero [18]. Besides, variable step methods are usually conceived to keep this error bounded according to the desired accuracy. In the new methods –which are not provided with any kind of adaptive rule– the global error bound in LTI systems can be calculated by a closed formula which holds for any time and for any input trajectory.

Another area in which QSS and QSS2 show good performance is the simulation of Hybrid Systems. These cases have always been a problem for classic discrete time methods, where one of the most difficult issues is event detection. There is an important number of recent publications aimed at finding efficient solutions to this problem [53, 62, 58, 13]. QSS and QSS2 have clear advantages in these cases. On one hand, the system trajectories are exactly known during the intersample times; moreover, they are piecewise linear or parabolic. Thus, finding the exact time at which a discontinuity occurs is a trivial problem. On the other hand, the methods are asynchronous and they accept events at any time. Thus, the implementation is very simple and nothing needs to be modified in order to take the events into account.

When it comes to control applications, QSC is defined as an asynchronous digital control scheme based on quantization, and one of its most important qualities is that it takes into account the quantization effects of the converters at design time. The quantization effects in sampled data control systems were studied during several years by Anthony Michel's group. They obtained results on LTI systems [48, 47, 14], concluding about ultimate bounds and errors. Some work was also done with nonlinear plants [20] and with multirate controllers [21]. Some works also attempt to deal with the quantization at the design stage with different goals. In [11], the problem of stabilizing a discrete time linear system taking into account the quantization in the state measurement is studied. The idea is extended to continuous systems in [5]. Finally, there are some results which use quantization to reduce the amount of information transmitted between sensors, controllers and actuators [12].

In all these problems, QSC also offers new solutions. When it comes to quantization effects, their bound is given by a very simple closed formula in LTI systems, while in nonlinear cases the bound can be established by a Lyapunov analysis. All these concepts can be taken into account for design purposes. In the case of information reduction, the advantages are remarkable: QSC can work transmitting only a single bit at each sampling.




1.4 Supporting Publications

Most of the results included in this Thesis were already published in journals and conference proceedings, while the rest are still in press or under review.

The first results were discovering the illegitimacy of Quantized Systems, solving it with the addition of hysteresis and the definition of QSS, the deduction of the trajectory forms, the construction of the DEVS model and the proof of the general stability properties. These results were first published in a local conference [27] and then in an international journal [39], where the convergence property was also included (Sections 2.6 to 4.3 of the Thesis).

The second step was the extension of the results to Bond Graph models and the definition of Quantized Bond Graphs, whose first version was presented in [26] and then extended for publication in an international conference [40] (Sections 6.4 and 6.5). The comparison between the QS approach and the QSS–method and the rules for the hysteresis choice were included in an international conference paper [41] (Section 4.7). After that, the error bound properties of QSS in LTI systems (Section 4.6) were published in the proceedings of a local meeting [28].

Simultaneously, the first results on QSC with Time Invariant plants, including the study of stability, design algorithm, convergence and practical remarks, were presented as a two-part paper [29, 30] in a local conference (Sections 7.1–7.5, 7.8, and 8.4–8.5). These results are also published in an international journal [37].

The following step was the definition of the second order method QSS2 and the study of its properties (Chapter 5). The results were published in a journal paper [31], where the error bound analysis in LTI systems was also included (Section 4.6). The non-conservative estimation of ultimate bounds in LTI perturbed systems and its comparison with the classic Lyapunov analysis (Sections 4.4–4.5) was presented in a local control conference [33] and then submitted to a journal (Automatica) as a technical note (it is still under review). The application of the previous result to QSC in LTI systems (Sections 8.1 and 8.3) was also published in the local control conference [35].

A journal paper with the application of QSS and QSS2 to DAEs (Sections 6.1–6.2) was accepted in Simulation [34]. The extension of those methods to Hybrid Systems (Section 6.3) was sent to a journal as a full paper [32], but it is still in the review process. In the same situation is the paper [36], which extends QSC to Time Varying plants (Sections 7.6–7.7) and studies its properties in LTI systems (Sections 8.1 and 8.3). Finally, all the results concerning simulation (Chapters 2 to 6) are included in a coauthored textbook [7] which is still in preparation.

Chapter 2

Quantization and DEVS

The literature on the numerical integration of Ordinary Differential Equations –see for instance [55, 19, 18]– shows a wide variety of techniques. The methods can be explicit, implicit or linearly implicit (according to the next–step formula), fixed or variable step, fixed or variable order, one step or multistep, etc. In spite of their differences, all those methods have something in common: they discretize the time. In other words, the resulting simulation model (i.e. the system implemented in the computer program) is always a Discrete Time System. Here, the name Discrete Time System refers to a system which changes in a synchronous way only at some given instants of time.

The problems carried by this kind of behavior in the simulation of Continuous Systems are related to the loss of control over the simulation between successive discrete instants. Thus, the error can grow to undesired values and, in some cases, it can produce instability. It is also possible to have input changes, and even structure changes, at instants of time which do not correspond to those discrete instants. It is known that the use of methods with step control and implicit formulas sometimes allows handling these problems. However, all these solutions imply using algorithms whose implementation is not straightforward –unless we have tools like Dymola or other commercial software– and they are completely useless in some contexts (in Real–Time Simulation, for instance).

Taking these facts into account, it is natural to try to avoid the time discretization. However, the simulation model has to be implemented in a digital device. Then, it is clear that some discretization is necessary, since only a finite number of changes in the model can be computed in each finite interval of time. Thus, at first glance, avoiding the time discretization seems to be impossible. In spite of this remark, there are other variables which can be discretized, as will be shown in this chapter.




An Introductory Example
Consider the first order system¹:

ẋ(t) = −x(t) + 10·ε(t − 1.76)   (2.1)

with the initial condition x(t0 = 0) = 10. An attempt to simulate it using Euler or any other classic method with a step size h = 0.1 –which is appropriate according to the system speed– falls into the case where the input changes at an instant of time which does not coincide with the discrete time. Let us see now what happens with the following Continuous Time System:

ẋ(t) = −floor(x(t)) + 10·ε(t − 1.76)

or

ẋ(t) = −q(t) + 10·ε(t − 1.76)   (2.2)

where q(t) ≜ floor(x(t)). Although the system defined by Eq. (2.2) is nonlinear and does not satisfy the properties usually assumed in ODE integration (Lipschitz conditions, continuity, etc.), it can be easily solved. When 0 < t < 1/9 we have q(t) = 9 and ẋ(t) = −9. During this interval, x(t) goes from 10 to 9 with a constant slope (−9). Then, during the interval 1/9 < t < 1/9 + 1/8 we have q(t) = 8 and ẋ(t) = −8. Now, x(t) goes from 9 to 8 (also with constant slope). This analysis continues in the same way and at time t = 1.329 it results that x(t) = 3. If the input did not change, at t = 1.829 we would have x(t = 1.829) = 2. However, at time t = 1.76 (when x = 2.138) the input changes and we now have ẋ(t) = 8. The derivative changes again when x(t) = 3, i.e. at time t = 1.8678 (this time can be calculated as 1.76 + (3 − 2.138)/8). The calculations continue until x(t) = q(t) = 10; at that moment the derivative ẋ(t) becomes zero and the system does not change any more. Figure 2.1 shows the trajectories of x(t) and q(t). This strange simulation was completed in only 17 steps and –ignoring the round-off problems– the exact solution of (2.2) was obtained. This solution and the solution of the original system (2.1) are compared in Figure 2.2. The solutions of the true system and the modified one are clearly similar. Apparently, if the variable x is replaced by floor(x) on the right hand side of a first order differential equation, a method to simulate it is obtained.

This idea could be generalized to a system of order n by replacing all the state variables by their floor values on the right hand side of the equations. However, it is necessary to explore the discrete nature of system (2.2) first and to introduce some tools for its representation and simulation.

¹Here, ε(t) stands for the unit step.
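The event-by-event reasoning above can be turned into a short program. The following is a minimal sketch (the function name and the boundary-handling rule are my own, not part of the thesis text) which solves (2.2) exactly by jumping from one change of q(t) = floor(x(t)) to the next:

```python
import math

def quantized_sim(x0=10.0, t_end=10.0, t_step=1.76, u_step=10.0):
    """Exact event-driven solution of x'(t) = -floor(x(t)) + u(t),
    where u(t) is a step of height u_step applied at t = t_step."""
    t, x, u = 0.0, x0, 0.0
    events = [(t, x)]                       # logged (time, state) points
    while t < t_end:
        q = math.floor(x)
        dx = -q + u
        if x == q and dx < 0:               # leaving an integer level downwards
            q -= 1
            dx = -q + u
        if dx > 0:                          # next crossing: x reaches q + 1
            t_cross = t + (q + 1 - x) / dx
        elif dx < 0:                        # next crossing: x reaches q
            t_cross = t + (x - q) / (-dx)
        else:                               # equilibrium: no more state events
            t_cross = math.inf
        t_input = t_step if t < t_step else math.inf
        t_next = min(t_cross, t_input)
        if t_next == math.inf or t_next > t_end:
            break
        if t_next == t_cross:               # snap exactly onto the level
            x = (q + 1) if dx > 0 else q
        else:
            x += dx * (t_next - t)
        t = t_next
        if t_next == t_input:
            u = u_step
        events.append((t, x))
    return events
```

Running `quantized_sim()` logs the initial state plus 16 transitions –which matches the 17 steps mentioned above if the initial point is counted– and the last event leaves x = q = 10 near t ≈ 4.46, where the derivative vanishes.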

Figure 2.1: Trajectories in system (2.2)

Figure 2.2: State trajectories in the systems (2.1) and (2.2)


Discrete Event Systems and DEVS

The simulation of a Differential Equation using any previously existing method can be expressed by a difference equation of the form:

x(t_{k+1}) = f(x(t_k), t_k)   (2.3)

where the difference t_{k+1} − t_k can be either constant or variable, and the function f can be explicit or implicit. As a consequence, the simulation program has an



iterative code which advances the time according to the next step size. Thus, it is said that those simulation methods produce Discrete Time simulation models. System (2.2) can itself be seen as a simulation model, because it can be exactly simulated in only 17 steps. However, it does not fit the form of Eq. (2.3). The problem here is the asynchronous way in which it deals with the input change at t = 1.76. Evidently, there is a system here which is discrete in some way, but it belongs to a different category than Discrete Time. As will be seen, our strange system can be represented by a Discrete Event System. The term 'Discrete Events' is usually associated with very popular formalisms like State Automata, Petri Nets, Event Graphs, Statecharts, etc. Unfortunately, none of them can represent this kind of system in a general situation. Those graphical languages are limited to systems with a finite number of possible states, while in this case a more general tool is required. Such a general formalism exists and it is known as DEVS (Discrete EVent System specification). The DEVS formalism [69, 66] was developed by Bernard Zeigler in the mid-seventies. The use of DEVS in relation to continuous systems is not yet very common and it is almost unknown in the numerical methods and control communities. However, DEVS is widely used in computer science, where it has received a very important theoretical and practical development. DEVS allows representing all the systems whose input/output behavior can be described by sequences of events, with the condition that the state has a finite number of changes in any finite interval of time. An event is the representation of an instantaneous change in some part of a system. It can be characterized by a value and an occurrence time. The value can be a number, a vector, a word, or in general, an element of a given set.
The trajectory defined by a sequence of events adopts the value φ (or No Event) for all time values except at the instants in which there are events. At those instants, the trajectory takes the value corresponding to the event. Figure 2.3 shows an event trajectory which takes the value x1 at time t1, then the value x2 at time t2, etc. A DEVS model processes an input event trajectory and, according to that trajectory and its own initial conditions, provokes an output event trajectory. This Input/Output behavior is represented in Figure 2.4. The behavior of a DEVS model is expressed in a way which is very common in automata theory. A DEVS atomic model is defined by the following structure:

M = (X, Y, S, δint, δext, λ, ta)

where:

- X is the set of input event values, i.e., the set of all possible values that an input event can adopt.
- Y is the set of output event values.



Figure 2.3: An event trajectory

Figure 2.4: Input/Output behavior of a DEVS model

- S is the set of state values.
- δint, δext, λ and ta are functions which define the system dynamics.

Each possible state s (s ∈ S) has an associated Time Advance calculated by the Time Advance Function ta(s) (ta : S → ℝ₀⁺). The Time Advance is a non-negative real number saying how long the system remains in a given state in the absence of input events. Thus, if the state adopts the value s1 at time t1, after ta(s1) units of time (i.e. at time ta(s1) + t1) the system performs an internal transition, going to a new state s2. The new state is calculated as s2 = δint(s1). The function δint (δint : S → S) is called Internal Transition Function. When the state goes from s1 to s2 an output event is produced with value y1 = λ(s1). The function λ (λ : S → Y) is called Output Function. The functions ta, δint and λ define the autonomous behavior of a DEVS model. When an input event arrives, the state changes instantaneously. The new state value depends not only on the input event value but also on the previous state value and the elapsed time since the last transition. If the system goes to the state s3 at time t3 and then an input event arrives at time t3 + e with value x1, the new state is calculated as s4 = δext(s3, e, x1) (note that ta(s3) > e). In this case, it is said that the system performs an external transition. The function δext (δext : S × ℝ₀⁺ × X → S) is called External Transition Function.



No output event is produced during an external transition.

Example 2.1. DEVS model of a static scalar function. Consider a system which receives a piecewise constant trajectory u(t) represented by a sequence of events with the consecutive values. The system output is another sequence of events which represents the function y(t) = f(u(t)), where f(u) is a known real-valued function. A possible DEVS model corresponding to this behavior is given by the following structure: M1 = (X, Y, S, δint, δext, λ, ta), where

X = Y = ℝ
S = ℝ × ℝ₀⁺
δint(s) = δint(u, σ) = (u, ∞)
δext(s, e, x) = δext((u, σ), e, x) = (x, 0)
λ(s) = λ(u, σ) = f(u)
ta(s) = ta(u, σ) = σ

Note that the state is composed of two real numbers. The first one (u) contains the last input value and the second one (σ) holds the time advance. In most DEVS models that variable σ, equal to the time advance, is put as part of the state; in that way, the modeling task becomes easier. This DEVS model has a static behavior, since it only does something when an event arrives. It was mentioned that no output event is produced during an external transition. However, in this example a trick was used to produce the output event: the time advance is set to zero when an event arrives. Then, the internal transition takes place immediately and the output event is produced.
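As a sketch (not part of the thesis; the class and method names are my own), M1 translates almost literally into an object exposing the four DEVS functions:

```python
import math

class StaticFunctionDEVS:
    """Sketch of the atomic model M1 (Example 2.1): y(t) = f(u(t)) for a
    piecewise constant input represented by events. State s = (u, sigma)."""
    def __init__(self, f, u0=0.0):
        self.f = f
        self.u = u0               # u0 is an assumed initial input value
        self.sigma = math.inf     # passive: wait forever for an input event

    def ta(self):                 # time advance function
        return self.sigma

    def delta_int(self):          # internal transition: passivate again
        self.sigma = math.inf

    def delta_ext(self, e, x):    # external transition: store the input
        self.u = x
        self.sigma = 0.0          # schedule an immediate internal transition

    def lam(self):                # output function
        return self.f(self.u)
```

The σ = 0 "trick" of the example is visible in `delta_ext`: the output is emitted by the immediate internal transition that follows the input event.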


Coupled DEVS models

As it was mentioned, DEVS is a very general formalism and it can describe very complex systems. However, the representation of a complex system based only on the transition and time advance functions is too difficult, because in those functions all the possible situations in the system must be imagined and described. Fortunately, complex systems can usually be thought of as the coupling of simpler ones. Through the coupling, the output events of some subsystems are converted into input events of other subsystems. The theory guarantees that the coupling of DEVS models defines a new DEVS model (i.e. DEVS is closed under coupling), so complex systems can be represented by DEVS in a hierarchical way [66]. There are basically two different ways of coupling DEVS models. The first one is the most general and uses translation functions between subsystems. The second one is based on the use of input and output ports. This last way is the



one which will be used in this Thesis, since it is simpler and more adequate for the simulation of continuous systems. The use of ports requires adding to the input and output events a new number, word or symbol representing the port through which the event arrives. The following example –which is a modification of Example 2.1– illustrates this idea.

Example 2.2. DEVS model of a static function. Consider the system of Example 2.1, but suppose that it receives n piecewise constant inputs u1(t), ..., un(t) and the output y(t) is now calculated as y = f(u1, ..., un). Then, a DEVS atomic model with ports which can represent this behavior is given by the following structure: M2 = (X, Y, S, δint, δext, λ, ta), where

X = Y = ℝ × ℕ
S = ℝⁿ × ℝ₀⁺
δint(s) = δint(u1, ..., un, σ) = (u1, ..., un, ∞)
δext(s, e, x) = δext((u1, ..., un, σ), e, (xv, p)) = (ũ1, ..., ũn, 0)
λ(s) = λ(u1, ..., un, σ) = (f(u1, ..., un), 1)
ta(s) = ta(u1, ..., un, σ) = σ

where ũi = xv if i = p, and ũi = ui otherwise.
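In code, the port mechanism only changes how the external transition stores the input. The sketch below (my own names; initial inputs are assumed to be zero) mirrors M2, with events represented as (value, port) pairs:

```python
import math

class StaticFunctionPortsDEVS:
    """Sketch of M2 (Example 2.2): static function with n input ports.
    State s = (u_1, ..., u_n, sigma); events are (value, port) pairs."""
    def __init__(self, f, n):
        self.f = f
        self.u = [0.0] * n          # assumed initial input values
        self.sigma = math.inf

    def ta(self):
        return self.sigma

    def delta_int(self):
        self.sigma = math.inf

    def delta_ext(self, e, event):
        xv, p = event               # ports are numbered from 1
        self.u[p - 1] = xv          # update only the input of port p
        self.sigma = 0.0            # emit the new output immediately

    def lam(self):
        return (self.f(*self.u), 1) # output always on port 1
```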

As can be seen, in this last example the input and output events contain a natural number which indicates the corresponding port. The coupling between different systems can then be described by enumerating the connections. An internal connection involves an input and an output port corresponding to different models. In the context of hierarchical coupling, there are also connections from the output ports of the subsystems to the output ports of the network –which are called external output connections– and connections from the input ports of the network to the input ports of the subsystems (external input connections). Figure 2.5 shows a coupled DEVS model N which is the result of coupling the models Ma and Mb. There, the output port 2 of Ma is connected to the input port 1 of Mb. This connection can be represented by [(Ma, 2), (Mb, 1)]. Other connections are [(Mb, 1), (Ma, 1)], [(N, 1), (Ma, 1)], [(Mb, 1), (N, 2)], etc. According to the closure property, the model N can also be used as an atomic DEVS and it can be coupled with other atomic or coupled models. The DEVS theory uses a formal structure to represent coupled DEVS models with ports. The structure includes the subsystems, the connections, the network input and output sets, and a tie-breaking function to manage the presence of simultaneous events. The connections are divided into three sets: one set composed of the connections between subsystems (internal connections), another



Figure 2.5: Coupled DEVS model

set which contains the connections from the network to the subsystems (external input connections), and a last one with the connections from the subsystems to the network (external output connections). The tie-breaking function can be avoided with the use of Parallel-DEVS, an extension of the DEVS formalism that allows dealing with simultaneous events. However, these last concepts –the coupled DEVS formal structure and the Parallel-DEVS formalism– will not be developed here, since they are not necessary to use DEVS as a tool for continuous system simulation. In any case, the complete theory of DEVS can be found in the second edition of Zeigler's book [66].


Simulation of DEVS models

One of the most important features of DEVS is that very complex models can be simulated in a very easy and efficient way. In recent years, several software tools have been developed for the simulation of DEVS models. Some of those tools offer libraries, graphical interfaces and different facilities for the user. There are several free software packages for DEVS simulation; among the most popular are DEVS-Java [68] and DEVSim++ [25]. A software tool conceived in the context of this work should also be mentioned here, which not only constitutes a general purpose DEVS simulation environment but also implements the main ideas that will be developed in this Thesis. This tool is called PowerDEVS [51] and it was developed by Esteban Pagliero and Marcelo Lapadula as a Diploma Work at the FCEIA, UNR. Besides these tools, DEVS models can be simulated with a simple ad-hoc program written in any language. In fact, the simulation of a DEVS model is not much more complicated than the simulation of a Discrete Time Model. The problem is that there are models composed of many subsystems and

the ad-hoc programming may become a very hard task. The basic idea for the simulation of a coupled DEVS model can be described by the following steps:

1. Look for the atomic model that, according to its time advance and elapsed time, is the next one to perform an internal transition. Call it d* and let tn be the time of that transition.

2. Advance the simulation time t to t = tn and execute the internal transition function of d*.

3. Propagate the output event produced by d* to all the atomic models connected to it, executing the corresponding external transition functions. Then go back to step 1.

One of the simplest ways to implement these steps is writing a program with a hierarchical structure equivalent to the hierarchical structure of the model to be simulated. This is the method developed in [66], where a routine called DEVS-simulator is associated to each atomic DEVS model and a different routine called DEVS-coordinator is related to each coupled DEVS model. At the top of the hierarchy there is a routine called DEVS-root-coordinator which manages the global simulation time. Figure 2.6 illustrates this idea over a coupled DEVS model.

Figure 2.6: Hierarchical model and simulation scheme

The simulators and coordinators of consecutive layers communicate with each other through messages. The coordinators send messages to their children so they execute the transition functions. When a simulator executes a transition, it calculates its next state and –when the transition is internal– it sends the output value to its parent coordinator. In all cases, the simulator state coincides with the state of its associated atomic DEVS model. When a coordinator executes a transition, it sends messages to some of its children so they execute their corresponding transition functions. When an output event produced by one of its children has to be propagated outside the coupled model, the coordinator sends a message to its own parent coordinator carrying the output value.



Each simulator or coordinator has a local variable tn which indicates the time when its next internal transition will occur. In the simulators, that variable is calculated using the time advance function of the corresponding atomic model. In the coordinators, it is calculated as the minimum tn of their children. Thus, the tn of the coordinator at the top is the time at which the next event of the entire system will occur. The root coordinator then only looks at this time, advances the global time t to this value and sends a message to its child so it performs the next transition (and it repeats this cycle until the end of the simulation). The details and the pseudo-codes associated to the simulators, coordinators and root coordinators are included in Appendix B. One of the most interesting properties shown by this kind of simulation is the independence between the simulators associated to different atomic models. In that way, when an atomic model has its time advance set to a big value (or infinity), it does not affect the rest of the models at all and the simulation does not spend any computation on it. It will be seen that this fact results in an important advantage for the simulation of sparse systems. There are many other possibilities to implement a simulation of DEVS models. The main problem with the methodology described is that, due to the hierarchical structure, important message traffic between the higher layers and the lower ones can appear. All these messages and their corresponding computational time can be avoided with the use of a flat simulation structure. The way of transforming a hierarchical simulation into a flat one is rather simple in DEVS [24]. In fact, most of the software tools mentioned above implement the simulation based on flat code.
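A minimal flat (non-hierarchical) simulation loop can be sketched as follows. This is my own illustration, not the thesis pseudo-code: it assumes each atomic model exposes `ta()`, `delta_int()`, `delta_ext(e, x)` and `lam()` methods, that events are (value, port) pairs, and that the coupling is given as a table from output ports to input ports:

```python
import math

def simulate_flat(atomics, connections, t_end):
    """Minimal flat DEVS simulation loop (steps 1-3 of the basic idea).
    atomics:     dict  name -> model with ta(), delta_int(), delta_ext(e, x)
                 and lam() methods
    connections: dict  (src_name, out_port) -> list of (dst_name, in_port)"""
    t = 0.0
    last = {name: 0.0 for name in atomics}       # time of last transition
    while True:
        # step 1: find the imminent model d* and its next event time tn
        tn, d_star = min((last[n] + atomics[n].ta(), n) for n in atomics)
        if tn == math.inf or tn > t_end:
            break
        # step 2: advance the clock and execute the internal transition of d*
        t = tn
        value, port = atomics[d_star].lam()      # output computed before delta_int
        atomics[d_star].delta_int()
        last[d_star] = t
        # step 3: propagate the output event to the influenced models
        for dst, in_port in connections.get((d_star, port), []):
            atomics[dst].delta_ext(t - last[dst], (value, in_port))
            last[dst] = t
    return t
```

Note that a passive model (ta = ∞) never appears as the minimum, so –as remarked above– it costs nothing while it has no scheduled transition.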


Quantized Systems and DEVS

In Examples 2.1 and 2.2 it was shown that piecewise constant trajectories can be represented by sequences of events. This simple idea constitutes the basis for the use of DEVS in the simulation of continuous systems. In those examples, it was also shown that a DEVS model can represent the behavior of a static function with piecewise constant input trajectories. The only problem is that most continuous system trajectories are not piecewise constant. However, the system can be modified in order to have that kind of trajectories. In fact, that is what was done to System (2.1) using the function floor to convert it into System (2.2). Coming back to that example, System (2.2) can be divided in the following way:

ẋ(t) = dx(t)   (2.4a)
q(t) = floor(x(t))   (2.4b)

and

dx(t) = −q(t) + u(t)   (2.5)

where u(t) = 10·ε(t − 1.76). The system can then be represented using the Block Diagram of Figure 2.7.


Figure 2.7: Block Diagram Representation of (2.4)-(2.5)

As we mentioned before, Subsystem (2.5) –which is modeled by a static function– can be represented by the DEVS model M2 of Example 2.2 (page 15). Subsystem (2.4) is a dynamic equation with a piecewise constant input trajectory dx(t) and a piecewise constant output trajectory q(t). It can be exactly represented by the DEVS model that follows:

M3 = (X, Y, S, δint, δext, λ, ta), where

X = Y = ℝ × ℕ
S = ℝ² × ℤ × ℝ₀⁺
δint(s) = δint(x, dx, q, σ) = (x + σ·dx, dx, q + sgn(dx), 1/|dx|)
δext(s, e, x) = δext((x, dx, q, σ), e, (xv, p)) = (x + e·dx, xv, q, σ̃)
λ(s) = λ(x, dx, q, σ) = (q + sgn(dx), 1)
ta(s) = ta(x, dx, q, σ) = σ

where

σ̃ = (q + 1 − x)/xv   if xv > 0
     (q − x)/xv       if xv < 0
     ∞                otherwise
Subsystem (2.4) –which corresponds to the integrator with the stairway block in the Block Diagram of Figure 2.7– is what Zeigler called a Quantized Integrator [67, 66]. There, the function floor acts as a quantization function. A quantization function maps all real numbers into a discrete set of real values. A quantizer is a system that relates its input and output by a quantization function. Then, the stairway block is a particular case of a quantizer. Although the DEVS model M3 represents a particular Quantized Integrator, the DEVS model corresponding to a general one –i.e. with a general quantizer– is not very different. The complete system (2.2) was called a Quantized System and it can be exactly represented by the DEVS model resulting from the coupling of the atomic models M2 and M3. This was the idea sketched by Zeigler to approximate and simulate continuous systems using DEVS. As it was seen, a DEVS model representation of general Quantized Integrators can be obtained. This idea works when their input trajectories are piecewise constant. It is also clear that the DEVS model of any static function (with the same condition on the input trajectories) can also be built. Taking into account that DEVS is closed under coupling, it is natural to think that a coupled DEVS model representing a general Quantized System can be easily obtained. In order to do so, a general time invariant system should be considered:

ẋ1 = f1(x1, x2, ..., xn, u1, ..., um)
ẋ2 = f2(x1, x2, ..., xn, u1, ..., um)
...
ẋn = fn(x1, x2, ..., xn, u1, ..., um)   (2.6)


This last equation can be transformed into:

ẋ1 = f1(q1, q2, ..., qn, u1, ..., um)
ẋ2 = f2(q1, q2, ..., qn, u1, ..., um)
...
ẋn = fn(q1, q2, ..., qn, u1, ..., um)   (2.7)


where each qi(t) is related to xi(t) by some quantization function. Considering that the input functions uj(t) are piecewise constant, each term on the right hand side of (2.7) can only adopt values in a finite set. The variables qi are called quantized variables. This system can be represented by the block diagram of Figure 2.8, where q and u are the vectors formed by the quantized and input variables respectively. Each subsystem in Figure 2.8 can be exactly represented by a DEVS model, since each is composed of a static function and a quantized integrator. These DEVS models can be coupled and, according to the closure under coupling property, the complete system defines a DEVS model. Thus, when a system is modified with the addition of quantizers at the output of the integrators, it can be exactly simulated by a DEVS model. This idea is the formalization of the first approximation to a discrete event based method for continuous system simulation. With this method –ignoring the round-off errors– System (2.7) can be exactly simulated. Taking into account that this system seems to be an approximation to the original System (2.6), it could be thought that a numerical method which avoids the time discretization was finally obtained.




Figure 2.8: Block Diagram Representation of (2.7)

However, as it will be seen in the next section, this idea does not work in general cases.


Illegitimacy of Quantized Systems

A DEVS model is said to be legitimate when it cannot perform an infinite number of transitions in a finite interval of time [66]. Legitimacy is the property which ensures that a DEVS model can be simulated. Otherwise –when infinite transitions can occur in a finite interval– the system can only be simulated until that condition is reached. In that case, it is said that the system is illegitimate. DEVS theory distinguishes two cases of illegitimacy. The first one is when there are infinite transitions at the same instant of time (i.e. a loop between different states with the time advance equal to zero). This kind of illegitimacy is also common in other discrete event formalisms (timed event graphs, for instance). The second case occurs when the system goes through an infinite sequence of states in which the time advance decreases. In that case, if the sum of the series defined by the time advance values converges to a finite number, there is also an infinite number of events in a finite interval of time. Those cases are also called Zeno systems in reference to Zeno's paradox of Achilles and the Tortoise. It can be easily checked that the atomic DEVS models M2 and M3 are legitimate. Unfortunately, legitimacy is a property which is not closed under coupling. As a result, the coupling of legitimate models might result in an illegitimate coupled model.



This fact opens the possibility that a DEVS model like the one shown in Fig. 2.8 turns out to be illegitimate. In fact, this is what happens in most cases. However, the illegitimacy of Quantized Systems is not a problem of DEVS. It is related to the solutions of (2.7). There, the trajectories qi(t) are not necessarily piecewise constant. Sometimes, they can have an infinite number of changes in a finite interval of time, which produces an infinite number of events in the corresponding DEVS model. This problem can be observed just by taking u(t) = 10.5·ε(t − 1.76) in System (2.5)–(2.4). The trajectories until t = 1.76 are exactly the same as those shown in Figure 2.1. When the step is applied, the trajectory starts growing a bit faster and, when x(t) = q(t) = 10, the state trajectory continues growing with a slope ẋ(t) = 0.5. Then, after 2 units of time we obtain x(t) = q(t) = 11, but immediately the slope becomes negative (ẋ(t) = −0.5). Thus, x(t) starts falling, q(t) goes back to 10, the derivative becomes positive again and we obtain a cyclic behavior. The problem is that the cycle has a period equal to zero, so there are infinite changes in q(t) at the same instant of time. This anomalous behavior can also be observed in the resulting DEVS model. When the DEVS model corresponding to the integrator performs an internal transition, it also produces an output event which represents the change in q(t). This event is propagated through the internal feedback –see Figure 2.7– and it produces a new external transition in the integrator which changes the time advance to zero. Thus, the integrator performs another internal transition and the cycle continues forever. This case belongs to the first kind of illegitimacy mentioned above. Here, the system can only be simulated until the condition x(t) = 10 is reached. It could be conjectured that illegitimacy only occurs when the system approaches the equilibrium point.
If that were the case, the illegitimacy condition could be detected by finishing the simulation at that moment. However, that conjecture is only true in first order systems. In higher order systems this kind of behavior can also be observed far away from the equilibrium points. Moreover, Zeno-like illegitimacy can also occur, as shown in the following counter-example:

Example 2.3. Achilles and the Tortoise. Consider the second order system:

ẋ1 = −0.5·x1 + 1.5·x2
ẋ2 = −x1   (2.8)

Let us apply the quantization function:

qi = 2·floor((xi − 1)/2) + 1   (2.9)

This quantization (see Figure 2.9) over both variables divides the state space as Fig. 2.10 shows. Then, the resulting Quantized System is:

ẋ1 = −0.5·q1 + 1.5·q2
ẋ2 = −q1   (2.10)




Figure 2.9: An odd quantization function






Figure 2.10: Partition of the state space

Now, let us analyze the solution of (2.10) from the initial condition x1 = 0, x2 = 2. The derivative of the state vector of (2.10) at a point is given by the right hand side of that equation, using (2.9). This is equal to the derivative of the continuous system (2.8) evaluated at the bottom left corner of the square containing that point. For instance, at the point (0, 2) the derivative is given by the right hand side of (2.8) at the point (−1, 1), that is, (2, 1). Then, if the initial condition is (0, 2), the trajectory will follow the direction (2, 1) until it reaches the next square (here we have a transition, since there is a change in the quantized variable q1 corresponding to x1). The transition will occur at time t = 0.5 (the speed in x1 is 2 and the distance to the square boundary is 1). The point at which the trajectory reaches the new square is (1, 2.5). After this transition, the derivative is calculated at the point (1, 1). The direction is now (1, −1). After 1.5 units of time the system will reach the point (2.5, 1), arriving at a new square. The new direction is (−2, −1) (calculated at the point (1, −1)) and after 0.75 units of time the system will reach the point (1, 0.25), on the boundary of a new square. Then, the direction is (−1, 1) and after 0.75 units of time the system reaches the initial square at the point (0.25, 1). Then, after 0.375 units of time the system goes back to the second square, arriving at the point (1, 1.375). The elapsed time from the first time the system reaches the second square to the second arrival to that square is 3.375. It can then be easily seen that the system will follow a similar cycle again, but starting from the new initial condition (1, 1.375), and it will take 3.375/4 = 0.84375 units of time. Each cycle is completed four times faster than the previous one. Then, the sum of all the cycle times converges to 4.5 units of time. Since the first transition occurs at time 0.5, before 5 units of time the system performs an infinite number of transitions. Figure 2.11 shows that trajectory in the state space, while Fig. 2.12 shows the temporal evolution of the quantized variable q1.













Figure 2.11: State space trajectory with infinite transitions

As a result of this behavior, the simulation will get stuck after 5 units of time. This last example not only exhibits illegitimacy, but it also shows that sometimes the illegitimacy condition cannot be easily detected. It is difficult –if not impossible– to write a routine which recognizes this case as illegitimate, since











Figure 2.12: Quantized variable trajectory with infinite transitions

the transitions do not occur at the same time. Unfortunately, illegitimate Quantized Systems are very common. As it was already mentioned, in most continuous systems the use of this kind of quantization yields illegitimate DEVS models. Consequently, the approach introduced cannot constitute a simulation method, since it does not work in most cases.
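The Zeno cycle of Example 2.3 can be reproduced numerically. The sketch below is my own code (it exploits that, between events, each qi stays constant while xi remains in the interval (qi, qi + 2)); it advances the quantized system (2.10) from boundary crossing to boundary crossing and logs the event times, which accumulate towards t = 5:

```python
import math

def f(q1, q2):
    # right-hand side of (2.10), evaluated at the quantized state
    return (-0.5 * q1 + 1.5 * q2, -q1)

def simulate_example23(x, t_end=6.0, max_steps=100):
    """Event-by-event solution of the quantized system (2.10).
    An event occurs when some x_i reaches q_i (moving down) or
    q_i + 2 (moving up)."""
    x = list(x)
    q = [2 * math.floor((xi - 1) / 2) + 1 for xi in x]   # quantization (2.9)
    t, times = 0.0, []
    for _ in range(max_steps):
        d = f(*q)
        # time for each component to leave its current quantization region
        dts = []
        for i in range(2):
            if d[i] > 0:
                dts.append((q[i] + 2 - x[i]) / d[i])
            elif d[i] < 0:
                dts.append((x[i] - q[i]) / (-d[i]))
            else:
                dts.append(math.inf)
        dt = min(dts)
        if dt == math.inf or t + dt > t_end:
            break
        i = dts.index(dt)
        x = [xj + dj * dt for xj, dj in zip(x, d)]
        t += dt
        if d[i] > 0:
            q[i] += 2
            x[i] = q[i]            # snap onto the boundary just crossed
        else:
            q[i] -= 2
            x[i] = q[i] + 2
        times.append(t)
    return times
```

From the initial condition (0, 2) the first event times are 0.5, 2.0, 2.75, 3.5, 3.875, exactly as in the hand computation above, and after a hundred events the simulation time is still below 5: the step bound max_steps is what keeps the loop from running forever.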


DEVS and Continuous Systems Simulation

In spite of the illegitimacy problems, Zeigler's idea of discretizing state variables is very original and it yields very significant properties and qualities. In fact, it was the first attempt to produce a formal transformation of a continuous system in order to obtain a discrete event one. Another idea should also be mentioned here. Following a similar goal, Norbert Giambiasi gave his own approach based on the event representation of piecewise polynomial trajectories and the definition of GDEVS [17]. However, this solution-based approximation requires knowledge of the continuous system response to some particular input trajectories, which is not available in most cases. Because of this, and also due to the impossibility of formalizing the approach, these ideas will not be discussed here. Coming back to Zeigler's approach, the main motivation of this Thesis was the discovery of illegitimacy in Quantized Systems, and the original goal was trying to solve it. Thus, the next chapter is completely dedicated to describing the solution which was found and to developing the resulting numerical algorithm, which is the first general discrete event integration method for ordinary differential equations. After that, the theoretical properties, extensions and applications will be



introduced in the following chapters.

Chapter 3

Quantized State Systems
The previous, introductory chapter described the relationship between discrete event systems and numerical methods for ODEs. There, two counter-examples were included showing the impossibility of simulating general continuous systems by using simple quantization, because of illegitimacy. In spite of this problem, the idea of approximating a differential equation by a discrete event system is still very attractive, since it can offer many advantages with respect to a discrete time approach. For instance, due to the asynchronous behavior, DEVS models can be implemented in parallel in a very easy and efficient way. Many modern technical systems are described by models which combine discrete and continuous parts. The simulation of those Hybrid Systems requires the use of special techniques, and a discrete event approximation of the continuous part would result in a unique discrete event simulation model. In fact, this idea –combining continuous and discrete simulation in a unique method– was the motivation of Herbert Praehofer's PhD Thesis [54]. However, at the beginning of the 90s there were no discrete event approximation methods, and Praehofer's work was limited to expressing some existing discrete time integration methods in terms of DEVS. As will be seen, the use of discrete event approximations yields an important reduction of the computational costs in the simulation of hybrid systems. But the advantages of a discrete event integration method do not end there. Besides other practical advantages, it will be shown that some theoretical properties resulting from this kind of approximation are completely original in the field of numerical integration theory. This chapter introduces most of the key ideas which allowed the formulation of the first general discrete event integration method for differential equations.




Hysteretic Quantization

If the infinitely fast oscillations in System (2.4)–(2.5) are analyzed, it can be seen that they are due to the changes in q(t). An infinitesimal variation in x(t) can produce, due to the quantization, an important oscillation with an infinitely fast frequency in q(t).
A possible solution might consist in adding some delay after a change in q(t) to avoid those infinitely fast oscillations. However, adding such delays is equivalent, in some way, to introducing time discretization. During the delays the control over the simulation is lost and we come back to the problems of the discrete time algorithms.
A different solution consists in the use of hysteresis in the quantization. If a hysteretic characteristic is added to the relationship between x(t) and q(t), the oscillations in q(t) can only be produced by large oscillations in x(t), which need a minimum time interval to occur due to the continuity of the state trajectories. The existence of a minimum time interval between events is a sufficient condition for legitimacy [66]. Then, this simple idea –adding hysteresis to the quantization– is a good solution to fix the illegitimacy problem. The formalization of the idea is given by the definition of the hysteretic quantization functions.

Definition 3.1. Hysteretic Quantization Function. Let Q = {Q0, Q1, ..., Qr} be a set of real numbers where Q_{k−1} < Q_k with 1 ≤ k ≤ r. Let Ω be the set of piecewise continuous real valued trajectories and let x ∈ Ω be a continuous trajectory. Let b: Ω → Ω be a mapping and let q = b(x), where the trajectory q satisfies:

    q(t) = Qm        if t = t0
           Q_{j+1}   if x(t) = Q_{j+1} ∧ q(t−) = Qj ∧ j < r
           Q_{j−1}   if x(t) = Qj − ε ∧ q(t−) = Qj ∧ j > 0
           q(t−)     otherwise                                  (3.1)

and

    m = 0   if x(t0) < Q0
        r   if x(t0) ≥ Qr
        k   if Qk ≤ x(t0) < Q_{k+1}

Then, the map b is a hysteretic quantization function. The discrete values Qj are called quantization levels and the distance ΔQ ≜ Q_{j+1} − Qj is defined as the quantum, which is usually constant. The width of the hysteresis window is ε. The values Q0 and Qr are the lower and upper saturation values. Figure 3.1 shows a typical quantization function with uniform quantization intervals.
The use of hysteretic quantization functions instead of memoryless quantization yields the Quantized State Systems method (or QSS–method for short) for ODE integration.
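As a concrete illustration, the quantization logic of Definition 3.1 can be sketched in a few lines of Python. The class name and the uniform grid Q_k = Q0 + k·dq are illustrative assumptions, not part of the definition:

```python
# A sketch of Definition 3.1 for a uniform grid Q_k = Q0 + k*dq with
# hysteresis width eps. The class name and the uniform grid are
# illustrative assumptions, not part of the definition.
class HystereticQuantizer:
    def __init__(self, q0, dq, r, eps):
        self.levels = [q0 + k * dq for k in range(r + 1)]  # Q_0 .. Q_r
        self.eps = eps
        self.j = None            # index of the current quantization level

    def start(self, x0):
        # m = 0 if x0 < Q0, r if x0 >= Qr, else k with Qk <= x0 < Qk+1
        if x0 < self.levels[0]:
            self.j = 0
        elif x0 >= self.levels[-1]:
            self.j = len(self.levels) - 1
        else:
            self.j = max(k for k, Qk in enumerate(self.levels) if Qk <= x0)
        return self.levels[self.j]

    def update(self, x):
        # q jumps up when x reaches Q_{j+1}, but only jumps down when x
        # falls to Q_j - eps: the hysteresis window of width eps.
        if self.j < len(self.levels) - 1 and x >= self.levels[self.j + 1]:
            self.j += 1
        elif self.j > 0 and x <= self.levels[self.j] - self.eps:
            self.j -= 1
        return self.levels[self.j]
```

With a nonzero ε, a small oscillation of x inside one window no longer produces any change in q, which is exactly what rules out the infinitely fast oscillations discussed above.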



Figure 3.1: Quantization Function with Hysteresis



The QSS–method follows the idea of the generalization of Quantized Systems (Section 2.5). The only difference here is the use of hysteresis in the quantization. Then, the QSS–method can be defined as follows:

Definition 3.2. QSS–method. Given a time invariant state equation system:

    ẋ(t) = f(x(t), u(t))    (3.2)

where x ∈ ℝⁿ, u ∈ ℝᵐ and f: ℝⁿ × ℝᵐ → ℝⁿ, the QSS–method approximates it by the system:

    ẋ(t) = f(q(t), u(t))    (3.3)

where q(t) and x(t) are related componentwise by hysteretic quantization functions (i.e. each quantized variable qi(t) is related to the corresponding state variable xi(t) by a hysteretic quantization function).
The resulting system (3.3) is called a Quantized State System (QSS). Figure 3.2 shows the block diagram representation of a generic QSS.
The QSS–method requires choosing quantization functions like the one shown in Figure 3.1. After choosing one such function for each state variable, a QSS is obtained. In the next sections it will be proven that the resulting QSS is equivalent to a legitimate DEVS model (i.e. it can be exactly simulated by a legitimate DEVS).




Figure 3.2: Block Diagram Representation of a QSS

Only after that will it be possible to claim that the QSS–method approximates a Differential Equation System by a legitimate DEVS model.


Trajectories in QSS

Taking into account that a QSS is intended to be a discrete event approximation of a continuous system, its trajectories should have some particular properties. Since DEVS only processes events, each QSS trajectory should have an equivalent event trajectory. Then, QSS must have piecewise constant, piecewise linear or, in general, piecewise "something" trajectories. Otherwise, their segments could never be represented by a finite sequence of values.
The non–hysteretic Quantized System approach failed in this goal. The trajectories there were not necessarily piecewise constant or linear, as could be seen in the counter–examples of Section 2.6. However, the use of hysteresis in QSS solves those problems.
Provided that the input trajectories are piecewise constant and bounded and the function f is continuous and bounded in any bounded domain, the following properties are satisfied:
• The quantized variables have piecewise constant trajectories
• The state variable derivatives also have piecewise constant trajectories
• The state variables have continuous piecewise linear trajectories
The following theorems give the necessary conditions and prove the mentioned properties.



Theorem 3.1. Quantized Trajectories in QSS. Given the QSS defined in (3.3), with f continuous and bounded in any bounded domain and u(t) bounded and piecewise constant, the trajectories of q(t) are piecewise constant.

Proof. Let qi(t) be an arbitrary component of q. Assume that it is related to xi(t) by the hysteretic quantization function given by (3.1). It follows from (3.1) that:

    Q0 ≤ qi(t) ≤ Qr    (3.4)

If we assume that all the quantized variables are related to the corresponding state variables by similar hysteretic quantization functions, Inequality (3.4) also implies that q is bounded. From the hypothesis made about f, there exists a positive number R such that

    −R ≤ ẋi ≤ R    (3.5)

After integrating the inequality above we have

    xi(t0) − R(t − t0) ≤ xi(t) ≤ xi(t0) + R(t − t0)    (3.6)

Inequality (3.6) shows that the state variables have bounded trajectories in any finite time interval. Moreover, from (3.5) it follows that the state variables also have continuous trajectories.
Assume that at a certain time t there is a change in qi. If that change was produced because xi was increasing its value, it results that:

    xi(t) = qi(t+) = Qj    (0 < j ≤ r)

Then, it follows from (3.5) and (3.1) that:

    qi(t + Δt) ≠ qi(t+) ⇒ Δt ≥ min(Q_{j+1} − Qj, ε) / R ≜ Δt_min    (3.7)

If we assume that xi was initially falling, then we have xi(t) = Q_{j+1} − ε with qi(t+) = Qj, and we get again (3.7). This implies that the variable qi needs a time interval greater than Δt_min to change its value twice. Since this bound is constant (it does not depend on the state), it results that qi has a piecewise constant trajectory. Taking into account that the analysis was made for a generic component, we conclude that q is piecewise constant.

Theorem 3.2. State Derivative Trajectories in QSS. In a QSS verifying the hypothesis of Theorem 3.1, the trajectories of the state variable derivatives are piecewise constant.



Proof. It is straightforward from Theorem 3.1, since q(t) and u(t) are piecewise constant and f is a static function.

Theorem 3.3. State Trajectories in QSS. In a QSS verifying the hypothesis of Theorem 3.1, the trajectories of the state variables are continuous and piecewise linear.

Proof. It is straightforward from Theorem 3.2.

The definition of Δt_min in (3.7) shows the effect of the hysteresis on the QSS trajectories. When ε is zero, we cannot ensure the existence of a minimum time between changes in qi. Figure 3.3 shows typical trajectories in a QSS.

Figure 3.3: Typical trajectories in a QSS




DEVS model of a QSS

Taking into account that the components of q(t) and ẋ(t) are piecewise constant, they can be represented by sequences of events. Thus, following the procedure of Section 2.5, the QSS of Figure 3.2 can be divided into static functions and quantized integrators (now hysteretic quantized integrators). The static functions can still be represented by the DEVS model M2 in Example 2.2 (page 15), but the DEVS model M3 corresponding to the quantized integrators (page 19) must be modified to take into account the presence of hysteresis. With this modification, a hysteretic quantized integrator –which can be thought of as a system constituted by Eqs. (2.4a) and (3.1)– can be represented by the DEVS model:

M4 = (X, Y, S, δint, δext, λ, ta), where

    X = Y = ℝ × ℕ
    S = ℝ² × ℤ × ℝ+
    δint(s) = δint(x, dx, j, σ) = (x + σ·dx, dx, j + sgn(dx), σ1)
    δext(s, e, xu) = δext(x, dx, j, σ, e, xv, p) = (x + e·dx, xv, j, σ2)
    λ(s) = λ(x, dx, j, σ) = (Q_{j+sgn(dx)}, 1)
    ta(s) = ta(x, dx, j, σ) = σ

with

    σ1 = [Q_{j+2} − (x + σ·dx)] / dx            if dx > 0
         [(x + σ·dx) − (Q_{j−1} − ε)] / |dx|    if dx < 0
         ∞                                      if dx = 0

    σ2 = [Q_{j+1} − (x + e·dx)] / xv            if xv > 0
         [(x + e·dx) − (Qj − ε)] / |xv|         if xv < 0
         ∞                                      if xv = 0
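To help parse the definition, here is a Python sketch of M4's transition logic (names are illustrative). Once x and j have been updated, both σ1 and σ2 reduce to the time needed to reach Q_{j+1} going up, or Q_j − ε going down, which is what the helper `advance` computes:

```python
import math

# Sketch of the hysteretic quantized integrator M4. The state is
# s = (x, dx, j, sigma); Q is the list of quantization levels and eps
# the hysteresis width.
def advance(x, dx, j, Q, eps):
    # time to reach Q[j+1] (rising) or Q[j] - eps (falling)
    if dx > 0:
        return (Q[j + 1] - x) / dx
    if dx < 0:
        return (x - (Q[j] - eps)) / (-dx)
    return math.inf

def delta_int(s, Q, eps):
    x, dx, j, sigma = s
    x += sigma * dx                      # advance to the crossed level
    j += 1 if dx > 0 else -1             # j + sgn(dx)
    return (x, dx, j, advance(x, dx, j, Q, eps))

def delta_ext(s, e, xv, Q, eps):
    x, dx, j, sigma = s
    x += e * dx                          # advance with the old slope
    return (x, xv, j, advance(x, xv, j, Q, eps))

def lam(s, Q):
    x, dx, j, sigma = s                  # output Q_{j+sgn(dx)}; with
    return Q[j + (1 if dx > 0 else -1)]  # dx = 0, ta = inf and lam never fires

def ta(s):
    return s[3]
```

For example, an integrator started at x = 0 with slope 1 on the grid Q = [0, 1, 2, 3, 4] schedules its first internal transition one time unit ahead and outputs the level 1.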

Then, a complete QSS can be represented by a coupled DEVS model constituted by atomic models M2 and M4, coupled according to Figure 3.2. If the round–off errors are ignored, that coupled DEVS model can exactly simulate the QSS behavior.

Example 3.1. QSS simulation of a second order system. Consider the second order system:

    ẋ1(t) = x2(t)
    ẋ2(t) = 1 − x1(t) − x2(t)    (3.8)

with the initial condition



x1(0) = 0, x2(0) = 0


Using a uniform quantum Q_{k+1} − Qk = ΔQ = 0.05 and hysteresis width ε = 0.05 in both state variables, the resulting quantized state system:

    ẋ1(t) = q2(t)
    ẋ2(t) = 1 − q1(t) − q2(t)    (3.10)

can be simulated by a coupled DEVS model composed of two atomic models like M4 –corresponding to the quantized integrators– and two atomic models like M2, which calculate the static functions f1(q1, q2) = q2 and f2(q1, q2) = 1 − q1 − q2. Let us call these subsystems QI1, QI2, F1 and F2 respectively. The coupling can then be expressed by the connections: [(QI1, 1), (F1, 1)], [(QI1, 1), (F2, 1)], [(QI2, 1), (F1, 2)], [(QI2, 1), (F2, 2)], [(F2, 1), (QI2, 1)] and [(F1, 1), (QI1, 1)]. Figure 3.4 represents the coupled system.

Figure 3.4: Block Diagram Representation of (3.10)

Observe that, due to the fact that the function f1 does not depend on the variable q1, the connection [(QI1, 1), (F1, 1)] is not necessary. Moreover, taking into account that f1(q1, q2) = q2, the subsystem F1 and the connections which involve it can be replaced by a direct connection from QI2 to QI1. These simplifications can reduce considerably the computational cost of the implementation.
The simulation results are shown in Figure 3.5. The first simulation was completed with 30 internal transitions at each quantized integrator, which gives a total of 60 steps. The piecewise linear trajectories of x1(t) and x2(t), as well as the piecewise constant trajectories of q1(t) and q2(t), can be seen in that figure.




Figure 3.5: Trajectories in System (3.10)

The presence of the hysteresis can be easily observed when the slope of the state variables changes its sign. Near those points, there are different values of q for the same value of x.
The simplifications mentioned for the connections can be applied to general systems where some static functions do not depend on all the state variables. In that way, the QSS–method can exploit the structural properties of the system to reduce the computational costs. When the system is sparse, the QSS simulation is particularly efficient, since each step involves calculations at very few integrators. Discrete time algorithms can also make use of these sparsity properties. However, they require specific techniques of matrix manipulation. In the QSS–method this is just an intrinsic property.
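A compact event loop reproducing Example 3.1 can be written as follows. This is a sketch of the method's mechanics, not a full DEVS simulator, and the function and variable names are illustrative:

```python
import math

# QSS simulation of system (3.10) with quantum dq = 0.05 and
# hysteresis eps = 0.05 in both state variables.
def qss_simulate(f, x0, dq, eps, t_end):
    n = len(x0)
    x = list(x0)                                  # continuous states
    q = [dq * math.floor(xi / dq) for xi in x]    # quantized states
    dx = f(q)                                     # piecewise constant derivatives
    t, steps = 0.0, 0
    while True:
        # time for each integrator to reach its next quantization change
        sig = []
        for i in range(n):
            if dx[i] > 1e-12:
                sig.append((q[i] + dq - x[i]) / dx[i])
            elif dx[i] < -1e-12:
                sig.append((x[i] - (q[i] - eps)) / -dx[i])
            else:
                sig.append(math.inf)
        i = min(range(n), key=lambda k: sig[k])   # integrator with next event
        h = sig[i]
        if t + h > t_end:
            break
        t += h
        x = [xj + h * dxj for xj, dxj in zip(x, dx)]  # piecewise linear advance
        q[i] += dq if dx[i] > 0 else -dq              # quantizer event
        dx = f(q)                                     # recompute derivatives
        steps += 1
    return t, x, q, steps

# f1(q1, q2) = q2 and f2(q1, q2) = 1 - q1 - q2, as in (3.10)
f = lambda q: [q[1], 1.0 - q[0] - q[1]]
t, x, q, steps = qss_simulate(f, [0.0, 0.0], 0.05, 0.05, 10.0)
```

Running it advances the state with a few dozen events in total, finishing near the equilibrium (1, 0); each event only requires recomputing the derivatives that depend on the changed quantized variable, which is the source of the sparsity advantage mentioned above.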


Input signals in the QSS–method

When the QSS–method was introduced, it was mentioned that it allows the simulation of time invariant systems with piecewise constant input signals. However, it was not said how these signals can be incorporated into the simulation model.
In the DEVS simulation model, each event represents a change in a piecewise constant trajectory. Then, the input trajectories must be incorporated as sequences of events. In the block diagram of Figure 3.2, the input signals u(t) seem to come from the external world, and it is not clear where the corresponding sequences of events should be generated.
In the context of DEVS simulation, all the events must come from an atomic DEVS model. Thus, it is evident that some new DEVS models which generate those sequences of events should be created and coupled with the rest of the system in order to simulate it.
Suppose that there is a piecewise constant input signal u(t) which adopts the values v1, v2, ..., vj, ... at times t1, t2, ..., tj, ... respectively. Then, a DEVS model which produces the events corresponding to this signal –i.e. a DEVS event generator– can be built as follows:

M5 = (X, Y, S, δint, δext, λ, ta), where

    X = ∅
    Y = ℝ × ℕ
    S = ℕ × ℝ+
    δint(s) = δint(j, σ) = (j + 1, t_{j+1} − t_j)
    λ(s) = λ(j, σ) = (vj, 1)
    ta(s) = ta(j, σ) = σ

Notice that in this model the external transition function δext is not defined, since it can never be called: the system does not receive input events.
An interesting advantage of the QSS–method is that it deals with input trajectory changes in an asynchronous way. The event indicating a change in the signal is always processed at the correct instant of time, producing instantaneous changes in the slopes of the state variable trajectories which are directly affected. This is the intrinsic behavior of the method, and it is obtained without modifying the DEVS models corresponding to the quantized integrators and static functions. Discrete time methods require a special treatment in order to perform a step at the exact instant at which an input change occurs. This issue will be treated later in a deeper way, showing the advantages of the QSS–method in discontinuous systems.
Up to here, only piecewise constant input trajectories were considered. However, in many applications the input signals have more general forms. Anyway, they can be approximated by piecewise constant trajectories with the addition of quantization functions, and then they can be represented by DEVS models. For instance, the following DEVS model generates an event trajectory which follows the form of a sine function with angular frequency ω and amplitude A, using a quantum AΔu.



M6 = (X, Y, S, δint, δext, λ, ta), where

    X = ∅
    Y = ℝ × ℕ
    S = ℝ × ℝ+
    δint(s) = δint(τ, σ) = (τ̃, σ̃)
    λ(s) = λ(τ, σ) = (A sin(ωτ), 1)
    ta(s) = ta(τ, σ) = σ

with

    σ̃ = arcsin[sin(ωτ) + Δu]/ω − τ                  if (sin(ωτ) + Δu ≤ 1 ∧ cos(ωτ) > 0)
                                                        ∨ sin(ωτ) − Δu < −1
         (sgn(τ)π − arcsin[sin(ωτ) − Δu])/ω − τ      otherwise

and

    τ̃ = τ + σ̃             if ω(τ + σ̃) < π
         τ + σ̃ − 2π/ω      otherwise

The trajectory generated by this model with parameters A = 2, ω = 3 and AΔu = 0.2 is shown in Figure 3.6.
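The same behavior can also be sketched numerically. Instead of the arcsin formulas of M6, the version below simply scans the sine and emits an event whenever the output would change by one quantum; the function name and the scan step `dt` are illustrative choices:

```python
import math

# Sketch of a quantized sine event generator in the spirit of M6: emit
# (time, value) events so that the piecewise constant output tracks
# A*sin(w*t) with a quantum A*du.
def quantized_sine_events(A, w, du, t_end, dt=1e-4):
    events = [(0.0, 0.0)]          # (event time, output value)
    t, last = 0.0, 0.0             # last = current quantized sin(w*t)
    while t < t_end:
        t += dt
        s = math.sin(w * t)
        if abs(s - last) >= du:         # output changes by one quantum
            last = du * round(s / du)   # snap to the quantization grid
            events.append((t, A * last))
    return events

# one full period of A*sin(3t) with A = 2 and quantum A*du = 0.2
events = quantized_sine_events(A=2.0, w=3.0, du=0.1, t_end=2 * math.pi / 3)
```

Each emitted value stays within one quantum of the continuous sine at its event time, regardless of the signal frequency.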

Figure 3.6: Piecewise constant sine trajectory

A piecewise constant trajectory can also be obtained using a constant time step. However, the previous approximation is better suited to the QSS–method, since the quantization of the values ensures that the difference between the continuous signal and the piecewise constant one is always less than the quantum. This fact can be easily noticed in Figure 3.6, whereas if a constant step approximation had been taken, the maximum error would have depended on the relationship between the time step and the signal frequency.
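A quick numeric check makes this comparison concrete. For a sine signal, a value-quantized staircase keeps the error below the quantum, while a fixed-step zero-order hold has an error that depends on the step/frequency relationship. The parameters below are illustrative, and the quantization here is simple nearest-level snapping rather than the hysteretic function of Definition 3.1:

```python
import math

A, w, du = 2.0, 3.0, 0.2          # amplitude, frequency, quantum
ts = [k * 1e-4 for k in range(int(2 * math.pi / w / 1e-4))]  # one period

# value quantization: snap the signal to the nearest multiple of du
err_q = max(abs(A * math.sin(w * t) - du * round(A * math.sin(w * t) / du))
            for t in ts)

# zero-order hold with a "large" step h: hold the last sampled value
h = 0.5
err_h = max(abs(A * math.sin(w * t) - A * math.sin(w * (t - t % h)))
            for t in ts)

assert err_q <= du / 2 + 1e-9     # bounded by the quantum, for any w
assert err_h > err_q              # the held signal can be much worse
```

Increasing w leaves `err_q` unchanged while `err_h` grows, which is exactly the dependence on the step/frequency relationship mentioned above.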


Startup and Output Interpolation

The QSS–method startup consists in giving appropriate initial conditions to the atomic DEVS models which perform the simulation. The quantized integrator state can be written as s = (x, dx, j, σ) (see model M4 on page 33), where x is the state variable, dx is its derivative, j is the index corresponding to the current quantization level and σ is the time advance. It is clear that the variable representing the state should be initially chosen as x = xi(t0), and j must be taken so that Qj ≤ xi(t0) ≤ Q_{j+1}.
With respect to dx and σ, they could be calculated according to fi(q, u). However, there is a much simpler solution. If we choose σ = 0, all the quantized integrators will perform internal transitions at the beginning of the simulation, and then the models associated with the static functions fi will calculate the derivatives, producing –instantaneously– output events. After that, the quantized integrators will receive the correct values of dx and they will calculate the corresponding σ in the external transition.
However, this behavior will only be observed if the quantized integrators receive the input events –which come from the static functions– after they have performed their internal transitions. The problem is that after the first quantized integrator performs its transition, not only the other quantized integrators but also some static models will have their σ equal to 0 (because they perform an external transition due to the quantized integrator output event). Thus, it is necessary to establish priorities between the components, ensuring that when several models schedule their next transition for the same instant of time, the quantized integrators perform theirs first. This can be easily done using the tie-breaking function Select mentioned in Section 2.3.
These considerations also solve the problem of the static model initial conditions. They can be arbitrarily chosen, since the static models will receive input events coming from the quantized integrators at the beginning of the simulation and then their external transition functions will set the appropriate states. Finally, the input signal generator models must start with their σ equal to 0, and the rest of the state must be chosen so that the first output event corresponds to the signal initial value.
A very important property of the QSS–method is related to the fact that the state trajectory has a very particular form. It was already mentioned in Section 3.3 that the state trajectories in a QSS are piecewise linear. Moreover, they are also continuous. Hence, if the values adopted by a variable xi at the event times are known, they can be interpolated with segments in order to obtain the exact evolution of (3.3) at any time. Moreover, the only values needed to do that are those after the external transitions, because in the internal transitions the slopes do not change. Considering this, the problem of output interpolation has a straightforward solution in the QSS–method.


Quantization, Hysteresis and Errors

After the definition of the QSS–method in Section 3.2, the following remarks were focused on the DEVS implementation of Quantized State Systems. Taking into account the considerations of Sections 3.4 to 3.6, it becomes clear how to build and simulate the DEVS model corresponding to any QSS. Then, given a continuous system like (3.2) and knowing the quantum and hysteresis to be used, the DEVS simulation can be easily performed. However, not a word was said about the key issue of the method: the choice of the quantization and hysteresis to be applied to each state variable.
In classic discrete time methods, the simulation parameters (step size, error tolerance, etc.) are chosen according to stability and accuracy considerations. Up to here, nothing was said about these matters –stability, error, etc.– in relation to the QSS–method.
Since the goal is to simulate System (3.2), the accuracy of the simulation will be connected to the similarity between this system and (3.3). Taking into account that the only difference between both systems is the presence of the quantization functions, it is natural to expect that the error depends on the size of the quantization intervals. However, this assertion requires a deeper study of the effects of the quantization.
All these theoretical issues will be left for the next chapter. Anyway, we anticipate here that –under certain conditions– there is a linear relationship between the quantum and the error, and this fact will provide the basic rule for the choice of the quantization and hysteresis.



Chapter 4

Theoretical Properties of QSS
In the previous chapter, it was claimed that the QSS–method is a general simulation method for ODEs. Although it was shown that it can be applied to general time invariant ODEs, no property was proven showing that the QSS solutions are in some way similar to the exact ones.
The properties typically studied in numerical analysis are consistency, convergence, stability and error bound. Those properties allow estimating under which conditions the resulting approximate solution is in fact a good approximation to the continuous trajectories. They also help in the choice of the parameters (step–size, error tolerance, etc.) according to the desired accuracy.
The local error –the error in one step– is usually obtained from a Taylor series expansion, while the global error –the error after several steps– is only theoretically estimated making use of Lipschitz conditions (see [18] for instance). The convergence property (i.e. the property by which the error goes to zero when the step size goes to zero) is then obtained as the limit case of the global error bound.
Since all the existing methods are discrete time ones, their stability properties are usually studied based on difference equations theory. The usual trick here is to find the relationship between the eigenvalues of the original differential equation and the roots of the discrete time one (this procedure is obviously limited to the analysis of linear systems¹).
In the case of the QSS–method this last idea is completely useless. We already mentioned that Quantized State Systems do not fit in the form of a difference equation. Similarly, the use of Taylor expansions does not lead to any result. However, there is a much more powerful tool which can be used here: perturbation theory. Based on it, it will be possible not only to get similar
¹There are other ways to study these properties which can be applied to nonlinear systems. We only refer here to the usual tools of numerical analysis.




results to the classic ones, but also to calculate practical global error bounds and to establish stability conditions, including nonlinear cases.
Making use of those stability and error bound conditions –and after some extra considerations– the problem left open at the end of the previous chapter –i.e. the choice of the appropriate quantization and hysteresis width– will be finally solved.


QSS and Perturbation Theory

As mentioned, the usual tools for the analysis of numerical stability are based on discrete time systems theory. The basic idea is to obtain the difference equation corresponding to a given method applied to a differential equation, and then to relate the eigenvalues of both systems. This idea, which is useful for discrete time methods, cannot be applied to the QSS–method because the resulting simulation model is a discrete event system which does not fit in a difference equation representation.
A first attempt to solve this problem would consist in looking for a discrete event systems theory which allows such a kind of stability analysis. In fact, there is a nice mathematical theory based on the use of max-plus algebra which permits expressing some discrete event systems by difference equations in the context of that algebra [1]. This theory also arrives at stability results based on the study of eigenvalues, and it is completely analogous to discrete time systems theory. However, it can only be applied to systems described by Petri Nets and, unfortunately, the QSS–method produces DEVS models which do not have Petri Net equivalents.
A different way to study the QSS dynamics is using the representation (3.2) for the original differential equation and (3.3) for its QSS approximation. Let us define Δx(t) ≜ q(t) − x(t). Thus, the last equation can be written as

    ẋ(t) = f(x(t) + Δx(t), u(t))    (4.1)

and now the simulation model (3.3) can be seen as a perturbed version of the original system (3.2).
The hysteretic quantization functions have a fundamental property. If two variables qi(t) and xi(t) are related by a quantization function like (3.1), then

    Q0 < xi(t) < Qr ⇒ |qi(t) − xi(t)| ≤ max(ΔQ, ε)    (4.2)

where ΔQ ≜ max(Q_{j+1} − Qj), 0 ≤ j ≤ r − 1, is the maximum quantum (it was mentioned in the previous chapter that the quantum is usually constant).
The property given by (4.2) implies that each component of the perturbation Δx is bounded by the corresponding hysteresis width and quantum size. Thus, the accuracy and stability analysis can be based on the effects of a bounded perturbation.
As was said, the study of numerical stability is usually restricted to linear systems (its study in nonlinear systems is quite difficult). It will be seen



that this problem disappears in the QSS–method, because the perturbation analysis can be easily applied to nonlinear systems. Moreover, when the method is applied to a linear system it will also be possible to establish a global bound for the error, and the problem of approximation accuracy can be treated not only locally but also globally.
Despite these advantages, a new problem appears in the QSS–method. Let us illustrate it with the following example.

Example 4.1. Oscillations in the QSS–method. Consider the first order system:

    ẋ(t) = −x(t) + 9.5    (4.3)

with the initial condition x(0) = 0. The simulation using the QSS–method with quantum ΔQ = 1 and hysteresis width ε = 1 is shown in Figure 4.1.


t Figure 4.1: QSS simulation of (4.3) Although the system (4.3) is asymptotically stable, the QSS simulation ?nishes into a limit cycle. The equilibrium point x ? = 9.5 is no longer an equilibrium point in the resulting QSS and we cannot claim that the stability is conserved by the method. However, the QSS solution never goes far away from the exact solution and it ?nishes with an oscillation near the equilibrium point. Taking into account that the goal was just simulating the system, this result is not bad. The trajectory given by the QSS–method is called ultimately bounded [23]. In general, the QSS–method cannot assure stability according to the classic



definition. Thus, in the context of this method, stability will in fact refer to ultimate boundedness of the solutions.
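The limit cycle of Example 4.1 is easy to reproduce with a few lines of code (a sketch in the style of the earlier event loop; names are illustrative):

```python
import math

# QSS simulation of the scalar system (4.3), x' = -x + 9.5, with quantum
# dq = 1 and hysteresis eps = 1, reproducing the ultimately bounded
# oscillation between the quantization levels 9 and 10.
def qss_scalar(t_end):
    dq, eps = 1.0, 1.0
    x = 0.0
    q = math.floor(x / dq) * dq        # q(0) = 0
    t, qs = 0.0, [q]
    while t < t_end:
        dx = -q + 9.5                  # derivative is piecewise constant
        if dx > 0:
            h = (q + dq - x) / dx      # time to reach the next level up
        else:
            h = (x - (q - eps)) / -dx  # time to fall to q - eps
        t += h
        x += h * dx
        q = q + dq if dx > 0 else q - dq
        qs.append(q)
    return qs

qs = qss_scalar(20.0)
```

Since q only takes integer values, the derivative 9.5 − q is never zero: once q reaches 10 the slope becomes −0.5, x falls back to 9, the slope becomes +0.5 again, and the quantized state oscillates between 9 and 10 forever, exactly as in Figure 4.1.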


Convergence of QSS–method

The convergence property states that the solutions of (4.1) go to the solutions of (3.2) when the maximum quantum ΔQ and the hysteresis width ε go to zero. The importance of this property lies in the fact that an arbitrarily small simulation error can be achieved when a sufficiently small quantization is used. The following theorem gives sufficient conditions for the convergence of the QSS–method.

Theorem 4.1. Convergence of the QSS–method. Consider System (3.2) and its associated QSS (3.3). Let D be the nonsaturation region defined by

    D = {x = (x1, ..., xn) / Q0i < xi < Qri}    (4.4)

Assume that the input u(t) ∈ Du, with Du a bounded region, and suppose that the function f(x, u) is locally Lipschitz on D × Du. Let φ(t) be the solution of (3.2) from the initial condition x(0) = x0, and let φ1(t) be a solution of the related QSS (3.3) starting from the same initial condition x0. Assume that φ(t) ∈ D1, where D1 ⊂ D (the continuous system solution is in the non-saturation region). Then, φ1(t) → φ(t) when the quantization intervals go to 0.²

Proof. Let S be ℝⁿ − D and let F be

    F ≜ sup_{u∈Du} ( sup_{x∈D} ||f(x, u)|| )

Let d be defined by

    d ≜ inf_{x∈S} ( inf_{t∈[0,∞)} ||φ(t) − x|| )    (4.5)

Taking into account the assumptions on f and φ(t), a positive constant t1 can be found satisfying

    t1 < d / F    (4.6)

It can be easily seen that during the interval [0, t1] the trajectory of φ1(t) will remain inside D. Using the perturbation notation (4.1) instead of (3.3), and taking into account that when 0 ≤ t ≤ t1 the trajectories are inside the non–saturation region, the fundamental property (4.2) holds for all the components, and then Δx(t) satisfies

    ||Δx(t)|| ≤ Δx̄    (0 ≤ t ≤ t1)    (4.7)

²Without modifying the saturation bounds (i.e. the number of quantization levels goes to infinity).




where Δx̄ is a constant defined by the quantization intervals.
Let t ∈ [0, t1]. It follows from (4.1), (3.2) and the fact that φ1(0) = φ(0) that:

    φ1(t) − φ(t) = ∫₀ᵗ ( f(φ1(τ) + Δx(τ), u(τ)) − f(φ(τ), u(τ)) ) dτ

Thus, applying the Euclidean norm we obtain

    ||φ1(t) − φ(t)|| = || ∫₀ᵗ ( f(φ1(τ) + Δx(τ), u(τ)) − f(φ(τ), u(τ)) ) dτ ||

and then,

    ||φ1(t) − φ(t)|| ≤ ∫₀ᵗ || f(φ1(τ) + Δx(τ), u(τ)) − f(φ(τ), u(τ)) || dτ    (4.8)

Let M be the Lipschitz constant of the function f on D × Du. Since the argument of the function f in (4.8) is inside that region, we have

    ||φ1(t) − φ(t)|| ≤ ∫₀ᵗ M ||φ1(τ) + Δx(τ) − φ(τ)|| dτ
                    ≤ ∫₀ᵗ M ( ||φ1(τ) − φ(τ)|| + ||Δx(τ)|| ) dτ
                    ≤ ∫₀ᵗ M ( ||φ1(τ) − φ(τ)|| + Δx̄ ) dτ
                    ≤ ∫₀ᵗ M ||φ1(τ) − φ(τ)|| dτ + M t Δx̄

The functions φ and φ1 are continuous, as well as the term M t Δx̄. Since M is positive, it is possible to apply the Gronwall–Bellman Inequality [23], resulting in

    ||φ1(t) − φ(t)|| ≤ M t Δx̄ + ∫₀ᵗ M² s Δx̄ e^{∫ₛᵗ M dτ} ds
                    ≤ (e^{M t} − 1) Δx̄

Then, since M and t1 do not depend on Δx̄, for 0 ≤ t ≤ t1 we have

    lim_{Δx̄→0} ||φ1(t) − φ(t)|| = 0    (4.9)

From (4.5) we have

    d ≤ inf_{x∈S} ( ||φ(t1) − x|| )    (4.10)

Taking into account (4.6), (4.9) and (4.10), it is possible to find a sufficiently small quantization such that

    t1 < inf_{x∈S} ( ||φ1(t1) − x|| ) / F

This inequality implies that the solution φ1(t) does not leave the region D during the interval [t1, 2t1]. Then, the validity of Equations (4.7) to (4.9) holds over the interval [0, 2t1]. Repeating the argument, it results that (4.9) holds for all t.




General Stability Properties of QSS

Although convergence constitutes an important theoretical property, it does not give any quantitative information about the relationship between the quantum and the error, and it does not establish any condition for the stability domain.
The main result of this section –Theorem 4.2– pursues this last goal by giving sufficient conditions to find a quantization which conserves³ the stability properties of the original continuous system.
For the stability analysis an autonomous system will be considered (which can eventually represent a non–autonomous system with a constant input trajectory):

    ẋ(t) = f(x(t))    (4.11)

whose corresponding QSS is given by

    ẋ(t) = f(q(t))    (4.12)

where q(t) and x(t) are componentwise related by hysteretic quantization functions.
A first property of this approximation is given by the following lemma.

Lemma 4.1. QSS Equilibrium Points. Consider the autonomous continuous system (4.11) and its associated Quantized State System (4.12). Then, the point x = x̄ is an equilibrium point of (4.11) if and only if the point q = x̄ is an equilibrium point of (4.12).

The proof of this lemma is straightforward and, as already mentioned, it also holds for systems with constant inputs.
Lemma 4.1 implies that the quantized variables of the QSS have the same possible values at the equilibrium as the state variables of the original system. However, it does not imply that the state variables of the quantized system will take such values, or that the quantized variables can reach them. Anyway, taking into account that the difference between q(t) and x(t) is bounded in the QSS, the value of the state variables in an eventual equilibrium situation will be near the right one.
Equation (4.12) can be rewritten using the perturbation notation:

    ẋ(t) = f(x(t) + Δx(t))    (4.13)

with Δx(t) ≜ q(t) − x(t).
Now, the theorem which relates the stability of systems (4.11) and (4.12) can be introduced. As before, this theorem assumes that the system is autonomous, and it studies the trajectories around an equilibrium point at the origin, but it can be easily extended to systems with constant inputs and with different equilibrium points.
³As mentioned before, the term stability here refers in fact to ultimate boundedness of the solutions.



Theorem 4.2. Stability of QSS. Consider a system as defined in (4.11) having an equilibrium point at the origin, with the function f continuously differentiable. Assume that it is also possible to find a Lyapunov function V(x), continuous in an open region D including the origin, with a negative definite time derivative along the trajectories of (4.11). Let D1 ⊂ D be a region limited by a level surface of the function V. Then, given an arbitrary open region D2 ⊂ D1 also limited by a level surface of V, it is always possible to find a quantization so that any trajectory of the resulting associated QSS starting in D1 converges to the interior of D2.

Proof. Let D3 be the region defined by D3 ≜ D1 − D2. Since V̇(x) is negative definite, there exists a positive number s such that:

V̇(x) < −s,  ∀x ∈ D3   (4.14)

Define:

α(x, Δx) ≜ ∇V(x)ᵀ · f(x + Δx)   (4.15)

For fixed x, this is a continuous function of Δx since it is the scalar product of a constant vector and a continuous function. It is also verified that:

α(x, 0) = V̇(x)   (4.16)

Now, consider the following function:

α_M(Δx) ≜ sup_{x∈D3} α(x, Δx)   (4.17)



It can be easily seen that this function is continuous and verifies

α_M(0) < −s   (4.18)

Thus, given a constant s1 (0 < s1 < s), a positive number ρ can be found satisfying

α_M(Δx) < −s1 < 0   (4.19)

whenever

‖Δx‖ < ρ   (4.20)

The condition given by (4.20) can be satisfied with the choice of an adequate quantization, taking into account (4.2). Let φ(t) be a solution of Equation (4.13) starting from the initial condition φ(t = 0) = x0 ∈ D3, and consider that the quantization was chosen in order to satisfy (4.20). From (4.13) and (4.15) it follows that:

α(φ, Δx) = ∇V(φ)ᵀ · φ̇

Using (4.17) and (4.19) in the equation above, it can be seen that:

∇V(φ)ᵀ · φ̇ < −s1   (4.21)



This condition will be satisfied at least during a certain time interval while φ(t) remains inside D3 (the existence of such an interval is guaranteed by the continuity of φ(t)). Integrating both sides of Inequality (4.21), we have:
∫₀ᵗ ∇V(φ)ᵀ · φ̇ dt < ∫₀ᵗ −s1 dt

V(φ(t)) − V(φ(0)) < −s1 · t

V(φ(t)) < V(x0) − s1 · t

This implies that V evaluated along the QSS solution is bounded by a strictly decreasing function while that solution remains inside D3. Since the value V(x0) is smaller than the value that V takes on the boundary of D1, it is clear that the trajectory will never leave D1. Let V1 be the value that V takes on the boundary of region D2. Then, it can be easily seen that the trajectory will reach the region D2 in a finite time t1 with:

t1 < (V(x0) − V1) / s1

completing the proof.

A first remark is that the proof requires the choice of the quantization intervals satisfying (4.20). In order to achieve that, it is necessary to leave the saturation region outside region D3. Then, considering the same constant quantum ΔQ and hysteresis window ε for all the quantized variables, each component of Δx is less than max(ΔQ, ε). Thus, ‖Δx‖ is less than √n times that value, n being the dimension of the state space, and the condition given by (4.20) can be achieved by taking:

max(ΔQ, ε) < ρ / √n

Rather than merely ensuring stability, Theorem 4.2 shows that the QSS–method can be implemented so as to achieve a given final error. As was mentioned before, the QSS–method can only guarantee ultimate boundedness of solutions. This fact can be easily understood taking into account that the perturbation term Δx(t) does not disappear as time goes to ∞. This kind of perturbation is known as a nonvanishing perturbation [23]. When the Lyapunov function V is known, the function α can be evaluated to obtain ρ. In that way, this result also shows how to choose the quantization in order to keep the final error bounded by some arbitrary value (given by the choice of region D2). In spite of the theoretical importance of these results, the relationship between quantization, stability and error bound cannot be easily deduced from Theorem 4.2. On one hand, the choice of the quantization requires the knowledge of a Lyapunov function for the continuous system. There are converse theorems



which ensure the existence of those functions for asymptotically stable systems [23]. However, obtaining those functions is sometimes very difficult if not impossible, so the result is not very useful from a practical point of view. On the other hand, only a bound for the final error was obtained. Up to here, there is no information about the relationship between the quantum and the error bound at a given instant of time. The main reason for these problems is the fact that we were working with general nonlinear systems. It is necessary to deal with more particular cases in order to get more practical and stronger results. As in all existing numerical methods, the most interesting properties of the QSS–method result from the study of LTI systems. Taking into account what was already done, a first attempt to study the QSS properties in LTI systems would consist in taking the Lyapunov–based theorem and modifying it using the particular properties of linear systems. However, that kind of study is a problem of ultimate bound estimation, and the use of Lyapunov–based approaches usually leads to very conservative results. The reason is that the structure of the system and of the perturbation is lost when the Lyapunov analysis is performed. Thus, before studying the properties of the QSS–method in LTI systems, some new tools should be developed in order to analyze the behavior of those systems in the presence of nonvanishing perturbations. The next section is dedicated to the particular Lyapunov analysis, and the results are then compared with the new methodology introduced in Section 4.5. After that, these results are applied to the study of the QSS properties.


LTI Perturbed Systems: Lyapunov Approach
We shall consider the LTI nominal system

ẋ(t) = f(t, x) = Ax(t) + Bu(t)   (4.22)

where A ∈ ℜ^{n×n} is a Hurwitz matrix, u ∈ ℜ^m and B ∈ ℜ^{n×m}. As was seen in the previous sections, the QSS–method introduces bounded perturbations in the state variables. Similarly, when continuous input signals are approximated by their quantized versions, the effect can be represented by the presence of bounded perturbations in the input variables. Thus, the goal of this section is to study the effects of bounded state and input perturbations on System (4.22). It was already mentioned that one of the most important consequences of the presence of nonvanishing perturbations is that asymptotic stability disappears, leaving its place to ultimate boundedness of solutions. The main problem is then to estimate a quantitative relationship between the perturbation bounds and the resulting ultimate bound. In our particular problem –the QSS–method– those bounds are respectively the quantum and the final error.



Taking into account the problem studied –finding the ultimate bound of a perturbed system– it must be assumed that the non–perturbed system (4.22) is asymptotically stable and has a constant input. Due to linearity and without loss of generality it can be supposed that u(t) = 0, and then, in the presence of general input and state perturbations, System (4.22) can be written as

ẋ(t) = A(x(t) + Δx(t)) + BΔu(t)   (4.23)

It will be assumed that the perturbation components satisfy

|Δxᵢ(t)| ≤ Δx_max,i   1 ≤ i ≤ n   (4.24)

and

|Δuⱼ(t)| ≤ Δu_max,j   1 ≤ j ≤ m   (4.25)

where Δx_max,i and Δu_max,j are non-negative constants (which in the QSS–method represent the quantum). In order to simplify the following equations, a new notation will be introduced: the symbol |·| will indicate the componentwise modulus of a matrix or vector. If T is a matrix with components T_{1,1}, ..., T_{n,m}, then |T| will be a new matrix of the same size as T with components |T_{1,1}|, ..., |T_{n,m}|. For vectors of the same dimension, the vector inequality x ≤ y means that xᵢ ≤ yᵢ for every component of x and y. According to these definitions, the following property holds:

|T · x| ≤ |T| · |x|

Using the notation defined above, inequalities (4.24) and (4.25) can be rewritten as

|Δx(t)| ≤ Δx_max   (4.26)

and

|Δu(t)| ≤ Δu_max   (4.27)

The problem is then to find an ultimate bound for System (4.23) where the perturbations satisfy (4.26) and (4.27). In Section 4.3 a similar study was performed for general nonlinear systems making use of a Lyapunov–based technique. The classic way of studying this problem in LTI systems is to follow that technique, improving it with the particular properties resulting from the system linearity. In order to justify the use of a new approach, the ultimate bound of system (4.23) will be obtained following that Lyapunov analysis and then compared with the new one in the next section. The study will be done for norm 2 and norm ∞. Although a bound in norm ∞ can be obtained from the norm 2 bound –using the fact that the former is always less than or equal to the latter– that bound could result even more conservative.
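The componentwise modulus property |T·x| ≤ |T|·|x| used throughout this chapter is easy to sanity-check numerically (an illustrative check, not part of the thesis):

```python
import numpy as np

# componentwise modulus: |T x| <= |T| |x| holds entry by entry
rng = np.random.default_rng(0)
T = rng.normal(size=(3, 4))
x = rng.normal(size=4)
lhs = np.abs(T @ x)          # componentwise modulus of the product
rhs = np.abs(T) @ np.abs(x)  # product of componentwise moduli
assert np.all(lhs <= rhs + 1e-12)
```

The same inequality holds for complex entries, which is what the decoupled analysis of Section 4.5 relies on.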

Let U(x) = xᵀPx where P = Pᵀ > 0 satisfies AᵀP + PA = −Q with Q = Qᵀ > 0. Then

U̇(x) = ẋᵀPx + xᵀPẋ = [A(x + Δx) + BΔu]ᵀPx + xᵀP[A(x + Δx) + BΔu]

U̇(x) = −xᵀQx + 2xᵀPAΔx + 2xᵀPBΔu   (4.29)

It is known that

2xᵀPAΔx ≤ 2‖x‖ · ‖PA‖ · ‖Δx‖ ≤ 2‖x‖ · ‖PA‖ · ‖Δx_max‖   (4.30)

and similarly

2xᵀPBΔu ≤ 2‖x‖ · ‖PB‖ · ‖Δu‖ ≤ 2‖x‖ · ‖PB‖ · ‖Δu_max‖   (4.31)
Up to here, the analysis is the same for both norms. Now, the study continues using particular properties of each norm. In norm 2, we know that:

xᵀQx ≥ ‖x‖₂² λ_min(Q)   (4.32)

Then, if

‖x‖₂ ≥ ρ ≜ (2 / λ_min(Q)) · (‖PA‖₂ · ‖Δx_max‖₂ + ‖PB‖₂ · ‖Δu_max‖₂)   (4.33)

it results from (4.30) and (4.31) that

xᵀQx ≥ ‖x‖₂² λ_min(Q) ≥ 2xᵀPAΔx + 2xᵀPBΔu

and from (4.29) we have

U̇(x) < 0

Let

c ≜ max_{‖x‖₂=ρ} U(x) = ρ² λ_max(P)   (4.34)

Then, it can be ensured that the trajectories will finish inside the level surface given by U(x) = c = ρ² λ_max(P), and the ultimate bound will be ℰ₂ so that

min_{‖x‖₂=ℰ₂} U(x) = ℰ₂² λ_min(P) = c

that is,

ℰ₂ = √(c / λ_min(P))

Using (4.34) and (4.33) in the last equation, we have

ℰ₂ = √(λ_max(P) / λ_min(P)) · (2 / λ_min(Q)) · (‖PA‖₂ · ‖Δx_max‖₂ + ‖PB‖₂ · ‖Δu_max‖₂)   (4.35)
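As a sketch, formula (4.35) can be evaluated numerically for a given Q; here P is obtained from the Lyapunov equation through its Kronecker-product formulation (the function name and the candidate Q, taken from Example 4.2 below, are illustrative):

```python
import numpy as np

def lyapunov_bound_2(A, Q, dx_max_norm, du_max_norm=0.0, B=None):
    # Solve A^T P + P A = -Q via vec(): (I (x) A^T + A^T (x) I) vec(P) = -vec(Q)
    n = A.shape[0]
    M = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape((n, n), order="F")
    lam_P = np.linalg.eigvalsh((P + P.T) / 2.0)
    lam_Q_min = np.linalg.eigvalsh((Q + Q.T) / 2.0).min()
    term = np.linalg.norm(P @ A, 2) * dx_max_norm
    if B is not None:
        term += np.linalg.norm(P @ B, 2) * du_max_norm
    # Eq. (4.35): sqrt(lmax(P)/lmin(P)) * (2/lmin(Q)) * (||PA|| dx + ||PB|| du)
    return np.sqrt(lam_P.max() / lam_P.min()) * (2.0 / lam_Q_min) * term, P

# stiff system of Example 4.2 with the optimizing Q reported in the text
A = np.array([[0.0, 100.0], [-100.0, -10001.0]])
Q = np.array([[1.0, -1.4631], [-1.4631, 197.568]])
E2, P = lyapunov_bound_2(A, Q, np.linalg.norm([0.01, 0.0001]))
# conservative: the estimate is roughly three orders of magnitude larger
# than the perturbation size itself
```

The conservatism of this estimate is exactly what motivates the non-conservative approach of the next section.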




The analysis in norm ∞ is almost identical to the norm 2 case. The only difference is that the lower bound given by (4.32) does not hold for norm ∞. To obtain a similar inequality, the following result will be used.

Lemma 4.2. Constrained Quadratic [44]. Let U(x) = xᵀQx, where Q = Qᵀ > 0, and let C be a row vector in ℜ^n and r a nonzero scalar. Then the minimum of U(·) along the hyperplane {x | Cx = r} is given by r² / (CQ⁻¹Cᵀ).

Using this lemma, the minimum of the quadratic function when the component xᵢ = r is r² / (eᵢQ⁻¹eᵢᵀ) = r² / (Q⁻¹)ᵢ,ᵢ. Thus, it results that

min_{‖x‖∞=r} xᵀQx = r² / maxᵢ (Q⁻¹)ᵢ,ᵢ

and Equation (4.32) can be replaced by

xᵀQx ≥ ‖x‖∞² / maxᵢ (Q⁻¹)ᵢ,ᵢ   (4.36)

Then, following a similar idea to the analysis in norm 2, the bound obtained is

ℰ∞ = 2 b_Q b_P ‖P‖∞ · (‖PA‖∞ · ‖Δx_max‖∞ + ‖PB‖∞ · ‖Δu_max‖∞)   (4.37)

where

b_Q ≜ max_{1≤i≤n} (Q⁻¹)ᵢ,ᵢ   and   b_P ≜ max_{1≤i≤n} (P⁻¹)ᵢ,ᵢ


LTI Perturbed Systems: Non Conservative Approach

In order to obtain a less conservative result, it is necessary to exploit the structure of the system. For that, an idea completely different from the Lyapunov analysis has to be followed.

To make use of the structure, the decoupled system will be studied first (Lemma 4.3 and Corollary 4.1). Then, the results will be brought back to the original system in Theorem 4.3. This way of working allows the use of the geometrical properties of the system and perturbations and, as will be seen later, results in a less conservative estimation of the ultimate bound.

Lemma 4.3. Scalar Perturbed System. Consider the following first order equation with complex coefficient:

ẋ = a(x + Δx) + BΔu   (4.38)

where a, x, Δx ∈ ℂ, Δu ∈ ℂ^k and B ∈ ℂ^{1×k}. Assume also that ℜe(a) < 0, |Δx| ≤ Δx_max and |Δu| ≤ Δu_max. Let x(t) be a solution of (4.38) from the initial condition x(t0) = 0. Then, for all t ≥ t0 it results that

|x(t)| ≤ (|a| Δx_max + |B| Δu_max) / |ℜe(a)|

Proof. Let x = ρ · e^{jθ} with ρ, θ ∈ ℜ. With this, Equation (4.38) becomes

ρ̇ · e^{jθ} + jρ · e^{jθ} · θ̇ = a(ρ · e^{jθ} + Δx) + BΔu

Then, we have

ρ̇ + jρ · θ̇ = a(ρ + Δx · e^{−jθ}) + BΔu · e^{−jθ}

Taking the real part of the equation above it results that

ρ̇ = ℜe(a)ρ + ℜe(aΔx · e^{−jθ}) + ℜe(BΔu · e^{−jθ})

and

ρ̇ ≤ ℜe(a)ρ + |a|Δx_max + |B|Δu_max

Thus, when

ρ = |x(t)| = (|a|Δx_max + |B|Δu_max) / |ℜe(a)|

it results that ρ̇ ≤ 0, so |x(t)| cannot become greater than the given bound.

Applying Lemma 4.3 to each component of a decoupled system, the following corollary is obtained.

Corollary 4.1. Decoupled Perturbed System. Consider System (4.23) where x, Δx ∈ ℂ^n, A ∈ ℂ^{n×n}, Δu ∈ ℂ^k and B ∈ ℂ^{n×k}. Assume further that A is a diagonal matrix with ℜe(Aᵢ,ᵢ) < 0 and consider the inequalities (4.26) and (4.27). Let x(t) be a solution of (4.23) from x(t0) = 0. Then, for all t ≥ t0 we have

|x(t)| ≤ |ℜe(A)⁻¹A| Δx_max + |ℜe(A)⁻¹B| Δu_max
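Lemma 4.3 is easy to probe numerically. The sketch below integrates the scalar complex system with an explicit Euler scheme and a randomly rotating perturbation of maximal magnitude (taking B = 0 for simplicity); the parameter values are illustrative assumptions:

```python
import numpy as np

a = -1.0 + 3.0j                            # complex coefficient, Re(a) < 0
dx_max = 0.1
bound = (abs(a) * dx_max) / abs(a.real)    # Lemma 4.3 bound with B = 0

rng = np.random.default_rng(1)
dt, x, worst = 1e-3, 0.0 + 0.0j, 0.0
for _ in range(20000):
    # perturbation of maximal magnitude and random phase
    dx = dx_max * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
    x = x + dt * a * (x + dx)              # explicit Euler step of x' = a (x + dx)
    worst = max(worst, abs(x))
# |x(t)| stays below |a| dx_max / |Re(a)|, up to the Euler discretization error
```

Note that the bound grows with |a|/|ℜe(a)|, i.e. with how oblique the eigenvalue is, which reappears later as the "inverse cosine of the eigenvalue angle" in the LTI error bound.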


CHAPTER 4. THEORETICAL PROPERTIES OF QSS

With this corollary, the following theorem can be derived.

Theorem 4.3. Bound of Trajectories Starting from the Origin. Consider System (4.23) where x, Δx ∈ ℜ^n, A ∈ ℜ^{n×n}, Δu ∈ ℜ^k and B ∈ ℜ^{n×k}. Assume that A is a diagonalizable Hurwitz matrix and suppose that the perturbation terms satisfy (4.26) and (4.27). Let x(t) be a solution of the system starting from x(t0) = 0. Then, for all t ≥ t0,

|x(t)| ≤ |V| · (|ℜe(Λ)⁻¹Λ| |V⁻¹| Δx_max + |ℜe(Λ)⁻¹V⁻¹B| Δu_max)   (4.39)

where Λ is a diagonal eigenvalue matrix of A and V is an associated matrix of eigenvectors, that is,

V⁻¹AV = Λ   (4.40)

Proof. Let x = Vz. It results from (4.23) that

Vż = A(Vz + Δx) + BΔu

Then,

ż = Λ(z + V⁻¹Δx) + V⁻¹BΔu   (4.41)

Taking into account the bounds on |Δx| and |Δu|, we have

|V⁻¹Δx| ≤ |V⁻¹| Δx_max   and   |V⁻¹BΔu| ≤ |V⁻¹B| Δu_max

Since Λ is a diagonal matrix with ℜe(Λᵢ,ᵢ) < 0 (because A is Hurwitz) and taking into account the last inequalities, System (4.41) satisfies the hypothesis of Corollary 4.1. Then, for all t ≥ t0 we can ensure that

|z(t)| ≤ |ℜe(Λ)⁻¹Λ| |V⁻¹| Δx_max + |ℜe(Λ)⁻¹V⁻¹B| Δu_max

and finally, we have

|x(t)| = |Vz(t)| ≤ |V| |z(t)| ≤ |V| (|ℜe(Λ)⁻¹Λ| |V⁻¹| Δx_max + |ℜe(Λ)⁻¹V⁻¹B| Δu_max)

which completes the proof.

Theorem 4.3 calculates a bound for the trajectories of an LTI perturbed system assuming that the trajectories start from the origin. Based on this result, Theorem 4.4 now gives an estimation of the ultimate bound, by components and in terms of a norm p, for an LTI perturbed system with arbitrary initial conditions.

Theorem 4.4. Ultimate Bound of LTI Perturbed Systems. System (4.23), under the hypothesis of Theorem 4.3, is globally ultimately bounded with ultimate bound

ℰ = |V| · (|ℜe(Λ)⁻¹Λ| · |V⁻¹| Δx_max + |ℜe(Λ)⁻¹V⁻¹B| Δu_max)   (4.42)

Moreover, for each positive constant c there exists a finite time t1 = t1(c, x0) so that the solutions satisfy

|x(t)| ≤ (1 + c) |V| · (|ℜe(Λ)⁻¹Λ| · |V⁻¹| Δx_max + |ℜe(Λ)⁻¹V⁻¹B| Δu_max)   (4.43)

for all t ≥ t1 and for an arbitrary initial condition x0.

Proof. Let x(t) be a solution of (4.23) starting from an arbitrary initial condition x(0) = x0, and let x̃(t) be a solution of the nominal system

d/dt x̃(t) = A x̃(t)   (4.44)

starting from the same initial condition. Let e(t) = x(t) − x̃(t). Then, e(0) = 0 and e(t) satisfies Equation (4.23), which implies that it satisfies the hypothesis of Theorem 4.3. Then,

|e(t)| ≤ |V| · (|ℜe(Λ)⁻¹Λ| · |V⁻¹| Δx_max + |ℜe(Λ)⁻¹V⁻¹B| Δu_max)   (4.45)

Since A is Hurwitz, the nominal system (4.44) is exponentially stable. Then, for each positive constant c a finite time t1 exists so that for all t > t1 we have

|x̃(t)| ≤ c |V| · (|ℜe(Λ)⁻¹Λ| · |V⁻¹| Δx_max + |ℜe(Λ)⁻¹V⁻¹B| Δu_max)   (4.46)

Then, for t > t1,

x(t) = e(t) + x̃(t);   |x(t)| ≤ |e(t)| + |x̃(t)|

and replacing with (4.45) and (4.46) we arrive at (4.43), which completes the proof.

Theorem 4.4 and Eq. (4.42) provide an alternative to the bounds given by (4.35) and (4.37). The following examples illustrate the advantages of the new approach.

Example 4.2. Ultimate Bound of a Stiff Perturbed System. Consider the perturbed system

ẋ(t) = [ 0     100    ] · [x(t) + Δx(t)]
       [ −100  −10001 ]



where the perturbation components are bounded by

|Δx1(t)| ≤ 0.01;   |Δx2(t)| ≤ 0.0001

The Lyapunov–based formula (4.35) is minimized taking⁴

Q = [ 1        −1.4631 ]
    [ −1.4631  197.568 ]

obtaining the ultimate bound ℰ₂ = 20.1861. Similarly, the minimum of (4.37) is obtained for

Q = [ 1       −1.448  ]
    [ −1.448  196.124 ]

which gives ℰ∞ = 14.45. For the same example, the new approach –Theorem 4.4 and Equation (4.42)– gives ℰ̃₂ = 0.0100085 (bound in norm 2) and ℰ̃∞ = 0.01. The estimation is about 2000 times less conservative in norm 2 and 1400 times in norm ∞. Moreover, Theorem 4.4 concludes that, according to (4.43), after a certain time t1 we will have

|x1(t)| < (1 + c) 0.01;   |x2(t)| < (1 + c) 0.0003

for any given positive constant c. If the goal were knowing about the second component, this result is even much better than the one obtained with (4.42). The calculations were made using the eigenvector and eigenvalue matrices obtained with the function 'eig' of Matlab:

V = [ 1      −0.01 ] ;   Λ = [ −1  0      ]
    [ −0.01  1     ]         [ 0   −10000 ]
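As a sketch, the componentwise bound (4.42) can be evaluated directly with numpy's eigendecomposition (the function name is illustrative; the bound is invariant to the eigenvector scaling, so Matlab's and numpy's normalizations give the same result):

```python
import numpy as np

def qss_ultimate_bound(A, dx_max, B=None, du_max=None):
    # Eq. (4.42): E = |V| (|Re(L)^-1 L| |V^-1| dx_max + |Re(L)^-1 V^-1 B| du_max)
    lam, V = np.linalg.eig(A)
    Vinv = np.linalg.inv(V)
    E = np.abs(V) @ (np.abs(np.diag(lam / lam.real)) @ np.abs(Vinv) @ dx_max)
    if B is not None:
        E = E + np.abs(V) @ (np.abs(np.diag(1.0 / lam.real) @ Vinv @ B) @ du_max)
    return E

# stiff system of Example 4.2, with only state perturbations
A = np.array([[0.0, 100.0], [-100.0, -10001.0]])
E = qss_ultimate_bound(A, dx_max=np.array([0.01, 0.0001]))
# componentwise roughly [0.01, 0.0003]; its norm 2 is about 0.01
```

Stiffness costs nothing here: the bound depends on eigenvectors and eigenvalue angles, not on the eigenvalue magnitudes.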

The reason for the poor performance of the Lyapunov analysis is, in part, the big difference between the system eigenvalues (the system is stiff). Thus, the eigenvalues of matrix P are also very different, which implies that the level surfaces of U(x) are very flat ellipses. Hence, the radius ρ which defines the ball where the Lyapunov function derivative is negative differs significantly from the radius ℰ₂, which defines the ball containing all the level surfaces that pass through the ball with radius ρ (see Figure 4.2). However, the poor performance is also due to the loss of the problem structure when the Lyapunov function is used. The analysis cannot profit from the fact that the perturbation term acts in each direction with a different amplitude.

Example 4.3. A Stiff System with Input Perturbations. Consider now the same system as before, with an input perturbation:

ẋ(t) = [ 0     100    ] · x(t) + [ 1 ] · d(t)
       [ −100  −10001 ]          [ 0 ]

⁴ The minimum was obtained with the Nelder–Mead simplex method (function 'fminsearch' of Matlab).


Figure 4.2: Lyapunov Bound in Norm 2 (level surface U(x) = c)

where |d(t)| ≤ 1. The minimum of the Lyapunov bound given by (4.35) is obtained with

Q = [ 1        −2.2822 ]
    [ −2.2822  266.581 ]

which results in an estimation of ℰ₂ = 26.7327. Similarly, (4.37) has a minimum for

Q = [ 1        −1.7681 ]
    [ −1.7681  195.484 ]

giving ℰ∞ = 21.593. Now, the use of (4.42) gives ℰ̃₂ = 1.00015 and ℰ̃∞ = 1.0001. The results with the new approach are thus more than 20 times better. Theorem 4.4 also ensures that, according to (4.43), after some time we will have

|x1(t)| < (1 + c) 1.0001;   |x2(t)| < (1 + c) 0.01

and the estimation of the bound on the second component becomes 2000 times better than the one given by the Lyapunov analysis. The advantage of the new approach in stiff systems is quite clear taking into account what was illustrated by Figure 4.2. The last example of this section illustrates the use of the new methodology in a very simple non–stiff second order system with complex eigenvalues in the presence of input perturbations. This is not the case of Figure 4.2, so the advantage of the method here is only due to the conservation of the problem structure.



Example 4.4. A Non–Stiff Perturbed System. Consider the system

ẋ(t) = [ 0   1  ] · x(t) + [ 1 ] · d(t)
       [ −1  −1 ]          [ 0 ]

where |d(t)| ≤ 1. The bounds obtained via the Lyapunov analysis are ℰ₂ = 5.1098 and ℰ∞ = 3.7535, obtained with the optimal matrices

Q = [ 1        −0.0421 ]   and   Q = [ 1       −0.475 ]
    [ −0.0421  1.1177  ]             [ −0.475  1.0703 ]

respectively. On the other hand, the bounds obtained with the new approach are ℰ̃₂ = 3.266 and ℰ̃∞ = 2.3094, which are still less conservative. The eigenvector and eigenvalue matrices used here were also the ones obtained with Matlab:

V = [ 0.612 − j0.354  0.612 + j0.354 ]
    [ j0.707          −j0.707        ]

and

Λ = [ −0.5 + j0.866  0             ]
    [ 0              −0.5 − j0.866 ]
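For this complex-eigenvalue case, (4.42) can again be checked numerically (an illustrative sketch; only the input term remains, since there is no state perturbation in this example):

```python
import numpy as np

# Example 4.4 data: eigenvalues -0.5 +/- j0.866, |d(t)| <= 1 entering via B
A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[1.0], [0.0]])
du_max = np.array([1.0])

lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)
# input term of Eq. (4.42): |V| |Re(L)^-1 V^-1 B| du_max
E = np.abs(V) @ (np.abs(np.diag(1.0 / lam.real) @ Vinv @ B) @ du_max)
# componentwise about [2.31, 2.31]: norm-inf 2.3094 and norm-2 3.266
```

Both values match the ℰ̃∞ and ℰ̃₂ reported in the example.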

Although this new approach was developed in order to study the properties of the QSS–method, it can also be applied in a much more general context. In fact, these results will be reused in the second order approximation and in the asynchronous control methodology. But they can also be applied to many other problems which require ultimate bound estimation in the presence of bounded nonvanishing perturbations. Nonvanishing perturbations can also represent the effects of unknown disturbance signals (see [42] for instance), unmodeled dynamics [43] and errors in networked control systems [63]. Thus, the new methodology may also provide a useful tool in applications corresponding to all these cases.


QSS–method in LTI Systems

The stability of numerical methods is always studied in linear systems. As was already explained, the idea is to relate the eigenvalues of the original continuous system with the eigenvalues of the resulting difference equation. In that way, the analysis arrives at conditions –usually on the step size– so that the stable eigenvalues of the continuous system –i.e. in the left half–plane– are mapped into stable eigenvalues –i.e. inside the unit circle– of the difference equation.



As is already known, this idea cannot be applied to the QSS–method because here there is no difference equation to study. In Section 4.3, general conditions were found which ensure ultimate boundedness of the QSS solutions. There, a Lyapunov function was required for the analysis, so the result was not really practical. Although that general result has very important theoretical consequences, it is necessary to go to the particular case of LTI systems in order to obtain more practical conditions. Let us then consider the following LTI system:

ẋ(t) = A · x(t) + B · u(t)   (4.47)

with the same definitions made in (4.22). The use of the QSS–method transforms it into:

ẋ(t) = A · q(t) + B · u_q(t)   (4.48)

where q(t) and x(t) are related componentwise by hysteretic quantization functions. Here, u_q(t) is the quantized version of u(t) (in case the latter is not piecewise constant). Then, defining

Δx(t) ≜ q(t) − x(t)   and   Δu(t) ≜ u_q(t) − u(t)

Eq. (4.48) can be rewritten as

ẋ(t) = A(x(t) + Δx(t)) + B(u(t) + Δu(t))   (4.49)

where each component of the perturbation terms Δx and Δu is bounded by the corresponding quantum (according to Eq. (4.2)). Thus, provided that the trajectories do not reach the saturation bounds, Δx(t) and Δu(t) satisfy (4.26) and (4.27) respectively. Based on this last remark and making use of the results of the previous section, the following theorem gives the fundamental error bound property of the QSS–method in LTI systems.

Theorem 4.5. Global Error Bound of the QSS–Method. Consider the LTI system (4.47) where the matrix A is Hurwitz and diagonalizable. Let φ(t) be a solution of that system and let φ1(t) be a solution calculated by the QSS–method from the same initial condition, using a quantization so that (4.26) and (4.27) are satisfied. Then, the error e(t) ≜ φ1(t) − φ(t) is always bounded by:

|e(t)| ≤ |V| · (|ℜe(Λ)⁻¹Λ| |V⁻¹| Δx_max + |ℜe(Λ)⁻¹V⁻¹B| Δu_max)   (4.50)

where Λ is a diagonal eigenvalue matrix of A and V is an associated matrix of eigenvectors (i.e., they verify Eq. (4.40)).

Proof. From the definition of e(t), using (4.47) and (4.49) it results that

ė(t) = A(e(t) + Δx(t)) + B · Δu(t)   (4.51)

Since both solutions start from the same initial condition we have that e(t0) = 0, and then System (4.51) verifies all the hypotheses of Theorem 4.3. Thus, Inequality (4.39) takes the form of (4.50), completing the proof.



Theorem 4.5 gives the relationship between quantization and error. Inequality (4.50) can also be seen as a closed formula to choose the quantization according to the desired error. A remarkable fact is that the formula depends neither on the input trajectory nor on the initial condition. Besides being a strong theoretical property, this gives the method very important practical advantages. It can be easily seen that the error bound is proportional to the quantum and that, for any quantum adopted, the error is always bounded. Hence, the method is always stable, and in that way the QSS–method –using only explicit formulas– shares some properties with implicit discrete time methods. It is also important to notice that Inequality (4.50) is an analytical expression for the maximum global error. As already mentioned, discrete time methods lack similar formulas. Finally, an interesting fact is that in explicit discrete time algorithms the stability depends on a relationship between the location of the eigenvalues –the system speed– and the step size. Here, if a constant input is considered, it results that Δu_max = 0 and then the error is only connected to geometrical properties of the system: the eigenvectors and the angles of the eigenvalues (note that the product |ℜe(Λ)⁻¹Λ| is a diagonal matrix where each entry on the diagonal is the inverse of the cosine of an eigenvalue angle). This is coherent with the fact that the method discretizes the state variables.


Quantum and Hysteresis Choice

The QSS–method requires the choice of an adequate quantum and hysteresis size. Although we mentioned that the error is always bounded –at least in LTI systems– an appropriate quantization must be chosen in order to obtain a decent simulation result. It was also mentioned that Inequality (4.50) can be used for the quantization design. Given a desired error bound, it is not difficult to find appropriate values for Δx_max and Δu_max so that the inequality is satisfied. Let us illustrate this in a simple example.

Example 4.5. Choosing the Quantum. Consider the system:

ẋ1 = x2
ẋ2 = −x1 − x2 + u(t)   (4.52)

A set of eigenvalue and eigenvector matrices (calculated with Matlab) is

Λ = [ −0.5 + 0.866i  0             ]
    [ 0              −0.5 − 0.866i ]

and

V = [ 0.6124 − 0.3536i  0.6124 + 0.3536i ]
    [ 0.7071i           −0.7071i         ]



Then,

|V| |ℜe(Λ)⁻¹Λ| |V⁻¹| = [ 2.3094  2.3094 ]   (4.53)
                       [ 2.3094  2.3094 ]


Let us consider that the goal is to simulate (4.52) for an arbitrary initial condition and input trajectory with an error less or equal than 0.1 in each variable. Then, a quantum ?Q = 0.05/2.3094 0.05/2.3094 = 0.0217 0.0217 (4.54)

is small enough to ensure that the error cannot be bigger than the given bound. Although Inequality (4.50) gives an idea of the relationship between the quantum, the hysteresis and the error bound, it sometimes turns out to be quite conservative (despite being less conservative than the Lyapunov–based analysis). In fact, using a quantum and hysteresis equal to 0.05 in each variable of System (4.52) and taking u(t) = 1, the simulation shown in Figure 3.5 (page 35) is obtained. The predicted error bound is 0.23094 in each variable. However, the maximum error in that simulation turns out to be quite a bit smaller than this bound. Up to here, the hysteresis width was chosen equal to the quantum size, but no reason was given for this choice. On one hand, Inequality (4.2) says that the perturbation in each variable is bounded by the maximum between the quantum and the hysteresis width. Thus, a hysteresis width bigger than the quantum should not be used, because it would increase the error. On the other hand, the minimum time between successive internal transitions in a quantized integrator –given by Eq. (3.7) on page 31– depends on the minimum between the quantum and the hysteresis width. Thus, the use of a hysteresis smaller than the quantum would increase the number of calculations. The following example illustrates these ideas.

Example 4.6. Choosing the Hysteresis. Consider the first order system (4.3). Figure 4.3 shows simulation results with quantum Δq = 1 and hysteresis widths ε = 1, ε = 0.6 and ε = 0.1. In all the cases the maximum error is bounded by the same value. In theory, the bound is equal to 1, but in the simulations we can observe that the maximum error is always 0.5. However, the final oscillation frequency increases as the hysteresis width becomes smaller. In fact, that frequency can be calculated as

f = 1 / (2ε)

Then, it is clear that when the hysteresis is chosen to be equal to the quantum, the frequency is reduced without increasing the error. The result of reducing the oscillation frequency is a reduction in the number of steps performed by the algorithm and a consequent reduction of the computational costs.
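A minimal event-driven QSS1 loop for a scalar system makes this cost/accuracy trade-off concrete. The sketch below takes the hysteresis equal to the quantum (the recommended choice) and uses, as an assumption, the illustrative system ẋ = −x + 10, which resembles but need not coincide with the thesis's Eq. (4.3):

```python
def qss1_scalar(f, x0, dQ, t_end):
    # Minimal QSS1 sketch: between events x(t) is linear with slope f(q);
    # an event fires when |x - q| reaches dQ, and then q snaps to x.
    t, x, q, steps = 0.0, x0, x0, 0
    while t < t_end:
        dx = f(q)
        if dx == 0.0:
            break                      # QSS equilibrium: no further events
        sigma = dQ / abs(dx)           # time for |x - q| to grow to dQ
        if t + sigma > t_end:
            x += dx * (t_end - t)
            break
        t += sigma
        x += dx * sigma                # advance exactly to the event instant
        q = x
        steps += 1
    return x, steps

# illustrative system x' = -x + 10 (an assumption, not necessarily Eq. (4.3))
x, steps = qss1_scalar(lambda q: -q + 10.0, x0=0.0, dQ=1.0, t_end=100.0)
x2, steps2 = qss1_scalar(lambda q: -q + 10.0, x0=0.0, dQ=0.1, t_end=100.0)
# a 10x smaller quantum costs about 10x as many events
```

The step count growing linearly with 1/ΔQ is exactly the first order limitation discussed in the next section.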




Figure 4.3: Simulation of (4.3) with different hysteresis values (ε = 1, ε = 0.6, ε = 0.1)


Limitations of the QSS–method

Throughout this chapter, several theoretical properties were studied, establishing the good qualities of the QSS approximation. In particular, the results of Section 4.6 show that the QSS–method has very strong properties, at the level of variable step and implicit discrete time methods. However, there is a problem which was not mentioned up to here. On one hand, Inequality (4.50) says that the error bound is proportional to the quantum. On the other hand, the model M4 on page 33 shows that the time advance σ is proportional to the quantum. Thus, the number of steps will be approximately proportional to the required accuracy: any attempt to increase the accuracy 10 times will increase the number of calculations about 10 times. This fact can be easily understood taking into account that QSS only performs a first order approximation, so good accuracy cannot be obtained without significantly increasing the number of calculations. This limitation will be –partially– solved in the next chapter with a second order approximation. There are, of course, more limitations. The most important is probably the one related to stiff systems. These cases will be studied at the end (we anticipate now that there are some encouraging results, but the problem is still open). The last problem is related –again– to the choice of the quantization. Although Inequality (4.50) can be useful, nobody wants to calculate eigenvalues and eigenvectors before simulating. The only good solution to this is the use of adaptive quantization, which has not been developed yet.

Chapter 5

Second Order QSS
The previous chapter finished with a brief analysis of the limitations of the QSS–method. The first problem mentioned there was related to the fact that the method performs only a first order approximation. As a result, there is an approximately linear relationship between the inverse of the global error bound and the number of steps. Thus, any attempt to reduce the error results in a proportional increase of the computational costs, and it is quite expensive to obtain accurate results. In this chapter, a second order approximation will be developed which permits arriving at more accurate results without considerably increasing the number of calculations. In QSS, the quantizers approximate continuous trajectories by piecewise constant ones. Thus, the information about the higher order derivatives of the state variables is lost. Here, the main idea is to use piecewise linear trajectories instead of piecewise constant ones, in order to make use of the second order derivatives of the states. This new numerical approximation will be stated so as to ensure that it satisfies most of the practical and theoretical properties shown by the QSS–method. On one hand, the coupling scheme of the resulting DEVS atomic models will be the same as before, exploiting sparsity and making use of their asynchronous features. On the other hand, the approximating simulation model will still be seen as a perturbed version of the original system. In that way, theoretical properties similar to those shown for the QSS–method will be obtained.



The basic idea of the second order method is the use of first order quantizers, replacing the simple hysteretic quantizers which were used to define QSS. A hysteretic quantizer whose quantum and hysteresis width are equal to each other –it was already mentioned that this is the best choice– can be seen



as a system that produces a piecewise constant output trajectory which only changes when the difference between that output and the input reaches a certain threshold (i.e. the quantum). Following this idea, a first order quantizer is defined as a system which produces a piecewise linear output trajectory, having discontinuities only when its difference with the input reaches the quantum. This behavior is illustrated in Figure 5.1.



Figure 5.1: Input and Output trajectories in a First Order quantizer

This idea can be formalized as follows:

Definition 5.1. First Order Quantization Function. Two trajectories xi(t) and qi(t) are related by a first order quantization function if they satisfy

    qi(t) = xi(t)                    if t = t0 or |qi(t⁻) − xi(t⁻)| = ΔQ
            qi(tj) + mj·(t − tj)     otherwise                              (5.1)

with the sequence t0, ..., tj, ... defined as

    tj+1 = min( t | t > tj and |xi(tj) + mj·(t − tj) − xi(t)| = ΔQ )

and the slopes given by

    m0 = 0,   mj = ẋi(tj⁻),   j = 1, ..., k, ...                            (5.2)

The constant ΔQ will be called again the quantum, and the variables xi(t) and qi(t) satisfy

    |qi(t) − xi(t)| ≤ ΔQi   for all t                                       (5.3)

Notice that this inequality is just a particular case of (4.2) with a constant quantum equal to the hysteresis width.
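A first order quantizer as in Definition 5.1 can be sketched over a densely sampled input (an illustration of ours, with the slope taken as a left finite-difference estimate of the input derivative):

```python
# Sketch of Definition 5.1 applied to samples of x(t) = sin(t): the output
# q(t) is piecewise linear and restarts from the input, with slope equal to
# an estimate of the input derivative, whenever |q - x| reaches dQ.
import math

def first_order_quantize(ts, xs, dQ):
    qs = [xs[0]]
    t0, q0, m = ts[0], xs[0], 0.0      # current segment: start, value, slope
    for k in range(1, len(ts)):
        q = q0 + m * (ts[k] - t0)      # extrapolate the current segment
        if abs(q - xs[k]) >= dQ:       # threshold reached: new segment
            t0, q0 = ts[k], xs[k]
            m = (xs[k] - xs[k - 1]) / (ts[k] - ts[k - 1])
            q = q0
        qs.append(q)
    return qs

ts = [i * 1e-3 for i in range(6284)]   # about one period of sin(t)
xs = [math.sin(t) for t in ts]
qs = first_order_quantize(ts, xs, dQ=0.1)
err = max(abs(q - x) for q, x in zip(qs, xs))
print(err)  # bounded by the quantum, as Inequality (5.3) states
```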



Taking into account that the difference between the new approximation and the QSS-method is the use of first order quantizers instead of hysteretic quantization functions, the new method can be defined as follows:

Definition 5.2. QSS2-Method. Given a time invariant state equation system like (3.2), the QSS2-method approximates it by a system like (3.3) where the components of q(t) and x(t) are related componentwise by first order quantization functions. The resulting system (3.3) will be called Second Order Quantized State System (or QSS2 for short).

Figure 5.2 shows the corresponding block diagram.


Figure 5.2: Block Diagram Representation of a QSS2

As before, the components of q(t) will be called quantized variables. In QSS it was necessary to add hysteresis in order to ensure that the quantized variable trajectories become piecewise constant, avoiding illegitimacy. In the QSS2 definition the hysteresis seems to be absent. However, it is implicitly present in the definition of the first order quantization function: the absolute value in Equation (5.1) expresses that hysteretic behavior.


Trajectories in QSS2

It is quite obvious that the output of a first order quantization function is a piecewise linear trajectory; in fact, it was defined in order to obtain that kind of trajectory. Thus, it can be expected that the quantized variables of a QSS2 are piecewise linear (this fact will be formally proven later on). In QSS, not only the quantized variables but also the state derivatives were piecewise constant. In that way, it could be affirmed that the state variables had piecewise linear trajectories, and these features were used to build the DEVS model.



However, such particular trajectories cannot be found in QSS2. Although the quantized variables are piecewise linear, this does not imply that the state derivatives are piecewise linear as well, even if the inputs have piecewise linear trajectories. The reason is that a nonlinear function applied to a piecewise linear trajectory gives a trajectory which is not necessarily piecewise linear. The only case in which it can be claimed that the state derivative trajectories are piecewise linear is that of LTI systems, and there the state variables have piecewise parabolic trajectories. The following theorems prove all these facts.

Theorem 5.1. Quantized Trajectories in QSS2. Given the QSS2 defined in (3.3), with f continuous and bounded in any bounded domain and u(t) bounded, the trajectories of q(t) are piecewise linear while they remain inside a bounded region.

Proof. Let qi(t) and xi(t) denote the trajectories of the i-th components of q(t) and x(t) respectively. Since xi(t) and qi(t) are related by a first order quantization function, the former trajectory can be written as

    qi(t) = xi(t)                    if t = t0 or |qi(t⁻) − xi(t⁻)| = Δqi
            qi(tj) + mj·(t − tj)     otherwise                              (5.4)

with the sequence t0, ..., tj, ... defined according to

    tj+1 = min( t | t > tj and |xi(tj) + mj·(t − tj) − xi(t)| = Δqi )       (5.5)

and where, according to (5.2),

    m0 = 0,   mj = ẋi(tj⁻),   j = 1, ..., k, ...                            (5.6)

Although Equation (5.4) implies that qi(t) is formed by sections of lines, in order to assure that it is piecewise linear it is also necessary to prove that the sequence t0, ..., tj, ... does not contain an infinite number of components in a finite interval of time. Since it was assumed that q(t) is bounded over some interval of time [t0, tf], and taking into account the hypotheses made about f and the fact that u(t) is bounded, we have

    |fi(q(t), u(t))| ≤ Fi    for t0 ≤ t ≤ tf                                (5.7)

From (5.6) we have

    |mj| ≤ |fi(q(t), u(t))| ≤ Fi

Thus, the slope always results bounded by the constant Fi. From (5.5), and using the fact that |ẋi| = |fi| ≤ Fi, we have

    Δqi = |xi(tj) + mj·(tj+1 − tj) − xi(tj+1)|
        ≤ |xi(tj+1) − xi(tj)| + |mj|·(tj+1 − tj)
        ≤ Fi·(tj+1 − tj) + Fi·(tj+1 − tj) = 2Fi·(tj+1 − tj)                 (5.8)

Finally, it results that

    tj+1 − tj ≥ Δqi / (2Fi)


This inequality implies that the sequence t0, ..., tj, ... has a minimum time between successive components, and thus it is impossible to have an infinite number of components in a finite interval of time. As a consequence, the trajectories of q(t) are piecewise linear.

Theorem 5.2. State Derivative Trajectories in QSS2. In a QSS2 related to an LTI system with bounded piecewise linear inputs, the trajectories of the state derivatives are piecewise linear while the quantized variables remain in a bounded region.

Proof. In LTI systems we have f = Ax + Bu, and then the hypotheses of continuity and boundedness formulated in Theorem 5.1 are satisfied. Taking into account the assumptions on the input and quantized variable trajectories, it can be easily seen that the mentioned theorem holds, and then the quantized variables have piecewise linear trajectories. An LTI system can be expressed by Eq. (4.47) (page 59) and its associated QSS2 takes the form of (4.48). Since q(t) and u(t) have piecewise linear trajectories, it is clear that the trajectories of ẋ(t) are also piecewise linear.

A corollary of this theorem is that, in a QSS2 related to an LTI system, the state variables have piecewise parabolic trajectories, as was mentioned before. The piecewise constant and piecewise linear trajectories of QSS allowed finding its DEVS representation. In a similar way, the piecewise linear and piecewise parabolic trajectories will allow finding a DEVS model that exactly represents the behavior of a QSS2 associated to an LTI system. However, taking into account that the trajectory forms are not conserved in nonlinear systems, it will only be possible to simulate exactly the QSS2 approximations of LTI systems. Anyway, as will be seen in the next section, the QSS2-method can also be applied to general nonlinear systems, but in that case the simulation results will not coincide exactly with the solutions of (3.3).


DEVS Representation of QSS2

The DEVS model of QSS built in Section 3.4 was based on the use of events representing the changes in the values of the piecewise constant variables. Here, in a similar way, the changes in the slopes and values of the piecewise linear trajectories will be represented by events carrying the new values and slopes of the corresponding variables.



At this point, Giambiasi's work must be mentioned again, since his definition of GDEVS [17] gives a way of representing piecewise polynomial trajectories by segments of events. We shall start focusing on LTI systems where, provided that the input trajectories are piecewise linear, the state derivatives result piecewise linear and then the state variables have continuous piecewise parabolic trajectories. Using these facts, the same procedure used in Section 3.4 to build the DEVS model of a QSS can be followed. There, the main idea was splitting the model into quantized integrators and static functions. In spite of the similarity between QSS and QSS2, the atomic models of the latter approximation result quite different, since they must calculate and take into account not only the values but also the slopes of the trajectories. Moreover, the events carry both values and slopes. As before, the quantized integrators in QSS2 are formed by an integrator and a first order quantizer. They will be called second order quantized integrators, since they calculate the state trajectories using their first and second derivatives (i.e. the state derivative values and slopes). A DEVS model which can represent the behavior of a second order quantized integrator is the following one:

M7 = (X, S, Y, δint, δext, λ, ta), where:

    X = ℝ² × ℕ;   S = ℝ⁵ × ℝ⁺;   Y = ℝ² × ℕ
    δint(u, mu, x, q, mq, σ) = (u + mu·σ, mu, q̃, q̃, u + mu·σ, σ1)
    δext(u, mu, x, q, mq, σ, e, v, mv, p) = (v, mv, x̃, q + mq·e, mq, σ2)
    λ(u, mu, x, q, mq, σ) = (x + u·σ + (mu/2)·σ², u + mu·σ, 1)
    ta(u, mu, x, q, mq, σ) = σ

where

    q̃ = x + u·σ + (mu/2)·σ²;   x̃ = x + u·e + (mu/2)·e²

and

    σ1 = sqrt(2Δq/|mu|)   if mu ≠ 0
         ∞                otherwise                                         (5.9)


and σ2 can be calculated as the least positive solution of

    |x + u·e + (mu/2)·e² + v·σ2 + (mv/2)·σ2² − (q + mq·e + mq·σ2)| = Δq     (5.10)

It can be easily seen that model M7 gives an exact representation of a second order quantized integrator with piecewise linear input trajectories. Equations (5.9) and (5.10) calculate the time advance, that is, the time at which the difference between the piecewise parabolic state trajectory x(t) and the piecewise linear quantized trajectory q(t) reaches the quantum Δq.
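The two time-advance computations can be sketched as follows (our code, with our names; σ2 is obtained as the least positive root of the quadratic deviation condition (5.10)):

```python
# Sketch of the time advance in the second order quantized integrator M7:
# sigma1 from Eq. (5.9), and sigma2 as the least positive root of (5.10).
import math

def sigma1(dq, mu):
    """Time for |x - q| to reach dq when the deviation grows as (mu/2)*t^2."""
    return math.sqrt(2.0 * dq / abs(mu)) if mu != 0.0 else math.inf

def sigma2(dq, x0, q0, v, mv, mq):
    """Least positive s with |x0 + v*s + (mv/2)*s^2 - (q0 + mq*s)| = dq.

    x0, q0: state and quantized values right after the event; v, mv: new
    input value and slope; mq: current slope of the quantized trajectory."""
    a = mv / 2.0
    b = v - mq
    roots = []
    for target in (dq, -dq):
        c = (x0 - q0) - target            # a*s^2 + b*s + c = 0
        if a == 0.0:
            if b != 0.0:
                roots.append(-c / b)
        else:
            disc = b * b - 4.0 * a * c
            if disc >= 0.0:
                sq = math.sqrt(disc)
                roots.extend(((-b - sq) / (2 * a), (-b + sq) / (2 * a)))
    pos = [r for r in roots if r > 0.0]
    return min(pos) if pos else math.inf

s1 = sigma1(dq=0.01, mu=2.0)   # deviation (mu/2)*t^2 reaches 0.01 at t = 0.1
s2 = sigma2(dq=0.01, x0=0.0, q0=0.0, v=1.0, mv=0.0, mq=0.0)
print(s1, s2)
```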



It is clear that in a QSS2 simulation the integrators should be represented by models like M7 instead of M4. To represent the static functions in the QSS-method, model M2 was used; however, that model does not take the slopes into account. The representation of static functions in QSS2 thus requires a different DEVS model. Before building it, it is necessary to take into account the possible presence of a nonlinear relationship. Each component fj of a static function f(q, u) receives the piecewise linear trajectories of the quantized and input variables. Let us define z ≜ [qᵀ, uᵀ]ᵀ. Each component of z has a piecewise linear trajectory:

    zi(t) = zi(tk) + mzi(tk)·(t − tk)

Then, the static function output can be written as

    ẋj(t) = fj(z(t)) = fj(z1(tk) + mz1(tk)·(t − tk), ..., zl(tk) + mzl(tk)·(t − tk))

where l ≜ n + m is the number of components of z(t). Defining mz ≜ [mz1, ..., mzl]ᵀ, the last equation can be rewritten as

    ẋj(t) = fj(z(t)) = fj(z(tk) + mz(tk)·(t − tk))

which can be expanded in Taylor series as follows:

    ẋj(t) = fj(z(t)) = fj(z(tk)) + [∂fj/∂z (z(tk))]·mz(tk)·(t − tk) + ...   (5.11)


Then, a piecewise linear approximation of the output can be obtained by taking the first two terms of (5.11). This idea requires an estimation of the partial derivatives of the function fj, which must be recalculated at every change of the piecewise linear trajectories. Taking these considerations into account, a DEVS model which can be associated to a generic static function fj(z) = fj(z1, ..., zl), handling input and output values and slopes, can be written as follows:

M8 = (X, S, Y, δint, δext, λ, ta), where:

    X = ℝ² × ℕ;   S = ℝ^(3l) × ℝ⁺;   Y = ℝ² × ℕ
    δint(z, mz, c, σ) = (z, mz, c, ∞)
    δext(z, mz, c, σ, e, v, mv, p) = (z̃, m̃z, c̃, 0)
    λ(z, mz, c, σ) = (fj(z), mf, 1)
    ta(z, mz, c, σ) = σ

where z = (z1, ..., zl) and z̃ = (z̃1, ..., z̃l) are the input values. Similarly, mz = (mz1, ..., mzl) and m̃z = (m̃z1, ..., m̃zl) represent the input slopes.



The coefficients c = (c1, ..., cl) and c̃ = (c̃1, ..., c̃l) estimate the partial derivatives ∂fj/∂zi, which are used to calculate the output slope according to

    mf = Σ_{i=1..l} ci·mzi

After the external transition function, the components of z̃, m̃z and c̃ are calculated according to

    z̃i  = v                     if p = i
           zi + mzi·e            otherwise

    m̃zi = mv                    if p = i
           mzi                   otherwise

    c̃i  = ( fj(z + mz·e) − fj(z̃) ) / ( zi + mzi·e − z̃i )   if p = i and zi + mzi·e − z̃i ≠ 0
           ci                                                 otherwise        (5.12)

If the function f(z) is linear, this DEVS model exactly represents its behavior when the components of z(t) are piecewise linear. In the nonlinear case, the DEVS model approximates the Taylor expansion (5.11). Then, by coupling DEVS models like M7 and M8 as shown in Figure 5.2, the QSS2-method can be used to simulate any time invariant ODE system.
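The value/slope propagation of M8 can be sketched as follows. Note that, for brevity, our sketch estimates the partial derivatives by an explicit small perturbation, while model M8 above estimates them from successive input events as in (5.12):

```python
# Sketch of the output computation of a static-function model: the output
# value is f(z) and the output slope combines the input slopes with
# finite-difference estimates c_i of the partial derivatives df/dz_i.
def static_output(f, z, mz, eps=1e-6):
    """Return (value, slope) of the piecewise linear output approximation."""
    value = f(z)
    c = []
    for i in range(len(z)):
        zp = list(z)
        zp[i] += eps
        c.append((f(zp) - value) / eps)   # estimate of df/dz_i
    slope = sum(ci * mi for ci, mi in zip(c, mz))
    return value, slope

# f(z) = z1 * z2 at z = (2, 3) with slopes (1, -1):
# exact output slope = z2*mz1 + z1*mz2 = 3 - 2 = 1
val, mf = static_output(lambda z: z[0] * z[1], [2.0, 3.0], [1.0, -1.0])
print(val, mf)
```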


Properties of the QSS2–Method

It was already mentioned that the QSS2-method shares some properties with QSS. The reason can be easily understood: two variables xi(t) and qi(t) related by a first order quantization function satisfy (5.3), and this inequality is just a particular case of (4.2) with a constant quantum equal to the hysteresis width. The QSS properties were deduced using that inequality (which implies that the method only introduces a bounded perturbation) and the representation of (4.1). Taking into account that this representation is also valid for QSS2, the conclusion is that the QSS2-method satisfies the same convergence, stability and error bound properties as QSS. However, in nonlinear systems the QSS2 definition and what the DEVS model simulates do not coincide. Thus, the analysis only ensures that those properties hold in the simulation of LTI systems. Although there are many reasons to conjecture that convergence and stability are still verified in the simulation of nonlinear systems, there is not yet a formal proof of this. When it comes to the error bound, Inequality (4.50) is verified by the QSS2-method, since it was deduced for LTI systems. Besides the theoretical properties deduced from the perturbation analysis, it was seen that the QSS-method also exhibits some practical advantages related



to the incorporation of input signals and the exploitation of sparsity. Let us see what happens with these practical issues in the QSS2-method. The sparsity exploitation is straightforward: the simulation structure is conserved from QSS, and then each transition only involves calculations at the integrators and static functions directly connected to the integrator which performed it. When it comes to the input signals, the situation is now better than before: it can be ensured not only that the inputs are processed as soon as they change, but also that piecewise linear (instead of just piecewise constant) trajectories can be represented exactly. The following example illustrates these advantages.

Example 5.1. A Transmission Line Simulation.
The circuit of Figure 5.3 represents an RLC transmission line. This model can be used to study the performance of integrated circuits transmitting data at a very fast rate. Although the length of the wires is only a few centimeters, the high frequency of the signal causes delays that cannot be ignored, and transmission line theory must be applied. Transmission lines are described by systems of partial differential equations. However, they can be approximated by lumped models where the distributed effects of capacitance, inductance and resistance are represented by a cascade of single capacitors, inductors and resistors, as Figure 5.3 shows.

Figure 5.3: RLC Transmission Line

In order to constitute a good approximation, the RLC model must be formed by several sections. As a consequence, it results in a sparse, high order ordinary differential equation. In [22] an example composed of five sections of RLC circuits is introduced. The resistance, inductance and capacitance values used there correspond to real parameters. The model obtained is a tenth order linear system whose evolution matrix A has the sparse block tridiagonal structure

    A = [ −R/L  −1/L    0     0    ...    0     0
           1/C    0   −1/C    0    ...    0     0
            0    1/L  −R/L  −1/L   ...    0     0
            0     0    1/C    0    ...    0     0
            ...   ...   ...   ...   ...   ...   ...
            0     0     0     0    ...  −R/L  −1/L
            0     0     0     0    ...   1/C    0  ]
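As an aside, the structure of A for an arbitrary number of sections can be generated by a short script (our sketch; it assumes the interleaved state ordering i1, v1, ..., in, vn suggested by the matrix above, with the input Vin entering through the first inductor equation via the input matrix, not through A):

```python
# Sketch of the evolution matrix of an n-section lumped RLC line
# (state order assumed: i_1, v_1, i_2, v_2, ..., i_n, v_n).
def rlc_line_matrix(n, R, L, C):
    A = [[0.0] * (2 * n) for _ in range(2 * n)]
    for k in range(n):
        i, v = 2 * k, 2 * k + 1          # rows: inductor current, cap voltage
        A[i][i] = -R / L                 # di_k/dt = (v_{k-1} - v_k - R*i_k)/L
        A[i][v] = -1.0 / L
        if k > 0:
            A[i][v - 2] = 1.0 / L        # previous capacitor voltage
        A[v][i] = 1.0 / C                # dv_k/dt = (i_k - i_{k+1})/C
        if k < n - 1:
            A[v][i + 2] = -1.0 / C       # next inductor current
    return A

A = rlc_line_matrix(5, R=80.0, L=20e-9, C=0.2e-12)
nonzero = sum(1 for row in A for a in row if a != 0.0)
print(nonzero, "nonzeros out of 100")    # at most 3 entries per row
```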



A typical input trajectory in these digital systems is a trapezoidal wave representing the "0" and "1" levels as well as the rising and falling times. Since a trapezoidal wave is a piecewise linear trajectory, it can be generated exactly using the following DEVS model:

M9 = (X, S, Y, δint, δext, λ, ta), where:

    X = ∅;   S = ℕ × ℝ⁺;   Y = ℝ² × ℕ
    δint(k, σ) = (k̃, t_k̃)
    λ(k, σ) = (u_k̃, m_{u_k̃}, 1)
    ta(k, σ) = σ

where k̃ = (k + 1) mod 4 is the index of the next phase of the cycle. The cycle has 4 phases: the low level (index 0), the rising phase (1), the high level (2) and the falling phase (3). The duration of phase k is t_k. During the low level the output is u0, and at the high level it is u2; hence the slopes mu0 and mu2 are zero. During the rising time we have u1 = u0 with slope mu1 = (u2 − u0)/t1. Similarly, during the falling time we have u3 = u2 and mu3 = (u0 − u2)/t3. Thus, the DEVS generator representing the input trajectory produces only 4 events per cycle. This is an important advantage, since the presence of the input wave only adds a few extra calculations. Moreover, since the representation is exact, it does not introduce any error, i.e. the error bound can be estimated using Inequality (4.50) with Δumax = 0.

The simulation was performed using parameters R = 80 Ω, C = 0.2 pF and L = 20 nH. (These parameters correspond to a transmission line of one centimeter divided into five sections, where the resistance, capacitance and inductance are 400 Ω/cm, 1 pF/cm and 100 nH/cm respectively.) The trapezoidal input has rising and falling times t1 = t3 = 10 ps, while the low and high times are t0 = t2 = 1 ns. The low and high levels are 0 V and 2.5 V respectively. The quantization adopted was Δv = 4 mV in the state variables representing voltages and Δi = 10 μA in the state variables representing currents. According to (4.50), that quantization ensures that the maximum error in the variable Vout is smaller than 250 mV.
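The four-phase generator M9 can be sketched as a plain event list (our code; each event carries its time, output value and output slope, using the parameters of Example 5.1):

```python
# Sketch of the trapezoidal-wave generator: each cycle emits exactly four
# events, one per phase (low, rise, high, fall).
def trapezoid_events(u_low, u_high, t_low, t_rise, t_high, t_fall, n_cycles):
    m_rise = (u_high - u_low) / t_rise
    m_fall = (u_low - u_high) / t_fall
    # (initial value, slope, duration) of the four phases of one cycle
    phases = [(u_low, 0.0, t_low), (u_low, m_rise, t_rise),
              (u_high, 0.0, t_high), (u_high, m_fall, t_fall)]
    t, events = 0.0, []
    for _ in range(n_cycles):
        for value, slope, dt in phases:
            events.append((t, value, slope))   # event at phase start
            t += dt
    return events

# 0/2.5 V levels, 10 ps edges, 1 ns flats, two cycles
evs = trapezoid_events(0.0, 2.5, 1e-9, 10e-12, 1e-9, 10e-12, n_cycles=2)
print(len(evs))  # only 4 events per cycle
```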
The input and output trajectories are shown in Figure 5.4. The simulation took a total of 2536 steps (between 198 and 319 internal transitions at each integrator) to obtain the first 3.2 ns of the system trajectories. The experiment was repeated using a quantization 100 times smaller, which ensures a maximum error in Vout of 2.5 mV. This new simulation was performed with a total of 26883 internal transitions. The comparison of the results of both experiments (Figure 5.5) shows that the difference between the trajectories never becomes greater than 14.5 mV, which implies (adding the 2.5 mV bound of the finer simulation) that the error in the first simulation was not greater than 17 mV. In this



Figure 5.4: QSS2 simulation results in a RLC transmission line

case, the theoretical prediction of the error (the bound was 250 mV) was quite conservative. This can be easily understood taking into account that the theoretical error bound holds for any input trajectory and any initial condition.









Figure 5.5: Difference in Vout using two different quanta

Although the number of steps (2536) seems quite big, it is important to take into account that each step only involves calculations at three integrators (the integrator that performs the internal transition and the two integrators directly connected to its output). This is due to the particular, sparse form of the matrix A. It is important to mention that any fixed step discrete time method would have to use a step size of about 1 ps in order to obtain a good representation of the input signal, since its rising time is very short (mainly if we are interested in the behavior of the system during the rising time). As a result, the number of steps would be 3200 or greater, with each step involving calculations over all the state variables. Even if tools for dealing with sparse matrices were used, all the state variables would change at each step.

It was already mentioned that the QSS2-method properties were only proven for LTI systems, but it was conjectured that in nonlinear systems they should be quite similar. The reason for this conjecture has to do with the validity of the linearized models, which can be justified in general simulation problems taking into account that there are only small changes in the variables during successive steps. Thus, after each step, the trajectories stay inside the region where the linearized model constitutes a good approximation. The following example illustrates these ideas through the use of the QSS2-method in a nonlinear system.

Example 5.2. Lotka-Volterra's model.
The famous second order Lotka-Volterra model is a nonlinear system of differential equations that tries to represent the evolution of the populations of two species, prey and predator, in a common habitat. The following set of state equations constitutes one possible representation of the mentioned system:

    ẋ1 = ε·x1 + α·x1·x2 − σ·x1²
    ẋ2 = −m·x2 + β·x1·x2

where the variables x1 and x2 represent the populations of preys and predators respectively. The system was simulated using QSS2 with a quantization Δq1 = Δq2 = 0.001. The parameters taken were ε = 0.1, α = 0.01, σ = 0.01, m = 0.4 and β = 0.5, and the initial condition adopted was x1(0) = x2(0) = 10. The results are shown in Figures 5.6 and 5.7. The simulation was performed with 211 internal transitions in the first integrator and 264 in the second, which gives a total of 475 steps. These results were compared with the trajectories obtained using Heun's method, the classic second order fixed step method. The step size used in Heun's method was 0.63, so that it performed the same number of steps. Figure 5.8 shows a part of those trajectories together with the "true" trajectory, which was obtained using the fifth order Dormand-Prince method with a step size of 0.1. The absolute error of the QSS2 and Heun simulations is shown in Figure 5.9. The use of QSS2 achieves a considerable reduction of the maximum error. However, the error does not converge to zero as it does in Heun's method.
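For reference, Heun's method, used above as the fixed step baseline, is the predictor-corrector scheme sketched below. The right-hand side here is a standard Lotka-Volterra model with illustrative parameters, not the thesis values:

```python
# Sketch of Heun's method (second order, fixed step): an explicit Euler
# predictor followed by a trapezoidal corrector.
def heun(f, x0, h, n):
    """Integrate x' = f(x) over n steps of size h; return the trajectory."""
    x = list(x0)
    traj = [list(x)]
    for _ in range(n):
        k1 = f(x)
        pred = [xi + h * ki for xi, ki in zip(x, k1)]        # predictor
        k2 = f(pred)
        x = [xi + 0.5 * h * (a + b) for xi, a, b in zip(x, k1, k2)]
        traj.append(list(x))
    return traj

def lotka_volterra(x, eps=0.5, alpha=0.1, m=0.4, beta=0.05):
    x1, x2 = x                               # illustrative parameters
    return [eps * x1 - alpha * x1 * x2,      # preys
            -m * x2 + beta * x1 * x2]        # predators

traj = heun(lotka_volterra, [10.0, 10.0], h=0.01, n=2000)
print(traj[-1])  # populations oscillate, staying positive and bounded
```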






Figure 5.6: Number of preys in Lotka Volterra’s model









Figure 5.7: Number of predators in Lotka-Volterra’s model

In spite of the lack of convergence, the results obtained with the QSS2 simulation are much more reliable than those of Heun's method, since the error stays bounded during the whole simulation. Here it can be seen that this property, deduced for LTI systems in Section 4.6, is still verified in this nonlinear example.







Figure 5.8: Number of preys according to Heun’s method, QSS2 and the “true” trajectory.




Figure 5.9: Error in Heun and QSS2 simulation of Lotka–Volterra’s model


QSS vs. QSS2

It was mentioned that the error bound (4.50) holds for both QSS and QSS2. Then, if the same quantum is used in the QSS and QSS2 methods, the same error bound should be expected. This remark opens a question: where is the advantage of using the



QSS2-method if it has the same error bound as QSS? The answer is that, for the same quantization (i.e. for the same error bound), a simulation using QSS2 requires many fewer steps than one using QSS. The cause of this reduction can be found by comparing Figures 5.1 and 5.10. The number of steps used by QSS to represent the same trajectory with the same quantum Δq is much greater than that used by QSS2. The key is that, using the slope information, in QSS2 the time required for the difference with respect to the input trajectory to reach Δq becomes much longer.


Figure 5.10: Input and Output trajectories in a Zero Order quantizer

In the Lotka-Volterra example, a simulation via QSS using the same quantization would have taken more than 10000 internal transitions at each integrator¹. This value, compared with the 264 and 211 steps performed by the QSS2 simulation, clearly shows the advantage of the new method. The difference between the performance of QSS and QSS2 grows as the quantization becomes smaller. In fact, the number of internal transitions in QSS is approximately proportional to the inverse of the quantum, while in QSS2 it is approximately proportional to the square root of that number (see Equation (5.9)). This relationship is verified in the transmission line example, where the use of a quantization 100 times smaller resulted in an increase of about 10 times in the number of steps. However, it is clear that each transition in the QSS2-method involves more calculations than in QSS. Thus, when high accuracy is not needed, the QSS method might result more efficient. The following example illustrates these facts.

Example 5.3. QSS vs. QSS2.
¹This number can be obtained by comparing the amplitude of the trajectories with the quantum 0.001.
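The different scalings can be illustrated with a back-of-the-envelope computation (our estimates: for QSS the step count is roughly the total variation of the trajectory divided by Δq, while Equation (5.9) gives for QSS2 a count proportional to 1/√Δq):

```python
# Rough step-count estimates for x(t) = sin(t) over one period [0, 2*pi]:
# QSS needs about (total variation)/dq steps; QSS2 needs about
# T * sqrt(|x''|max / (2*dq)) steps, from the time advance (5.9).
import math

def qss_steps(dq, variation=4.0):          # total variation of sin per period
    return variation / dq

def qss2_steps(dq, T=2 * math.pi, xddot_max=1.0):
    return T * math.sqrt(xddot_max / (2 * dq))

for dq in (1e-2, 1e-4):
    print(dq, round(qss_steps(dq)), round(qss2_steps(dq)))
# dividing dq by 100 multiplies the QSS count by 100, the QSS2 count by 10
```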


Consider the second order LTI system

    ẋ1 = x2
    ẋ2 = 1 − x1 − x2

The system was simulated using both methods and different quanta, taking in all cases Δq1 = Δq2. Figure 5.11 compares the CPU time required by both methods. QSS shows a better performance for big quanta, but when the quantization becomes smaller QSS2 reduces the computational costs considerably. In fact, for Δq = 0.0001 the second order method is 40 times faster than QSS.






Figure 5.11: CPU time vs. quantum in QSS and QSS2

Chapter 6

Extensions of QSS and QSS2 Methods
Up to here, the Thesis has attempted to develop a sort of theory about the discrete event approximation of ordinary differential equations. However, ODEs constitute only one fraction of the continuous systems world. Leaving partial differential equation problems aside, there are many continuous systems which cannot be represented by ODEs. Many physical and engineering problems yield models where, in spite of having a differential equation nature, the functions involved in the corresponding ODE are not explicitly defined. These problems are called Differential Algebraic Equations (DAEs) and constitute a more general class than the ODEs. In other cases, very common in modern technical systems, a single set of ODEs is not enough to represent the complete behavior. Those problems usually have some components which should be described by difference equations or discrete event systems (digital devices, for instance). Since these discrete components interact with the continuous parts of the system, which are typically described by ODEs or DAEs, those ODEs or DAEs show a discontinuous nature. These last cases are called Hybrid Systems and, as with DAEs, their numerical integration poses new problems and difficulties. This chapter focuses on the application of the QSS and QSS2 methods to the simulation of Differential Algebraic Equations and Hybrid Systems, taking also into account the particular case of physical systems modeled by Bond Graphs. In all cases, some interesting advantages over the classic approaches will be found. As anticipated, those advantages result notably significant in the particular case of Hybrid Systems, due to the discrete event nature of the approximation. There, an important reduction of the computational costs will be observed, together with a remarkable increase of accuracy and simplicity.




QSS and QSS2 in DAE Systems

There are many continuous systems where an explicit ODE formulation cannot be easily obtained [10]. Indeed, there are cases in which such a representation does not even exist. These systems, where only implicitly defined state equations can be written, are called Differential Algebraic Equations. The difficulty carried by DAE simulation is that it requires the use of special techniques, which include iterations or symbolic manipulations to solve the implicit equations involved. A time invariant DAE can be written as

    f̃(ẋ(t), x(t), u(t)) = 0                                                (6.1)

The problem here with QSS and QSS2 is that the quantized integrators need the value of ẋ, which can only be obtained approximately, through iterations. One of the most important results in the area came from the idea of combining the ODE solver rules with the implicit equations of the system in order to solve them together (with iterations or symbolic manipulation). This idea, due to Gear [16], established the basis of all modern DAE simulation methods. Then, if the QSS or QSS2 solver rules are applied here, the state variables x(t) should be replaced by their quantized versions q(t). Thus, Equation (6.1) becomes

    f̃(ẋ(t), q(t), u(t)) = 0                                                (6.2)

Then, iteration rules (like Newton's) can be used to obtain ẋ from (6.2). In this work, it is assumed that the index is 1; if it is higher, Pantelides' algorithm [52] can be applied in order to reduce it to 1. Once the state derivatives are calculated, they can be sent to the quantized integrators, which perform the rest of the job, i.e. calculating the quantized variable trajectories. Each time a quantized variable changes, a new iteration process should be performed in order to recalculate ẋ. It was already explained that the QSS and QSS2 methods exploit the system sparsity, reducing the computational costs. Now, it will be shown that this is also true in DAE problems. Equation (6.1) can be rewritten as

    ẋ(t) = f(x(t), u(t), z(t))                                              (6.3a)
    0 = g(xr(t), ur(t), z(t))                                               (6.3b)

where z(t) is a vector of auxiliary variables whose dimension is equal to or less than n. The vectors xr and ur are reduced versions of x and u respectively¹. Equation (6.3b) expresses the fact that some state and input variables may not act directly on the algebraic loops.
¹A straightforward way of going from (6.1) to (6.3) is defining z(t) = ẋ(t), xr(t) = x(t), ur(t) = u(t), g(x, u, z) = f̃(z, x, u) and f(x, u, z) = z.

Then, the use of the QSS or QSS2 method transforms (6.3) into

    ẋ(t) = f(q(t), u(t), z(t))                                              (6.4a)
    0 = g(qr(t), ur(t), z(t))                                               (6.4b)


(6.4a) (6.4b)

and now the iterations are only needed to solve Eq. (6.4b) when the components of qr or ur change. When the dimension of xr is significantly less than the dimension of x, i.e. when there are several state variables which do not influence the loop, this represents an important advantage. When Equation (6.4b) defines the value of z, (6.4) defines a system which behaves like a QSS or a QSS2. In fact, it can be easily proven that the state and quantized variable trajectories correspond to a QSS or a QSS2. Moreover, the auxiliary variables z have piecewise constant and piecewise linear trajectories in QSS and QSS2 respectively. Then, System (6.4) is said to be an implicitly defined QSS or QSS2. What should be solved now is the way in which such an implicitly defined QSS (QSS2) can be translated into a DEVS model. It is clear that (6.4a) can be represented by quantized integrators and static functions as before. The only difference now is the presence of the auxiliary variables z, which act as inputs like u(t). However, while the components of u(t) are known and come from DEVS signal generators, the auxiliary variables must be calculated by solving the restriction (6.4b). Thus, a new DEVS model should be built, which receives events with the values of qr and ur and calculates z. Figure 6.1 shows the new coupling scheme, with the addition of the new DEVS model which calculates z. A DEVS model which solves a general implicit equation like

    g(v, z) = g(v1, ..., vm, z1, ..., zk) = 0                               (6.5)

for the QSS case can be written as follows:

M10 = (X, Y, S, δint, δext, λ, ta), where:

    X = ℝ × ℕ;   Y = ℝᵏ × ℕ;   S = ℝ^(m+k) × ℝ⁺
    δext(v, z, σ, e, xv, p) = (ṽ, h(ṽ, z), 0)
    δint(v, z, σ) = (v, z, ∞)
    λ(v, z, σ) = (z, 1)
    ta(v, z, σ) = σ

where

    ṽ = (ṽ1, ..., ṽm)ᵀ;   ṽi = xv   if p = i
                                 vi   otherwise

and the function h(v, z) returns the result of applying Newton iteration or some other iteration rule to find the solution of (6.5) using z as the initial guess.
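Since the iteration rule is left open here, the following sketch shows one possible implementation of the function h in Python, using a Newton iteration with a finite-difference derivative. The function names, the tolerance, and the test restriction g are illustrative assumptions, not part of the thesis.

```python
# Sketch of the iteration function h(v, z) used by model M10: it solves
# g(v, z) = 0 for z with the quantized inputs v held constant, starting
# from the previous solution z as the initial guess (scalar z assumed).

def solve_implicit(g, v, z_guess, tol=1e-8, max_iter=50, dz=1e-7):
    z = z_guess
    for _ in range(max_iter):
        gz = g(v, z)
        if abs(gz) < tol:
            return z
        dgdz = (g(v, z + dz) - gz) / dz  # finite-difference estimate of dg/dz
        z -= gz / dgdz                   # Newton step
    return z

# Hypothetical restriction g(v, z) = z + z**3 - v; for v = 2 the root is z = 1
z = solve_implicit(lambda v, z: z + z**3 - v, v=2.0, z_guess=0.5)
```

Starting from the previously computed z is what keeps the iteration count low in practice, since consecutive solutions are close to each other.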



Figure 6.1: Coupling scheme for the QSS simulation of (6.3)

When the size of z (i.e. k) is greater than 1, the output events of model M10 contain a vector. Thus, they cannot be sent to static functions like M2. Anyway, a DEVS model which demultiplexes a vectorial input value into scalar output values at different ports can be used in order to overcome this difficulty.

The idea for the QSS2 method is similar, but now the implicit equation should be rewritten as

g(v(t), z(t)) = g(v0 + mv·Δt, z0 + mz·Δt) = 0        (6.6)

which can be split into

g(v0, z0) = 0        (6.7a)

and a first order approximation

∂g/∂v |(v0,z0) · mv + ∂g/∂z |(v0,z0) · mz = 0        (6.7b)
Then, the algorithm should iterate using Eq. (6.7a) to obtain z0 and then use this value in (6.7b) to calculate mz. The calculation of mz can be done using symbolic manipulation (i.e. matrix inversion) or a new iteration process which finishes after two steps (because the equation is linear). If the partial derivatives are not available, they can be estimated using coefficients as we did in Equation (5.12). When z and g(v, z) are scalar, Equation (6.7b) can be easily solved. In this case, we have

mz = − (∂g/∂v |(v0,z0) · mv) / (∂g/∂z |(v0,z0))        (6.8)
and then a DEVS model can be built as follows:

M11 = ⟨X, S, Y, δint, δext, λ, ta⟩, where:

X = ℝ² × ℕ
S = ℝ^(3(m+1)) × ℝ+
Y = ℝ² × ℕ
δint((v1, mv1, c1), . . . , (vm, mvm, cm), (z, mz, cz), σ) =
    = ((v1, mv1, c1), . . . , (vm, mvm, cm), (z, mz, cz), ∞)
δext((v1, mv1, c1), . . . , (vm, mvm, cm), (z, mz, cz), σ, e, u, mu, p) =
    = ((ṽ1, m̃v1, c̃1), . . . , (ṽm, m̃vm, c̃m), (z̃, m̃z, c̃z), 0)
λ((v1, mv1, c1), . . . , (vm, mvm, cm), (z, mz, cz), σ) = (z, mz, 1)
ta((v1, mv1, c1), . . . , (vm, mvm, cm), (z, mz, cz), σ) = σ

with

ṽi = u if p = i;   ṽi = vi + mvi·e otherwise

and

z̃ = h(ṽ, z)

where h is the result of the iterations to solve Equation (6.7a) starting from the initial guess z. The coefficients which estimate the partial derivatives can be recalculated as

c̃i = g(v + mv·e, z̃) / (vi + mvi·e − ṽi)   if p = i ∧ vi + mvi·e − ṽi ≠ 0;   c̃i = ci otherwise

c̃z = g(ṽ, z + mz·e) / (z + mz·e − z̃)   if z + mz·e − z̃ ≠ 0;   c̃z = cz otherwise

and finally, the new slopes are

m̃vi = mu if p = i;   m̃vi = mvi otherwise

and

m̃z = −(1/c̃z) · Σi c̃i·m̃vi

If m is big, i.e. v has many components, the new output slope m̃z can be calculated with the equivalent formula:

m̃z = (cz/c̃z)·mz + (1/c̃z)·(mvp·cp − m̃vp·c̃p)
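The equivalence between the full sum and this O(1) update can be checked numerically; in the sketch below all coefficient values are made up for the test, and the variable names follow the notation above.

```python
# Check (with invented values) that the incremental formula for the output
# slope agrees with recomputing mz_new = -(1/cz_new) * sum(ci_new * mvi_new)
# from scratch when only component p of the input changes on an event.
import random

random.seed(1)
m, p = 6, 2
c  = [random.uniform(0.5, 2.0) for _ in range(m)]   # coefficients c_i
mv = [random.uniform(-1.0, 1.0) for _ in range(m)]  # input slopes m_vi
cz, cz_new = 1.3, 1.7                               # old and new c_z
mz = -sum(ci * mvi for ci, mvi in zip(c, mv)) / cz  # old output slope

c_new, mv_new = list(c), list(mv)
c_new[p], mv_new[p] = 0.9, 0.4       # only entry p changes with the event

mz_full = -sum(ci * mvi for ci, mvi in zip(c_new, mv_new)) / cz_new
mz_fast = (cz / cz_new) * mz + (mv[p] * c[p] - mv_new[p] * c_new[p]) / cz_new
```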



This last DEVS model is just one possible alternative to solve an implicit restriction like (6.6) taking the slopes into account. There are many other possibilities. When the function g is linear, the DEVS model produces the exact output values z and mz. Otherwise, it gives only a first order approximation.

Example 6.1. A Transmission Line with Surge Voltage Protection.

Figure 6.2 shows the transmission line model of Figure 5.3, modified with the addition of a load.



Figure 6.2: RLC transmission line with surge voltage protection

The load is composed of a resistor Rl –which may represent the gate of some electronic component– and a surge protection circuit formed by a zener diode and a resistor Rp. It will be assumed that the zener diode satisfies the following nonlinear current–voltage law:

iz = I0 / (1 − (vz/vbr)^m)        (6.9)

where m, vbr and I0 are parameters which depend on different physical features. If the transmission line is divided into five sections –as was done before– the following equations are obtained:

di1/dt = (1/L)·vin − (R/L)·i1 − (1/L)·u1
du1/dt = (1/C)·i1 − (1/C)·i2
di2/dt = (1/L)·u1 − (R/L)·i2 − (1/L)·u2
du2/dt = (1/C)·i2 − (1/C)·i3
  .
  .
  .
di5/dt = (1/L)·u4 − (R/L)·i5 − (1/L)·u5
du5/dt = (1/C)·i5 − (1/(Rp·C))·(u5 − vz)

Here, the state variables uj and ij represent the voltage and current in the capacitors and inductors respectively and the output voltage vz is an algebraic

variable which should satisfy

(1/Rp)·u5 − (1/Rp + 1/Rl)·vz − I0 / (1 − (vz/vbr)^m) = 0        (6.10)



Thus, there is a DAE here which cannot be converted into an ODE by symbolic manipulation, and a simulation with any classic method would have to iterate at each step to solve the implicit equation (6.10). However, as can be deduced from (6.4), the QSS methods only iterate when there are changes in the quantized version of u5. In the rest of the steps –when the other 9 quantized variables change or when the input changes– no iteration has to be performed.

Using the ideas expressed above, the simulation of page 73 was modified with the addition of a DEVS model like M11 which solves (6.10) taking the slopes into account. Taking Rl = 100 MΩ, I0 = 0.1 µA, vbr = 2.5 V, m = 4 and without modifying the remaining parameters, the results shown in Figure 6.3 were obtained.
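The computation triggered when the quantized u5 changes can be sketched as follows; the secant routine is generic, Eq. (6.10) uses the load parameters of the example, and the value of Rp –which comes from the earlier transmission line example, not from this excerpt– is an assumed placeholder.

```python
# Secant iteration for g(z) = 0, as used by the M11-like implicit model

def secant(g, z0, z1, tol=1e-8, max_iter=50):
    """Secant iteration for g(z) = 0 starting from guesses z0, z1."""
    g0, g1 = g(z0), g(z1)
    for _ in range(max_iter):
        if abs(g1) < tol:
            break
        z_next = z1 - g1 * (z1 - z0) / (g1 - g0)
        z0, g0 = z1, g1
        z1, g1 = z_next, g(z_next)
    return z1

# Load parameters of the example; Rp = 1 is an assumed placeholder (its
# true value comes from the transmission line example of Chapter 5).
Rl, I0, vbr, m, Rp = 100e6, 0.1e-6, 2.5, 4, 1.0

def g10(vz, u5):
    """Left-hand side of Eq. (6.10)."""
    return u5 / Rp - (1.0 / Rp + 1.0 / Rl) * vz - I0 / (1.0 - (vz / vbr) ** m)

u5 = 2.0                                   # current quantized value of u5
vz = secant(lambda z: g10(z, u5), 0.0, u5)
```

As in the example, only two or three secant steps are needed here because the previous solution provides a good starting point.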

Figure 6.3: QSS2 simulation results in a RLC transmission line with surge protection

The first 3.2 ns of the simulation were completed after 2640 steps (between 200 and 316 steps at each integrator). The implicit model (M11) performed a total of 485 iterations with the secant method. The reason for this is that the quantized integrator which calculates u5 only performed 200 internal transitions, and then the implicit model received only 200 external events. The secant method needed between two and three iterations to find the solution of Eq. (6.10) with the required tolerance (tol = 1×10⁻⁸ was taken), which explains the fact that the total number of iterations was 485 (between 400 and 600).



The advantages of the QSS2 method are evident in this example. In a discrete-time algorithm, the secant method would have been invoked at every step, while QSS2 only called it after the changes in u5 (about once every 13 steps). Thus, the presence of the implicit equation only adds a few calculations which do not significantly affect the total number of computations.


Block–Oriented DEVS Simulation of DAEs

DAE systems of index 1 are often represented by Block Diagrams containing algebraic loops. The circuit of Figure 6.4, for instance, can be modelled by the block diagram of Figure 6.5. The bold lines in this block diagram indicate the presence of an algebraic loop.


Figure 6.4: RLC circuit


? uL ?

1 L 1 R1 R2



1 C



Figure 6.5: Block Diagram representation of the RLC circuit

The QSS method can be implemented by transforming the integrators into DEVS models like M4 (quantized integrators) and the static functions into their DEVS representation (DEVS models like M2). Then, these DEVS models can be coupled according to the coupling scheme of Figure 6.5. In fact, that is what was done to convert System (3.2) into a DEVS representation of (3.3) (see Figure 3.2 on page 30).
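For reference, the behavior of a quantized integrator like M4 rests on hysteretic quantization. The following minimal sketch –class name and interface are invented for illustration– shows the mechanism: the quantized value q only changes when the state drifts a full quantum away from it, which is what keeps the number of events finite.

```python
class HystereticQuantizer:
    """Toy hysteretic quantization: q follows x in steps of one quantum."""

    def __init__(self, x0, quantum):
        self.q = x0
        self.quantum = quantum

    def update(self, x):
        # An output event occurs only when |x - q| reaches the quantum
        if abs(x - self.q) >= self.quantum:
            self.q += self.quantum if x > self.q else -self.quantum
        return self.q

qz = HystereticQuantizer(0.0, 0.01)  # quantum 0.01, as in Example 6.2
```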



Although the block-by-block translation from a continuous system to a DEVS model may result in an inefficient simulation (from the point of view of the computational costs), it is very simple and does not require any kind of symbolic manipulation. Block Diagrams are a usual tool for representing differential equations, and the available DEVS simulation programs do not offer tools to translate them into sets of equations like (3.2). Thus, if a translation by hand is not wanted and there is no other automatic tool to generate a set of equations like (3.2), the block-by-block translation is the only possibility to apply the QSS method.

However, if this is done with the Block Diagram of Figure 6.5, a problem appears. Due to the algebraic loop, the DEVS model will be illegitimate: when an event enters the loop, it will propagate forever through the static functions. Evidently, it is necessary to add a new DEVS model into the loop so that it solves the implicit equation. This new model will be called a loop-breaking DEVS. In the example of Figure 6.5, the loop-breaking DEVS can be placed, for instance, before the gain R2, as Figure 6.6 shows.



Figure 6.6: Addition of a loop-breaking model to the Block Diagram of Fig. 6.5

Then, the loop-breaking model will receive the values of iC and it should send ĩC. Each time this DEVS model sends an event, it receives a new event with the value iC calculated by the static functions belonging to the loop. When this value coincides with the value of ĩC that the model had previously sent, it means that the implicit equation was solved and it is unnecessary to send a new ĩC. In that way, the simulation continues outside the loop until a new value of iC arrives at the loop-breaking model and the process is repeated.

This idea can solve the problem. However, it was not yet said how the value of ĩC has to be calculated. Before answering this question, the problem should be formulated in a more formal and general fashion.

Let us call z the variable sent by the loop-breaking model. Then, when it sends an event with value z1, it immediately receives a new event with value h(z1) calculated by the static functions. Thus, the model should calculate a new value for z, let us call it z2, which

should satisfy

h(z2) − z2 ≜ g(z2) ≈ 0        (6.11)

If this is not verified, the process should be repeated sending a new value z3. It is clear that zi+1 must be calculated following some algorithm to find the solution of g(z) = 0. Taking into account that the derivative of g(z) is not known, a good alternative to the Newton iteration is the use of the secant method. Using this method, the loop-breaking DEVS model can be represented as follows:

M12 = (X, Y, S, δint, δext, λ, ta), where

X = ℝ × ℕ
Y = ℝ × ℕ
S = ℝ³ × ℝ+
δext(s, e, x) = δext(z1, z2, h1, σ, e, xv, p) = s̃
δint(s) = δint(z1, z2, h1, σ) = (z1, z2, h1, ∞)
λ(s) = λ(z1, z2, h1, σ) = (z2, 1)
ta(s) = ta(z1, z2, h1, σ) = σ

where

s̃ = (z1, z2, h1, ∞) if |xv − z2| < tol;   s̃ = (z2, z̃, xv, 0) otherwise

with

z̃ = (z1·xv − z2·h1) / (xv − h1 + z1 − z2)        (6.12)


The parameter tol is the maximum error allowed between z and h(z). Equation (6.12) is the result of applying the secant method to approximate the solution of g(z) = 0, where g(z) is defined according to (6.11). The iteration algorithm can then be changed by modifying (6.12).

Example 6.2. Block-Oriented Simulation of the RLC Circuit.

For the circuit example, a coupled DEVS model was built according to Figure 6.6 and simulated until a final time of 30 seconds. The parameters used were R1 = R2 = L = C = 1 and u0 was chosen as a unit step. The quantum and hysteresis adopted were 0.01 in both state variables, and the error tolerance tol was taken equal to 0.001.

The simulation –which is shown in Figure 6.7– was completed after 118 and 72 internal transitions at each quantized integrator and a total of 377 iterations at the loop-breaking DEVS model. In this case, due to the system linearity, during each step the secant method arrives at the exact solution of g(z) = 0 performing only two iterations. This explains the fact that the total number of iterations at the loop-breaking model was approximately twice the total number of steps at both quantized integrators.
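The dialogue between the loop-breaking model and the static functions of the loop can be sketched in plain Python; h stands for the loop's response, and the update line is exactly Eq. (6.12). The linear loop used for the check is hypothetical.

```python
# Sketch of the loop-breaking iteration: send z, receive h(z) from the
# static functions, and apply the secant rule (6.12) on g(z) = h(z) - z
# until |h(z) - z| < tol.

def break_loop(h, z1, z2, tol=1e-3, max_iter=50):
    h1 = h(z1)
    for _ in range(max_iter):
        xv = h(z2)                  # the loop answers with iC = h(z2)
        if abs(xv - z2) < tol:      # agreement: implicit equation solved
            return z2
        # Eq. (6.12), written exactly as in model M12
        z1, z2, h1 = z2, (z1 * xv - z2 * h1) / (xv - h1 + z1 - z2), xv
    return z2

# Hypothetical linear loop h(z) = 0.5*z + 1 (fixed point z = 2): since g
# is linear, the secant rule reaches the exact solution after a single
# update, matching the behavior observed in Example 6.2.
z = break_loop(lambda z: 0.5 * z + 1.0, 0.0, 1.0)
```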


Figure 6.7: QSS simulation of the RLC circuit using a loop-breaking DEVS

An interesting observation is that the effect of the error in the calculation of iC can be seen as an additional perturbation. If it can be ensured that this error is bounded, the perturbation is also bounded, and the situation is equivalent to having a bigger quantum: it only affects the error bound but does not modify the stability properties. It is clear that the same remark can be made with respect to the transmission line example.

Finally, it should be mentioned that a loop-breaking DEVS model like M12 can also be obtained for the QSS2 method following the same ideas expressed above.


QSS and QSS2 Simulation of Hybrid Systems

The complexity of many technical systems yields models which often combine a continuous part (described by ODEs or DAEs) and discrete components. The interaction between these subsystems can produce sudden changes (discontinuities) in the continuous part which must be handled by the integration algorithms.

These discontinuities cause important difficulties in the context of classic discrete-time methods. The problem is that the numerical integration algorithms in use are incompatible with the notion of discontinuous functions [50], and an integration step which jumps across a discontinuity may produce an unacceptable error. To avoid this, the methods should perform steps at the instants of time in which the discontinuities take place. Then, the simulation of a hybrid system



must be provided with tools for detecting the occurrence of discontinuities (which involves iterations and extra computational costs), for adapting the step size to hit those instants of time, and of course, for representing and simulating the discrete part of the system (which can be quite complicated itself) in interaction with the continuous part. Although there are several methods and software tools which simulate hybrid systems in a quite efficient way, none of them can escape from these problems.

The mentioned sudden changes are called events, and two different cases can be distinguished according to the nature of their occurrence. Events which occur at a given time, independently of what happens in the continuous part, are called Time Events. On the other hand, events which are produced when the continuous subsystem state reaches some condition are called State Events.

Integration across discontinuities without event detection techniques can cause severe inefficiency, and even simulation failures or incorrect event sequences, because the non-smoothness violates the theoretical assumptions on which the solvers are founded [3]. Thus, time and state events must be detected in order to perform steps at their occurrence. The incorporation of event detection techniques into numerical methods has been studied since Cellier's Thesis [8], and many works can be found in the recent literature (see for instance [53, 62, 58, 13]). Although these ideas work quite efficiently, the techniques do not say how to represent discrete parts or how to schedule the time events in general cases. Moreover, state event detection requires performing some iterations to find the time of the event occurrence.

When it comes to QSS and QSS2, some improvements can be expected due to their discrete event asynchronous nature. Before starting to develop these ideas, it must be mentioned that there is no unified representation of hybrid systems in the literature. 
Anyway, the different approaches coincide in describing them as sets of ODEs or DAEs which are selected according to some variable which evolves in a discrete way (different examples of hybrid system representations can be found in [61, 4, 6, 3]). Here, it will be assumed that the continuous subsystem can be represented by

ẋ(t) = f(x(t), u(t), z(t), m(t))        (6.13a)
0 = g(xr(t), ur(t), z(t), m(t))        (6.13b)

where m(t) is a piecewise constant trajectory coming from the discrete part, which defines the different modes of the system. Thus, for each value of m(t) there is a different DAE representing the system dynamics. It will also be assumed that the implicit equation (6.13b) has a solution for each value of m(t) (which implies that System (6.13) always has index 1). Independently of the way in which m(t) is calculated, the simulation sub-model corresponding to the continuous part can be built considering that m(t)



acts as an input. Then, the QSS and QSS2 methods applied to this part will transform (6.13) into:

ẋ(t) = f(q(t), u(t), z(t), m(t))        (6.14a)
0 = g(qr(t), ur(t), z(t), m(t))        (6.14b)

with the same definitions made in (6.4). Thus, the simulation scheme for the continuous part will be identical to the one shown in Figure 6.1, but now m(t) must be included with the input.

One of the most important features of DEVS is its capability to represent all kinds of discrete systems. Taking into account that the continuous part is being approximated by a DEVS model, it is natural to represent the discrete behavior by another DEVS model as well. Then, both DEVS models can be directly coupled to build a unique DEVS model which approximates the whole system.

In the presence of only Time Events, the DEVS model representing the discrete part will be just an event generator like M5 (page 36). Here, the output events will carry the successive values of m(t). Then, the simulation of the complete hybrid system can be performed by coupling this time event generator with the continuous part. Taking into account the asynchronous way in which the static functions and quantized integrators work, the events will be processed by the continuous part as soon as they come out from the generator, without the need of modifying anything in the QSS or QSS2 methods. This efficient event treatment is due to the intrinsic behavior of the methods. This fact makes a big difference with respect to discrete-time methods, which must be modified in order to hit the event times.

When it comes to state events, the discrete part is ruled not only by the time advance but also by some events which are produced when the input and state variables reach some condition. Here, the QSS and QSS2 methods have a bigger advantage: the state trajectories are perfectly known for all time. Moreover, they are piecewise linear or piecewise parabolic functions, which implies that detecting the event occurrence is straightforward. The only thing which has to be done is to provide those trajectories to the discrete part so it can detect the event occurrence and calculate the trajectory m(t).
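To make the last point concrete, here is a sketch of state event detection under QSS/QSS2: within a step the state follows x(t) = x0 + mx·t + px·t², so finding the crossing of a level xc reduces to the smallest positive root of a quadratic. The function and its arguments are illustrative, not from the thesis.

```python
import math

def event_time(x0, mx, px, xc):
    """Smallest t > 0 with x0 + mx*t + px*t**2 == xc, or None if no crossing."""
    a, b, c = px, mx, x0 - xc
    if a == 0:                      # piecewise linear segment (QSS case)
        if b == 0:
            return None
        t = -c / b
        return t if t > 0 else None
    disc = b * b - 4 * a * c        # piecewise parabolic segment (QSS2 case)
    if disc < 0:
        return None
    r1 = (-b - math.sqrt(disc)) / (2 * a)
    r2 = (-b + math.sqrt(disc)) / (2 * a)
    pos = [t for t in (r1, r2) if t > 0]
    return min(pos) if pos else None
```

No iterations are needed, in contrast with the zero-crossing search of discrete-time methods.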
Since the state trajectories are only known inside the quantized integrators, these models could be modified in order to output not only the quantized variables but also the state trajectories. However, this is not necessary. The discrete part can receive the state derivative trajectories and then integrate them. This is simple and does not require computational effort, since the derivative trajectories are piecewise constant or piecewise linear (in QSS2) and their integration only involves the manipulation of the polynomial coefficients. Using these ideas, the simulation model for a hybrid system like (6.13) using the QSS or QSS2 method will be a coupled DEVS with the structure shown in

Figure 6.8.

Figure 6.8: Coupling scheme for the QSS simulation of discontinuous systems

Here, the discrete part is a DEVS model which receives the events representing changes in the state derivatives as well as changes in the input trajectories. Since the discrete model receives and produces only a finite number of events in any finite interval of time (because of its definition as a discrete model), it can be ensured that a DEVS model can represent it no matter how complex its dynamics are. Taking this into account, the scheme of Figure 6.8 can simulate any system like (6.13) in interaction with any discrete model.

There are cases in which this scheme can be simplified. As we mentioned before, when only Time Events are considered, the DEVS model of the discrete part will not have inputs. Usually, the event occurrence condition is related to a zero (or another fixed value) crossing of some state variable. In this c

